blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 2 327 | content_id stringlengths 40 40 | detected_licenses listlengths 0 91 | license_type stringclasses 2 values | repo_name stringlengths 5 134 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 46 values | visit_date timestamp[us]date 2016-08-02 22:44:29 2023-09-06 08:39:28 | revision_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | committer_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | github_id int64 19.4k 671M ⌀ | star_events_count int64 0 40k | fork_events_count int64 0 32.4k | gha_license_id stringclasses 14 values | gha_event_created_at timestamp[us]date 2012-06-21 16:39:19 2023-09-14 21:52:42 ⌀ | gha_created_at timestamp[us]date 2008-05-25 01:21:32 2023-06-28 13:19:12 ⌀ | gha_language stringclasses 60 values | src_encoding stringclasses 24 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 7 9.18M | extension stringclasses 20 values | filename stringlengths 1 141 | content stringlengths 7 9.18M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
9e37e17d39062a26f93ec628392cd7540880b014 | 90723e5e55ec4f655b855aa6305308e077844165 | /students_scripts/Hans Prevost/HansPrevost_e2_objects_script.R | 80669d4c9aa39dcbc87f2a39613d1185451674fa | [] | no_license | Planktos/Intro2R_2020 | 318ec96c0b4716955ce72a0a2ff2c80c8b95bf04 | f83d27ebfa016945ea8328e125f2b77dbf19e5b1 | refs/heads/master | 2022-12-25T20:41:30.005371 | 2020-10-06T13:47:04 | 2020-10-06T13:47:04 | 291,804,451 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,795 | r | HansPrevost_e2_objects_script.R | #Exercise #2: objects
# 1) create an object that includes both numbers and character string
a = c(1, 2, 3, "dog", "fish")
typeof("dog")
# 2) create a vector object with 15 values. What happens when you set the length to 20?
b = c(1:15)
b = c(1:20)
#it makes the vector have 20 values instead of 15, because its value was reassigned.
#3) create an object that includes a list and set recursive to 'TRUE'.
c = c(4, 5, list(0.1, 0.2, 0.3), recursive = TRUE)
c
## 3a) What is the difference if you set recursive to 'FALSE'?
#setting recursive to FALSE keeps "c" as a list (the list components stay as separate list elements),
#while setting it to TRUE flattens all elements of "c", including the values inside list(), into a single atomic vector.
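A minimal sketch of the difference, checked directly in R:

```r
with_list <- c(4, 5, list(0.1, 0.2, 0.3))                    # recursive = FALSE (the default): stays a list
flat      <- c(4, 5, list(0.1, 0.2, 0.3), recursive = TRUE)  # flattened into one atomic vector
is.list(with_list)  # TRUE  - list components are kept as list elements
is.list(flat)       # FALSE - everything coerced into a single numeric vector
length(flat)        # 5
```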
#4) create a matrix object that has 3 columns and 5 rows. Name the rows and columns.
d = matrix(data=1:15, nrow=5, ncol=3, dimnames = list(c("I", "am", "making", "a", "matrix"), c("who", "am", "I")))
d
##4a) Extract the value in the 2nd row and 1st column.
d[2, 1]
#5) create an array object that has 3 levels. Each level should have 6 rows and 4 columns. Name the rows and columns.
magnolia = array(data = 1:72, dim = c(6, 4, 3), dimnames = list(c("r1", "r2", "r3", "r4", "r5", "r6"), c("c1", "c2", "c3", "c4"), NULL)) # dimnames needs one entry per dimension; NULL leaves the levels unnamed
magnolia
##5a) Extract the value in the 3rd level, 4th row, and 2nd column.
magnolia[4, 2, 3]
##5b) If you wanted to extract all the values in the 1st level and 2nd column, what command would you use?
magnolia[,2,1]
#6) What are the differences between a matrix and an array?
#a matrix is only 2 dimensional, while an array can handle multi-dimensional data
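dim() makes the distinction concrete: a matrix is just the 2-dimensional special case of an array.

```r
m <- matrix(1:6, nrow = 2, ncol = 3)
a <- array(1:24, dim = c(2, 3, 4))
dim(m)        # 2 3
dim(a)        # 2 3 4
is.array(m)   # TRUE  - every matrix is also an array
is.matrix(a)  # FALSE - a 3-dimensional array is not a matrix
```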
#7) run the following object and show me the code to what type and class of object it is.
m <- matrix(data=1:15, nrow = 5, ncol = 3,
dimnames = list(c("r1","r2","r3","r4","r5"),
c("c1","c2","c3")))
class(m)
typeof(m)
#8) What symbols are you not allowed to use in object names? Why not?
#!, ^, +, *, ?, /, @, -. They are used for other operations, like basic math
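A name containing one of those symbols is only usable if wrapped in backticks; otherwise R parses the symbol as an operator. The object name here is just an illustration:

```r
# my-object <- 1        # error: R parses this as my - object
`my-object` <- 1        # legal, but awkward to type every time
make.names("my-object") # "my.object" - R's own sanitized version of the name
```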
#9) create a data frame object with 4 rows and 4 columns. Give the columns the names of your four favorite cartoon charatcters.
ocean = data.frame(the.hoopla.fish.from.spongebob.who.gets.hit.by.a.brick =
c(1, 2, 3, 4), squidward = c(5, 6, 7, 8), louise = c(9, 10, 11, 12), danny = c(13, 14, 15, 16))
ocean
##9a) Assign the data in the data frame's third column to a new object.
fire = ocean[,3]
fire
##9b) Assign the data in the data frames second row to a new object.
ice = ocean[2,]
ice
#10) Explain the meme on this t-shirt.
browseURL(url = "https://www.teepublic.com/t-shirt/2305884-programmer-t-shirt-coding-joke")
# the "!" means negation so {!false} means not false, which means true.
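The negation can be checked directly at the R console:

```r
!FALSE  # TRUE  - "!" flips a logical value
!TRUE   # FALSE
```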
#11) Assign the formula for a linear regression to a new object.
form = as.formula(y~mx+b)
|
a36f7468301b540d4590f5e6ebcb11f8a3be82aa | cf1ddfe118bd79a236bda3d9d57cd2219c7a0527 | /man/lspace_BrXII.xy.Rd | d532cbb7c119f075d3b83f8d80f032e9a2a45e7a | [] | no_license | cran/LMoFit | 0f9b3dabe2d9f7966fb1e405ab986627fd3d4e26 | 2dd5ac3db661cf5ccf0e0aae7d3c1a29e9abf253 | refs/heads/master | 2023-01-21T06:32:34.079387 | 2020-11-26T10:10:02 | 2020-11-26T10:10:02 | 317,818,984 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 609 | rd | lspace_BrXII.xy.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{lspace_BrXII.xy}
\alias{lspace_BrXII.xy}
\title{coordinates of the L-space of Burr Type-XII Distribution (BrXII)}
\format{
A ggplot
\describe{
\item{x}{l-variation "t2"}
\item{y}{l-skewness "t3"}
}
}
\source{
coded in data-raw
}
\usage{
lspace_BrXII.xy
}
\description{
This is a plot of the L-space of BrXII Distribution with L-variation on x-axis and L-skewness on y-axis.
The L-space is bounded by shape1 in the range of 0.1 to 150, and by shape2 in the range of 0.001 to 1.
}
\keyword{datasets}
|
8dac7c1895f0be2c5c910a8d43f53aebb6e935d2 | 40e7632cd9110c274d5c78b26510e4405e32f1ca | /Simple Linear Regression/SLR3.R | 919e6fd047013f7e5cf06441aeeef9a1d3673862 | [] | no_license | nitika24/Data-Science | 141f67f5c02135e03f61f78454fc70b6af1389e0 | 923fae28a4770d595cbf24620faaae90794b729f | refs/heads/master | 2022-07-19T18:58:07.431296 | 2020-05-21T15:52:21 | 2020-05-21T15:52:21 | 242,066,516 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 909 | r | SLR3.R | library(readr)
emp_data <- read_csv("D:\\Nitika\\Data Science\\DataSets\\Assignments_DataSets\\emp_data.csv")
View(emp_data)
# Exploratory data analysis
summary(emp_data)
#Scatter plot
plot(emp_data$Salary_hike, emp_data$Churn_out_rate) # plot(X,Y)
?plot
attach(emp_data)
#Correlation Coefficient (r)
cor(Salary_hike, Churn_out_rate) # cor(X,Y)
# Simple Linear Regression model
reg <- lm(Churn_out_rate ~ Salary_hike) # lm(Y ~ X)
summary(reg)
pred <- predict(reg)
reg$residuals
sum(reg$residuals)
mean(reg$residuals)
sqrt(sum(reg$residuals^2)/nrow(emp_data)) #RMSE
sqrt(mean(reg$residuals^2))
confint(reg,level=0.95)
predict(reg,interval="predict")
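The two interval types answer different questions; as a sketch, using a hypothetical new salary hike of 1700 (an assumed value, not part of the original script):

```r
new_hike <- data.frame(Salary_hike = 1700)  # hypothetical new observation
predict(reg, newdata = new_hike, interval = "confidence")  # uncertainty in the mean churn rate at this hike
predict(reg, newdata = new_hike, interval = "predict")     # wider: uncertainty for one new employee's rate
```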
# ggplot for adding regression line to the data
library(ggplot2)
ggplot(data = emp_data, aes(x = Salary_hike, y = Churn_out_rate)) +
geom_point(color='blue') +
geom_line(color='red',data = emp_data, aes(x=Salary_hike, y=pred))
|
90c334bfd04ccb94d22037a5d88b12b3b1b8ac2e | fc6848d31e6804add8c25c655d9ae7c1f98ff331 | /runPcaOnStemCommonData.R | 25817e3cad0417dbb4babb3c89952cd4dc6c1553 | [] | no_license | davetgerrard/LiverStemCellProteins | 79347a2fe86601cebbbf79709e3742f432e3a09d | 9c8d7d23a6e5bfb5505a4c4b1cdd61293c8ae838 | refs/heads/master | 2016-09-05T08:48:23.183212 | 2012-05-03T10:45:44 | 2012-05-03T10:45:44 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,595 | r | runPcaOnStemCommonData.R | ##runPcaOnStemCommonData.R
## requires loading of data with createStemCommonData.R
if(!exists('output')) { output <- FALSE } # would be over-ridden if already specified
excludeIndex <- which(rowMeans(stemcommondata[,dataColumns]) > 5) # which rows are outliers.
cat("These rows are probable outliers\n")
print(stemcommondata[excludeIndex, ])
#dataColumns <- c(grep('H7',names(stemcommondata)),grep('H9',names(stemcommondata)) )
stem.pca <- princomp(stemcommondata[,dataColumns])
stem.princomp.cor <- princomp(stemcommondata[,dataColumns], cor=T)
stem.princomp.cor.scores <- cbind(subset(stemcommondata, select=c("Name","Uniprot.Id", "Uniprot.Accession")) ,stem.princomp.cor$scores)
write.table(stem.princomp.cor.scores, file=paste("output","stem.princomp.cor.scores.tab",sep="/"),row.names=F, quote=F, sep="\t")
## N.B. re-using the same object name (stem.pca) means that if this script is sourced(), the final one will be 'returned'
if(output) {
stem.pca <- prcomp(stemcommondata[,dataColumns])
pdf(file="plots/stemPCA.prcomp.pdf", width=10,height=10)
plot(stem.pca, main="Stem Cell Data\nVariance explained by Principal Components")
biplot(stem.pca,choices=c(1,2), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,4), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,5), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,6), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(2,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
dev.off()
stem.pca <- prcomp(stemcommondata[,dataColumns], scale.=TRUE)
pdf(file="plots/stemPCA.prcomp.cor.pdf", width=10,height=10)
plot(stem.pca, main="Stem Cell Data\nVariance explained by Principal Components")
biplot(stem.pca,choices=c(1,2), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,4), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,5), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,6), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(2,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
dev.off()
stem.pca <- princomp(stemcommondata[,dataColumns], cor=TRUE) # this one gave a fantastic plot of PC2 vs PC3 (but is it biology or experimenatl effect?)
pdf(file="plots/stemPCA.princomp.scale.pdf", width=10,height=10)
plot(stem.pca, main="Stem Cell Data\nVariance explained by Principal Components")
biplot(stem.pca,choices=c(1,2), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,4), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,5), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,6), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(2,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
dev.off()
stem.pca <- princomp(stemcommondata[-excludeIndex ,dataColumns], cor=TRUE)
pdf(file="plots/stemPCA.princomp.cor.excludeOutliers.pdf", width=10,height=10)
plot(stem.pca, main="Stem Cell Data\nVariance explained by Principal Components")
biplot(stem.pca,choices=c(1,2), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,4), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,5), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,6), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(2,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
dev.off()
stem.pca <- prcomp(stemcommondata[-excludeIndex ,dataColumns], scale.=TRUE)
pdf(file="plots/stemPCA.prcomp.scale.excludeOutliers.pdf", width=10,height=10)
plot(stem.pca, main="Stem Cell Data\nVariance explained by Principal Components")
biplot(stem.pca,choices=c(1,2), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,4), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,5), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(1,6), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
biplot(stem.pca,choices=c(2,3), col=c("grey","black"),cex=c(0.5,1),xlabs=stemcommondata$Uniprot.Id[-excludeIndex]) ;abline(v=0,lty=2) ; abline(h=0,lty=2)
dev.off()
} # end of if(output) block |
15e0134522401c04b2615ae662076de667f943bf | 8f94ccd8d3aed33b418cb9639dc64a159931ae4e | /man/random_scdf.Rd | 5ea87a330855d85b5acf7892e54033b6662324a0 | [] | no_license | cran/scan | 8c9d9b2dc44bbb8c339be3795a62bb5c49be87b0 | 860599c21c4c5e37746fa8e6234c6f6cc8028070 | refs/heads/master | 2023-08-22T17:47:22.450439 | 2023-08-08T13:00:02 | 2023-08-08T14:31:36 | 70,917,562 | 2 | 1 | null | null | null | null | UTF-8 | R | false | true | 1,905 | rd | random_scdf.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/random_scdf.R
\name{random_scdf}
\alias{random_scdf}
\title{Single-case data generator}
\usage{
random_scdf(design = NULL, round = NA, random_names = FALSE, seed = NULL, ...)
}
\arguments{
\item{design}{A design matrix which is created by \code{design} and specifies
all parameters.}
\item{round}{Rounds the scores to the defined decimal. To round to the second
decimal, set \code{round = 2}.}
\item{random_names}{Is \code{FALSE} by default. If set \code{random_names =
TRUE} cases are assigned random first names. If set \code{"neutral", "male"
or "female"} only gender neutral, male, or female names are chosen. The
names are drawn from the 2,000 most popular names for newborns in 2012 in
the U.S. (1,000 male and 1,000 female names).}
\item{seed}{A seed number for the random generator.}
\item{...}{arguments that are directly passed to the \code{design} function
for a more concise coding.}
}
\value{
A single-case data frame. See \code{\link{scdf}} to learn about this
format.
}
\description{
The \code{random_scdf} function generates random single-case data frames for
monte-carlo studies and demonstration purposes. \code{design} is used to set
up a design matrix with all parameters needed for the \code{random_scdf}
function.
}
\examples{
## Create random single-case data and inspect it
design <- design(
n = 3, rtt = 0.75, slope = 0.1, extreme_prop = 0.1,
missing_prop = 0.1
)
dat <- random_scdf(design, round = 1, random_names = TRUE, seed = 123)
describe(dat)
## And now have a look at poisson-distributed data
design <- design(
n = 3, B_start = c(6, 10, 14), mt = c(12, 20, 22), start_value = 10,
distribution = "poisson", level = -5, missing_prop = 0.1
)
dat <- random_scdf(design, seed = 1234)
pand(dat, decreasing = TRUE)
}
\author{
Juergen Wibert
}
\concept{mc functions}
\keyword{datagen}
|
78ac9cee9dabbf8da127e8f6f2c507acdcbd7ea8 | ae24b24f6a3ec183fab24bf2e1f0bfcff11d1d53 | /minBTL.R | b1360ae9921965efa753a85399bc956a00bc108f | [] | no_license | RolyYang/pbo | 56ce707a084eda3aa91e2365aba73cc4a4910ca0 | 82a49c934c873ec4e72865b5fd45aa8d3cba5659 | refs/heads/master | 2020-03-17T11:32:56.569388 | 2019-06-27T19:29:12 | 2019-06-27T19:29:12 | 133,555,979 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 574 | r | minBTL.R | #minimum backtest length
minBTL<- function(Emax=1.0,N) {
gama<-0.5772156649 #Euler-Mascheroni constant
minBTL<-(((1-gama)*qnorm(1-1/N)+gama*qnorm(1-1/N*exp(-1)))/Emax)**2
minBTL
}
#the shell of minBTL
minBTLcon<- function(Emax=1.0,N) {
minBTLcon<-2*log(N)/Emax
minBTLcon
}
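As a quick sanity check (not part of the original script), the shell should upper-bound the exact expression for large N:

```r
minBTL(1.0, 1000)     # exact expression
minBTLcon(1.0, 1000)  # shell: 2*log(1000), about 13.82
```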
# evaluate both bounds for N = 1..1000 (avoiding names that mask base functions exp, max and length)
n_trials <- 1:1000
exact <- minBTL(1.0, n_trials)     # both functions are vectorized in N, so no loop is needed
shell <- minBTLcon(1.0, n_trials)
plot(n_trials, exact, main="Minimum Backtest Length", type="l",
     xlab="Number of Trials (N)", ylab="Minimum Backtest Length (in Years)")
lines(n_trials, shell, lty=2)  # lines(), not line(): line() fits a Tukey resistant line instead of drawing one
N<-9
minBTLcon(1.0,9) #4.39445
|
32c2615818460eba42545b6fb1f7e379ed218eac | c02b1b6252a59c992a0f3ebb542f08fb0cf261a4 | /R/get_match_data.R | 8ba7f3434fc25e08956ebffdb5c07b296f7ed806 | [] | no_license | systats/lolR | d57b04d592b40906b70f0da1acc9a332b965aa23 | f2b38453460cac1c9fe24861603e75bebf549669 | refs/heads/master | 2020-03-18T07:13:38.225502 | 2018-06-02T17:13:56 | 2018-06-02T17:13:56 | 134,439,850 | 0 | 2 | null | 2018-05-31T01:11:19 | 2018-05-22T15:58:05 | HTML | UTF-8 | R | false | false | 1,108 | r | get_match_data.R | #' get_match_data
#'
#' Get basic match data
#'
#' @param x html_node
#' @return tibble(date, patch, patch_link, win, blue_team, blue_team_link, red_team, red_team_link)
#'
#' @export
get_match_data <- function(x){
prep <- x %>%
html_children()
date <- prep %>%
.[1] %>%
html_text %>%
stringr::str_trim()
patch <- prep%>%
.[2] %>%
html_text %>%
stringr::str_trim()
patch_link <-prep %>%
.[2] %>%
str_extract_href(append = T)
blue_team <- prep %>%
.[3] %>%
html_text %>%
stringr::str_trim()
blue_team_link <- prep %>%
.[3] %>%
str_extract_href(append = T)
red_team <- prep %>%
.[4] %>%
html_text %>%
stringr::str_trim()
red_team_link <- prep %>%
.[4] %>%
str_extract_href(append = T)
win <- prep %>%
.[5] %>%
html_text() %>%
stringr::str_trim()
match_data <- tibble(
date, patch, patch_link, win,
blue_team, blue_team_link,
red_team, red_team_link
)
return(match_data)
} |
ffd7cdd4a8bb1dbd5e295a204720a713ac0cc63c | cfc4a7b37657114bb93c7130eff4fc2458381a4f | /doc-ja/localized-ring-ja.rd | f9210f9334d972d8fa5df8270c03c13d8267e5dd | [
"MIT"
] | permissive | kunishi/algebra-ruby2 | 5bc3fae343505de879f7a8ae631f9397a5060f6b | ab8e3dce503bf59477b18bfc93d7cdf103507037 | refs/heads/master | 2021-11-11T16:54:52.502856 | 2021-11-04T02:18:45 | 2021-11-04T02:18:45 | 28,221,289 | 6 | 0 | null | 2016-05-05T16:11:38 | 2014-12-19T08:36:45 | Ruby | UTF-8 | R | false | false | 3,307 | rd | localized-ring-ja.rd | =begin
[((<index-ja|URL:index-ja.html>))]
= Algebra::LocalizedRing
((*(Localized ring class)*))
Constructs a ring of fractions whose numerators and denominators are
elements of a given ring.
To create a concrete class, use the class method ((<::create>))
or the function ((<Algebra.LocalizedRing>))().
== File name:
* ((|localized-ring.rb|))
== Superclass:
* ((|Object|))
== Included modules:
None
== Related functions:
--- Algebra.LocalizedRing(ring)
Same as ((<::create>))(ring).
--- Algebra.RationalFunctionField(ring, obj)
Creates a rational function field over the ring ((|ring|)), with
((|obj|)) as the object representing the variable. The variable can be obtained with the class method ((|::var|)).
Example: a rational function field
require "algebra/localized-ring"
require "rational"
F = Algebra.RationalFunctionField(Rational, "x")
x = F.var
p ( 1 / (x**2 - 1) - 1 / (x**3 - 1) )
#=> x^2/(x^4 + x^3 - x - 1)
--- Algebra.MRationalFunctionField(ring, [obj1[, obj2, ...]])
Creates a rational function field over the ring ((|ring|)), with ((|obj1|)), ((|obj2|)), ... as the objects representing the variables. The variables can be obtained with the class method ((|::vars|)).
Example: a rational function field
require "algebra/localized-ring"
require "rational"
G = Algebra.MRationalFunctionField(Rational, "x", "y", "z")
x, y, z = G.vars
f = (x + z) / (x + y) - z / (x + y)
p f #=> (x^2 + xy)/(x^2 + 2xy + y^2)
p f.simplify #=> x/(x + y)
== Class methods:
--- ::create(ring)
Creates the ring of fractions whose numerators and denominators
are elements of the ring represented by the class ((|ring|)).
The return value is a subclass of the Algebra::LocalizedRing class.
This subclass defines the class method ((|::ground|)), which
returns ((|ring|)).
The generated class also defines the class method (({::[]})), which,
given an element (({x})) of the ground ring, returns the element
(({x/1})) of the fraction ring.
Example: constructing the rational numbers
require "localized-ring"
F = Algebra.LocalizedRing(Integer)
p F.new(1, 2) + F.new(2, 3) #=> 7/6
Example: the quotient field of a polynomial ring over the integers
require "polynomial"
require "localized-ring"
P = Algebra.Polynomial(Integer, "x")
F = Algebra.LocalizedRing(P)
x = F[P.var]
p ( 1 / (x**2 - 1) - 1 / (x**3 - 1) )
#=> (x^3 - x^2)/(x^5 - x^3 - x^2 + 1)
--- ::zero
Returns the zero element.
--- ::unity
Returns the unity element.
#--- ::[](num, den = nil)
#--- ::reduce(num, den)
== Methods:
#--- monomial?; true; end
--- zero?
Returns true if the element is zero.
--- zero
Returns the zero element.
--- unity
Returns the unity element.
--- ==(other)
Returns true if the two elements are equal.
--- <=>(other)
Determines the ordering of the two elements.
--- +(other)
Computes the sum.
--- -(other)
Computes the difference.
--- *(other)
Computes the product.
--- **(n)
Computes the ((|n|))-th power.
--- /(other)
Computes the quotient.
#--- to_s
#--- inspect
#--- hash
=end
|
612021e608ed6cbbe57e0535bd9295d36706b22c | 47c5a1669bfc7483e3a7ad49809ba75d5bfc382e | /man/modify.Rd.file.Rd | b297835a05b713f4cb5aa95edae6f17657efd14a | [] | no_license | tdhock/inlinedocs | 3ea8d46ece49cc9153b4cdea3a39d05de9861d1f | 3519557c0f9ae79ff45a64835206845df7042072 | refs/heads/master | 2023-09-04T11:03:59.266286 | 2023-08-29T23:06:34 | 2023-08-29T23:06:34 | 20,446,785 | 2 | 2 | null | 2019-08-21T19:58:23 | 2014-06-03T14:50:10 | R | UTF-8 | R | false | false | 639 | rd | modify.Rd.file.Rd | \name{modify.Rd.file}
\alias{modify.Rd.file}
\title{modify Rd file}
\description{Add inline documentation from comments to an Rd file
automatically-generated by package.skeleton.}
\usage{modify.Rd.file(N, pkg,
docs, verbose = FALSE)}
\arguments{
\item{N}{Name of function/file to which we will add documentation.}
\item{pkg}{Package name.}
\item{docs}{Named list of documentation in extracted comments.}
\item{verbose}{Cat messages?}
}
\author{Toby Dylan Hocking <toby.hocking@r-project.org> [aut, cre], Keith Ponting [aut], Thomas Wutzler [aut], Philippe Grosjean [aut], Markus Müller [aut], R Core Team [ctb, cph]}
|
d81b1b3c8005554c27f0b47de75a066a2cd28a22 | fa86ca3beae271d11e99287f32e3dfba8387d1f9 | /man/ssanova0.Rd | 934e56c9c400c403a1cc45395ba6aeefe545de07 | [] | no_license | cran/gss | 3844e164e122de45a447e2dc489449828a7d7f05 | 0cc5d376904a8c14e9b2dde31d00d0a6d9507467 | refs/heads/master | 2023-08-22T07:22:40.932811 | 2023-08-16T04:10:02 | 2023-08-16T05:30:47 | 17,696,545 | 3 | 2 | null | null | null | null | UTF-8 | R | false | false | 4,756 | rd | ssanova0.Rd | \name{ssanova0}
\alias{ssanova0}
\title{Fitting Smoothing Spline ANOVA Models}
\description{
Fit smoothing spline ANOVA models in Gaussian regression. The
symbolic model specification via \code{formula} follows the same
rules as in \code{\link{lm}}.
}
\usage{
ssanova0(formula, type=NULL, data=list(), weights, subset,
offset, na.action=na.omit, partial=NULL, method="v",
varht=1, prec=1e-7, maxiter=30)
}
\arguments{
\item{formula}{Symbolic description of the model to be fit.}
\item{type}{List specifying the type of spline for each variable.
See \code{\link{mkterm}} for details.}
\item{data}{Optional data frame containing the variables in the
model.}
\item{weights}{Optional vector of weights to be used in the
fitting process.}
\item{subset}{Optional vector specifying a subset of observations
to be used in the fitting process.}
\item{offset}{Optional offset term with known parameter 1.}
\item{na.action}{Function which indicates what should happen when
the data contain NAs.}
\item{partial}{Optional symbolic description of parametric terms in
partial spline models.}
\item{method}{Method for smoothing parameter selection. Supported
are \code{method="v"} for GCV, \code{method="m"} for GML (REML),
and \code{method="u"} for Mallow's CL.}
\item{varht}{External variance estimate needed for
\code{method="u"}. Ignored when \code{method="v"} or
\code{method="m"} are specified.}
\item{prec}{Precision requirement in the iteration for multiple
smoothing parameter selection. Ignored when only one smoothing
parameter is involved.}
\item{maxiter}{Maximum number of iterations allowed for multiple
smoothing parameter selection. Ignored when only one smoothing
parameter is involved.}
}
\details{
The model specification via \code{formula} is intuitive. For
example, \code{y~x1*x2} yields a model of the form
\deqn{
y = C + f_{1}(x1) + f_{2}(x2) + f_{12}(x1,x2) + e
}
with the terms denoted by \code{"1"}, \code{"x1"}, \code{"x2"}, and
\code{"x1:x2"}.
The model terms are sums of unpenalized and penalized
terms. Attached to every penalized term there is a smoothing
parameter, and the model complexity is largely determined by the
number of smoothing parameters.
\code{ssanova0} and the affiliated methods provide a front end to
RKPACK, a collection of RATFOR routines for nonparametric regression
via the penalized least squares. The algorithms implemented in
RKPACK are of the order \eqn{O(n^{3})}.
}
\note{
For complex models and large sample sizes, the approximate solution
of \code{\link{ssanova}} can be faster.
The method \code{\link{project}} is not implemented for
\code{ssanova0}, nor is the mixed-effect model support through
\code{\link{mkran}}.
In \emph{gss} versions earlier than 1.0, \code{ssanova0} was under
the name \code{ssanova}.
}
\value{
\code{ssanova0} returns a list object of class
\code{c("ssanova0","ssanova")}.
The method \code{\link{summary.ssanova0}} can be used to obtain
summaries of the fits. The method \code{\link{predict.ssanova0}}
can be used to evaluate the fits at arbitrary points along with
standard errors. The methods \code{\link{residuals.ssanova}} and
\code{\link{fitted.ssanova}} extract the respective traits from the
fits.
}
\references{
Wahba, G. (1990), \emph{Spline Models for Observational Data}.
Philadelphia: SIAM.
Gu, C. (2013), \emph{Smoothing Spline ANOVA Models (2nd Ed)}. New
York: Springer-Verlag.
Gu, C. (2014), Smoothing Spline ANOVA Models: R Package gss.
\emph{Journal of Statistical Software}, 58(5), 1-25. URL
http://www.jstatsoft.org/v58/i05/.
}
\examples{
## Fit a cubic spline
x <- runif(100); y <- 5 + 3*sin(2*pi*x) + rnorm(x)
cubic.fit <- ssanova0(y~x,method="m")
## Obtain estimates and standard errors on a grid
new <- data.frame(x=seq(min(x),max(x),len=50))
est <- predict(cubic.fit,new,se=TRUE)
## Plot the fit and the Bayesian confidence intervals
plot(x,y,col=1); lines(new$x,est$fit,col=2)
lines(new$x,est$fit+1.96*est$se,col=3)
lines(new$x,est$fit-1.96*est$se,col=3)
## Clean up
\dontrun{rm(x,y,cubic.fit,new,est)
dev.off()}
## Fit a tensor product cubic spline
data(nox)
nox.fit <- ssanova0(log10(nox)~comp*equi,data=nox)
## Fit a spline with cubic and nominal marginals
nox$comp<-as.factor(nox$comp)
nox.fit.n <- ssanova0(log10(nox)~comp*equi,data=nox)
## Fit a spline with cubic and ordinal marginals
nox$comp<-as.ordered(nox$comp)
nox.fit.o <- ssanova0(log10(nox)~comp*equi,data=nox)
## Clean up
\dontrun{rm(nox,nox.fit,nox.fit.n,nox.fit.o)}
}
\keyword{smooth}
\keyword{models}
\keyword{regression}
|
1c0356731a9e6f381600f0680410b8d9e3b104df | 7fd0d4bd269fdbfdd33b9424887800ea39de6736 | /Psets/Pset3/lm_HR.R | f13ce4485a1edda4ab3cd977470a25f213e2abcb | [] | no_license | albertoc94/Empirical-Methods-for-Applied-Micro | 12b98ed6f5f7775c66eeb8d5121792463b7025ae | 3f01a05d54b04824710b5e392b1ae97d2f0bdb50 | refs/heads/main | 2023-03-05T00:51:43.292269 | 2021-02-19T02:46:15 | 2021-02-19T02:46:15 | 336,115,942 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 755 | r | lm_HR.R | lm_HR = function(data,testIndexes){
data_test = data[testIndexes,]
data_train = data[-testIndexes,]
x_col = setdiff(names(data_test),"medv")
x_test = as.matrix(data_test[,x_col])
y_test = as.matrix(data_test[,"medv"])
x_train = as.matrix(data_train[,x_col])
y_train = as.matrix(data_train[,"medv"])
df_lr = as.data.frame(cbind(x_train,y_train))
names(df_lr)[ncol(df_lr)]<-paste("medv")
output_lr = lm(log(medv) ~., data=df_lr)
  mse_train = sqrt(mean(output_lr$residuals^2))  # RMSE on the log(medv) scale
  y_pred = predict(output_lr, newdata=as.data.frame(x_test))  # predictions are on the log scale
  mse_test = sqrt(mean((y_pred-log(y_test))^2))  # compare on the same (log) scale as training
df_lr = as.data.frame(cbind(mse_train,mse_test))
lm_HR = list(output_lr,y_pred,df_lr)
names(lm_HR) = c("output_lr","y_pred","df_lr")
return(lm_HR)
} |
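A hedged usage sketch - the function expects a data frame with a medv response column, e.g. the Boston housing data from the MASS package (an assumed dependency, not something this file loads):

```r
library(MASS)  # assumed to provide the Boston data with a medv column
set.seed(1)
test_idx <- sample(nrow(Boston), 100)  # hold out 100 rows as the test fold
res <- lm_HR(Boston, test_idx)
res$df_lr  # train/test error of the log(medv) fit
```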
150e74c6a9911d257a187e7ddb8ff5eb669ee627 | 45892c1931e1c274060e139082ecf2cf2c4753d4 | /Notebook_Mou/scripts/GeneCounts.R | 5dd168aee7c7870882c79e5575245bd1dcac9b19 | [] | no_license | k39ajdM2/Maize_Bee_transcriptomics | f8bc836e2910a6592fb74c465ae4f7e43df0ec10 | a8cbf1ad866d659aea6c0c7f3012c59fe053603e | refs/heads/main | 2023-04-07T08:01:02.310996 | 2021-04-13T22:44:25 | 2021-04-13T22:44:25 | 355,672,708 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,731 | r | GeneCounts.R | #! /usr/bin/env Rscript
library(tidyverse)
library(magrittr)
# read files
k_bee <- read_delim("~/2021_workshop_transcriptomics/Notebook_Mou/results/bee.genecounts.out.txt", delim="\t")
j_bee <- readxl::read_excel("~/2021_workshop_transcriptomics/Notebook_Jennifer/Bumblebee/results/gsnap_counts.xlsx", sheet="gene")
# look at column names
names(j_bee)
names(k_bee)
k_sub <- k_bee %>%
select(Geneid, "1-A02-A2_S8_L002_R1_001.fastq") %>%
pivot_longer(., col="1-A02-A2_S8_L002_R1_001.fastq")
j_sub <- j_bee %>%
select(Geneid, "1-A02-A2_S8") %>%
pivot_longer(., col= "1-A02-A2_S8")
test <- rbind(k_sub, j_sub) %>%
pivot_wider(., id_cols="Geneid")
# plotting different colors for each source
test[1:20,] %>%
  pivot_longer(cols = -Geneid) %>%  # back to long form so each source gets its own color
  ggplot(., aes(x=Geneid, y=value, color=name)) +
  geom_point()
# plot 1st 10 genes
(test2 <- test[1:10,] %>%
pivot_longer(cols = "1-A02-A2_S8_L002_R1_001.fastq":"1-A02-A2_S8") %>%
ggplot(., aes(x=Geneid, y=value, color=name)) +
geom_point() +
theme(axis.text.x = element_text(angle = 90)) +
scale_color_manual(name="Sample 1-A02-A2_S8 Source",labels=c("Jennifer", "Me"), values=c("darkgreen", "orange")) +
labs(x= "Gene ID", y= "Gene Count"))
# plot Jennifer's gene counts against mine - do they overlap or are they very different?
both <- rbind(k_sub, j_sub) %>%
  pivot_wider(., id_cols="Geneid")
names(both) <- c("Geneid", "k_bee", "j_bee")
both %>%
  ggplot(., aes(x=k_bee, y=j_bee)) +
  geom_point()

# gene count difference between the two sources
both2 <- both %>% mutate(gene_count_diff = k_bee - j_bee)
both2[1:10,] %>%
  ggplot(., aes(x=Geneid, y=gene_count_diff)) +
  geom_point() +
  theme(axis.text.x = element_text(angle = 90)) +
  labs(x = "Gene ID", y = "gene count difference (k-j)")
|
62e871bf186e8f9526e6a3275a4f16209acb15b0 | 344e447769d09c6c2d53d53a90b69883b5cbcb08 | /walk-through.R | 793e6e5e93096a30957444063cb8029e71a6cc83 | [] | no_license | professorbeautiful/CTDesignExperimenter | 578c3fcf4cba477fd71655dd8719248966d12246 | ad95aaf3209b5a9b0d691999e44ee08c72ab4e56 | refs/heads/master | 2021-01-21T12:23:32.461545 | 2016-01-28T04:15:58 | 2016-01-28T04:15:58 | 8,296,518 | 3 | 1 | null | null | null | null | UTF-8 | R | false | false | 573 | r | walk-through.R | walk.through <- function() {
tb <- unlist(.Traceback)
if(is.null(tb)) stop("no traceback to use for debugging")
assign("debug.fun.list", matrix(unlist(strsplit(tb, "\\(")), nrow=2)[1,], envir=.GlobalEnv)
lapply(debug.fun.list, function(x) debug(get(x)))
print(paste("Now debugging functions:", paste(debug.fun.list, collapse=",")))
}
unwalk.through <- function() {
lapply(debug.fun.list, function(x) undebug(get(as.character(x))))
print(paste("Now undebugging functions:", paste(debug.fun.list, collapse=",")))
rm(list="debug.fun.list", envir=.GlobalEnv)
} |
b8afe8d73be1f26a6cfff01653927d7a2ec58de3 | 27fa9887b19dbf10e360856a7bdf67c77ee7b7ab | /plot3.R | 515e32e4d285d176901081b7a41485298f2a14e7 | [] | no_license | clairefoxs/ExData_Plotting1 | 57f091eaf6f68921bd386829f0dd15a848c344b8 | e7fc6aa422ff6a077913f8e749fa7be812a4a412 | refs/heads/master | 2021-01-20T16:41:38.149173 | 2015-05-09T22:42:26 | 2015-05-09T22:42:26 | 35,348,053 | 0 | 0 | null | 2015-05-09T22:35:04 | 2015-05-09T22:35:04 | null | UTF-8 | R | false | false | 1,080 | r | plot3.R | # download data from: https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip
raw <- read.table("household_power_consumption.txt",
header = TRUE, sep = ";", na.strings = "?",
stringsAsFactors=FALSE, colClasses="character")
# select data from the dates 2007-02-01 and 2007-02-02
data <- raw[which(raw$Date=="1/2/2007" | raw$Date=="2/2/2007"),]
# convert Global_active_power to as.numeric
data$Global_active_power <- as.numeric(data$Global_active_power)
# combine Date and Time
data$DateTime <- paste(data$Date, data$Time, sep=" ")
# parse DateTime strings into POSIXlt date-times
data$DateTime <- strptime(data$DateTime, format="%e/%m/%Y %H:%M:%S")
# plot and save as png
png("plot3.png")
plot(data$DateTime, data$Sub_metering_1, type="n", xlab="", ylab="Energy sub metering")
lines(data$DateTime, data$Sub_metering_1, col="black")
lines(data$DateTime, data$Sub_metering_2, col="red")
lines(data$DateTime, data$Sub_metering_3, col="blue")
legend("topright", lty=c(1,1,1), lwd=c(2,2,2), col=c("black", "red", "blue"), legend=names(data[7:9]))
dev.off()
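The paste-then-`strptime()` step above can be checked in isolation. The timestamp below is illustrative; the sketch uses `%d`, which (like the script's `%e`) accepts single-digit days on input:

```r
# Combine a date string and a time string, as done for data$DateTime above.
stamp <- paste("1/2/2007", "00:10:00", sep = " ")
dt <- strptime(stamp, format = "%d/%m/%Y %H:%M:%S")  # POSIXlt, local time zone

# Round-tripping through format() confirms day/month/year were parsed correctly:
format(dt, "%Y-%m-%d %H:%M:%S")  # "2007-02-01 00:10:00"
```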
|
3cc6301ae7cc5e30822d6db2b80717dc7257e904 | 37f44136c1b6b6878f4d9b3996f30faba4138f50 | /run_analysis.r | d6c1f1295e2964432c10f162af52322f316038d9 | [] | no_license | brentrossen/datasciencecoursera | c2b65c95e0fa4a21c44653acd590f14d1c303245 | df55555b18d725ec28433c7963870d558414c681 | refs/heads/master | 2021-01-17T12:57:45.366261 | 2016-06-30T14:07:59 | 2016-06-30T14:07:59 | 57,044,739 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,971 | r | run_analysis.r | library(dplyr)
library(tidyr)
# Download and extract the data
if(!dir.exists("data")){
dir.create("data")
}
dataZipFile <- "data/Assignment.zip"
if(!file.exists(dataZipFile)){
download.file("https://d396qusza40orc.cloudfront.net/getdata%2Fprojectfiles%2FUCI%20HAR%20Dataset.zip", "data/Assignment.zip", mode="wb")
unzip("data/Assignment.zip", exdir = "data")
}
# Load the features and labels
features <- read.table("data/UCI HAR Dataset/features.txt") %>% tbl_df
activity_labels <- read.table("data/UCI HAR Dataset/activity_labels.txt") %>% tbl_df
# Rename V1 to label_index and V2 to activity for clarity (#4)
activity_labels <- activity_labels %>% rename(label_index = V1, activity = V2)
## Read and format subject
readSubject <- function(filePath, dataset){
subject <- read.table(filePath) %>% tbl_df
colnames(subject) <- "subject"
subject
}
## Read and format x_test/train
readX <- function(filePath){
x <- read.table(filePath) %>% tbl_df
### Rename the columns from the feature labels (#3)
colnames(x) <- features$V2
### Extract the mean and std columns (#2)
x <- x[grepl("mean\\(\\)|std\\(\\)", colnames(x))]
### Remove the parentheses for readability
colnames(x) <- gsub("\\(\\)", "", colnames(x))
x
}
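The behaviour of the two regular expressions inside `readX()` can be demonstrated with a few made-up feature names in the UCI HAR style (the real names come from `features.txt`):

```r
# Illustrative feature names; only mean()/std() variants should survive.
cols <- c("tBodyAcc-mean()-X", "tBodyAcc-std()-Y",
          "fBodyAcc-meanFreq()-Z", "angle(X,gravityMean)")

# "meanFreq()" and "gravityMean" do not match because the pattern
# requires a literal "()" directly after mean/std.
keep <- grepl("mean\\(\\)|std\\(\\)", cols)
cols[keep]                      # "tBodyAcc-mean()-X" "tBodyAcc-std()-Y"

# Stripping the parentheses, as readX() does for readability:
gsub("\\(\\)", "", cols[keep])  # "tBodyAcc-mean-X" "tBodyAcc-std-Y"
```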
## Read and format y_test/train
readY <- function(filePath){
y <- read.table(filePath) %>% tbl_df
y <- y %>% rename(label_index = V1)
### Join the label to y-test using the label_index, then select only the activity column to keep,
### this provides descriptive activity names (#3)
y <- y %>%
left_join(activity_labels, by = c("label_index" = "label_index")) %>%
select(activity)
}
# Assemble the test dataset
subject_test <- readSubject("data/UCI HAR Dataset/test/subject_test.txt", "test")
x_test <- readX("data/UCI HAR Dataset/test/X_test.txt")
y_test <- readY("data/UCI HAR Dataset/test/Y_test.txt")
## Join the tables by column binding
test <- cbind(subject_test, y_test, x_test) %>% tbl_df
# Assemble the train dataset
subject_train <- readSubject("data/UCI HAR Dataset/train/subject_train.txt", "train")
x_train <- readX("data/UCI HAR Dataset/train/X_train.txt")
y_train <- readY("data/UCI HAR Dataset/train/Y_train.txt")
## Join the tables by column binding
train <- cbind(subject_train, y_train, x_train) %>% tbl_df
# Join the datasets
data <- rbind(test, train)
# Calculate the mean for the "wide form" data, this matches the
## assignment details
mean_features <- data %>%
group_by(subject, activity) %>%
summarise_each(funs(mean))
write.table(mean_features, file = "data/mean_features_wide.txt", row.names = FALSE)
# Convert the "wide form" to a long form
## Re-tidy the dataset by gathering the columnar features into a single column called feature
## and the values into a measurement column.
## The long form is generally easier to perform analysis on
tidy <- data %>% gather(feature, measurement, -subject, -activity)
tidy <- tidy %>% separate(feature, c("feature", "estimate", "axis"), extra = "drop")
tidy <- tidy %>%
### Extract elements of feature column into separate columns
mutate(
domain = ifelse(startsWith(feature, "t"), "time", ifelse(startsWith(feature, "f"), "frequency", "unknown")),
source = ifelse(grepl("Body", feature), "body", ifelse(grepl("Gravity", feature), "gravity", "unknown")),
device = ifelse(grepl("Acc", feature), "accelerometer", ifelse(grepl("Gyro", feature), "gyroscope", "unknown")),
is_jerk = grepl("Jerk", feature),
is_magnitude = grepl("Mag", feature)
) %>%
### Remove the feature column, it's now redundant
select(-feature)
## Get the mean of each measurement
mean_features <- tidy %>%
group_by(subject, activity, estimate, axis, source, device, domain, is_jerk, is_magnitude) %>%
summarize(
mean = mean(measurement)
)
write.table(mean_features, file = "data/mean_features_long_bonus.txt", row.names = FALSE)
|
40ec268c71be2d28903a4d8b78a7e9f6d5586ca4 | cd2f27faac9571f15afaf4c63e90d001b7ed33de | /man/all_stages.Rd | 7ea31b5f7760d13d551c6c3bfd535fce5755a8c3 | [
"MIT"
] | permissive | EnergyEconomyDecoupling/MWTools | 2430ad483b9bd759088e0a79572ca691ce05e9e4 | a3488a24a850d7e2338307446b66961ec3feb68a | refs/heads/master | 2023-09-04T13:03:10.451579 | 2023-08-20T09:30:56 | 2023-08-20T09:30:56 | 308,628,241 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,097 | rd | all_stages.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{all_stages}
\alias{all_stages}
\title{All stages for energy conversion chains}
\format{
A string list with 4 entries; see \code{IEATools::all_stages}.
\describe{
\item{primary}{The string identifier for the Primary stage of the energy conversion chain.}
\item{final}{The string identifier for the Final stage of the energy conversion chain.}
\item{useful}{The string identifier for the Useful stage of the energy conversion chain.}
\item{services}{The string identifier for the Services stage of the energy conversion chain.}
}
}
\usage{
all_stages
}
\description{
A string list containing options for all stages of energy conversion chain analysis,
used as package constants in MWTools functions.
The values of these constants are borrowed directly from the \code{IEATools} package for consistency.
}
\examples{
all_stages
}
\keyword{datasets}
|
ec114090313b000ae0c9a7d5fb9cf3aad2276505 | f9a1d53a0b132dda1caf186a9a3910f6d0538a58 | /man/adorn_perc_and_ns.Rd | 82a27d22b9c6714f67d9f0304f048e323e35fc61 | [
"MIT"
] | permissive | dpashouwer/dpash | cf20db3ff9024de3928b3d6788a0cb660090bdb2 | 55716a67ccd97499182d0fba7fc64523b6dc6c56 | refs/heads/master | 2022-05-26T07:28:12.718083 | 2022-03-18T21:47:38 | 2022-03-18T21:47:38 | 147,966,591 | 1 | 0 | null | null | null | null | UTF-8 | R | false | true | 310 | rd | adorn_perc_and_ns.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/adorn_perc_and_ns.R
\name{adorn_perc_and_ns}
\alias{adorn_perc_and_ns}
\title{adorn_perc_and_ns}
\usage{
adorn_perc_and_ns(tabyl, denominator, digits = 0)
}
\arguments{
\item{digits}{}
}
\value{
}
\description{
adorn_perc_and_ns
}
|
0548971cf7507c3c14f802f4b9d0551ee8f6aa1c | ad6c62b455dc249c1d748c8b1d127e9e5575f866 | /test.r | dd5fadd268a10cd9312f34468e76b252dcae2d1c | [] | no_license | ibnuhikam5171/kmmi_r | c8df184c2f539d79ab47037a4b0f280df88b8008 | c947b5a8aa4f4d86615eae1303c5065c8886744b | refs/heads/main | 2023-07-03T09:16:50.698699 | 2021-08-08T22:29:47 | 2021-08-08T22:29:47 | 394,077,584 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 28 | r | test.r | 2021-9
text = "Hello world!" |
4c932c9136f136c0837226ea92a63e457452e25e | 8fe4752ea76e86aef313f22f9818b2bcb56b0a91 | /08 Outlier Detection with Mahalanobis Distance/code.R | d8141f8c818568cf8c9597c9ac7cc2882ac88d22 | [
"LicenseRef-scancode-warranty-disclaimer",
"MIT"
] | permissive | piau73/Blog-Docs | bf3122040a5ad08a6c07860206095b3f11210874 | 057c72eb4fd54a14030a275776d536585dfafb1f | refs/heads/master | 2021-06-08T15:42:58.696756 | 2016-12-12T04:02:22 | 2016-12-12T04:02:22 | null | 0 | 0 | null | null | null | null | WINDOWS-1252 | R | false | false | 6,455 | r | code.R | #---------------------------------------------------------------------------------------------------
#
# Outlier Detection with Mahalanobis Distance
#
#---------------------------------------------------------------------------------------------------
# Load libraries
library(ggplot2)
library(dplyr)
# Set color palette
cbPalette <- c("#999999", "#4288D5", "#E69F00", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
# Example Data
df <- read.csv("weight_height.csv")
df <- rename(df, height = ï..height) # rename height feature (the "ï.." prefix comes from the CSV's UTF-8 byte-order mark)
df$outlier_th <- "No"
# Histograms
ggplot(df, aes(x = weight)) + geom_histogram()
ggplot(df, aes(x = height)) + geom_histogram()
# Scatterplot
ggplot(df, aes(x = weight, y = height, color = outlier_th)) +
geom_point(size = 5, alpha = 0.6) +
labs(title = "Height - Weight Scatterplot",
subtitle = "2500 Data Points of height (cm) and weight (kg)",
caption = "Source: http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_Dinov_020108_HeightsWeights") +
ylab("Height in cm") + xlab("Weight in kg") +
scale_y_continuous(breaks = seq(160, 200, 5)) +
scale_x_continuous(breaks = seq(35, 80, 5)) +
scale_colour_manual(values=cbPalette)
# Add "Abnormality" Features
df$outlier_th[(df$weight < 41) | (df$weight > 72)] <- "Yes"
df$outlier_th[(df$height > 187) | (df$height < 160)] <- "Yes"
# Scatterplot showing Outliers itentified by feature thresholds
ggplot(df, aes(x = weight, y = height, color = outlier_th)) +
geom_point(size = 5, alpha = 0.6) +
labs(title = "Weight vs Height",
subtitle = "Outlier Detection in weight vs height data with Feature Thresholds",
caption = "Source: http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_Dinov_020108_HeightsWeights") +
ylab("Height in cm") + xlab("Weight in kg") +
scale_y_continuous(breaks = seq(160, 200, 5)) +
scale_x_continuous(breaks = seq(35, 80, 5)) +
geom_vline(xintercept = 41, linetype = "dotted") +
geom_vline(xintercept = 72, linetype = "dotted") +
geom_hline(yintercept = 160, linetype = "dotted") +
geom_hline(yintercept = 187, linetype = "dotted") +
scale_colour_manual(values=cbPalette)
# Calculate Mahalanobis
m_dist <- mahalanobis(df[, 1:2], colMeans(df[, 1:2]), cov(df[, 1:2]))
df$m_dist <- round(m_dist, 2)
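One point worth keeping in mind when reading the thresholds below: `mahalanobis()` returns *squared* distances, which for roughly multivariate-normal data behave like a chi-squared variable with one degree of freedom per feature (so a cutoff of 12 with two features sits near `qchisq(0.9975, df = 2)`). A small self-contained sanity check on simulated data:

```r
# 100 simulated points with 2 features.
set.seed(1)
sim <- matrix(rnorm(200), ncol = 2)
d2 <- mahalanobis(sim, colMeans(sim), cov(sim))

# Squared distances are non-negative, and when computed against the sample
# mean and covariance their average is exactly (n - 1) * p / n.
n <- nrow(sim); p <- ncol(sim)
c(all(d2 >= 0), abs(mean(d2) - (n - 1) * p / n) < 1e-8)  # TRUE TRUE
```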
# Mahalanobis Distance Histogram
ggplot(df, aes(x = m_dist)) +
geom_histogram(bins = 50) +
labs(title = "Mahalanobis Distances",
subtitle = "Histogram based on Mahalanobis Distances for Weight + Height",
caption = "Source: http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_Dinov_020108_HeightsWeights") +
xlab("Mahalanobis Distance") +
scale_y_continuous(breaks = seq(0, 700, 100))
# Maha Outliers
df$outlier_maha <- "No"
df$outlier_maha[df$m_dist > 12] <- "Yes"
# Scatterplot with Maha Outliers
ggplot(df, aes(x = weight, y = height, color = outlier_maha)) +
geom_point(size = 5, alpha = 0.6) +
labs(title = "Weight vs Height",
subtitle = "Outlier Detection in weight vs height data - Using Mahalanobis Distances",
caption = "Source: http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_Dinov_020108_HeightsWeights") +
ylab("Height in cm") + xlab("Weight in kg") +
scale_y_continuous(breaks = seq(160, 200, 5)) +
scale_x_continuous(breaks = seq(35, 80, 5)) +
scale_colour_manual(values=cbPalette)
# Outliers removed
df2 <- df %>%
filter(m_dist < 12)
ggplot(df2, aes(x = weight, y = height, color = outlier_maha)) +
geom_point(size = 5, alpha = 0.6) +
labs(title = "Weight vs Height",
subtitle = "Outlier Detection in weight vs height data - Using Mahalanobis Distances",
caption = "Source: http://wiki.stat.ucla.edu/socr/index.php/SOCR_Data_Dinov_020108_HeightsWeights") +
ylab("Height in cm") + xlab("Weight in kg") +
scale_y_continuous(breaks = seq(160, 200, 5)) +
scale_x_continuous(breaks = seq(35, 80, 5)) +
scale_colour_manual(values=cbPalette) +
geom_smooth(method = "lm", se = FALSE, color = "red")
# Housing Dataset
df <- read.csv("train.csv")
# Select only 5 features - SalePrice is the response variable
df <- df %>%
select(SalePrice, GrLivArea, GarageYrBlt, LotArea, LotFrontage)
df <- df[complete.cases(df), ]
head(df)
# Calculate Mahalanobis with predictor variables
df2 <- df[, -1] # Remove SalePrice Variable
m_dist <- mahalanobis(df2, colMeans(df2), cov(df2))
df$MD <- round(m_dist, 1)
# Scatterplot
df$outlier <- "No"
ggplot(df, aes(x = LotArea, y = SalePrice/1000, color = outlier)) +
geom_point(size = 5, alpha = 0.6) +
labs(title = "Sale Price vs Lot Area",
subtitle = "Scatterplot of Sale Price (kUSD) and Lot Area (SQFT)",
caption = "Source: Kaggle") +
ylab("Sale Price in kUSD") + xlab("Lot Area in SQFT") +
scale_y_continuous(breaks = seq(0, 800, 100)) +
scale_x_continuous(breaks = seq(0, 225000, 25000)) +
scale_colour_manual(values=cbPalette) +
geom_smooth(method = "lm", se = FALSE, color = "blue")
# Update Outlier Feature - using Threshold of 20
df$outlier[df$MD > 20] <- "Yes"
# Scatterplot with outlier detection
ggplot(df, aes(x = LotArea, y = SalePrice/1000, color = outlier)) +
geom_point(size = 5, alpha = 0.6) +
labs(title = "Sale Price vs Lot Area",
subtitle = "Scatterplot of Sale Price (kUSD) and Lot Area (SQFT) - Outlier Detection with Mahalanobis Distance",
caption = "Source: Kaggle") +
ylab("Sale Price in kUSD") + xlab("Lot Area in SQFT") +
scale_y_continuous(breaks = seq(0, 800, 100)) +
scale_x_continuous(breaks = seq(0, 225000, 25000)) +
scale_colour_manual(values=cbPalette)
# Remove outliers and create new regression line
df2 <- df[df$outlier == "No",]
ggplot(df2, aes(x = LotArea, y = SalePrice/1000, color = outlier)) +
geom_point(size = 5, alpha = 0.6) +
labs(title = "Sale Price vs Lot Area - Outliers removed",
subtitle = "Scatterplot of Sale Price (kUSD) and Lot Area (SQFT)",
caption = "Source: Kaggle") +
ylab("Sale Price in kUSD") + xlab("Lot Area in SQFT") +
scale_y_continuous(breaks = seq(0, 800, 100)) +
scale_x_continuous(breaks = seq(0, 225000, 25000)) +
scale_colour_manual(values=cbPalette) +
geom_smooth(method = "lm", se = FALSE, color = "blue")
|
f7250f2416f90e599098437e876e7364f0c3ee78 | af07a8935075d8184e00e66142c8aa007d65f882 | /man/SummaryPlotFuncGAM.Rd | fa1a2aeba43661c0a047e3fb378b2c47115ef1c7 | [] | no_license | PointProcess/SealPupProduction-JRSSC-code | 9e86c475eba2029825ed366f26b9b7dec9d117cf | f43213e396397c826f9eb7f90208b0c7e56cc1a9 | refs/heads/master | 2020-09-30T17:42:45.647879 | 2020-01-24T11:40:15 | 2020-01-24T11:40:15 | 227,339,738 | 1 | 0 | null | null | null | null | UTF-8 | R | false | true | 781 | rd | SummaryPlotFuncGAM.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/BasicFunctions.R
\name{SummaryPlotFuncGAM}
\alias{SummaryPlotFuncGAM}
\title{Do not bother to write a help function}
\usage{
SummaryPlotFuncGAM(
covariatesplot = TRUE,
summaryplot = TRUE,
savingFolder,
sealPhotoDataFile,
sealTransectDataFile,
dataList,
orgPhotos,
modPhotos,
results.CI.level = 0.95,
gridList,
finalResList,
countingDomain,
logicalGridPointsInsideCountingDomain,
covNewGridval,
GAMfit,
sealType,
use.covariates,
covariates.type,
covariate.fitting,
grid.pixelsize,
parallelize.noSplits,
parallelize.numCores,
noSamp,
subSampPerSamp,
time,
testing,
comment,
leaveOutTransect,
fam
)
}
\description{
Do not bother to write a help function
}
|
8acb6db95197178263cbbc44f5e60fa41ab91196 | 826945bcc9186f9d6dfe437386afec4307b4418e | /plot1.R | c3b107ce7e25bb90875584a248f5644ca2356fa2 | [] | no_license | ploor/ExData_Plotting1 | 4e15dc74add2dd737b8c11a5a7bf93c7594e0fc4 | 94e3d81c25a09565430c165360a5aa31ddfdf70d | refs/heads/master | 2021-01-21T01:15:47.322955 | 2014-06-08T19:36:52 | 2014-06-08T19:36:52 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,668 | r | plot1.R | ## Notes:
# The file names in the following char vectors
# will be downloaded/stored in the working directory
# filename
# pngname
# My dates display in non-English, unless I change the locale
Sys.setlocale("LC_TIME", "English")
###############################################################################
## Constants (except for plot literals)
siteUrl <- "https://d396qusza40orc.cloudfront.net/"
filename <- "exdata%2Fdata%2Fhousehold_power_consumption.zip"
fileUrl <- paste(siteUrl, filename, sep="")
filenameInZip <- "household_power_consumption.txt"
date1 <- as.Date("2007-02-01", format = "%Y-%m-%d")
date2 <- as.Date("2007-02-02", format = "%Y-%m-%d")
pngname <- "plot1.png"
###############################################################################
## download, unzip, read, and subset data
download.file(fileUrl, destfile=filename)
data <- read.table(unz(filename, filenameInZip)
,header=TRUE
,sep=";"
,na.strings = "?")
# convert to date
data$Date <- as.Date(data$Date, format = "%d/%m/%Y")
data <- subset(data, (data$Date == date1)
| (data$Date == date2))
#convert to numeric
data$Global_active_power <- as.numeric(data$Global_active_power)
#convert to POSIX
data$Time <- strptime(paste(data$Date, data$Time, sep=","), "%Y-%m-%d,%H:%M:%S")
###############################################################################
## open png device, plot, close png device
png(pngname, width=480, height=480)
hist(data$Global_active_power
,col="red"
,main="Global Active Power"
,ylim=c(0, 1200)
,xlab="Global Active Power (kilowatts)")
dev.off() |
98f562c9d890b839410635bd728ec74be51dab42 | e235bfe1d784b6046a9411d5e2c4df3d0b61f34f | /tests/testthat/test-observe.R | d3a4bd6f153d14ea7875f37af04f883561905f07 | [
"MIT"
] | permissive | tidymodels/infer | 040264d3b295c6986a9141d3d6fffe8a51e73db0 | 6854b6e5f8b356d4b518c2ca198cc97e66cd4fcb | refs/heads/main | 2023-08-07T03:34:31.100601 | 2023-07-27T21:21:58 | 2023-07-27T21:21:58 | 93,430,707 | 517 | 74 | NOASSERTION | 2023-09-06T12:46:24 | 2017-06-05T17:41:42 | R | UTF-8 | R | false | false | 3,632 | r | test-observe.R | test_that("observe() output is equal to core verbs", {
expect_equal(
gss %>%
observe(hours ~ NULL, stat = "mean"),
gss %>%
specify(hours ~ NULL) %>%
calculate(stat = "mean")
)
expect_equal(
gss %>%
observe(hours ~ NULL, stat = "t", null = "point", mu = 40),
gss %>%
specify(hours ~ NULL) %>%
hypothesize(null = "point", mu = 40) %>%
calculate(stat = "t")
)
expect_equal(
observe(
gss,
age ~ college,
stat = "diff in means",
order = c("degree", "no degree")
),
gss %>%
specify(age ~ college) %>%
calculate("diff in means", order = c("degree", "no degree")),
ignore_attr = TRUE
)
})
test_that("observe messages/warns/errors informatively", {
expect_equal(
expect_message(
gss %>%
observe(hours ~ NULL, stat = "mean", mu = 40)
) %>% conditionMessage(),
expect_message(
gss %>%
specify(hours ~ NULL) %>%
hypothesize(null = "point", mu = 40) %>%
calculate(stat = "mean")
) %>% conditionMessage()
)
expect_equal(
expect_warning(
gss %>%
observe(hours ~ NULL, stat = "t")
) %>% conditionMessage(),
expect_warning(
gss %>%
specify(hours ~ NULL) %>%
calculate(stat = "t")
) %>% conditionMessage()
)
expect_error(
expect_equal(
capture.output(
gss %>%
observe(hours ~ age, stat = "diff in means"),
type = "message"
),
capture.output(
gss %>%
specify(hours ~ age) %>%
calculate(stat = "diff in means"),
type = "message"
),
)
)
expect_error(
expect_equal(
gss %>%
observe(explanatory = age, stat = "diff in means"),
gss %>%
specify(explanatory = age) %>%
calculate(stat = "diff in means")
)
)
})
test_that("observe() works with either specify() interface", {
# unnamed formula argument
expect_equal(
gss %>%
observe(hours ~ NULL, stat = "mean"),
gss %>%
observe(response = hours, stat = "mean"),
ignore_attr = TRUE
)
expect_equal(
gss %>%
observe(
hours ~ college,
stat = "diff in means",
order = c("degree", "no degree")
),
gss %>%
specify(hours ~ college) %>%
calculate(stat = "diff in means", order = c("degree", "no degree"))
)
# named formula argument
expect_equal(
gss %>%
observe(formula = hours ~ NULL, stat = "mean"),
gss %>%
observe(response = hours, stat = "mean"),
ignore_attr = TRUE
)
expect_equal(
gss %>%
observe(formula = hours ~ NULL, stat = "mean"),
gss %>%
observe(response = hours, stat = "mean"),
ignore_attr = TRUE
)
expect_equal(
gss %>%
observe(
formula = hours ~ college,
stat = "diff in means",
order = c("degree", "no degree")
),
gss %>%
specify(formula = hours ~ college) %>%
calculate(stat = "diff in means", order = c("degree", "no degree"))
)
})
test_that("observe() output is the same as the old wrappers", {
expect_snapshot(
res_wrap <- gss_tbl %>%
chisq_stat(college ~ partyid)
)
expect_equal(
gss_tbl %>%
observe(college ~ partyid, stat = "Chisq") %>%
dplyr::pull(),
res_wrap
)
expect_snapshot(
res_wrap_2 <- gss_tbl %>%
t_stat(hours ~ sex, order = c("male", "female"))
)
expect_equal(
gss_tbl %>%
observe(stat = "t", hours ~ sex, order = c("male", "female")) %>%
dplyr::pull(),
res_wrap_2
)
})
|
77b86c805547ea11d423254af9adf2256e74c144 | f68a2e2e9050786b39921849ee3b23a5c6fb277b | /man/paths.Rd | 8df27a44b3578a56bd9de99bc46a22e6db4bba53 | [] | no_license | cran/paths | 61ed1e038e52ebadf1ed9c4adb7b658b836c47aa | 3586900ed4650517016ee08bc962227db01b221b | refs/heads/master | 2023-06-03T19:30:08.817370 | 2021-06-18T07:40:02 | 2021-06-18T07:40:02 | 261,991,229 | 1 | 0 | null | null | null | null | UTF-8 | R | false | true | 7,417 | rd | paths.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/paths.R, R/print.paths.R
\name{paths}
\alias{paths}
\alias{print.paths}
\title{Causal Paths Analysis}
\usage{
paths(
a,
y,
m,
models,
ps_model = NULL,
data,
nboot = 500,
conf_level = 0.95,
...
)
\method{print}{paths}(x, digits = 3, ...)
}
\arguments{
\item{a}{a character string indicating the name of the treatment variable. The treatment
should be a binary variable taking either 0 or 1.}
\item{y}{a character string indicating the name of the outcome variable.}
\item{m}{a list of \eqn{K} character vectors indicating the names of \eqn{K} causally ordered mediators
\eqn{M_1,\ldots, M_K}.}
\item{models}{a list of \eqn{K+1} fitted model objects describing how the outcome depends on treatment,
pretreatment confounders, and varying sets of mediators, where \eqn{K} is the number of mediators.
\itemize{
\item the first element is a baseline model of the outcome conditional on treatment and pretreatment
confounders.
\item the \eqn{k}th element is an outcome model conditional on treatment, pretreatment confounders,
and mediators \eqn{M_1,\ldots, M_{k-1}}.
\item the last element is an outcome model conditional on treatment, pretreatment confounders,
and all of the mediators, i.e., \eqn{M_1,\ldots, M_K}.
}
The fitted model objects can be of type \code{\link{lm}}, \code{\link{glm}}, \code{\link[gbm]{gbm}},
\code{\link[BART]{wbart}}, or \code{\link[BART]{pbart}}.}
\item{ps_model}{an optional propensity score model for treatment. It can be of type \code{\link{glm}},
\code{\link[gbm]{gbm}}, \code{\link[twang]{ps}}, or \code{\link[BART]{pbart}}. When it is provided,
the imputation-based weighting estimator is also used to compute path-specific causal effects.}
\item{data}{a data frame containing all variables.}
\item{nboot}{number of bootstrap iterations for estimating confidence intervals. Default is 500.}
\item{conf_level}{the confidence level of the returned two-sided confidence
intervals. Default is \code{0.95}.}
\item{...}{additional arguments to be passed to \code{boot::boot}, e.g.
\code{parallel} and \code{ncpus}. For the \code{print} method, additional arguments to be passed to
\code{print.default}}
\item{x}{a fitted model object returned by the \code{\link{paths}} function.}
\item{digits}{minimal number of significant digits printed.}
}
\value{
An object of class \code{paths}, which is a list containing the
following elements \describe{
\item{pure}{estimates of direct and path-specific effects via \eqn{M_1, \ldots, M_K}
based on the pure imputation estimator.}
\item{hybrid}{estimates of direct and path-specific effects via \eqn{M_1, \ldots, M_K}
based on the imputation-based weighting estimator.}
\item{varnames}{a list of character strings indicating the names of the pretreatment confounders (\eqn{X}),
treatment(\eqn{A}), mediators (\eqn{M_1, \ldots, M_K}), and outcome (\eqn{Y}).}
\item{formulas}{formulas for the outcome models.}
\item{classes}{classes of the outcome models.}
\item{families}{model families of the outcome models.}
\item{args}{a list containing arguments of the outcome models.}
\item{ps_formula}{formula for the propensity score model.}
\item{ps_class}{class of the propensity score model.}
\item{ps_family}{model family of the propensity score model.}
\item{ps_args}{arguments of the propensity score model.}
\item{data}{the original data.}
\item{nboot}{number of bootstrap iterations.}
\item{conf_level}{confidence level for confidence intervals.}
\item{boot_out}{output matrix from the bootstrap iterations.}
\item{call}{the matched call to the \code{paths} function.}
}
}
\description{
\code{paths} estimates path-specific causal effects in the presence of \eqn{K(\geq 1)} causally
ordered mediators. It implements the pure imputation estimator and the imputation-based weighting
estimator (when a propensity score model is provided) as detailed in Zhou and Yamamoto (2020).
The user supplies the names of the treatment, outcome, mediator variables, \eqn{K+1} fitted models
characterizing the conditional mean of the outcome given treatment, pretreatment confounders, and
varying sets of mediators, and a data frame containing all the variables. The function returns
\eqn{K+1} path-specific causal effects that together constitute the total treatment effect.
When \eqn{K=1}, the path-specific causal effects are identical to the natural direct and indirect
effects in standard causal mediation analysis.
}
\examples{
data(tatar)
m1 <- c("trust_g1", "victim_g1", "fear_g1")
m2 <- c("trust_g2", "victim_g2", "fear_g2")
m3 <- c("trust_g3", "victim_g3", "fear_g3")
mediators <- list(m1, m2, m3)
formula_m0 <- annex ~ kulak + prosoviet_pre + religiosity_pre + land_pre +
orchard_pre + animals_pre + carriage_pre + otherprop_pre + violence
formula_m1 <- update(formula_m0, ~ . + trust_g1 + victim_g1 + fear_g1)
formula_m2 <- update(formula_m1, ~ . + trust_g2 + victim_g2 + fear_g2)
formula_m3 <- update(formula_m2, ~ . + trust_g3 + victim_g3 + fear_g3)
formula_ps <- violence ~ kulak + prosoviet_pre + religiosity_pre +
land_pre + orchard_pre + animals_pre + carriage_pre + otherprop_pre
####################################################
# Causal Paths Analysis using GLM
####################################################
# outcome models
glm_m0 <- glm(formula_m0, family = binomial("logit"), data = tatar)
glm_m1 <- glm(formula_m1, family = binomial("logit"), data = tatar)
glm_m2 <- glm(formula_m2, family = binomial("logit"), data = tatar)
glm_m3 <- glm(formula_m3, family = binomial("logit"), data = tatar)
glm_ymodels <- list(glm_m0, glm_m1, glm_m2, glm_m3)
# propensity score model
glm_ps <- glm(formula_ps, family = binomial("logit"), data = tatar)
# causal paths analysis using glm
# note: For illustration purposes only a small number of bootstrap replicates are used
paths_glm <- paths(a = "violence", y = "annex", m = mediators,
glm_ymodels, ps_model = glm_ps, data = tatar, nboot = 3)
####################################################
# Causal Paths Analysis using GBM
####################################################
require(gbm)
# outcome models
gbm_m0 <- gbm(formula_m0, data = tatar, distribution = "bernoulli", interaction.depth = 3)
gbm_m1 <- gbm(formula_m1, data = tatar, distribution = "bernoulli", interaction.depth = 3)
gbm_m2 <- gbm(formula_m2, data = tatar, distribution = "bernoulli", interaction.depth = 3)
gbm_m3 <- gbm(formula_m3, data = tatar, distribution = "bernoulli", interaction.depth = 3)
gbm_ymodels <- list(gbm_m0, gbm_m1, gbm_m2, gbm_m3)
# propensity score model via gbm
gbm_ps <- gbm(formula_ps, data = tatar, distribution = "bernoulli", interaction.depth = 3)
# causal paths analysis using gbm
# note: For illustration purposes only a small number of bootstrap replicates are used
paths_gbm <- paths(a = "violence", y = "annex", m = mediators,
gbm_ymodels, ps_model = gbm_ps, data = tatar, nboot = 3)
}
\references{
Zhou, Xiang and Teppei Yamamoto. 2020. "\href{https://osf.io/2rx6p}{Tracing Causal Paths from Experimental and Observational Data}".
}
\seealso{
\code{\link{summary.paths}}, \code{\link{plot.paths}}, \code{\link{sens}}
}
|
16a13bb098f9d7ac355fc34bcd7ceb204bff3d1f | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/elec/examples/do.audit.Rd.R | 3b5535a628363faa3f8997021d3dfeba4f249fda | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 304 | r | do.audit.Rd.R | library(elec)
### Name: do.audit
### Title: do.audit
### Aliases: do.audit
### ** Examples
Z = make.cartoon(n=200)
truth = make.truth.opt.bad(Z, t=0, bound="WPM")
samp.info=CAST.calc.sample(Z, beta=0.75, stages=1, t=5 )
audit.names = CAST.sample( Z, samp.info )
do.audit( Z, truth, audit.names )
|
c59594fe3e76e41c29909a627fa8a4388f29ad24 | 91893f754317c7e6df12be6e1b53e3d3ceefaba6 | /Calib_Isodyn_Fig4&Table2_V1.0.R | 8667fd3dd296bfd0c552fe56dfc30f1351c44559 | [] | no_license | Sebastien-Lefebvre/IsoDyn | 771c95ba64323c1cf9f6aecd575c06eb7b9cadb0 | be0d58360ffc8348acded977e217fb6ef424880f | refs/heads/main | 2023-05-26T09:29:28.914938 | 2021-05-31T07:24:49 | 2021-05-31T07:24:49 | 369,138,952 | 1 | 0 | null | null | null | null | ISO-8859-2 | R | false | false | 3,972 | r | Calib_Isodyn_Fig4&Table2_V1.0.R | #-----------------------------------
#R code to calibrate the Isodyn model
#-----------------------------------
rm(list=ls())# clear the current environment
graphics.off()# clear the current plots
setwd("C:/Users/Sébastien/Desktop/To do/CodeIsodynNEW/GitHub")#your working directory
#Packages to be installed
#install.packages('deSolve')
#install.packages('matrixStats')
#install.packages('lme4')
#install.packages('nls2')
library('lme4') #for Nelder_Mead function
library("deSolve") #for numerical solution of IsoDyn
library("matrixStats") #for ColQuantiles function
library("nls2") #for calibration of the TIM and exponential model
#Compute all needed functions which are put together in a unique file
source("Functions_IsodynV1.0.R")
source("Data_Fig4.R")
#------------------
# Best optimization
#------------------
name<-c('NucheF','Guelinckx','MacAvoy')# names of the treatments
num=2 # choose the treatment to use: 1=NucheF, 2=Guelinckx, 3=MacAvoy
data=eval(parse(text=name[num]))
data[["beta"]]=2/3# choice of the allometric coefficient
lower<-c(0.001,0.001,0)#lower boundary for the parameter ri,ro,Ei
upper<-c(0.5,0.5,5)#upper boundary for the parameter ri,ro,Ei
n_best=20 # maximum number of loops used to reach the global minimum (the best estimate)
crit=0.05 # a new best evaluation must improve on the previous best by at least crit
#Call of the function to perform the calibration
#---------------------------
#Calibration of the Isodyn model
#---------------------------
R<-list(NULL)
model=1 # choice of the model; models differ in their assumptions about Ei and Eo
# (choice 1: Ei and Eo are equal in absolute value; choice 2: Eo=0 and Ei positive; choice 3: Eo negative and Ei=0)
data[["model"]]=model
R[[model]]<-BestNM2(dat=data,lower=lower,upper=upper,n_best=n_best,crit=crit)# calibration for IsoDyn
# print R[[1]] to inspect preliminary results on screen
#-----------------------------
#Calibration of the Time model
#-----------------------------
ResTIM <- nls2(SI~TIM(SI0=mean(SI[tSI==0]),SIDiet=SIdiet,L,Delta,t=tSI),
start = expand.grid(L = seq(0.01, 0.05, len = 4),Delta = seq(1, 5, len = 4)), data=data)
y=data$W
x=data$t
ResExpG<-nls2(y~Expgrowth(W0=mean(y[x==0]),k,t=x),start=expand.grid(k = seq(0.05, 0.2, len = 4)))
CVeW<-CV(data=data)[1]# calculation of coefficient of variation for body mass
CVeSI<-CV(data=data)[2]# calculation of coefficient of variation for stable isotopes
#perform bootstrap for parameter se
n_boot=2# number of bootstraps ideal is 500-1000, start with 10 (minimum is 2) but it could last very long with hundreds
ALL<-NULL
ALL<-BOOTNM(dat=data,lower=lower,upper=upper,
n_best=n_best,crit=crit,n_boot=n_boot,CVeW,CVeSI)# number of bootstraps needed to get a stable sd of parameters
ALL<-ALL[ALL[,1]>0,]#filter unsuccessful calibrations
par_isodyn<-R[[model]]$par#ri,ro,Ei,Eo,Cost,k,Lambda_iso,kg_iso,TEF_iso
par_tim<-c(coef(summary(ResExpG))[1,1],coef(summary(ResTIM))[1,1],coef(summary(ResTIM))[2,1])
se_par_tim<-c(coef(summary(ResExpG))[1,2],coef(summary(ResTIM))[1,2],coef(summary(ResTIM))[2,2])
Res_Tab_par<-par_tab(par_isodyn,par_tim,se_par_tim,matBOOT=ALL)# store the results
Res_GOF_model<-GOF2(par=par_isodyn,par2=par_tim,data=data)# calculate the goodness of fit
Plot_Isodyn(data=data,par=list(ri=R[[model]]$par[1],ro=R[[model]]$par[2],Ei=R[[model]]$par[3],Eo=R[[model]]$par[4]),
par2=par_tim,BootALL=ALL,name[num])#plot the results, automatically save the plot
#Save results
save(Res_Tab_par, file=paste(paste(name[num],'_Res_tab_par','Model_',data$model,sep=""),"RData",sep="."))
save(Res_GOF_model, file=paste(paste(name[num],'_Res_GOF','Model_',data$model,sep=""),"RData",sep="."))
save(R, file=paste(paste(name[num],'_BestISO_','Model_',data$model,sep=""),"RData",sep="."))
save(ALL, file=paste(paste(name[num],'_BOOT_','Model_',data$model,sep=""),"RData",sep="."))
|
c3f7b0252ef8ea7f6cb6ddafa18556119676196f | b4f3e2145015d8c207d2414c7cbf90666f15c260 | /umc_cm_uap/process_monitor.R | 995ad377662b1d9f2e9d27d89e642152b1928b8b | [] | no_license | shaomin4/shaomin_research | 1e9ead0d70e4e453d65c0070cfe87a77cc317a44 | e932d577431397e75f23ba6e6fb0f1ce39613be2 | refs/heads/master | 2022-11-28T15:14:47.223439 | 2020-08-10T18:23:17 | 2020-08-10T18:23:17 | 286,543,756 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,176 | r | process_monitor.R | process_monitor = function() {
  # Get the status of system processes from ps output
process_stat = system("ps axo user:30,pid,pcpu,pmem,vsz,rss,tty,stat,time,comm", intern = T)
process_stat1 = process_stat
  # Remove leading whitespace from rows
# process_stat = sub(x = process_stat, pattern = "^\\s+", replacement = "")
  # Collapse multiple spaces into one space
process_stat = gsub(x = process_stat, pattern = "\\s+", replacement = " ")
  # Split each string row into multiple string columns
process_stat = strsplit(process_stat, split = " ")
  # Align process stat rows that have different lengths
process_stat = align_stat(process_stat)
# Convert a list of vector to a list of one row matrix
process_stat_mtx = Map(x = process_stat[-1], f = function(x) matrix(x, 1, length(x)))
# Convert list of vectors to a data frame
process_stat_df = as.data.frame(do.call(rbind, process_stat[-1]))
# Convert data.frame columns from factor to characters
process_stat_df[] <- lapply(process_stat_df, as.character)
# Set column names
colnames(process_stat_df) = process_stat[[1]]
# Process time column
exec_more_than_24_hours_idx = which(grepl(pattern = "[0-9]*-.*", x = process_stat_df$`TIME`))
setdiff(seq_len(nrow(process_stat_df)), exec_more_than_24_hours_idx)
process_stat_df$HOURS = sapply( X = process_stat_df$`TIME`,
FUN = function(x) {
# Init days
days = 0
# If contain days format in times
if(grepl(pattern = "[0-9]*-.*", x = x)) {
splitted_time = strsplit(x = x, split = "-")[[1]]
days_str = splitted_time[1]
# Update format of times
x = splitted_time[2]
days = as.integer(days_str)
}
splitted_time = strsplit(x = x, split = ":")[[1]]
hours_str = splitted_time[1]
hours = as.integer(hours_str) + days*24
return(hours)
})
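  # e.g. a ps TIME of "1-02:33:44" yields 26 hours (1 day + 2 h),
  # while "00:05:10" yields 0 (illustrative inputs only)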
return(process_stat_df)
}
align_stat = function(stat) {
  # Get the length of each stat row
stat_len = sapply(X = stat, FUN = length)
# Get minimum of them
min_len = min(stat_len)
# Find index which length of state larger than minimum
need_2_be_aligned_idx = which(stat_len > min_len)
for(i in need_2_be_aligned_idx) {
# Get excess index
excess_idx = setdiff(x = seq_len(stat_len[i]),
y = seq_len(min_len))
    # Combine the excess columns into one string, then append it to the last kept column
stat[[i]][min_len] = paste(stat[[i]][min_len],
paste(stat[[i]][excess_idx], collapse = " "),
sep = " ")
# Take unnecessary columns away
stat[[i]] = stat[[i]][seq_len(min_len)]
}
return(stat)
}
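# Illustration with a fabricated ps output list where one command name contains a space:
# align_stat(list(c("root", "1", "bash"), c("root", "2", "login", "shell")))
# # -> list(c("root", "1", "bash"), c("root", "2", "login shell"))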
|
8c77184daa2f4c04840575f01369bf37921912a7 | 1ec8c3599acff2bc8887c2f18cbf457a3d155edd | /R/get_shotchart.R | 34ca80a06aaf40d34594e7cbafc9de9b37179690 | [] | no_license | PatrickChodowski/NBAr | 157057b9f8d81e789a683c88f27dc069c4ef619d | 8f43a203ef0d8a92f69164a5e069c41fdeb3ff83 | refs/heads/master | 2021-07-07T00:18:28.152553 | 2021-04-18T10:39:04 | 2021-04-18T10:39:04 | 73,116,183 | 6 | 6 | null | 2021-04-18T10:39:04 | 2016-11-07T20:06:29 | R | UTF-8 | R | false | false | 5,269 | r | get_shotchart.R | #' Download shotchart data for player
#'
#' Downloads and processes NBA.com shotchart data for given player and season.
#' @param player_id Player's ID in NBA.com DB
#' @param season Number of the year in which season started
#' @param context_measure Specify which value group you want to download. c('PTS','FGA','FGM','FG_PCT','FG3M','FG3A','FG3_PCT','PF',
#' 'EFG_PCT','TS_PCT','PTS_FB','PTS_OFF_TOV','PTS_2ND_CHANCE')
#' @param per_mode Specify if you want data divided per game or totals. Default parameter is "PerGame". c("PerGame","Totals")
#' @param season_type Choose data for preseason, regular season or postseason. Default parameter is "Regular Season". c("Regular Season","Playoffs","Pre Season","All Star")
#' @param season_segment Choose season half for the data. Empty string means whole season and it is set by default. c("","Post All-Star","Pre All-Star")
#' @param game_segment Choose game half for the data. Empty string means whole game and it is set by default. c("","First Half","Overtime","Second Half")
#' @param period Choose game period for the data. 0 means whole game and it is set by default. as.character(c(0:4))
#' @param date_from Day from which data will be collected. It is set in MM/DD/YYYY format and by default is not specified, so data is calculated for whole season.
#' @param date_to Day to which data will be collected. It is set in MM/DD/YYYY format and by default is not specified, so data is calculated for whole season.
#' @param outcome Filter by game result. It can be a loss (L) or a win (W). By default parameter is an empty string, so both are taken into account. c("","W","L")
#' @param opponent_team_id Filter by opponent's team id from nba.com database. Default "0" means all teams.
#' @param verbose Default TRUE - prints additional information
#'
#'
#' @return Dataset containing shot information such as location on the floor, type of shot and result per player and game.
#'
#' @author Patrick Chodowski, \email{Chodowski.Patrick@@gmail.com}
#' @keywords NBAr, Shotchart, players
#'
#' @examples
#'
#'
#' context_measure <- c('PTS','FGA','FGM','FG_PCT','FG3M','FG3A','FG3_PCT','PF',
#' 'EFG_PCT','TS_PCT','PTS_FB','PTS_OFF_TOV','PTS_2ND_CHANCE')
#' game_segment <- c('','First+Half','Overtime','Second+Half')
#' opponent_team_id <- 0
#' per_mode <- c("PerGame","Totals")[1]
#' period <- as.character(c(0:4))[1]
#' date_from <- "01/01/2017"
#' date_to <- "04/30/2017"
#' season_type <- c("Regular+Season","Playoffs","Pre+Season","All+Star")[1]
#' season_segment <- c("","Post+All-Star","Pre+All-Star")[1]
#' outcome <- c("","W","L")[1]
#'
#' get_shotchart(2853,2014)
#'
#'
#' @export get_shotchart
#' @importFrom lubridate second minute
#' @import dplyr
#' @import tidyr
#' @import httr
#' @importFrom purrr set_names
#' @import tibble
#' @importFrom glue glue
#' @importFrom magrittr %>%
#' @importFrom jsonlite fromJSON
#'
get_shotchart <- function( player_id,
season,
context_measure = "FGA",
date_from = "",
date_to = "",
game_segment = "",
period = "0",
per_mode = "PerGame",
outcome = "",
season_type = "Regular+Season",
season_segment= "",
opponent_team_id= "0",
verbose=TRUE
){
tryCatch({
season_id <- paste(season, as.numeric(substring(season,3,4))+1,sep="-")
link <- glue("https://stats.nba.com/stats/shotchartdetail?CFID=33&CFPARAMS={season_id}&ContextFilter=",
"&ContextMeasure={context_measure}",
"&DateFrom={date_from}",
"&DateTo={date_to}",
"&GameID=&GameSegment={game_segment}",
"&LastNGames=0&LeagueID=00&Location=&MeasureType=Base&Month=0&OpponentTeamID={opponent_team_id}",
"&Outcome={outcome}",
"&PaceAdjust=N&PerMode={per_mode}",
"&Period={period}",
"&PlayerID={player_id}",
"&SeasonType={season_type}",
"&TeamID=0&VsConference=&VsDivision=&mode=Advanced&showDetails=0&showShots=1&showZones=0&RookieYear=",
"&SeasonSegment={season_segment}",
"&PlayerPosition=")
verbose_print(verbose, link)
result_sets_df <- rawToChar(GET(link, add_headers(.headers = c('Referer' = 'http://google.com',
'User-Agent' = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
'connection' = 'keep-alive',
'Accept' = 'application/json',
'Host' = 'stats.nba.com',
'x-nba-stats-origin'= 'stats')))$content) %>% fromJSON()
index <- which(result_sets_df$resultSets$name == "Shot_Chart_Detail")
dataset <- result_sets_df$resultSets$rowSet[index][1] %>%
as.data.frame(stringsAsFactors=F) %>%
as_tibble() %>%
mutate_if(check_if_numeric, as.numeric) %>%
set_names(tolower(unlist(result_sets_df$resultSets$headers[index])))
verbose_dataset(verbose, dataset)
return(dataset)}, error=function(e) print(e$message))
}
|
3e260468eaddb249e127f626c6b2219b4a62c93f | cf0d12010f7863fd6ac85c370ef6f7318eec54bd | /rankall.R | 660d7e281abec132583cfe658b4014736fc514ea | [] | no_license | jorgenquaade/ProgAssignment3 | 278430ce760f72cdfa58eca2d98ac7b346934810 | fc3cf9b10cbe547e685287425e86b2a16dc509a7 | refs/heads/master | 2021-01-10T09:20:36.253103 | 2015-10-02T10:09:38 | 2015-10-02T10:09:38 | 43,416,663 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,502 | r | rankall.R | ## The function reads the outcome-of-care-measures.csv file and returns
## a 2-column data frame containing the hospital in each state that has
## the ranking specified in num.
## For readability a lot of the statements are broken into multiple lines
rankall <- function(outcome, num = "best") {
## Initialize parameters we want to be sure are dataframes
inputData <- data.frame()
stateData <- data.frame()
orderedData <- data.frame()
outputData <- data.frame(matrix(0, ncol = 2, nrow = 54))
colnames(outputData) <- c("hospital", "state")
rank<-0
numhosp<-0
## Read the input data
inputData <- read.csv("outcome-of-care-measures.csv",
na.strings = c("Not Available"), colClasses = "character")
## Check that outcome are valid
if (outcome == "heart attack")
colnum<-11
else if (outcome == "heart failure")
colnum<-17
else if (outcome == "pneumonia")
colnum<-23
else
stop("invalid outcome")
## Create vector of states
stateVec<-unique(inputData[, 7])
## order the statevec alphabetically and assign to outputData
stateVec <- sort(stateVec)
outputData$state <- stateVec
## Use a for loop to create dataframe containing state and hospital
## chosen from outcome and rank
## initialize index into output dataframe outputData
rownum<-1
for (i in stateVec) {
## Subset data according to input parameter state
stateData <- subset(inputData, inputData[,7]==i)
## Now stateData needs to be ordered by rank after mortality rates
## but first mortality rates must be converted to numerics
stateData[,colnum] <- as.numeric(stateData[,colnum])
## Order the rankedData after rate and hospitalname so that we can return the
## hospital with the name that comes first alphabetically
orderedData <- stateData[order(stateData[,colnum],stateData[,2]),]
## get the number of hospitals in state
numhosp <- length(orderedData[,2])
if (num == "best"){
outputData$hospital[rownum] <- orderedData[1,2]
outputData$state[rownum] <- orderedData[1,7]
}
else if (num == "worst"){
orderedData <- stateData[order(stateData[,colnum], stateData[,2], decreasing = TRUE),]
outputData$hospital[rownum] <- orderedData[1,2]
outputData$state[rownum] <- orderedData[1,7]
}
else if ((num > 0) & (num <= numhosp)){
outputData$hospital[rownum] <- orderedData[num,2]
outputData$state[rownum] <- orderedData[num,7]
}
else {
outputData$hospital[rownum] <- NA
outputData$state[rownum] <- i
}
rownum<-rownum+1
}
outputData
} |
9808a70e60bde4f10cd7ef4e20ed23a7b3f864aa | 21546434d48fb0cf84144df0082b0f0b90c5e6a1 | /inst/pabloGift/ui.R | 64e269fa8924ea526d59cb8bebbba7f2677180cd | [] | no_license | lcolladotor/pablo2013 | 998d6dd82e5e5b5533219f1264932a5452805585 | d31ebbd4fdd443adba6f5071d5e88d6711396073 | refs/heads/master | 2021-01-21T00:43:35.697202 | 2014-01-01T01:17:21 | 2014-01-01T01:17:21 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,531 | r | ui.R | ## Setup
source("server.R")
## Specify layout
shinyUI(pageWithSidebar(
	headerPanel(HTML("Pablo's 2013 Christmas gift (code at <a href='https://github.com/lcolladotor/pablo2013'>GitHub</a>)"), "pablo2013"),
sidebarPanel(
## Construct input options
## Choose the data
h4("Answer the questions"),
selectInput("anime1", "Best describes C.C. from Code Geass", c("Choose an answer", "Kuudere", "Tsundere", "Yandere", "Dandere")),
selectInput("anime2", "Traveled the stars for 12,000 years", c("Choose an answer", "Yoko Littner", "Noriko Takaya", "Asuka Langley", "Ranka Lee", "Nono")),
selectInput("history1", "Synonym of the Edo period", c("Choose an answer", "Heian", "Heisei", "Meiji", "Tokugawa", "Nara")),
tags$hr(),
sliderInput("code1", "When i = 5, what is the value of j? (post evaluation of j)", min=0, max=10, value=0),
sliderInput("code2", "What is the final value of j?", min=0, max=10, value=0),
tags$hr()
),
mainPanel(
tabsetPanel(
## Summary of the data. This is faster to load than the visualizations hence why I am showing this first..
tabPanel("Code",
h4("Code"),
HTML('<script src="https://gist.github.com/lcolladotor/8203492.js"></script>')
),
tabPanel("Check your answers",
h4("Code answers"),
verbatimTextOutput("runCode"),
h4("Answers detail"),
tableOutput('answers'),
h4("Number of correct answers"),
verbatimTextOutput('total'),
h4("Percent of correct answers"),
verbatimTextOutput('percent')
)
)
)
))
|
9a055187bb89765484c1d01f4e4b9d4a1653b33c | 7bbbb8a9ad9e725cf1d49b5137da16e4e64ad512 | /R/practice.R | 73a2a48626b819ee2ba72d9c071141d4168caec2 | [
"MIT"
] | permissive | pmcharrison/mdt | 56c5d5c75516faf88a608cf12f2349259d6ae6dc | f86659c00c776e87bfc7e4d7302b796160d30bb8 | refs/heads/master | 2023-08-07T10:18:17.518197 | 2023-07-26T21:26:37 | 2023-07-26T21:26:37 | 138,870,343 | 0 | 6 | NOASSERTION | 2021-12-13T11:45:19 | 2018-06-27T11:07:07 | R | UTF-8 | R | false | false | 1,019 | r | practice.R | practice <- function(media_dir) {
unlist(lapply(
list(list(id = "ex2", answer = "3"),
list(id = "ex3", answer = "1")
),
function(x) {
list(
psychTestR::audio_NAFC_page(
label = "practice_question",
prompt = psychTestR::i18n("AMDI_0013_I_0001_1"),
url = file.path(media_dir, "examples", paste0(x$id, ".mp3")),
choices = c("1", "2", "3"),
arrange_choices_vertically = FALSE,
save_answer = FALSE
),
psychTestR::reactive_page(function(answer, ...) {
psychTestR::one_button_page(shiny::div(
shiny::p(shiny::HTML(psychTestR::i18n(
if (answer == x$answer) "AMDI_0006_I_0001_1" else "AMDI_0010_I_0001_1"))),
if (answer != x$answer) {
shiny::p(shiny::HTML(psychTestR::i18n(
if (x$answer == "3") "AMDI_0012_I_0001_1" else "AMDI_0007_I_0001_1"
)))
}), button_text = psychTestR::i18n("AMDI_0016_I_0001_1"))}))}))}
|
9796de97227c11b43785f283a2d54d1d7f118e00 | b5f70d6a908b52f5937567d770bf14ee3a7a064d | /man/write_circos_links.Rd | e80f48eedba12c4a43fac789887c07ee334c26f6 | [] | no_license | cran/SanzCircos | 271eae80217650fe203918f4a514f7e7033ce08c | f369ca3a85ab99477f79fa671c742e84ad57b42c | refs/heads/master | 2020-03-15T11:48:00.992187 | 2018-05-04T09:52:54 | 2018-05-04T09:52:54 | 132,128,549 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,245 | rd | write_circos_links.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/write_circos_links.R
\name{write_circos_links}
\alias{write_circos_links}
\title{write_circos_links}
\usage{
write_circos_links(df, include_colors = FALSE, file_name = "links.txt",
file_path = NULL)
}
\arguments{
\item{df}{A data frame in the format of those returned by the `make_circos_links` function}
\item{include_colors}{Include colors generated by the `color_circos_links` function in the write file}
\item{file_name}{The desired file name. Defaults to links.txt in the current working directory}
\item{file_path}{The desired file path destination folder. Defaults to NULL}
}
\value{
Writes a Circos-compatible links file to the desired directory
}
\description{
A function that takes a data frame in the format of those returned by the `make_circos_links` function, and writes a "links"
file for Circos plotting
}
\examples{
df <- data.frame(lin_id = c(1,2), chr1 = c(1,1), band1 = c(1,1),
chr1_start = c(1,5), chr1_end = c(5,8),
n1 = c(5,3), chr2 = c(1,2), band2 = c(2,1),
chr2_start = c(8,1), chr2_end = c(13,5), n2 = c(5,5))
write_circos_links(df = df, file_name = "links.txt", file_path = tempdir())
}
\author{
Matthew Woodruff, Emory University
}
|
b781af8da18e305f6ebf76794a3078d544e3681e | 2c7bd9adeaa62c3a2168d32ba89e32c356904ad5 | /census_tract/Join_data.R | 0ae4c208b47b5563b4cf10099b4848e9f503b428 | [] | no_license | ankur715/Starbucks | 1f51fd21cfb96b88dd1d507d23fb03fc5ec09e21 | 8d7f4c9be1c12479dd67f820a6e4eb79469286f1 | refs/heads/master | 2022-12-01T11:29:55.277288 | 2020-08-15T04:27:23 | 2020-08-15T04:27:23 | 278,781,944 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,353 | r | Join_data.R | library(plyr)
library(tidyverse)
library(ggplot2)
library(stringr)
library(lubridate)
library(zoo)
options(scipen=999)
setwd("~/Desktop/kailua")
###################################
#df <- read_csv('data_1.csv')
df <- read_csv('not_clean_data.csv')
df2 <- read_csv('PHKO_2_2017_2018.csv') #transtit website
df2$X1 <- NULL
df$X1 <- NULL
#############
df2$DATE <- as.Date(df2$cut_date)
df2$Time <- format(df2$cut_date,"%H:%M:%S")
df2$Hour <- format(df2$cut_date, "%H" )
df2$Min <- format(df2$cut_date, "%M" )
df2$TIME_HST <- paste(df2$Hour,"",df2$Min, sep = "")
df2$TIME_HST <- as.numeric(df2$TIME_HST)
df2 <- df2[,c(9,13,2,3,4,5,6,7,8)]
colSums(is.na(df2))
df22 <- na.locf(df2, na.rm = FALSE,na.remaining = "keep", maxgap = 8)
colSums(is.na(df22))
###################
#not_in = anti_join(df,df22, by = c('DATE', 'TIME_HST'))
############################
df_all <- left_join(df,df22, by = c('DATE','TIME_HST'))
colSums(is.na(df_all))
na <- df_all[!complete.cases(df_all),]
##################################
#df55 <- na.locf(df_all, na.rm = FALSE,na.remaining = "keep", maxgap = 8)
#colSums(is.na(df55))
#na2 <- df55[!complete.cases(df55),]
#df60 <- drop_na(df55)
#################3
write.csv(df_all, 'master_no_clean_data.csv')
################
ff = filter(df_all, Year == 2019)
write.csv(ff, '2019_q1_q2.csv')
|
69a6ad26e797ddd76863fcf63a5b12242cfc20a6 | 78d422ce7380540b1573ff19f3400e058fbfde55 | /run_analysis.R | b1bfe855ee440622c66991b21652f74cf7d97b30 | [] | no_license | egelliott3/GettingAndCleaningData | 9cfd4e0b53431241ee1269e944d74f351bca3b41 | cfb949f99a1c95e724bc3d7525cab7bd755f1549 | refs/heads/master | 2020-04-17T06:44:40.499221 | 2016-09-11T03:15:36 | 2016-09-11T03:15:36 | 67,902,660 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,347 | r | run_analysis.R | library(data.table)
library(dplyr)
## Assumes all data has already been extracted to wd from the internal zip directory
## \UCI HAR Dataset. More specifically the wd should contain the files activity_labels.txt,
## features.txt, etc.
##
## Ignoring the Inertial Signals data per forum recommendations since it is not relevant
## to the calcuations we need to perform.
##
wdPath = getwd()
## Set root paths to the test and train directories relative to the working directory
testPath = file.path(wdPath, "test")
trainPath = file.path(wdPath, "train")
## Load subject files
dtTestSubject = read.table(file.path(testPath, "subject_test.txt"))
dtTrainSubject = read.table(file.path(trainPath, "subject_train.txt"))
## Load activity files
dtTestActivity = read.table(file.path(testPath, "y_test.txt"))
dtTrainActivity = read.table(file.path(trainPath, "y_train.txt"))
## Load data files
dtTestData = read.table(file.path(testPath, "X_test.txt"))
dtTrainData = read.table(file.path(trainPath, "X_train.txt"))
## Load features
dtFeatures = read.table(file.path(wdPath, "features.txt"))
setnames(dtFeatures, names(dtFeatures), c("featureId", "featureName"))
## Merge test and train sets for subject, activity and data respectively
dtSubjectMerged = rbind(dtTestSubject, dtTrainSubject)
setnames(dtSubjectMerged, "V1", "subjectId")
dtActivitytMerged = rbind(dtTestActivity, dtTrainActivity)
setnames(dtActivitytMerged, "V1", "activityId")
dtDataMerged = rbind(dtTestData, dtTrainData)
## Create a single dataset from the three we have
dtWorkingSet = cbind(cbind(dtSubjectMerged, dtActivitytMerged), dtDataMerged)
## Cleanup environment to reclaim memory
rm(dtTestSubject)
rm(dtTrainSubject)
rm(dtTestActivity)
rm(dtTrainActivity)
rm(dtTestData)
rm(dtTrainData)
rm(dtSubjectMerged)
rm(dtActivitytMerged)
rm(dtDataMerged)
## This leaves us with dtWorkingSet and dtFeatures remaining
## Filter the features data table down to only the mean() and std() entries
dtMeanStdFeatures = dtFeatures[grepl("mean\\(\\)|std\\(\\)", dtFeatures$featureName),]
## Update formatting of the featureIds to include a V to match our combinded working set
dtMeanStdFeatures = mutate(dtMeanStdFeatures, featureCode = paste0("V", featureId))
## Subset down to just the columns we want based on filtered features
dtWorkingSet = select_(dtWorkingSet, .dots=c("subjectId", "activityId", dtMeanStdFeatures$featureCode))
## Load activity labels so we can translate ids to names
dtActivityLabels = read.table(file.path(wdPath, "activity_labels.txt"))
setnames(dtActivityLabels, names(dtActivityLabels), c("activityId", "activityName"))
## Join our activity names into our working set based on activityId
dtWorkingSet = inner_join(dtWorkingSet, dtActivityLabels)
## Unpivot the dtWorkingSet so we have a feature and value per row
dtWorkingSet = melt(dtWorkingSet, c("subjectId", "activityId", "activityName"), variable.name="featureCode")
## Since our working format matches the feature format, join
dtWorkingSet = left_join(dtWorkingSet, dtMeanStdFeatures)
## Cleanup
rm(dtActivityLabels)
rm(dtFeatures)
rm(dtMeanStdFeatures)
## Parse featureName into columns that make sense based on the documentation
## contained in features_info.txt
## Create new domain column with values Time or Frequency
dtWorkingSet = mutate(dtWorkingSet, domain = factor(grepl("^t", dtWorkingSet$featureName), labels=c("Frequency", "Time")))
## Create new instrument column with values for Gyroscope or Accelerometer
dtWorkingSet = mutate(dtWorkingSet, instrument = factor(grepl("Acc", dtWorkingSet$featureName), labels=c("Gyroscope", "Accelerometer")))
## Create new acceleration column with values for Body or Gravity
dtWorkingSet = mutate(dtWorkingSet, acceleration = factor(ifelse(grepl("Acc", dtWorkingSet$featureName), ifelse(grepl("Body", dtWorkingSet$featureName), 1, 2), 0), labels=c(NA, "Body", "Gravity")))
## Create new isJerk column with binary values
dtWorkingSet = mutate(dtWorkingSet, isJerk = grepl("Jerk", dtWorkingSet$featureName))
## Create new variable column with values for Mean and StandardDeviation
dtWorkingSet = mutate(dtWorkingSet, variable = factor(grepl("mean\\(\\)", dtWorkingSet$featureName), labels=c("StandardDeviation", "Mean")))
## Create new isMagnitue column with binary values
dtWorkingSet = mutate(dtWorkingSet, isMagnitude = grepl("Mag", dtWorkingSet$featureName))
## Create new axis column with values for X, Y and Z
dtWorkingSet = mutate(dtWorkingSet, axis = factor(ifelse(grepl("-X", dtWorkingSet$featureName), 0, ifelse(grepl("-Y", dtWorkingSet$featureName), 1, ifelse(grepl("-Z", dtWorkingSet$featureName), 2, 3))), labels=c("X", "Y", "Z", NA)))
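## Worked example of the parsing above for a typical feature name:
## "tBodyAcc-mean()-X" -> domain=Time, instrument=Accelerometer, acceleration=Body,
## isJerk=FALSE, variable=Mean, isMagnitude=FALSE, axis=X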
## Pare down the set to only include the columns we are interested in
dtWorkingSet = select(dtWorkingSet, subjectId, activityName, domain, instrument, acceleration, isJerk, isMagnitude, variable, axis, value)
## Aggregate the working set into a tidy set with the group row count and mean of
## the value
groupBy = group_by(dtWorkingSet, subjectId, activityName, domain, instrument, acceleration, isJerk, isMagnitude, variable, axis)
dtTidy = summarize(groupBy, count = n(), average = mean(value, na.rm = TRUE))
write.table(dtTidy, "./Tidy.txt", row.name=FALSE) |
9ad47336e6c374457afa61818cd82fe27b29f103 | 18edc48a0af4e4a492a994d18f8b1bdc74295f4b | /R/getPhenoScannerData.R | 09e93e982267f4831b30906327e811d1663a2cac | [
"MIT"
] | permissive | Gbau08/openCPUTest | f912b2f67b5fe873f142d39576c864443008ee47 | 8e0e77416f2ad05d881bc6dc646b52244606aca3 | refs/heads/master | 2020-06-09T19:44:40.247750 | 2019-07-30T15:31:42 | 2019-07-30T15:31:42 | 193,495,674 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,834 | r | getPhenoScannerData.R | ###################################################################
## PhenoScanner Query ##
## ##
## James Staley ##
## Email: james.staley@ucb.com ##
###################################################################
###################################################################
##### Set-up #####
###################################################################
pathToLoad = "/usr/local/src/app/data/map.Robj"
options(stringsAsFactors=F)
suppressMessages(library(phenoscanner))
suppressMessages(library(ddpcr))
suppressMessages(library(foreach))
suppressMessages(library(doParallel))
phenoscanner_snp <- function(rsid){
quiet(query_snp <- phenoscanner(snpquery=rsid, catalogue="None")$snps, all=FALSE)
if(nrow(query_snp)>0){
query_snp$nearest_gene <- query_snp$hgnc
query_snp$nearest_gene[query_snp$nearest_gene=="-"] <- query_snp$ensembl[query_snp$nearest_gene=="-"]
query_snp$nearest_gene[query_snp$nearest_gene=="-"] <- "N/A"
query_snp <- query_snp[,c("rsid", "hg19_coordinates", "a1", "a2", "eur", "consequence", "nearest_gene")]
names(query_snp) <- c("snpid", "hg19_coordinates", "effect_allele", "other_allele", "effect_allele_frequency", "variant_function", "nearest_gene")
}
cat(" ",rsid,"-- SNP\n")
return(query_snp)
}
phewas <- function(rsid){
## PhenoScanner SNP-trait look-up
phenoscanner_query <- function(rsid, type="GWAS"){
suppressMessages(library(phenoscanner))
# Sleep
if(type=="GWAS"){Sys.sleep(0)}; if(type=="pQTL"){Sys.sleep(0.15)}; if(type=="mQTL"){Sys.sleep(0.3)}; if(type=="eQTL"){Sys.sleep(0.45)}
# PhenoScanner look-up
query_results <- phenoscanner(snpquery=rsid, catalogue=type, pvalue=1)$results
# Process results
if(nrow(query_results)>0){
if(type=="GWAS"){
# library(dplyr)
load(pathToLoad)
query_results <- merge(query_results, map, by=c("dataset", "trait"), all.x=T, sort=F)[,union(names(query_results), names(map))]
# query_results <- left_join(query_results, map, by=c("dataset", "trait"))
query_results <- query_results[!(!is.na(query_results$keep) & query_results$keep=="N"),]; query_results$keep <- NULL
query_results$category[is.na(query_results$category)] <- "Unclassified"
query_results <- query_results[,c(1:8,23,9:22)]
}
if(type=="pQTL"){
query_results$category <- "Proteins"
query_results$n_cases <- 0
query_results$n_controls <- query_results$n
query_results <- query_results[,c(1:8,21,9:17,22,23,18:20)]
}
if(type=="mQTL"){
query_results$category <- "Metabolites"
query_results$n_cases <- 0
query_results$n_controls <- query_results$n
query_results <- query_results[,c(1:8,21,9:17,22,23,18:20)]
}
if(type=="eQTL"){
query_results$trait <- paste0("mRNA expression of ", query_results$exp_gene)
query_results$trait[query_results$trait=="mRNA expression of -"] <- paste0("mRNA expression of ", query_results$exp_ensembl[query_results$trait=="mRNA expression of -"])
query_results$trait[query_results$trait=="mRNA expression of -"] <- paste0("mRNA expression of ", query_results$probe[query_results$trait=="mRNA expression of -"])
query_results$trait <- paste0(query_results$trait, " in ", tolower(query_results$tissue))
query_results <- query_results[,!(names(query_results) %in% c("tissue", "exp_gene", "exp_ensembl", "probe"))]
query_results$category <- "mRNA expression"
query_results$n_cases <- 0
query_results$n_controls <- query_results$n
query_results <- query_results[,c(1:8,21,9:17,22,23,18:20)]
}
}
return(query_results)
}
foreach(type=c("GWAS", "pQTL", "mQTL", "eQTL"),
.combine = rbind) %dopar%
phenoscanner_query(rsid, type)
}
###################################################################
##### PhenoScanner Query #####
###################################################################
##### API Query #####
getPhenoScannerData <- function(rsid){
suppressMessages(library(phenoscanner))
suppressMessages(library(ddpcr))
suppressMessages(library(foreach))
suppressMessages(library(doParallel))
#rsid = "rs140463209"
snpinfo <- phenoscanner_snp(rsid)
### PheWAS
if(nrow(snpinfo)>0){
## Parallelize
cl<-makeCluster(4)
registerDoParallel(cl)
results <- phewas(rsid)
stopCluster(cl)
## Process results
if(nrow(results)>0){
results$priority <- 0
results$priority[results$category=="Immune system" | results$category=="Neurodegenerative" | results$category=="Neurological"] <- 1
results <- results[results$dataset!="GRASP",]
results$direction[results$direction=="-"] <- "minus"
results[results=="NA"] <- "N/A"; results[results=="-"] <- "N/A"
results$direction[results$direction=="minus"] <- "-"
results <- results[results$p!="N/A",]
results <- results[order(-results$priority, as.numeric(results$p)),]; results$priority <- NULL
names(results)[names(results)=="snp"] <- "snpid"
names(results)[names(results)=="a1"] <- "effect_allele"; names(results)[names(results)=="a2"] <- "other_allele"
}
cat(" ",rsid,"-- PheWAS\n")
}else{
results <- data.frame()
}
##### JSON #####
combined <- list(snps=snpinfo,results=results)
return(combined)
}
# getPhenoScannerData("rs140463209")
##### Save #####
#write(combined, file=paste0(rsid,".json"))
#
###### Timing #####
#cat(" Time taken:",as.numeric((proc.time()-ptm)[3]),"secs\n")
#
###### Exit #####
#q("no") |
efb8af09d1ec947a5c60bd4ca7a54ef7b8e67d3c | db0dcb5614d7cb05094fe5f70a3f2cf0f01f256b | /archive/M3.GT.R | e922365e9ba2f7426b527680b9935a4dd312ea4d | [] | no_license | TGDrivas/WES.WGS | 5ae75641675bccd781332e72e1f9ebf1918fda19 | 785e1ae54b8833f4fb1999f6a2d8f16550f86f8a | refs/heads/main | 2023-08-27T21:03:50.509171 | 2021-10-26T01:57:01 | 2021-10-26T01:57:01 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,507 | r | M3.GT.R | #the following 4 lines need to be adjusted to your folders
pathGT<-"/path/to/M3/M3.GT.chr"
pathGTExclusion<-"/path/to/M3.2/M3.2.GT.chr"
pathCol<-"/path/to/columns.txt"
pathLong<-"/path/to/M3/M3.long.chr"
pathOut<-"/path/to/M3/finalMask/"
pathOutExclusion<-"/path/to/M3.2/finalMask/"
library(tidyverse)
#funtion to merge the genotypes
mergeGT<-function(GT){
nGT<-length(GT)
if("1/1" %in% GT | "1|1" %in% GT){
return("1/1")
}
if("1/0" %in% GT | "0/1" %in% GT | "1/." %in% GT | "./1" %in% GT | "1|0" %in% GT | "0|1" %in% GT | "1|." %in% GT | ".|1" %in% GT){
return("0/1")
}
return("0/0")
}
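#quick illustration of the precedence implemented above (hom-alt beats het beats ref);
#the genotype vectors are made-up examples:
# mergeGT(c("0/0", "0|1", "0/0"))  # "0/1"
# mergeGT(c("0/1", "1/1"))         # "1/1"
# mergeGT(c("0/0", "./."))         # "0/0"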
columns<-scan(pathCol, what=character())
for(i in 1:22){
#the masks with gnomad/esp only
M3.GT <- read.table(paste0(pathGT,i,".txt"), quote="\"", comment.char="")
colnames(M3.GT)[2:ncol(M3.GT)]<-columns[10:length(columns)]
colnames(M3.GT)[1]<-"variants"
M3.long<-read.table(paste0(pathLong,i,".txt"), quote="\"", comment.char="")
colnames(M3.long)<-c("gene", "variants")
#tmp M3 file to work on to merge genotypes
M3.tmp<-merge(M3.long, M3.GT)
M3.tmp$gene<-droplevels(M3.tmp$gene)
nVar<-which(colnames(M3.tmp)=="variants")
M3.tmp<-M3.tmp[,-nVar]
M3.tmp<-M3.tmp %>%
group_by(gene) %>%
summarise(across(columns[10:length(columns)],mergeGT))
colnames(M3.tmp)[1]<-"ID"
#this is the vcf
nGene<-nrow(table(M3.tmp$ID))
M3<-data.frame("#CHROM"=rep(paste0("chr",i), nGene), POS=c(1:nGene), ID=names(table(M3.tmp$ID)), REF=rep("C",nGene), ALT=rep("A",nGene), QUAL=rep(".", nGene), FILTER=rep(".", nGene), INFO=rep(".", nGene), FORMAT=rep("GT", nGene))
colnames(M3)[1]<-"#CHROM"
M3<-merge(M3,M3.tmp)
M3<-M3 %>% relocate("#CHROM", "POS")
#this removes rows that are all 0/0 or all 1/1, since such rows carry no variants; this is an artefact of subsetting on ancestry
#(only the genotype columns are tested -- the first 9 columns are fixed VCF fields, so including them would make the filter a no-op)
rows.to.keep<-apply(M3[,10:ncol(M3)], 1, function(r) {!(all(r == "1/1") | all(r == "0/0"))})
M3<-M3[which(rows.to.keep),]
write.table(M3, paste0(pathOut,"M3.chr",i,".txt"),row.names=FALSE,sep="\t", quote = FALSE)
#the masks with the merged exclusion list too
M3.2.GT <- read.table(paste0(pathGTExclusion,i,".txt"), quote="\"", comment.char="")
colnames(M3.2.GT)[2:ncol(M3.2.GT)]<-columns[10:length(columns)]
colnames(M3.2.GT)[1]<-"variants"
M3.2.long<-read.table(paste0(pathLong,i,".txt"), quote="\"", comment.char="")
colnames(M3.2.long)<-c("gene", "variants")
#tmp M3.2 file to work on to merge genotypes
M3.2.tmp<-merge(M3.2.long, M3.2.GT)
M3.2.tmp$gene<-droplevels(M3.2.tmp$gene)
nVar<-which(colnames(M3.2.tmp)=="variants")
M3.2.tmp<-M3.2.tmp[,-nVar]
M3.2.tmp<-M3.2.tmp %>%
group_by(gene) %>%
summarise(across(columns[10:length(columns)],mergeGT))
colnames(M3.2.tmp)[1]<-"ID"
#this is the vcf
nGene<-nrow(table(M3.2.tmp$ID))
M3.2<-data.frame("#CHROM"=rep(paste0("chr",i), nGene), POS=c(1:nGene), ID=names(table(M3.2.tmp$ID)), REF=rep("C",nGene), ALT=rep("A",nGene), QUAL=rep(".", nGene), FILTER=rep(".", nGene), INFO=rep(".", nGene), FORMAT=rep("GT", nGene))
colnames(M3.2)[1]<-"#CHROM"
M3.2<-merge(M3.2,M3.2.tmp)
M3.2<-M3.2 %>% relocate("#CHROM", "POS")
#this removes rows that are all 0/0 or all 1/1, since such rows carry no variants; this is an artefact of subsetting on ancestry
#(only the genotype columns are tested -- the first 9 columns are fixed VCF fields)
rows.to.keep<-apply(M3.2[,10:ncol(M3.2)], 1, function(r) {!(all(r == "1/1") | all(r == "0/0"))})
M3.2<-M3.2[which(rows.to.keep),]
write.table(M3.2, paste0(pathOutExclusion,"M3.2.chr",i,".txt"),row.names=FALSE,sep="\t", quote = FALSE)
}
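A minimal illustration of the intended row filter, on a hypothetical genotype-only data frame: rows with no variation across samples are dropped.

```r
gt <- data.frame(s1 = c("0/0", "0/1", "1/1"),
                 s2 = c("0/0", "0/0", "1/1"))
keep <- apply(gt, 1, function(r) {!(all(r == "1/1") | all(r == "0/0"))})
gt[which(keep), ]   # only the middle row survives -- it is the only one with variation
```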
f65a2242b9553fe1fcebc77a717581417fae216e | 2e7cb0c783a87b007ae7b1886ef36c0d9c1a28ad | /man/Quandl_df_fcn_UST.Rd | ff60bea410aa4c8c496584af50a5a64a95e0ecb4 | [] | no_license | cran/ragtop | 4ba2e9e3b6946b72a6de4d8aef23551e6844a858 | 41384b6fd79f24d3ceb89be8c12d009cd143c1ea | refs/heads/master | 2021-01-12T12:53:13.672967 | 2020-03-03T08:00:02 | 2020-03-03T08:00:02 | 69,475,870 | 0 | 3 | null | null | null | null | UTF-8 | R | false | true | 609 | rd | Quandl_df_fcn_UST.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/term_structures.R
\name{Quandl_df_fcn_UST}
\alias{Quandl_df_fcn_UST}
\title{Get a US Treasury curve discount factor function}
\usage{
Quandl_df_fcn_UST(..., envir = parent.frame())
}
\arguments{
\item{...}{Arguments passed to \code{\link{Quandl_df_fcn_UST_raw}}}
\item{envir}{Environment passed to \code{\link{Quandl_df_fcn_UST_raw}}}
}
\value{
A function taking two time arguments, which returns the discount factor from the second to the first
}
\description{
This is a caching wrapper for \code{\link{Quandl_df_fcn_UST_raw}}
}
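The "caching wrapper" pattern described here can be sketched in plain R. This is a hypothetical memoisation sketch, not the package's actual implementation, and the flat 2% rate is made up:

```r
make_cached <- function(build) {
  cache <- NULL
  function(...) {
    if (is.null(cache)) cache <<- build(...)   # build once, then reuse
    cache
  }
}
# stand-in for the raw builder: returns a discount factor function of two times
slow_build <- function() {
  Sys.sleep(1)                                 # pretend this is a network call
  function(t1, t2) exp(-0.02 * (t2 - t1))
}
df_fcn <- make_cached(slow_build)
df_fcn()   # slow: builds and caches the discount factor function
df_fcn()   # fast: returns the cached function
```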
40b1d7a83ee91bbd99cb5f847169d7c6786893bc | 5ca17246b7b882c06b828addc74de1286a5f4b7a | /Code/regression.R | caa1b3097639d4d0d796d5bec50a364cf853d18d | [] | no_license | lyingjia/regression_houseprice | f96d3f6866f7b0da575a27e82f703be1c340dd6f | fb81bc54769535ac134f15b647bfa76288696751 | refs/heads/master | 2023-06-28T23:30:15.208404 | 2021-08-01T03:01:47 | 2021-08-01T03:01:47 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,509 | r | regression.R |
rm(list=ls())
requiredPackages <- c("knitr", "ggplot2", "plyr", "dplyr", "corrplot", "caret", "gridExtra", "scales", "Rmisc", "ggrepel",
"randomForest", "psych", "xgboost", "ggthemes", "mice", "data.table"
# "data.table", "tidyverse", "magrittr", "tibble", "writexl", "haven", "RPostgreSQL", "arsenal", "lubridate", "ggplot2", "shiny",
)
for(p in requiredPackages){
if(!require(p,character.only = TRUE)) install.packages(p)
library(p,character.only = TRUE)
}
# TODO DEFINE PATH
Dir_main = "//auiag.corp/corpdata/nasfiler3-gPIDUnderwritingHO/ACD_Pricing&Analysis/Team/Ally/Learning&Dev/Kaggle"
PROJECT = "regression_houseprice"
# CREATE NEW FOLDER
# dir.create(file.path(Dir_main, PROJECT, "Data"))
# dir.create(file.path(Dir_main, PROJECT, "Code"))
# dir.create(file.path(Dir_main, PROJECT, "Submission"))
############################################
# IMPORT DATA
############################################
setwd(file.path(Dir_main, PROJECT, "Data"))
train <- fread(file = 'train.csv')
test <- fread(file = 'test.csv')
full <- bind_rows(train, test) # bind training & test data
summary(full)
str(full)
ggplot(data=full[!is.na(full$SalePrice),], aes(x=SalePrice)) +
geom_histogram(fill="blue", binwidth = 10000) +
scale_x_continuous(breaks= seq(0, 800000, by=100000), labels = comma)
############################################
# NUMERIC DATA
############################################
numericVars <- which(sapply(full, is.numeric)) #index vector numeric variables
numericVarNames <- names(numericVars) #saving names vector for use later on
cat('There are', length(numericVars), 'numeric variables')
full <- as.data.frame(full)
all_numVar <- full[, numericVars]
cor_numVar <- cor(all_numVar, use="pairwise.complete.obs") #correlations of all numeric variables
#sort on decreasing correlations with SalePrice
cor_sorted <- as.matrix(sort(cor_numVar[,'SalePrice'], decreasing = TRUE))
#select only high correlations
CorHigh <- names(which(apply(cor_sorted, 1, function(x) abs(x)>0.5)))
cor_numVar <- cor_numVar[CorHigh, CorHigh]
corrplot.mixed(cor_numVar, tl.col="black", tl.pos = "lt")
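The |r| > 0.5 selection step can be spot-checked on a toy data frame (hypothetical columns):

```r
set.seed(1)
toy <- data.frame(SalePrice = 1:50,
                  Related   = 1:50 + rnorm(50, sd = 5),   # strongly correlated
                  Noise     = rnorm(50))                  # essentially uncorrelated
cm <- cor(toy, use = "pairwise.complete.obs")
cm_sorted <- as.matrix(sort(cm[, "SalePrice"], decreasing = TRUE))
names(which(apply(cm_sorted, 1, function(x) abs(x) > 0.5)))
# expected to retain "SalePrice" and "Related", but not "Noise"
```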
# check for outliers
ggplot(data=full[!is.na(full$SalePrice),], aes(x=factor(OverallQual), y=SalePrice))+
geom_boxplot(col='blue') + labs(x='Overall Quality') +
scale_y_continuous(breaks= seq(0, 800000, by=100000), labels = comma)
############################################
# IMPORT DATA
############################################
a8406c86e9b1341b26a641cd76cdc161d465eeb7 | 9d0465be8f3fe745a008773290a34e0e34e1cff3 | /gender_add.R | 767940861d4f292d64488390c30922e607cd1133 | [
"MIT"
] | permissive | rikunert/PLoS_API | 8914c7736e09207814084a7440603a819633f067 | 7a9922c78e6a67322cb27aac5e74d0aac55d3761 | refs/heads/master | 2021-01-12T02:00:56.052975 | 2017-11-13T13:58:21 | 2017-11-13T13:58:21 | 78,455,304 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,025 | r | gender_add.R | #I originally intended to include author gender information, but this computation slows down the
#code by a factor of 10. So, I will leave the code here. One could execute it later using the author information.
#custom function for gender assignment
# if(!require(gender)){install.packages('gender')}# article level metrics package
# library(gender)
# gender_assignment <- function(name, year){
#
# #name is complete name, including initials and surname
# #year is an integer, within range prior to that year the author's birth is assumed
#
# #parse name
# name = strsplit(name, split = ' ')[[1]]
#
# tmp_prob_male = lapply(name, gender, years = c(year - 70, year - 25), method = 'ssa')
# tmp_prob_male = do.call(rbind, tmp_prob_male)
#
# if (length(tmp_prob_male[1,2]) > 0 && !is.na(tmp_prob_male[1,2])){#take the first successful gender assignment of a name element if there is such a successful assignment
# if (tmp_prob_male[1,2] < .25 || tmp_prob_male[1,2] > .75){#if gender assignment is relatively certain
# prob_male = round(tmp_prob_male[1,2])#make a decision whether name indicates male or female
# } else {#if gender assignment is uncertain
# prob_male = NaN
# }
# } else {#if gender assignment was unsuccessful
# prob_male = NaN
# }
#
# return(prob_male)
# }
#
# article$gender_first = gender_assignment(tmp_authors_split[1], article$publication_year)
# article$gender_last = gender_assignment(tmp_authors_split[article$author_count], article$publication_year)
# tmp_gender_var = unlist(sapply(tmp_authors_split, gender_assignment, year = article$publication_year))#apply gender assignment to all authors, force output into matrix
# article$gender_var = sd(tmp_gender_var[!is.na(tmp_gender_var)])#standard deviation of gender assignment in this author list
#I want to get a subset of articles only
# search term
#search_term = 'language'#apparently supports regular expressions
#string_match = grep(search_term, title, ignore.case = T) == 1
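The 25%/75% certainty threshold described in the comments above can be isolated as a standalone helper (hypothetical, independent of the gender package):

```r
decide_gender <- function(prob_male) {
  if (is.na(prob_male)) return(NaN)          # no assignment available at all
  if (prob_male < .25 || prob_male > .75) {  # relatively certain
    return(round(prob_male))                 # 1 = male, 0 = female
  }
  NaN                                        # too uncertain to decide
}
decide_gender(0.9)   # 1
decide_gender(0.1)   # 0
decide_gender(0.5)   # NaN
```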
1bde4ba70cac754dea415ddcca4e5339e8f9388b | 1fc2c96d237b86a63326467ab02104d2f9a3d09d | /src/capreg_series.R | 09e63d0372c2c55762452c790c201821ab1f6232 | [] | no_license | snowdj/input_costs | 8dd9f61c7d50b67c4e83d9be83fc80d3d078a0bb | 43d6da0bc5341db398eff41c69da30602f23adc3 | refs/heads/master | 2020-12-25T21:12:35.068383 | 2012-11-20T08:29:07 | 2012-11-20T08:29:07 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,865 | r | capreg_series.R | # capreg data input
capreg <- read.csv("data/CapregData.csv", header=TRUE)
#correct header
names(capreg) <- c("country", "dim2", "costitem", "1984", "1985", "1986", "1987", "1988" , "1989", "1990", "1991", "1992", "1993", "1994", "1995",
"1996", "1997", "1998", "1999", "2000",
"2001", "2002", "2003", "2004", "2005")
#melt data
mcapreg <- melt(capreg, id=1:3, na.rm=TRUE)
#convert years as factors to years as numeric data type (for plotting...)
mcapreg$variable <- as.numeric(as.character(mcapreg$variable))
names(mcapreg)[4] <- "year"
#loading set definitions (cost items, agric. activities etc.)
costitems <- read.csv('data/cost_items.csv', header=FALSE)
names(costitems) <- c("item", "label")
unitvalues <- read.csv('data/unit_values.csv', header=FALSE)
names(unitvalues) <- c("item", "label")
activities <- read.csv('data/activities.csv', header=FALSE)
names(activities) <- c("acode", "label")
countries <- read.csv('data/countries.csv', header=FALSE)
names(countries) <- c("countrycode", "label")
#calculating costs: multiply input price with physical quantities applied
#.. a) separate quantities and price
indicator <- mcapreg$dim2 %in% unique(activities$acode)
costsq <- mcapreg[indicator, ]
costsp <- subset(mcapreg, dim2 == "UVAP", select=c("country", "costitem", "year", "value"))
#.. b) create new data frame with costs
costsv <- merge(costsq, costsp, by=c("costitem", "country", "year"))
#.. the column 'value' will contain actual costs (monetary values/hectare)
#.. values should be scaled with 0.001
costsv <- data.frame(costsv, value=costsv$value.x*costsv$value.y*0.001)
names(costsv)[5] <- "q"
names(costsv)[6] <- "p"
# drop unnecessary data frames
rm(costsq, costsp)
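The cost computation used above (quantity times unit price, scaled by 0.001) on toy numbers:

```r
toy <- data.frame(q = c(10, 200), p = c(500, 25))   # hypothetical quantities and unit prices
toy$value <- toy$q * toy$p * 0.001
toy$value   # 5 and 5 -- both combinations cost 5 in the scaled monetary units per hectare
```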
#I. let's do a sample analysis for PLAP
#TODO: generalize it for all cost items...
#plant protection costs
plantp <- subset(costsv, costitem=='PLAP', select=c("country", "dim2", "year", "value"))
#for maize
maiz_plap <- subset(plantp, dim2=="MAIZ")
qplot(year,value,data=maiz_plap,colour=country,geom="line") +
opts(title="plant protection costs for maize in capreg (eur per ha)")
#some statistics (including the coefficient of variation)
mystats <- function(x)(c(N=length(x), Mean=mean(x), SD=sd(x), CoV=sd(x)/mean(x)))
# plotting functions (for visualizing single time series)
mylineplot <- function(mycost,mycountry,myactivity){
mytemp <- subset(costsv,costitem==mycost & country==mycountry & dim2==myactivity)
p <- qplot(year,value,data=mytemp,geom="line")
p + opts(title=paste(mycountry,".",mycost,".",myactivity,
"mean:" ,mean(mytemp$value),
"sd:" ,sd(mytemp$value)))
}
myboxplot <- function(mycost,mycountry,myactivity){
mytemp <- subset(costsv,costitem==mycost & country==mycountry & dim2==myactivity)
p <- qplot(factor(dim2),value,data=mytemp,geom="boxplot")
p + opts(title=paste(mycountry,".",mycost,".",myactivity))
}
myplot2 <- function(mycost,mycountry,myactivity){
mytemp <- subset(costsv,costitem==mycost & country==mycountry & dim2==myactivity)
p <- ggplot(mytemp, aes(year,value)) + geom_point() + geom_line()
p + stat_smooth(method="lm")
}
#descriptive statistics on PLAP
plap_stats <- cast(plantp, dim2+country~.,mystats)
#..switch off scientific formatting of numbers
options(digits=2)
plap_stats$Mean <- format(plap_stats$Mean, scientific=FALSE)
#detrend with a linear model
#..function of (costitem,country,activity)
mydetrend <- function(mycostitem,mycountry,myactivity){
mysubset <- subset(costsv, costitem==mycostitem & country==mycountry & dim2==myactivity)
lm.temp <- lm(value ~ year, data=mysubset)
x <- summary(lm.temp)
#plotting....
p <- qplot(year,value,data=mysubset,geom="line")
p + geom_abline(slope=lm.temp$coefficients[2],intercept=lm.temp$coefficients[1],colour="red") +
opts(title=paste("adj.R squared",x$adj.r.squared))
#example call: mydetrend("PLAP","HU000000","SWHE")
}
calculate_cv <- function(mycostitem,mycountry,myactivity){
# calculates the coefficient of variation
# either from a detrended series (if r-squared is above a limit)
# or from the original series
mysubset <- subset(costsv, costitem==mycostitem & country==mycountry & dim2==myactivity)
lm.temp <- lm(value ~ year, data=mysubset)
x <- summary(lm.temp)
if(x$adj.r.squared > 0.65) {
mycv <- sd(lm.temp$residuals)/mean(mysubset$value)
}
else{
mycv <- sd(mysubset$value)/mean(mysubset$value)
}
return(mycv)
}
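The detrend-then-CV idea implemented by calculate_cv can be exercised on a synthetic trending series (hypothetical data):

```r
set.seed(42)
y <- 100 + 5 * (1:20) + rnorm(20, sd = 2)      # strong linear trend plus small noise
fit <- lm(y ~ I(1:20))
raw_cv       <- sd(y) / mean(y)
detrended_cv <- sd(fit$residuals) / mean(y)
detrended_cv < raw_cv    # TRUE -- removing the trend shrinks the coefficient of variation
```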
calculate_cv2 <- function(x){
# calculates the coefficient of variation
# either from a detrended series (if r-squared is above a limit)
# or from the original series
t <- 1:length(x)
lm.temp <- lm(x ~ t)
modelsum <- summary(lm.temp)
if(modelsum$adj.r.squared > 0.65) {
mycv <- sd(lm.temp$residuals)/mean(x)
}
else{
mycv <- sd(x)/mean(x)
}
return(mycv)
}
c15e30f142bee87a561cb514f151bf4bab6bd5c0 | f25902704ef5ad550b1b6aacc79309ea590b0713 | /lab/lab05/tests/p18.R | a07112919f61fd2d0a0121827bf676b3b68008d1 | [] | no_license | ph142-ucb/ph142-sp21 | fd729d1b23a9236aa5581877e7e4d32d25df577f | e306c6e26b209be4ace2661078704285b8e43db4 | refs/heads/master | 2023-04-16T09:42:21.295611 | 2021-04-30T19:41:51 | 2021-04-30T19:41:51 | 323,787,795 | 0 | 0 | null | 2021-03-31T17:17:15 | 2020-12-23T03:00:50 | R | UTF-8 | R | false | false | 436 | r | p18.R | library(testthat)
test_metadata = "
cases:
- hidden: false
name: p18a
points: 0.5
- hidden: false
name: p18b
points: 0.5
name: p18
"
test_that("p18a", {
expect_true(all.equal(p18[1], qnorm(0.25, mean = 3350, sd = 440), tol = 0.01))
print("Checking: first value of p18")
})
test_that("p18b", {
expect_true(all.equal(p18[2], qnorm(0.75, mean = 3350, sd = 440), tol = 0.01))
print("Checking: second value of p18")
})
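For reference, the quartiles these tests target follow directly from the normal quantile formula, Q1 = mu - 0.6745*sigma and Q3 = mu + 0.6745*sigma:

```r
mu <- 3350; sigma <- 440
qnorm(c(0.25, 0.75), mean = mu, sd = sigma)
# approximately 3053.2 and 3646.8, since 0.6745 * 440 is roughly 296.8
```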
c9cf88e895bcdfa6357bd7a631f978059ebbac88 | 192e78b8e5651ddd15dab2f8797b4b5cf5c6498b | /R/cargarDatosDatalake.R | 86a0da30964dfbd006a466b1026b1af744fc9914 | [] | no_license | martineliasarrieta/BibliotecaAnaliticaR | cdc21236a0da54d4896742fc20a131d181144561 | e5d131f0693039ca150a6292c7a699b308e0a083 | refs/heads/master | 2022-04-24T19:52:36.464377 | 2020-04-27T14:54:35 | 2020-04-27T14:54:35 | 258602563 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,445 | r | cargarDatosDatalake.R | #' Load data into the Datalake on AWS
#'
#' This function loads data into the AWS Datalake through S3.
#' Uploading the information requires an access_key_id and an aws_secret_key_id,
#' which are generated when access to the company Datalake is requested.
#'
#' @param local_file The local file or dataframe that will be stored in AWS, e.g. 'estados_clientes.csv'
#' @param bucket_name The name of the S3 bucket in the Datalake zone where the data will be stored, e.g. 'landing-zone-analitica'
#' @param folder The name of the folder inside the S3 bucket, usually the name of the project, e.g. 'movilidad_segura'
#' @param s3_file_name The name the data file will have inside the Datalake
#' @return A dataframe with the data resulting from the query; if an array is passed in, an array with the results is returned.
#' @keywords load data datalake
#' @export
#' @examples
#' cargarDatosDatalake('estados_clientes.csv','landing-zone-analitica','movilidad_segura','estados_clientes_ms.csv')
#'
cargarDatosDatalake <- function(local_file, bucket_name, folder, s3_file_name){
require(data.table)
require(reticulate)
require(aws.s3)
boto3 <- reticulate::import("boto3")
client_s3 <- boto3$client("s3")
resource_s3 <- boto3$resource("s3")
path <- paste(folder, s3_file_name, sep="/")
resource_s3$meta$client$upload_file(local_file, bucket_name, path)
}
3604da79e0d5e06a659b5e60813994b902cc37db | ee95e4d346d7f7b617b1a84d97660345f181b21b | /src/conduct_hypothesis_test.R | 526878cc5b0e7214b8425ac8473429849441df5e | [
"MIT"
] | permissive | UBC-MDS/DSCI_522_Group411 | 6886ca06a85047d8948c2cbf55f5c05725c10abc | 8f33bbb615202a896fb8d0d04d8e0126fa9d410d | refs/heads/master | 2020-12-12T13:00:45.065621 | 2020-02-08T22:25:30 | 2020-02-08T22:25:30 | 234,132,320 | 1 | 4 | MIT | 2020-02-08T22:25:31 | 2020-01-15T17:15:29 | HTML | UTF-8 | R | false | false | 2,785 | r | conduct_hypothesis_test.R | # authors: Katie Birchard, Ryan Homer, Andrea Lee
# date: 2020-01-18
#
"Conduct hypothesis test and save figures.
Usage: conduct_hypothesis_test.R --datafile=<path to the dataset> --out=<path to output directory>
Options:
<datafile> Complete URL to the feather dataset.
<out> The destination path to save the hypothesis test figures.
" -> doc
suppressMessages(library(docopt))
suppressMessages(library(tidyverse))
suppressMessages(library(feather))
suppressMessages(library(kableExtra))
suppressMessages(library(broom))
main <- function(args) {
check_args(args)
make_plot(args$datafile, args$out)
make_table(args$datafile, args$out)
}
check_args <- function(args) {
#' Check input args
#'
#' @param args Vector of args from docopt
if (!file.exists(path.expand(args$datafile))) {
stop("Unable to find datafile.")
}
# make sure path exists
if (!dir.exists(path.expand(args$out))) {
dir.create(path.expand(args$out), recursive = TRUE)
}
}
make_plot <- function(datafile, out) {
#' Create plot and save plot as image.
#'
#' @param datafile Path to the feather file, including the actual filename.
#' @param out The destination path to save the images to.
#' @return png file of plot.
dest_path <- path.expand(out)
# Read in data
avocado <- read_feather(datafile)
# Fit model
model <- lm(average_price ~ total_volume + PLU_4046 + PLU_4225 + PLU_4770 + total_bags + small_bags + large_bags + xlarge_bags + type + year + lat + lon + season, data = avocado)
# Make residual plot
plot <- ggplot(model, aes(x = model$fitted.values, y = model$residuals)) +
geom_point(colour= "cadetblue", alpha=0.1) +
labs(title = 'Residual Plot (Linear Model)', x = "Predicted Values", y = "Residuals") +
theme_bw()
# Save plot as png
ggsave('residual_plot.png', plot, height=4, width=7, path = file.path(dest_path))
}
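The residual-vs-fitted pattern used in make_plot can be previewed with a built-in dataset as a stand-in for the avocado model:

```r
library(ggplot2)
model <- lm(dist ~ speed, data = cars)   # stand-in model on the built-in cars data
ggplot(model, aes(x = model$fitted.values, y = model$residuals)) +
  geom_point(colour = "cadetblue", alpha = 0.5) +
  labs(title = "Residual Plot (Linear Model)", x = "Predicted Values", y = "Residuals") +
  theme_bw()
```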
make_table <- function(datafile, out) {
#' Create the hypothesis test table and save it as a csv file.
#'
#' @param datafile Path to the feather file, including the actual filename.
#' @param out The destination path to save the table to.
#' @return csv file of the hypothesis test table.
dest_path <- path.expand(out)
# Read in data
avocado <- read_feather(datafile)
# Fit model
model <- lm(average_price ~ total_volume + PLU_4046 + PLU_4225 + PLU_4770 + total_bags + small_bags + large_bags + xlarge_bags + type + year + lat + lon + season, data = avocado)
# Conduct hypothesis test and save the table as csv
write_csv(tidy(model), path = file.path(dest_path, 'hypothesis_test_table.csv'))
#p_val <- kable(tidy(model),
# caption = "Table 1. Hypothesis Test Table.") %>%
# as_image(file = file.path(dest_path, 'hypothesis_test_table.png'))
}
main(docopt(doc))
dfd5889040ae27c542d932d6897e8bb9dc3e8423 | 998792ae061f3bc440c4a8a337eec9ffc3e1eb70 | /DIA-LibQCscripts/PreProcessingInputfile.R | 620895216249c2a37bff657a493062c2f2620e72 | [] | no_license | CharuKMidha/Codes_MoritzLab-R | e95caabb7c6725bcd5f9f314b8db9711d69a116f | af9493902748224a5116ab1703ca3a6377fc6816 | refs/heads/master | 2022-12-31T13:41:25.036281 | 2020-10-22T23:10:28 | 2020-10-22T23:10:28 | 198,326,262 | 0 | 0 | null | 2020-03-20T18:15:12 | 2019-07-23T01:14:32 | null | UTF-8 | R | false | false | 610 | r | PreProcessingInputfile.R | #R.FileName PG.ProteinGroups PG.Quantity
#UI_6HRS_SWATH1 [ RT-Cal protein ] 29985.99609
#UI_6HRS_SWATH1 A0AVT1 320.7033691
#UI_18HRS_SWATH1 A0FGR8 354.5825806
#UI_18HRS_SWATH1 A0MZ66 101.2458038
#UI_30HRS_SWATH1 A2RUS2 59.19276428
#UI_30HRS_SWATH1 A5A3E0 12765.27637
#UI_42HRS_SWATH1 A5YKK6 547.6705322
#UI_42HRS_SWATH1 A5YKK6 547.6705322
data = read.csv("/K565/K562_Good_SN_RT2-3_Unique.txt", header = FALSE, sep ="\t")
data
library(tidyr)
finaldata = data %>%
spread(key = V2,
value = V3)
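On a toy long-format table (hypothetical values), the same spread() call produces one column per key:

```r
library(tidyr)
long <- data.frame(V1 = c("A", "A", "B", "B"),
                   V2 = c("p1", "p2", "p1", "p2"),
                   V3 = c(1, 2, 3, 4))
long %>%
  spread(key = V2, value = V3)
#   V1 p1 p2
# 1  A  1  2
# 2  B  3  4
```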
write.csv(finaldata, file = "PHL_Good_SN_RT2-3_Unique_analysis_matrix.csv")
b9b32f2e5d5debd6326acba3780b3e2e19ca7f69 | 21c9cf0cb148afa8be833f6aedd85e00ac0c11c5 | /ngram_building.R | 4aaeddd55d932e6c38b5636d7ec9e0a44e1cc4c4 | [] | no_license | mnicho03/Data-Science-Specialization-Capstone | d3ea51658f14e64bcf7af6bf7c71d12ec582d1c0 | 996d68e32f479674ad89f61920cab3d2433cdc33 | refs/heads/master | 2020-03-10T18:29:38.683927 | 2018-07-13T01:57:15 | 2018-07-13T01:57:15 | 129,527,082 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,721 | r | ngram_building.R | #efficiency examination - using .001 size sample
#option 1
#function to split ngrams
ngram <- function(n, text) {
textcnt(text, method = "string",n=as.integer(n),
split = "[[:space:][:digit:]]+",decreasing=T)
}
#unigrams
#determine time to create the ngram
system.time(unigrams <- ngram(1, training_text))
#time ~~ user: 2.63 / elapsed: 2.65
#function to clean and create the ngram_df
ngram_df_build <- function(ngram_object) {
ngram_df <- data.frame(term = names(ngram_object), frequency = unclass(ngram_object))
rm(ngram_object)
ngram_df$term <- as.character(ngram_df$term)
ngram_df$frequency <- as.numeric(ngram_df$frequency)
#removes mentions of " <EOS> " (filler used to mark the end of sentences/entries)
ngram_df <- ngram_df[which(ngram_df$term!="<eos>"),]
return(ngram_df)
}
#determine time to create the cleaned ngram_df
system.time(unigram_df <- ngram_df_build(unigrams))
#time ~~ user: .06 / elapsed: .06
#option 2:
library(tidytext) #for tokenization
#create df to use with dplyr manipulation below
training_text_df <- data.frame(text = training_text, stringsAsFactors = FALSE)
#function to tokenize and build the df
ngram_df_build <- function(ngram_text) {
tokenized_all <- ngram_text %>%
unnest_tokens(output = unigram, input = text) %>%
#filter out end of sentence markers
filter(!grepl(paste0("^", "endofsentencemarker", "$"), unigram)) %>%
count(unigram) %>%
select(unigram, n) %>%
rename(frequency = n) %>%
#arrange in descending order
arrange(desc(frequency))
#convert single tokens to DF
unigram_df <- as.data.frame(tokenized_all)
return(unigram_df)
}
#determine time to create the cleaned ngram_df
system.time(unigram_df <- ngram_df_build(training_text_df))
#time ~~ user: .2 / elapsed: .21
#<<<<<<<option 2 much faster>>>>>>>>>>
#ID the number of unigrams
length_unigrams <- length(unigram_df$unigram)
#frequency table for Good-Turing smoothing
#captures the 'frequency of frequencies' (e.g. 100 instances of a word appearing 3 times)
unigram_frequency_table <- data.frame(unigram = table(unigram_df$frequency))
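A tiny worked example of the "frequency of frequencies" idea (hypothetical term counts):

```r
term_freqs <- c(3, 1, 1, 2, 1)   # five terms: three seen once, one twice, one three times
table(term_freqs)
# reads: frequency 1 occurred 3 times, frequency 2 once, frequency 3 once --
# exactly the counts that Good-Turing smoothing needs
```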
#save off file to load in later if needed
fwrite(unigram_df, "unigram_df.txt")
#data.table marked as false to ensure it's loaded only as DF
unigram_df <- fread("unigram_df.txt", data.table = FALSE)
#option 2 for bigrams
#n-gram building
#function to build the bigram dataframe
bigram_df_build <- function(ngram_text) {
bigram_df <- ngram_text %>%
unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%
tidyr::separate(bigram, c("word1", "word2"), sep = " ") %>%
na.omit() %>%
#filter out end of sentence markers
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word1)) %>%
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word2)) %>%
mutate(term = paste(word1, word2, sep = " ")) %>%
count(term) %>%
rename(frequency = n) %>%
#remove word1 and word2
select(term, frequency) %>%
#arrange in descending order
arrange(desc(frequency))
return(as.data.frame(bigram_df))
}
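The bigram pipeline can be spot-checked on a toy corpus (hypothetical sentences):

```r
library(dplyr); library(tidytext)
toy <- data.frame(text = c("the cat sat", "the cat ran"), stringsAsFactors = FALSE)
toy %>%
  unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%
  count(bigram, sort = TRUE)
# "the cat" appears twice; "cat sat" and "cat ran" once each
```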
#build the DF of bigrams and calculate the runtime
system.time(bigram_df <- bigram_df_build(training_text_df))
#ID the number of bigrams
length_bigrams <- length(bigram_df$term)
#frequency table for Good-Turing smoothing
#captures the 'frequency of frequencies' (e.g. 100 instances of a word appearing 3 times)
bigram_frequency_table <- data.frame(bigram = table(bigram_df$frequency))
#trigrams
trigram_df_build <- function(ngram_text) {
trigram_df <- ngram_text %>%
unnest_tokens(trigram, text, token = "ngrams", n = 3) %>%
tidyr::separate(trigram, c("word1", "word2", "word3"), sep = " ") %>%
na.omit() %>%
#filter out end of sentence markers
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word1)) %>%
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word2)) %>%
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word3)) %>%
mutate(term = paste(word1, word2, word3, sep = " ")) %>%
count(term) %>%
rename(frequency = n) %>%
#remove word1/2/3
select(term, frequency) %>%
#arrange in descending order
arrange(desc(frequency))
return(as.data.frame(trigram_df))
}
#build the DF of trigrams and calculate the runtime
system.time(trigram_df <- trigram_df_build(training_text_df))
#ID the number of trigrams
length_trigrams <- length(trigram_df$term)
#frequency table for Good-Turing smoothing
#captures the 'frequency of frequencies' (e.g. 100 instances of a word appearing 3 times)
trigram_frequency_table <- data.frame(trigram = table(trigram_df$frequency))
#quadgrams
quadgram_df_build <- function(ngram_text) {
quadgram_df <- ngram_text %>%
unnest_tokens(quadgram, text, token = "ngrams", n = 4) %>%
tidyr::separate(quadgram, c("word1", "word2", "word3", "word4"), sep = " ") %>%
na.omit() %>%
#filter out end of sentence markers
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word1)) %>%
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word2)) %>%
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word3)) %>%
filter(!grepl(paste0("^", "endofsentencemarker", "$"), word4)) %>%
mutate(term = paste(word1, word2, word3, word4, sep = " ")) %>%
count(term) %>%
rename(frequency = n) %>%
#remove word1/2/3/4
select(term, frequency) %>%
#arrange in descending order
arrange(desc(frequency))
return(as.data.frame(quadgram_df))
}
#build the DF of quadgrams and calculate the runtime
system.time(quadgram_df <- quadgram_df_build(training_text_df))
#ID the number of quadgrams
length_quadgram <- length(quadgram_df$term)
#frequency table for Good-Turing smoothing
#captures the 'frequency of frequencies' (e.g. 100 instances of a word appearing 3 times)
quadgram_frequency_table <- data.frame(quadgram = table(quadgram_df$frequency))
653c9c6f9899ddfee6f27bcf2ebc45c27c254919 | 5d44602423edd54356bedb8b7322a07b7dbef859 | /man/shroom-package.Rd | 028ec0f60fd69b775e60beb5f23310c7b08203b4 | [
"MIT"
] | permissive | brouwern/shroom | 3ad6ce300ba67474de64bd3bc23d72526c135a2f | 47380f48d99a362518df823cf9eecdd7bec6325d | refs/heads/master | 2020-04-05T22:33:12.410404 | 2018-12-11T14:15:45 | 2018-12-11T14:15:45 | 157,260,342 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 386 | rd | shroom-package.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/shroom-package.R
\docType{package}
\name{shroom-package}
\alias{shroom}
\alias{shroom-package}
\title{shroom: Example Analyses for Cell and Molecular Biology}
\description{
What the package does (one paragraph).
}
\author{
\strong{Maintainer}: First Last \email{first.last@example.com}
}
\keyword{internal}
dd1691879539fc2a198a877a381995abe55cd3a4 | fbf620f3417eff7f696b9a365782d96e7d20f3f8 | /man/GET.Rd | 9e5c1c67127a1da778e61d8758372205172b7f79 | [] | no_license | Giant316/spotifyLite | 72d74ca7444aedd505a02346b3d781c0efae993c | b0a0c8b053cf636b20ee5cdf7c49910d362a9b84 | refs/heads/main | 2023-01-23T09:28:51.872958 | 2020-12-06T22:08:56 | 2020-12-06T22:08:56 | 319,130,180 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 505 | rd | GET.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{GET}
\alias{GET}
\title{GET request data from Spotify with limit and offset parameters}
\usage{
GET(url, params, oAuth = get_access_token())
}
\arguments{
\item{url}{a Spotify URL request}
\item{params}{a list that contains limit and offset values}
\item{oAuth}{A valid access token from the Spotify Accounts service.}
}
\value{
}
\description{
GET request data from Spotify with limit and offset parameters
}
e82cef420299f93305ab7a75a14beb27ea1d8233 | 36e5d6a9fa952b091020b6b4418b80bceb45e975 | /ResearchNWA.R | ef37192e5fc384627dec1626ce9d4ce8766e70a1 | [
"MIT"
] | permissive | gaurangaurang/Social-Network-Analysis-Example | 8abd659b579fe9bc6dd37e96775e9f4b45780a9d | 671c61d1f52580f8ed8ae6b1dd93cf23fdf4dafa | refs/heads/master | 2020-04-10T06:57:27.145234 | 2018-12-07T20:11:16 | 2018-12-07T20:11:16 | 160,869,738 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,777 | r | ResearchNWA.R | #import libraries
library(igraph)
library(readr)
library(haven)
#import data
collNW <- PCMI_Personally.Know_Combined.Edgelist
dissNW <- PCMI_Discussion.Network_Combined_Edgelist
collEL <- collNW
collgraph <- graph.data.frame(collEL, directed = T)
dissEL <- dissNW
dissgraph <- graph.data.frame(dissEL, directed = T)
#First Try
set.seed(123)
plot(collgraph)
#2nd Try
set.seed(123)
#set layout
layout1 <- layout.fruchterman.reingold(collgraph)
V(collgraph)$size = degree(collgraph, mode = 'in')/5 #reduce high in degree node size
V(collgraph)$color = 'grey'
V(collgraph)[degree(collgraph, mode = 'in')>8]$color = "yellow" #High in degree nodes
E(collgraph)$color = 'grey'
plot(collgraph, edge.arrow.size = 0.25, edge.arrow.mode = '-') #got rid of arrowheads
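The same degree-based styling can be reproduced on a small synthetic edge list (hypothetical data):

```r
library(igraph)
el <- data.frame(from = c("a", "b", "c", "a"),
                 to   = c("b", "c", "a", "c"))
g <- graph.data.frame(el, directed = T)
degree(g, mode = 'in')                      # c has in-degree 2, a and b have 1
V(g)$size  <- degree(g, mode = 'in') * 10   # scaled up, since the toy graph is tiny
V(g)$color <- ifelse(degree(g, mode = 'in') > 1, "yellow", "grey")
plot(g, edge.arrow.size = 0.25, edge.arrow.mode = '-')
```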
#Remove Self Loops (3rd try)
collgraph2 <- simplify(collgraph, remove.multiple = T, remove.loops = T)
set.seed(123)
layout1 <- layout.fruchterman.reingold(collgraph2)
V(collgraph2)$size = degree(collgraph2, mode = 'in')/5
V(collgraph2)$color = 'grey'
V(collgraph2)[degree(collgraph2, mode = 'in')>8]$color = "yellow"
E(collgraph2)$color = 'grey'
plot(collgraph2, edge.arrow.size = 0.25, edge.arrow.mode = '-')
#4th try
collAttr <- PCMI_Know.Personally_Combined_Nodelist
set.seed(123)
layout1 <- layout.fruchterman.reingold(collgraph2)
V(collgraph2)$size = degree(collgraph2, mode = 'in')/5
V(collgraph2)$color = 'grey'
V(collgraph2)[degree(collgraph2, mode = 'in')>8]$color = "yellow"
V(collgraph2)$color = ifelse(collAttr[V(collgraph2), 2] =="Researcher", 'blue', 'red')
E(collgraph2)$color = 'grey'
plot(collgraph2, edge.arrow.size = 0.25, edge.arrow.mode = '-')#better than previous graphs
#Layout fruchterman.reingold without vertex labels (5th try)
collAttr <- PCMI_Know.Personally_Combined_Nodelist
set.seed(123)
layout1 <- layout.fruchterman.reingold(collgraph2, niter = 500)
V(collgraph2)$size = degree(collgraph2, mode = 'in')/5
V(collgraph2)$color = 'grey'
V(collgraph2)[degree(collgraph2, mode = 'in')>8]$color = "yellow"
V(collgraph2)$color = ifelse(collAttr[V(collgraph2), 2] =="Researcher", 'blue', 'red')
E(collgraph2)$color = 'grey'
plot(collgraph2, edge.arrow.size = 0.25, edge.arrow.mode = '-', vertex.label = NA)
# Layout KK without vertex labels
collAttr <- PCMI_Know.Personally_Combined_Nodelist
set.seed(123)
layout1 <- layout.kamada.kawai(collgraph2)
V(collgraph2)$size = degree(collgraph2, mode = 'in')/5
V(collgraph2)$color = 'grey'
V(collgraph2)[degree(collgraph2, mode = 'in')>8]$color = "yellow"
V(collgraph2)$color = ifelse(collAttr[V(collgraph2), 2] =="Researcher", 'blue', 'red')
E(collgraph2)$color = 'grey'
plot(collgraph2, layout = layout1, edge.arrow.size = 0.25, edge.arrow.mode = '-', vertex.label = NA)
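#Quick numeric companion to the plots above (a sketch; assumes collgraph2 from
#this script is still in the workspace):
top_in <- sort(degree(collgraph2, mode = 'in'), decreasing = TRUE)
top_in[top_in > 8] # the vertices coloured yellow in the plots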
|
d4abdc5e5220cebf69a6595f516ae61e4f79ded2 | bd018facb7c65baee32a83cf3d0335dda20e9464 | /BG.library/man/dataImport.Rd | c0165a21b0b451299219c5f25cfde961caf67ae9 | [] | no_license | rscmbc3/BG.library | 0da0b668bcce1ccc80a09b7e573d4ba5f4afdc15 | 50b5bf626e0b8d12ddedfedacd4a88bdaef91519 | refs/heads/master | 2020-08-12T21:07:15.199502 | 2020-01-14T13:51:58 | 2020-01-14T13:51:58 | 214,843,138 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 745 | rd | dataImport.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/dataImport.R
\name{dataImport}
\alias{dataImport}
\title{dataImport}
\usage{
dataImport(filePath, libraryPath)
}
\arguments{
\item{filePath}{character path to csv import file}
\item{libraryPath}{character string path to BG.library code}
}
\value{
\code{dataImport.list} list containing 2 data.frames (allData and joinData)
}
\description{
import csv file and create allData and joinData objects.
allData is pump and sensor data rbinded
joinData is a roll join of pump and sensor data on datetime.\cr \cr
}
\examples{
libraryPath<-"F:/BG.library_github/BG.library/"
filePath<-"F:/BG.library_github/exampleData.csv"
dataImport.list<-dataImport(filePath,libraryPath)
}
|
e124ac9c46b1ba27b67d1e4ec1c95bdb87db11e8 | 6417af24cd07157f843b148f2a454b336e9dad29 | /R/streetnet-fns.R | a24112ed836de694e397a7beb081b064472aa80e | [] | no_license | karpfen/dodgr | 16eeec0e3d06524b465d6511acbea285f8009322 | 98948d94a30146dcf7463dc0056d02bf6dae1007 | refs/heads/master | 2021-07-04T18:28:46.158500 | 2017-09-28T12:26:56 | 2017-09-28T12:26:56 | 98,661,765 | 0 | 0 | null | 2017-07-28T15:16:20 | 2017-07-28T15:16:20 | null | UTF-8 | R | false | false | 5,026 | r | streetnet-fns.R | #' dodgr_streetnet
#'
#' Use the \code{osmdata} package to extract the street network for a given
#' location. For routing between a given set of points (passed as \code{pts}),
#' the \code{bbox} argument may be omitted, in which case a bounding box will
#' be constructed by expanding the range of \code{pts} by the relative amount of
#' \code{expand}.
#'
#' @param bbox Bounding box as vector or matrix of coordinates, or location
#' name. Passed to \code{osmdata::getbb}.
#' @param pts Matrix or \code{data.frame} of points containing spatial coordinates
#' @param expand Relative factor by which street network should extend beyond
#' limits defined by pts (only if \code{bbox} not given).
#' @param quiet If \code{FALSE}, display progress messages
#' @return A Simple Features (\code{sf}) object with coordinates of all lines in
#' the street network.
#'
#' @export
#' @examples
#' \dontrun{
#' streetnet <- dodgr_streetnet ("hampi india", expand = 0)
#' # convert to form needed for \code{dodgr} functions:
#' graph <- weight_streetnet (streetnet)
#' nrow (graph) # 5,742 edges
#' # Alternative ways of extracting street networks by using a small selection of
#' # graph vertices to define bounding box:
#' verts <- dodgr_vertices (graph)
#' verts <- verts [sample (nrow (verts), size = 200), ]
#' streetnet <- dodgr_streetnet (pts = verts, expand = 0)
#' graph <- weight_streetnet (streetnet)
#' nrow (graph)
#' # This will generally have many more rows because most street networks include
#' # streets that extend considerably beyond the specified bounding box.
#' }
dodgr_streetnet <- function (bbox, pts, expand = 0.05, quiet = TRUE)
{
if (!missing (bbox))
{
bbox <- osmdata::getbb (bbox)
bbox [1, ] <- bbox [1, ] + c (-expand, expand) * diff (bbox [1, ])
bbox [2, ] <- bbox [2, ] + c (-expand, expand) * diff (bbox [2, ])
}
else if (!missing (pts))
{
nms <- names (pts)
if (is.null (nms))
nms <- colnames (pts)
colx <- which (grepl ("x", nms, ignore.case = TRUE) |
grepl ("lon", nms, ignore.case = TRUE))
coly <- which (grepl ("y", nms, ignore.case = TRUE) |
grepl ("lat", nms, ignore.case = TRUE))
        if (length (colx) != 1 | length (coly) != 1)
            stop ("Cannot unambiguously determine coordinates in pts")
x <- range (pts [, colx])
x <- x + c (-expand, expand) * diff (x)
y <- range (pts [, coly])
y <- y + c (-expand, expand) * diff (y)
bbox <- c (x [1], y [1], x [2], y [2])
} else
stop ('Either bbox or pts must be specified.')
dat <- osmdata::opq (bbox) %>%
osmdata::add_osm_feature (key = "highway") %>%
osmdata::osmdata_sf (quiet = quiet)
return (dat$osm_lines)
}
#' weight_streetnet
#'
#' Weight (or re-weight) an \code{sf}-formatted OSM street network according to
#' a named routino profile, selected from (foot, horse, wheelchair, bicycle,
#' moped, motorcycle, motorcar, goods, hgv, psv).
#'
#' @param sf_lines A street network represented as \code{sf} \code{LINESTRING}
#' objects, typically extracted with \code{dodgr_streetnet}
#' @param wt_profile Name of weighting profile
#'
#' @return A \code{data.frame} of edges representing the street network, along
#' with a column of graph component numbers.
#'
#' @export
#' @examples
#' net <- weight_streetnet (hampi) # internal sf-formatted street network
#' class(net) # data.frame
#' dim(net) # 6096 11; 6096 streets
weight_streetnet <- function (sf_lines, wt_profile = "bicycle")
{
if (!is (sf_lines, "sf"))
stop ('sf_lines must be class "sf"')
if (!all (c ("geometry", "highway", "osm_id") %in% names (sf_lines)))
stop (paste0 ('sf_lines must be class "sf" and ',
'have highway and geometry columns'))
prf_names <- c ("foot", "horse", "wheelchair", "bicycle", "moped",
"motorcycle", "motorcar", "goods", "hgv", "psv")
wt_profile <- match.arg (tolower (wt_profile), prf_names)
profiles <- dodgr::weighting_profiles
wt_profile <- profiles [profiles$name == wt_profile, ]
wt_profile$value <- wt_profile$value / 100
dat <- rcpp_sf_as_network (sf_lines, pr = wt_profile)
graph <- data.frame (edge_id = seq (nrow (dat [[1]])),
from_id = as.character (dat [[2]] [, 1]),
from_lon = dat [[1]] [, 1],
from_lat = dat [[1]] [, 2],
to_id = as.character (dat [[2]] [, 2]),
to_lon = dat [[1]] [, 3],
to_lat = dat [[1]] [, 4],
d = dat [[1]] [, 5],
d_weighted = dat [[1]] [, 6],
highway = as.character (dat [[2]] [, 3]),
stringsAsFactors = FALSE
)
# get component numbers for each edge
graph$component <- dodgr_components (graph)$component
return (graph)
}
|
52c9ed7ddc7e990552c669b852dc0bc74f0b106d | 31059960e55e6ccf408a61bb656c16823ee2505d | /9. Power Analysis/Power_Analysis_WestFox_1Sig.R | 7e7cbae83edd456a23bbd5f4a3843e95e8dd5440 | [] | no_license | PachoAlvarez/Foxfish_chronology | 433f408502652b9ad5443a01fdc4528630526d11 | ce754fe1f5cce6af6464ca3aaa71ae4faed25dbb | refs/heads/master | 2021-05-30T01:56:48.438017 | 2015-09-22T10:06:27 | 2015-09-22T10:06:27 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 8,091 | r | Power_Analysis_WestFox_1Sig.R | #### Part 1 Bio Data
########################
#### Foxfish ####
############
windowsFonts(
A=windowsFont("Times New Roman")
)
par(family="A")
# Read in biological data
setwd("C:/Users/Ellen/Google Drive/Honours/Results/Power/Bio Data") # pc
#setwd("C:/Users/Ellen/Google Drive/Honours/Results/Power/Bio Data") # laptop
#list.files()
Data <- read.csv("Foxfish_Biodata_West.csv")
attach(Data)
# Rename parameters
Ages <- Data$Adj.zones
Year <- yy
####################
# Input Parameters #
####################
# Calculate Age-Prob Vector
Nfish <- length(Ages)
Minage <- min(Ages, na.rm=T)
Maxage <- max(Ages, na.rm=T)
AgeVec <- unique(Ages) # Minage:Maxage
AgeVec <- sort(AgeVec[is.na(AgeVec)==FALSE]) # remove NAs
FProb <- prop.table(table(Ages))
AgeDat <- cbind(Age=AgeVec, Prob=FProb)
CutAge <- 5 # Cut-off age where only fish above this age are included in simulation
ItVec <- 5 # Number of iterations (kept small for testing; increase for final runs)
SampVec <- 500 # Largest annual sample size tested below
# Calculate Collection Period
NSampYrs <- max(Year) - min(Year)
# Analysis coefficients
rho <- 0.4 # Correlation coefficient. This will need to be modified using a set of values within a range considered feasible based on results of other dendrochronological studies. Note that, in the context of the Power Analysis, this is the effect size that we are seeking to detect.
Alpha <- 0.01 # Specified level of signficance
MaxYear <- max(Year)
MinYear <- min(Year)-Maxage
Years <- rev(MinYear:MaxYear) # Years of temperature data. Number of years to simulate data
# Power analysis setup
NumIts <- ItVec # Total number of iterations - more iterations the longer the model takes to run. Test with few and then run with many and go and have coffee while you wait.
SampSizeVec <- seq(from=10, to=SampVec, by=10) # Vector of sample sizes to run the model over
SampSize <- ItVec # This will be varied when the power analysis is run
########################
# Calculate Parameters #
########################
# Read in increment data
setwd("C:/Users/Ellen/Google Drive/Honours/Results/IncrementData")
#list.files()
Incs <- read.csv("SouthFOX.csv")
detach(Data)
attach(Incs)
Dat <- Incs[,2:ncol(Incs)]
LogDat <- apply(Dat, 2, log) # Natural Log of Data
# Find slope and intercept
Getslope <- function(dat) {
Nyr <- sum(is.na(dat)==FALSE)
datInx <- which(is.na(dat)==FALSE)
X <- Years[datInx]
Y <- dat[datInx]
LM <- lm(Y~X)
return(LM$coefficients)
}
SlopeInt <- apply(LogDat, 2, Getslope)
# Standardised Widths
# Note: SlopeInt is a 2 x n-fish matrix (row 1 = intercepts, row 2 = slopes),
# so each fish must be detrended with its own column of coefficients.
StandWidth <- function(fish) {
  inter <- SlopeInt[1, fish]
  slope <- SlopeInt[2, fish]
  exp(LogDat[, fish] - (inter + slope * Years))
}
Stdwidth <- sapply(seq_len(ncol(LogDat)), StandWidth)
## Year effect and Variation
fun.stderr <- function(x) {
sqrt(var(x, na.rm = T) / sum(!is.na(x)))
}
N = rowSums(!is.na(Stdwidth))
Mean = rowMeans(Stdwidth, na.rm = TRUE, dims = 1)
Stdev = apply(Stdwidth, 1, sd, na.rm=TRUE)
Var = apply(Stdwidth, 1, var, na.rm=TRUE)
SE = apply(Stdwidth, 1, fun.stderr)
# Variability among individuals
SDIndWidth <- max(Stdev, na.rm=T) # Maximum value of SD of individual variation among standardised widths of the growth zones in the individual otoliths to be used
# Year effect
SDYearEff <- sd(Mean) # SD across years of the mean standardised width, used as the SD of the inter-annual year effect in the power analysis
##################
# Power ANalysis #
##################
DoPlot <- FALSE # Set to FALSE when running the power analysis
PowerAnalysisFunction <- function (SampSize, DoPlot=FALSE) { # You can also add the other parameters as arguments for this function, good practice but doesn't matter right now because they are defined in the GLOBAL scope
########################################
# Simulate Temperature and Year Effect #
########################################
NYears <- length(Years)
m1 <- 0 # Mean of scaled temperature
s1 <- 1 # SD of scaled temperature
m2 <- 1 # Mean of values of inter-annual effect
s2 <- SDYearEff # SD of the inter-annual year effect
X <- rnorm(NYears)
Y <- rnorm(NYears)
ScaledTemp <- cbind(Years, m1 + s1 * X)
YearEffect <- cbind(Years, m2 + s2 * (rho * X + sqrt(1- rho^2) * Y))
if (DoPlot) {
par(mar=c(5,4,4,5)+.1)
plot(Years, ScaledTemp[,2], type="b", ylim=c(-3,3), ylab="Scaled temperature", xlab="Years")
par(new=TRUE)
plot(Years, YearEffect[,2], type="b", col="red",xaxt="n",yaxt="n",xlab="",ylab="")
axis(4)
mtext("Year Effect",side=4,line=3)
}
########################
# Generate Fish Sample #
########################
FishSamp <- matrix(NA, ncol=NYears + 2, nrow=NSampYrs*SampSize) # Initialise empty matrix
CurrYear <- max(Years) # Current Year
count <- 0
ind <- 1
for (xx in 1:NSampYrs) { # Make Column 2 Sampling Years
FishSamp[ind:(ind+SampSize-1),2] <- rep((CurrYear-count), SampSize)
count <- count + 1
ind <- ind + SampSize
}
FishSamp[,1] <- sample(AgeVec, size=nrow(FishSamp), replace=TRUE, prob=FProb) # Assign random ages, with probability of each age given from age sample
AssignEffect <- function(Age, SampYear, YearEffect, SDIndWidth) {
FYears <- seq(SampYear, by=-1, length=Age)
FYrEff <- YearEffect[match(FYears, YearEffect[,1]),2]
return(FYrEff + SDIndWidth * rnorm(1))
}
ind <- 0
SampInd <- SampSize
for (xx in 1:nrow(FishSamp)) { # Assign Random Otolith Width to each fish.
FAge <- FishSamp[xx,1]
SampYear <- CurrYear - ind
if (FAge > CutAge) {
colInd <- which(Years == SampYear) + 2
FishSamp[xx, colInd:(colInd+FAge-1)] <- AssignEffect(FAge, SampYear, YearEffect, SDIndWidth)
}
if (xx > SampInd) { # counters to make sure data ends up in correct columns
ind <- ind + 1
SampInd <- SampInd + SampSize
}
}
#######################################
# Calculate Yearly mean otolith width #
#######################################
MeanOtWidth <- apply(FishSamp[,3:ncol(FishSamp)], 2, mean, na.rm=TRUE)
GotDat <- which(is.nan(MeanOtWidth)==FALSE)
if (DoPlot) {
par(mar=c(5,4,4,5)+.1)
plot(ScaledTemp[GotDat,2], MeanOtWidth[GotDat], type="p", ylim=c(0, 1.4))
}
#######################
# Do Correlation Test #
#######################
CorTest <- cor.test(ScaledTemp[GotDat,2], MeanOtWidth[GotDat])
PVal <- CorTest$p.value/2
Corr <- CorTest$estimate
IsSig <- PVal <= Alpha
# Following Norm's calcs - check with above method
# altCorr <- cor(ScaledTemp[GotDat,2], MeanOtWidth[GotDat])
# DF <- length(GotDat) - 2
# tStat <- altCorr * sqrt(DF)/sqrt(1-altCorr^2)
# pval <- (1 - pt(tStat, DF))
# pval <= Alpha # is significant correlation?
return(IsSig)
}
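# Quick sanity check on the correlated-normals construction used inside the
# function: for Z1, Z2 iid N(0,1), cor(Z1, rho*Z1 + sqrt(1-rho^2)*Z2) equals
# rho in expectation, so the simulated year effect tracks temperature at ~rho.
z1 <- rnorm(1e5); z2 <- rnorm(1e5)
cor(z1, rho * z1 + sqrt(1 - rho^2) * z2) # should be close to rho (0.4)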
######################
# Run Power Analysis #
######################
PowerAnalysisFunction(SampSize=10) # Run Single Instance
CountVec <- rep(NA, length(SampSizeVec))
for (SmIt in seq_along(SampSizeVec)) { # Loop over different sample sizes
CountVec[SmIt] <- sum(sapply(1:NumIts, function (X) PowerAnalysisFunction(SampSize=SampSizeVec[SmIt])))
print(paste0("Running sample size: ", SampSizeVec[SmIt], " #", SmIt, " of ", length(SampSizeVec)))
}
ProbVec <- CountVec/NumIts
# plot(SampSizeVec, ProbVec, ylim=c(0,max(ProbVec*1.1)), type="l", lwd=2, xlab="Annual sample size", ylab="Probability of obtaining significant result", bty="l", xaxs="i", yaxs="i", main=title)
# mtext(text="rho=0.4\nalpha=0.01",side=3,outer=F,adj=1)
library(ggplot2) # the plots below use ggplot2, which was not loaded above
df <- data.frame(SampSizeVec,ProbVec)
p5 <- ggplot(df, aes(x=SampSizeVec, y=ProbVec)) + geom_line() + theme_classic() +
xlab("Sample Size") + ylab("Probability of significant result") + stat_smooth(se = FALSE)+
annotate("text", x = SampVec-(.1*SampVec), y = 0.2, label = "rho=0.4\nalpha=0.01")+
scale_y_continuous(limits = c(0, 1)) +
ggtitle("West Coast Fox Fish")
p6 <- ggplot(df, aes(x=SampSizeVec, y=ProbVec)) + geom_line() + theme_classic() +
xlab("Sample Size") + ylab("Probability of significant result") +
annotate("text", x = SampVec-(.1*SampVec), y = 0.2, label = "rho=0.4\nalpha=0.01")+
scale_y_continuous(limits = c(0, 1)) +
ggtitle("West Coast Fox Fish")
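# Post-processing sketch: smallest annual sample size whose estimated power
# reaches a conventional 0.8 threshold (NA if the curve never gets there).
# `target_power` is a name introduced here for illustration.
target_power <- 0.8
if (any(ProbVec >= target_power)) min(SampSizeVec[ProbVec >= target_power]) else NA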
|
7b77cc4850775e494217fd964d42a28a8dc18180 | 8a6bf3b3e171cdbce3d9515cdc7eb3ed57b531e9 | /Simple/Simple_Model2_Drive.r | 5f048ab133524da808d684f4806766f9d9f79e36 | [] | no_license | ISUCyclone/Pipelines | c0828d2db3ab2b328fe834cc7258627d62e4e864 | b062ee76f0fbe7a33683d9fe925f8984a6459914 | refs/heads/master | 2021-06-25T03:38:26.824129 | 2017-08-24T18:10:26 | 2017-08-24T18:10:26 | 99,627,073 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,939 | r | Simple_Model2_Drive.r | Data=c(0.163, 0.153, 0.136, NA, 0.141, 0.145, 0.13, 0.206,
0.186, 0.184, NA, 0.178, 0.185, 0.185, 0.237, 0.22, 0.218, NA,
0.223, 0.235, 0.229, 0.199, 0.194, 0.183, NA, 0.182, 0.185, 0.182,
NA, NA, NA, 0.148, 0.159, 0.149, 0.138, NA, NA, NA, 0.168, 0.169,
0.17, 0.167, NA, NA, NA, 0.144, 0.147, 0.151, 0.14, NA, NA, NA,
0.153, 0.155, 0.159, 0.149, NA, NA, NA, 0.158, 0.168, 0.165,
0.155, NA, NA, NA, 0.177, 0.185, 0.181, 0.179, NA, NA, NA, 0.143,
0.147, 0.145, 0.141, NA, NA, NA, 0.144, 0.155, 0.155, 0.15, NA,
NA, NA, 0.18, 0.181, 0.181, 0.179, NA, NA, NA, 0.189, 0.19, 0.19,
0.188, NA, NA, NA, 0.208, 0.208, 0.207, 0.205, NA, NA, NA, 0.196,
0.195, 0.192, 0.189, NA, NA, NA, 0.183, 0.183, 0.183, 0.171,
NA, NA, NA, 0.21, 0.208, 0.192, 0.187, NA, NA, NA, 0.168, 0.16,
0.166, 0.151, NA, NA, NA, 0.188, 0.188, 0.187, 0.179, NA, NA,
NA, 0.19, 0.189, 0.191, 0.18, NA, NA, NA, 0.185, 0.182, 0.185,
0.185, NA, NA, NA, 0.182, 0.182, 0.186, 0.172, NA, NA, NA, 0.232,
0.232, 0.23, 0.223, NA, NA, 0.203, NA, 0.199, 0.193, 0.187, NA,
NA, 0.206, NA, 0.207, 0.205, 0.199, NA, NA, 0.22, NA, 0.219,
0.212, 0.212, NA, NA, 0.249, NA, 0.238, 0.241, 0.233, NA, NA,
0.208, NA, 0.204, 0.199, 0.192, NA, NA, 0.202, NA, 0.198, 0.203,
0.193, NA, NA, 0.21, NA, 0.215, 0.214, 0.214, NA, NA, 0.225,
NA, 0.218, 0.218, 0.211, NA, NA, 0.209, NA, 0.199, 0.201, 0.183,
NA, NA, 0.239, NA, 0.239, 0.233, 0.222, NA, NA, 0.23, NA, 0.239,
0.238, 0.235, NA, NA, 0.24, NA, 0.228, 0.23, 0.223, NA, NA, 0.205,
NA, 0.209, 0.205, 0.198, NA, NA, 0.228, NA, 0.233, 0.233, 0.23,
NA, NA, 0.214, NA, 0.214, 0.214, 0.21, NA, NA, 0.22, NA, 0.22,
0.217, 0.212, 0.246, 0.24, 0.243, NA, 0.241, 0.241, 0.236, 0.213,
0.202, 0.205, NA, 0.206, 0.206, 0.202, 0.248, 0.24, 0.215, NA,
0.215, 0.214, 0.212, 0.238, 0.238, 0.241, NA, 0.243, 0.24, 0.235,
NA, NA, 0.23, NA, 0.208, 0.205, 0.199, NA, NA, 0.231, NA, 0.226,
0.225, 0.221, NA, NA, 0.213, NA, 0.205, 0.206, 0.203, NA, NA,
0.226, NA, 0.224, 0.23, 0.222, NA, NA, 0.174, NA, 0.168, 0.159,
0.155, NA, NA, 0.241, NA, 0.23, 0.231, 0.227, NA, NA, 0.218,
NA, 0.215, 0.213, 0.21, NA, NA, 0.237, NA, 0.23, 0.225, 0.227,
NA, NA, 0.22, NA, 0.218, 0.213, 0.208, NA, NA, 0.195, NA, 0.195,
0.195, 0.193, NA, NA, 0.22, NA, 0.215, 0.215, 0.21, NA, NA, 0.215,
NA, 0.212, 0.209, 0.209, NA, NA, 0.196, NA, 0.203, 0.198, 0.195,
NA, NA, 0.232, NA, 0.229, 0.233, 0.23, NA, NA, 0.216, NA, 0.22,
0.218, 0.215, NA, NA, 0.241, NA, 0.236, 0.239, 0.233, NA, NA,
0.196, NA, 0.198, 0.194, 0.192, NA, NA, 0.214, NA, 0.21, 0.207,
0.203, NA, NA, 0.209, NA, 0.209, 0.209, 0.203, NA, NA, 0.216,
NA, 0.21, 0.21, 0.207, NA, NA, 0.215, NA, 0.208, 0.212, 0.209,
NA, NA, 0.22, NA, 0.222, 0.222, 0.218, NA, NA, 0.203, NA, 0.203,
0.205, 0.2, NA, NA, 0.223, NA, 0.223, 0.223, 0.218, NA, NA, 0.165,
NA, 0.169, 0.163, 0.157, NA, NA, 0.226, NA, 0.227, 0.225, 0.22,
NA, NA, 0.211, NA, 0.208, 0.21, 0.213, NA, NA, 0.244, NA, 0.246,
0.244, 0.24, NA, NA, 0.214, NA, 0.215, 0.222, 0.217, NA, NA,
0.219, NA, 0.218, 0.219, 0.215, NA, NA, 0.218, NA, 0.218, 0.218,
0.214, NA, NA, 0.208, NA, 0.208, 0.21, 0.217, NA, NA, 0.198,
NA, 0.204, 0.202, 0.2, NA, NA, 0.212, NA, 0.212, 0.213, 0.21,
NA, NA, 0.219, NA, 0.219, 0.219, 0.215, NA, NA, 0.23, NA, 0.232,
0.235, 0.235, NA, NA, 0.207, NA, 0.211, 0.209, 0.205, NA, NA,
0.222, NA, 0.221, 0.225, 0.221, NA, NA, 0.224, NA, 0.22, 0.218,
0.215, NA, NA, 0.204, NA, 0.206, 0.214, 0.214, 0.188, 0.186,
0.188, NA, 0.191, 0.2, 0.197, 0.234, 0.234, 0.229, NA, 0.226,
0.217, 0.215, 0.216, 0.215, 0.213, NA, 0.215, 0.225, 0.221, 0.241,
0.238, 0.236, NA, 0.234, 0.234, 0.23)
Data <- matrix(Data, nrow = 88, byrow = TRUE)
n <- length(Data) - sum(is.na(Data))
y <- numeric(n)
rowy <- numeric(n)
coly <- numeric(n)
count = 0
for(i in 1:88)
for(j in 1:7){
if(!is.na(Data[i,j])){
count = count + 1
y[count] = Data[i,j]
rowy[count] <- i
coly[count] <- j
}
}
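# A vectorised equivalent of the loop above (a sketch; checks that the same
# row-major ordering the loop produces is recovered):
idx <- which(!is.na(Data), arr.ind = TRUE)
idx <- idx[order(idx[, 1], idx[, 2]), ] # which() is column-major; reorder row-major
stopifnot(all(y == Data[idx]), all(rowy == idx[, 1]), all(coly == idx[, 2]))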
time= c(9172, 10997, 11453 ,11515, 11613, 11779, 12072)
qua1=c(1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0,
1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0,
1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0,
1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0,
1, 0, 0, 0, 1, 0, 0, 0)
qua2=c(0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0,
0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0,
0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0,
0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0,
0, 1, 0, 0, 0, 1, 0, 0)
qua3=c(0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0,
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0,
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0,
0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0,
0, 0, 1, 0, 0, 0, 1, 0)
qua4=c(0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1,
0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1,
0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1,
0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1,
0, 0, 0, 1, 0, 0, 0, 1)
F3M2_Data <- list(N = 88, K =7, Y = y, n =n, Rowy = rowy, Coly = coly, T = time,
q1 = qua1, q2 = qua2, q3 = qua3, q4 = qua4)
library(rstan)
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
fit2 <- stan(file = "/Users/pro/Projects/Pipelines/Simple/Simple_Model2.stan", data = F3M2_Data, iter = 10000, control = list(max_treedepth = 12))
|
aab1ab22811314e962861441d25ca2f24121950c | 9ded8c1e3116b174bcc4a00b0fdab700f2f9ce3c | /tests/testthat/test_simulate.R | b26eac51095339812878bbaf6dc84273b9538d14 | [
"Apache-2.0"
] | permissive | njtierney/greta | 3bcf8d69b86caf555aac11924f3a48a1232d178c | 93aaf361c04591a0d20f73c25b6e6693023482fd | refs/heads/master | 2023-04-29T03:07:48.845793 | 2022-09-09T02:37:39 | 2022-09-09T02:37:39 | 303,113,354 | 3 | 0 | NOASSERTION | 2023-04-17T02:08:28 | 2020-10-11T12:14:11 | R | UTF-8 | R | false | false | 2,292 | r | test_simulate.R | test_that("simulate produces the right number of samples", {
skip_if_not(check_tf_version())
# fix variable
a <- normal(0, 1)
y <- normal(a, 1, dim = c(1, 3))
m <- model(y, a)
# should be vectors
sims <- simulate(m)
expect_equal(dim(sims$a), c(1, dim(a)))
expect_equal(dim(sims$y), c(1, dim(y)))
sims <- simulate(m, 17)
expect_equal(dim(sims$a), c(17, dim(a)))
expect_equal(dim(sims$y), c(17, dim(y)))
})
test_that("simulate uses the local RNG seed", {
skip_if_not(check_tf_version())
# fix variable
a <- normal(0, 1)
y <- normal(a, 1)
m <- model(y)
# the global RNG seed should change if the seed is *not* specified
before <- rng_seed()
sims <- simulate(m)
after <- rng_seed()
expect_false(identical(before, after))
# the global RNG seed should not change if the seed *is* specified
before <- rng_seed()
sims <- simulate(m, seed = 12345)
after <- rng_seed()
expect_identical(before, after)
# the samples should differ if the seed is *not* specified
one <- simulate(m)
two <- simulate(m)
expect_false(identical(one, two))
# the samples should differ if the seeds are specified differently
one <- simulate(m, seed = 12345)
two <- simulate(m, seed = 54321)
expect_false(identical(one, two))
# the samples should be the same if the seed is the same
one <- simulate(m, seed = 12345)
two <- simulate(m, seed = 12345)
expect_identical(one, two)
})
test_that("simulate errors if distribution-free variables are not fixed", {
skip_if_not(check_tf_version())
# fix variable
a <- variable()
y <- normal(a, 1)
m <- model(y)
expect_snapshot_error(
sims <- simulate(m)
)
})
test_that("simulate errors if a distribution cannot be sampled from", {
skip_if_not(check_tf_version())
# fix variable
y_ <- rhyper(10, 5, 3, 2)
y <- as_data(y_)
m <- lognormal(0, 1)
distribution(y) <- hypergeometric(m, 3, 2)
m <- model(y)
expect_snapshot_error(
sims <- simulate(m)
)
})
test_that("simulate errors nicely if nsim is invalid", {
skip_if_not(check_tf_version())
x <- normal(0, 1)
m <- model(x)
expect_snapshot_error(
simulate(m, nsim = 0)
)
expect_snapshot_error(
simulate(m, nsim = -1)
)
expect_snapshot_error(
simulate(m, nsim = "five")
)
})
|
320a5f7a4e0130d01b4862de9e132982559e5248 | 7f0dd4abff0670d2bf1cbdaa5fa84a3c15d399fa | /Sentiment/Code/sentiment_men.R | 49ab854e429f76a203b2243484b7c787c9df16a4 | [] | no_license | rosthalken/supreme-court-thesis | d68dab184ce96c907a8a4a6976fc154b0581bc11 | 0310d65a47969e79f560f7c74207d11c3f8ff62d | refs/heads/master | 2023-05-27T11:24:43.071827 | 2021-06-10T15:43:58 | 2021-06-10T15:43:58 | 166,492,132 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,169 | r | sentiment_men.R | sentiment_breyer <- group_by(sentiment_result, Author, Year) %>%
filter(Author == "Breyer") %>%
summarise(year_sent = mean(`Mean Sentiment`))
sentiment_kennedy <- group_by(sentiment_result, Author, Year) %>%
filter(Author == "Kennedy") %>%
summarise(year_sent = mean(`Mean Sentiment`))
sentiment_rehnquist <- group_by(sentiment_result, Author, Year) %>%
filter(Author == "Rehnquist") %>%
summarise(year_sent = mean(`Mean Sentiment`))
sentiment_souter <- group_by(sentiment_result, Author, Year) %>%
filter(Author == "Souter") %>%
summarise(year_sent = mean(`Mean Sentiment`))
sentiment_stevens <- group_by(sentiment_result, Author, Year) %>%
filter(Author == "Stevens") %>%
summarise(year_sent = mean(`Mean Sentiment`))
sentiment_scalia <- group_by(sentiment_result, Author, Year) %>%
filter(Author == "Scalia") %>%
summarise(year_sent = mean(`Mean Sentiment`))
sentiment_thomas <- group_by(sentiment_result, Author, Year) %>%
filter(Author == "Thomas") %>%
summarise(year_sent = mean(`Mean Sentiment`))
sentiment_male <- rbind(sentiment_breyer, sentiment_kennedy, sentiment_rehnquist, sentiment_souter, sentiment_stevens, sentiment_scalia, sentiment_thomas)
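# A more compact equivalent of the seven per-justice blocks above (a sketch;
# `male_justices` and `sentiment_male2` are names introduced here for
# illustration). Row order may differ from the rbind version.
male_justices <- c("Breyer", "Kennedy", "Rehnquist", "Souter", "Stevens", "Scalia", "Thomas")
sentiment_male2 <- sentiment_result %>%
  filter(Author %in% male_justices) %>%
  group_by(Author, Year) %>%
  summarise(year_sent = mean(`Mean Sentiment`))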
plotted_sentiment_male <- ggplot(data=sentiment_male, aes(x=Year, y=year_sent, group=Author)) +
geom_line(aes(linetype=Author)) +
geom_smooth() +
xlab("Year") +
ylab("Sentiment")
all_sentiment_male <- ggplot(data=sentiment_male, aes(x=Year, y=year_sent, group=Author)) +
geom_line(aes(linetype=Author)) +
geom_smooth(aes(colour = Author), se = FALSE)
smooth_sentiment_male <- ggplot(data=sentiment_male, aes(x=Year, y=year_sent, group=Author)) +
#geom_line(aes(linetype=Author)) +
geom_smooth(aes(colour = Author), se = FALSE) +
xlab("Year") +
ylab("Sentiment") +
ylim(-.05, .25) +
xlim(1970, 2020)
plotted_sentiment_male <- ggplot(data=sentiment_male, aes(x=Year, y=year_sent)) +
geom_line(aes(linetype=Author)) +
geom_smooth() +
xlab("Year") +
ylab("Sentiment")
smooth_sentiment_male <- ggplot(data=sentiment_male, aes(x=Year, y=year_sent)) +
#geom_line(aes(linetype=Author)) +
geom_smooth() +
xlab("Year") +
ylab("Sentiment")
|
1365d34e1cecd352e47ee791c3f208fbd1430535 | d53559c5eef84599fa694b7b4c7cbec0d763fbbe | /Code/lab2.R | e09bcdbe8e68d57e0532335809a544b140874062 | [] | no_license | BrandTruong/STATS10 | 75a408d18a4080578712ca06893643d0232c1c9d | d498a78adf6f85e124410a2ae293fc80e26c8994 | refs/heads/master | 2022-12-03T19:38:08.044056 | 2020-07-16T07:02:48 | 2020-07-16T07:02:48 | 279,477,067 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,197 | r | lab2.R | setwd("~/UCLA Coursework/STATS 10") #Restart
library(mosaic)
library(maps)
NCbirths <- read.csv('births.csv')
?Comparison
4 > 3
c(3,8) >=3
c(3,8) <= 3
c(1,4,9) == 9
c(1,4,9) != 9
c(3,8) >= c(3,10)
sum(NCbirths$weight > 100) #the number of babies that weighed more than 100 ounces
sum(NCbirths$weight > 100)/1992
mean(NCbirths$weight > 100) #the proportion of babies that weighed more than 100 ounces
mean(NCbirths$Gender == "Female") #the proportion of female babies
mean(NCbirths$Gender != "Male") #gives the proportion of babies not assigned male
NCbirths$weight[c(1,2,3)] #last class
fem_weights <-NCbirths$weight[NCbirths$Gender == "Female"]
fem_weights = NCbirths$weight[NCbirths$Gender == "Female"]
sum(fem_weights)
sub1_weights <- NCbirths$weight[NCbirths$Gender == "Female" & NCbirths$Premie == "No"]
sub2_weights <- NCbirths$weight[NCbirths$Gender == "Female" | NCbirths$Premie == "No"]
#Create an object with the baby weights from NCbirths
baby_weight <- NCbirths$weight
#Create an object with the baby genders from NCbirths
baby_gender <- NCbirths$Gender
#Create a logical vector to describe if the gender is female
is_female <-baby_gender == "Female"
# Create the vector of weights containing only females
fem_weights <-baby_weight[is_female]
#Exercise 1
#a
flint <- read.csv('flint.csv')
flint <- read.csv('~/UCLA Coursework/STATS 10/flint.csv')
head(flint) #testing to see if read properly
class(flint)
#b
library(mosaic)
dangerousPb_indicator = (flint$Pb >= 15)
tally(~dangerousPb_indicator, format="proportion")
mean(flint$Pb>=15)
#c
north_flint <- flint[flint$Region=="North",]
mean(north_flint$Cu)
#d
dangerousPb_flint <- flint[flint$Pb>=15,]
mean(dangerousPb_flint$Cu)
#e
mean(flint$Pb)
mean(flint$Cu)
#f
boxplot(x = flint$Pb, main="Lead Levels(PPB) in Flint", xlab="Flint Locations",ylab="PPB")
boxplot(flint$Pb~flint$Region, main="Lead Levels(PPB) between Flint Regions")
boxplot(NCbirths$Fage, NCbirths$Mage, names=c("Fage","Mage"), main="Father's and Mother's Ages")
#g
mean(flint$Pb)
median(flint$Pb)
#Exercise 2
#1
life<-read.table("http://www.stat.ucla.edu/~nchristo/statistics12/countries_life.txt", header=TRUE)
#a
plot(x=life$Income, y=life$Life,xlab="Income (per capita)",ylab="Life Expectancy (years)",main="Life Expectancy vs Income")
#Discussion of positive association; reference to income being important only up to a point; mention of inability to identify a causal relationship from the data
#b
hist(life$Income,xlab="Income",ylab="Frequency",main="Histogram of Income")
boxplot(life$Income,xlab="Countries",ylab="Income",main="Boxplot of Income")
summary(life$Income)
#Rubric: mention potential outliers
#c
below1k <- life[life$Income<1000,]
above1k <- life[life$Income>=1000,]
#d
plot(x=below1k$Income, y=below1k$Life,xlab="Income (below 1k)",ylab="Life Expectancy",main="Life Expectancy vs Income (below 1k)")
cor(below1k$Income, below1k$Life)
#Exercise 3
maas<-read.table("http://www.stat.ucla.edu/~nchristo/statistics12/soil.txt", header=TRUE)
#a
summary(maas$lead)
summary(maas$zinc)
#b
hist(maas$lead, xlab="Lead (ppm)", ylab="Frequency",main="Histogram of Lead")
hist(log(maas$lead),xlab="Lead (ppm)",main="Histogram of log(Lead)")
#c
plot(x=log(maas$zinc),y=log(maas$lead),ylab="log(lead)", xlab="log(zinc)", main="Log(lead) vs Log(zinc)")
#include discussion of linearity, symmetry, and equal variance
#d
lead_colors <- c("green","orange","red")
lead_levels <- cut(maas$lead, c(0,150,400,1000))
plot(maas$x,maas$y,xlab="x-coordinates",ylab="y-coordinates",main="Lead Concentrations along the Maas River",type="n")
points(maas$x,maas$y,cex=maas$lead/mean(maas$lead)/1.5,col=lead_colors[as.numeric(lead_levels)],pch=19)
#Exercise 4
LA <-read.table("http://www.stat.ucla.edu/~nchristo/statistics12/la_data.txt", header=TRUE)
library(maps)
#a
plot(x=LA$Longitude,y=LA$Latitude,ylab="Latitude",xlab="Longitude",main="LA School Locations",pch=19,col="red")
map("county", "california", add = TRUE)
#b
LA.subset <- LA[LA$Schools>0,]
cor(LA.subset$Schools,LA.subset$Income)
plot(x=LA.subset$Income, y=LA.subset$Schools, xlab="Income",ylab="School Performance",main="School Performance vs Income",pch=21,col="black")
#relationship: linear, moderate, positive; no implication of causation
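# Follow-up sketch: squaring the correlation above gives the share of variance
# in school performance associated with income for these tracts.
cor(LA.subset$Income, LA.subset$Schools)^2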
02892ad2e07999d297f66e0e9f3746875fcd52ae | d317f7e6a38bd252cfdf69e3846905c24e14f2fc | /man/dot-init_qc_dt.Rd | d7387a348c60539560d333dc3b039fbfa4326e91 | [
"MIT"
] | permissive | Yixf-Self/SampleQC | 208bb170884c6c9dbc1596a277186abd5a43d548 | 82f4483eafdaac93c17710bf605e147ad4611f0c | refs/heads/master | 2023-07-28T06:09:25.504772 | 2021-08-29T14:05:10 | 2021-08-29T14:05:10 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 398 | rd | dot-init_qc_dt.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/make_qc_dt.R
\name{.init_qc_dt}
\alias{.init_qc_dt}
\title{Initializes qc_dt object}
\usage{
.init_qc_dt(qc_df, sample_var)
}
\arguments{
\item{qc_df}{input data}
\item{sample_var}{which column of df has sample labels? (e.g. sample, group,
batch, library)}
}
\description{
Initializes qc_dt object
}
\keyword{internal}
|
10c75681aea687f3f77a79b25f392aab9f650770 | 4cb5426e8432d4af8f6997c420520ffb29cefd3e | /P37.R | e29af027399da2621c15910f7edb744bf68a2655 | [
"CC0-1.0"
] | permissive | boyland-pf/MorpheusData | 8e00e43573fc6a05ef37f4bfe82eee03bef8bc6f | 10dfe4cd91ace1b26e93235bf9644b931233c497 | refs/heads/master | 2021-10-23T03:47:35.315995 | 2019-03-14T21:30:03 | 2019-03-14T21:30:03 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,218 | r | P37.R | # making table data sets
library(dplyr)
library(tidyr)
library(MorpheusData)
#############benchmark 37
dat <- read.table(text=
"
gear am n
3 0 15
4 0 4
4 1 8
3 1 5
", header=T)
write.csv(dat, "data-raw/p37_input1.csv", row.names=FALSE)
df_out = dat %>%
mutate(percent = n / sum(n)) %>%
gather(variable, value, n, percent) %>%
unite("new_variable", am, variable) %>%
spread(new_variable, value)
write.csv(df_out, "data-raw/p37_output1.csv", row.names=FALSE)
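# Hand-checked sketch of what the reshape above produces (sum(n) is 32, so
# each percent is n/32); df_out should look like:
#   gear  0_n 0_percent 1_n 1_percent
#      3   15   0.46875   5   0.15625
#      4    4   0.12500   8   0.25000
stopifnot(all.equal(sum(dat$n), 32))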
p37_output1 <- read.csv("data-raw/p37_output1.csv", check.names = FALSE)
fctr.cols <- sapply(p37_output1, is.factor)
int.cols <- sapply(p37_output1, is.integer)
p37_output1[, fctr.cols] <- sapply(p37_output1[, fctr.cols], as.character)
p37_output1[, int.cols] <- sapply(p37_output1[, int.cols], as.numeric)
save(p37_output1, file = "data/p37_output1.rdata")
p37_input1 <- read.csv("data-raw/p37_input1.csv", check.names = FALSE)
fctr.cols <- sapply(p37_input1, is.factor)
int.cols <- sapply(p37_input1, is.integer)
p37_input1[, fctr.cols] <- sapply(p37_input1[, fctr.cols], as.character)
p37_input1[, int.cols] <- sapply(p37_input1[, int.cols], as.numeric)
save(p37_input1, file = "data/p37_input1.rdata")
|
5ee94ffeacbd9ba1e356ff77ab64e98e5deab17f | 706bbe374869615eca6f2cfe1c576fdd304057a4 | /InSilicoVA/man/indivplot.Rd | f844a212381f5afabcc190e100fa1ded9e784a0a | [] | no_license | verbal-autopsy-software/InSilicoVA | 45213f56834cc6d8467763f76a5288db59cd6117 | 9a2eb1750a050ac29ce35ad9825b8cc3ad5a022c | refs/heads/master | 2023-04-26T21:59:30.121905 | 2023-04-19T05:25:57 | 2023-04-19T05:25:57 | 31,554,655 | 4 | 6 | null | 2019-08-21T19:08:21 | 2015-03-02T18:02:43 | R | UTF-8 | R | false | true | 4,460 | rd | indivplot.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/indivplot.r
\name{indivplot}
\alias{indivplot}
\title{plot aggregated COD distribution}
\usage{
indivplot(
x,
type = c("errorbar", "bar")[1],
top = 10,
causelist = NULL,
which.plot = NULL,
xlab = "Causes",
ylab = "COD distribution",
title = "COD distributions for the top causes",
horiz = TRUE,
angle = 60,
fill = "lightblue",
err_width = 0.4,
err_size = 0.6,
point_size = 2,
border = "black",
bw = FALSE,
...
)
}
\arguments{
\item{x}{object from \code{get.indiv} function.}
\item{type}{An indicator of the type of chart to plot: "errorbar" for line
plots of only the error bars on a single population; "bar" for a bar chart
with error bars on a single population.}
\item{top}{The number of top causes to plot. If multiple sub-populations are
to be plotted, it will plot the union of the top causes in all
sub-populations.}
\item{causelist}{The list of causes to plot. It could be a numeric vector
indicating the position of the causes in the InterVA cause list (see
\code{\link{causetext}}), or a vector of character string of the cause
names. The argument supports partial matching of the cause names. e.g.,
"HIV/AIDS related death" could be abbreviated into "HIV"; "Other and
unspecified infect dis" could be abbreviated into "Other and unspecified
infect".}
\item{which.plot}{Specification of which group to plot if there are
multiple.}
\item{xlab}{Labels for the causes.}
\item{ylab}{Labels for the CSMF values.}
\item{title}{Title of the plot.}
\item{horiz}{Logical indicator indicating if the bars are plotted
horizontally.}
\item{angle}{Angle of rotation for the texts on x axis when \code{horiz} is
set to FALSE}
\item{fill}{The color to fill the bars when \code{type} is set to "bar".}
\item{err_width}{Size of the error bars.}
\item{err_size}{Thickness of the error bar lines.}
\item{point_size}{Size of the points.}
\item{border}{The color to color the borders of bars when \code{type} is set
to "bar".}
\item{bw}{Logical indicator for setting the theme of the plots to be black
and white.}
\item{\dots}{Not used.}
}
\description{
Produce a bar plot of the aggregated COD distribution as approximate CSMFs for a fitted \code{"insilico"} object.
}
\examples{
\dontrun{
# Toy example with 1000 VA deaths
data(RandomVA1)
fit1<- insilico(RandomVA1, subpop = NULL,
Nsim = 1000, burnin = 500, thin = 10 , seed = 1,
auto.length = FALSE)
summary(fit1, id = "d199")
# update credible interval for individual probabilities to 90\%
indiv.new <- get.indiv(fit1, CI = 0.9)
fit1$indiv.prob.lower <- indiv.new$lower
fit1$indiv.prob.upper <- indiv.new$upper
fit1$indiv.CI <- 0.9
summary(fit1, id = "d199")
# get empirical aggregated COD distribution
agg.csmf <- get.indiv(data = RandomVA2, fit1, CI = 0.95,
is.aggregate = TRUE, by = NULL)
head(agg.csmf)
# aggregate individual COD distribution by sex and age
# note the model was fitted assuming the same CSMF for all deaths
# this aggregation provides an approximate CSMF for each sub-groups
agg.by.sex.age <- get.indiv(data = RandomVA2, fit1, CI = 0.95,
is.aggregate = TRUE, by = list("sex", "age"))
head(agg.by.sex.age$mean)
# plot of aggregated individual COD distribution
# 0. plot for all data
indivplot(agg.csmf, top = 10)
# 1. plot for specific one group
indivplot(agg.by.sex.age, which.plot = "Men 60-", top = 10)
# 2. comparing multiple groups
indivplot(agg.by.sex.age, which.plot = list("Men 60+", "Men 60-"),
top = 5)
# 3. comparing multiple groups on selected causes
indivplot(agg.by.sex.age, which.plot = list("Men 60-", "Women 60-"),
top = 0, causelist = c(
"HIV/AIDS related death",
"Pulmonary tuberculosis",
"Other and unspecified infect dis",
"Other and unspecified NCD"))
}
}
\references{
Tyler H. McCormick, Zehang R. Li, Clara Calvert, Amelia C. Crampin,
Kathleen Kahn and Samuel J. Clark Probabilistic cause-of-death assignment
using verbal autopsies, \emph{Journal of the American Statistical
Association} (2016), 111(515):1036-1049.
}
\seealso{
\code{\link{insilico}}, \code{\link{summary.insilico}}
}
\author{
Zehang Li, Tyler McCormick, Sam Clark
Maintainer: Zehang Li <lizehang@uw.edu>
}
\keyword{InSilicoVA}
|
336018609d973741cde3195a649f3693ca323985 | 507c8cd029a0429db398b266f7b211b8a206747d | /server.R | 59ed30851ee53d9855cfa4411c84f185ea903cd9 | [] | no_license | mythreyis/BirthRate-Shiny-Project | bed092b8d1aa2dbd981693b41305a378225f262a | 14e8b767e4961dbf8124c9bc942320811875ca23 | refs/heads/master | 2021-01-10T06:57:56.620302 | 2015-12-21T20:15:26 | 2015-12-21T20:15:26 | 48,390,892 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,006 | r | server.R | library(shiny)
source('global.R')
birth.rate.df <- getTransformedBirthRateDF()
states <- unique(birth.rate.df$State.Name)
label.value <- as.character(sort(unique(birth.rate.df$fillKey)))
shinyServer(function(input, output) {
##Check boxes for states
output$states <- renderUI({
checkboxGroupInput('states', 'States', states, selected=c('Alaska', 'Vermont'))
})
##HTML Texts
output$mapText=renderText({paste("Birth Rate for Year ", input$Year)})
output$chartText=renderText({"Birth Rate changes in individual states"})
output$tableText=renderText({"Source Data"})
##Data Table
output$birth.rate.table <- renderDataTable(getBirthRateDF())
##NVD3 Line Chart
output$chart <- renderChart({
createLineChart(birth.rate.df %>% filter(State.Name %in% input$states))
})
##Choropleth using rMaps
output$map = renderChart({
createAnimatedMap(
Birth.Rate~State,
data = birth.rate.df[birth.rate.df$Year==input$Year,],
label.value
)
})
})
|
39505bada513473c9105c41a56f42a227453674a | 6d1747234e372917032452fddb94636c28b0122b | /Summary/src/paraphrase/R_Script/TDM.R | 4d687c7e31edf9dc16a460fa87ca60fb8880ecb7 | [] | no_license | amitamisra/Summary_Dialogs | 869e66f26c83fc627546c85d2517245f85e742c3 | e0257bac31067a78f3225cc81b6b02e65e2bdd89 | refs/heads/master | 2020-04-06T04:32:31.736617 | 2015-10-28T19:51:37 | 2015-10-28T19:51:37 | 19,559,994 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,615 | r | TDM.R | options( stringsAsFactors=F )
library(NLP,lib.loc="/Users/amita/software/Rpackage")
library(SnowballC,lib.loc="/Users/amita/software/Rpackage")
library(tm,lib.loc="/Users/amita/software/Rpackage") #load text mining library
library(MASS,lib.loc="/Users/amita/software/Rpackage")
library(stringi,lib.loc="/Users/amita/software/Rpackage")
setwd("~/git/summary_repo/Summary/src/paraphrase/data/gay-rights-debates/") #sets R's working directory to near where my files are
filelist=list.files("TDM_Dir/TDM_Inp")
for (inputfile in filelist)
{
print(inputfile)  # bare variable names are not auto-printed inside a loop
inpfile = paste("TDM_Dir/TDM_Inp/", inputfile, sep="")
print(inpfile)
SCU_df<-read.csv(file=inpfile,head=TRUE,sep=",")
SCU_df[1]
SCU_df[2]
map<-list(content="SCU",id="id")
myReader <- readTabular(mapping = map)
scu_Corpus <- VCorpus(DataframeSource(SCU_df), readerControl = list(reader = myReader))
scu_Corpus
meta(scu_Corpus[[3]])
scu_Corpus<- tm_map(scu_Corpus , stripWhitespace)
scu_Corpus<- tm_map(scu_Corpus, content_transformer(tolower))
scu_Corpus <- tm_map(scu_Corpus, removeWords, c(stopwords("english"),"s1","s2")) # this stopword file is at C:\Users\[username]\Documents\R\win-library\2.13\tm\stopwords
scu_dtm_tf <-DocumentTermMatrix(scu_Corpus)
scu_dtm_tfidf <- DocumentTermMatrix(scu_Corpus, control = list(weighting = weightTfIdf)) # the tm control option is 'weighting', not 'weight'
dim(scu_dtm_tf)
scu_df_tf<-as.matrix(scu_dtm_tf)
outfile_tf=paste("TDM_Dir/TDM_TF/",inputfile, sep="")
scu_df_tfidf<-as.matrix(scu_dtm_tfidf)
outfile_tfidf=paste("TDM_Dir/TDM_TFIDF/",inputfile, sep="")
write.csv(scu_df_tf, outfile_tf)
write.csv(scu_df_tfidf, outfile_tfidf)
} |
462558741a5d9789eddf3902c8a8c140f690f12c | 204c22c9812c35535577c4a36472a00a192b8272 | /R/verify.R | 9f7dd574a0e513127a0e3e19b9aff1ccd29b4736 | [] | no_license | Selbosh/scrooge | 3e22904e31c651e15ead8869150dbd7db69f00f3 | d88581aede16ac297e31378553cad21e1257f3e2 | refs/heads/master | 2021-01-11T06:07:19.696466 | 2018-02-14T13:42:54 | 2018-02-14T13:42:54 | 68,119,899 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,346 | r | verify.R | #' @title Compare two vectors numerically
#'
#' @description
#' Check if all elements of one vector are equal, larger or smaller than those of another vector,
#' allowing for errors in floating-point arithmetic.
#'
#' @details
#' \code{\%==\%} checks for (approximate) equality,
#' \code{\%>=\%} tests if `a` is greater than or equal to `b` and
#' \code{\%<=\%} tests the reverse.
#' A \code{\%<\%} or \code{\%>\%} operator would be redundant
#' (and conflict with `magrittr`).
#'
#' Be aware that some binary operators, such as \code{`/`} take precedence,
#' so make sure to wrap \code{a} and \code{b} in brackets where appropriate.
#' Use \code{verify(a, b)} for a conventional prefix operator.
#'
#' @param a the first vector to be compared
#' @param b the second vector to be compared
#'
#' @return `TRUE` or `FALSE` or `NA`
#'
#' @examples
#' 0.333333 %==% (1/3)
#' 0.999999 %<=% 1
#' -1e-16 %>=% 0
#' verify(pi, 3.141592654)
#'
#' @seealso \code{\link{all.equal}} \code{\link{Comparison}} \code{\link{identical}}
#'
#' @rdname verify
#' @export
verify <- function(a, b) all(abs(a - b) < .Machine$double.eps^.5)
#' @rdname verify
#' @export
`%==%` <- verify
#' @rdname verify
#' @export
`%>=%` <- function(a, b) all(a > b - .Machine$double.eps^.5)
#' @rdname verify
#' @export
`%<=%` <- function(a, b) all(a < b + .Machine$double.eps^.5)
|
cf47e6264ad43c2d07e7cad7d134d0d2ac3e6855 | f99a7a80e97d5287a86b196b065ba9bea15cb0d6 | /man/word_count.Rd | c28912f49abba3f544c064a3a36171effb7b17b8 | [
"MIT"
] | permissive | revelunt/pdfcount | befbf913c68c96f209283265a7e11827359d7130 | 6ffe468a18c289bcc6b9815b8d63aa6efed1608e | refs/heads/master | 2020-08-19T06:33:30.639965 | 2018-08-30T02:40:14 | 2018-08-30T02:40:14 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 2,778 | rd | word_count.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/word_count.R
\name{word_count}
\alias{word_count}
\title{Word Count a PDF}
\usage{
word_count(document, pages = NULL, count_numbers = TRUE,
count_captions = FALSE, count_equations = FALSE,
split_hyphenated = FALSE, split_urls = FALSE,
verbose = getOption("verbose", FALSE))
}
\arguments{
\item{document}{A file path specifying a PDF document.}
\item{pages}{Optionally, an integer vector specifying a subset of pages to count from. Negative values serve as negative subsets.}
\item{count_numbers}{A logical specifying whether to count numbers as words.}
\item{count_captions}{A logical specifying whether to count lines beginning with \dQuote{Table} or \dQuote{Figure} in word count.}
\item{count_equations}{A logical specifying whether to count lines ending with \dQuote{([Number])} in word count.}
\item{split_hyphenated}{A logical specifying whether to split hyphenated words or expressions as separate words.}
\item{split_urls}{A logical specifying whether to split URLs into multiple words when counting.}
\item{verbose}{A logical specifying whether to be verbose. If \code{TRUE}, the page and word counts are printed to the console and the result is returned invisibly. If \code{FALSE}, the result is visible.}
}
\value{
A data frame with two columns, one specifying page and the other specifying word count for that page.
}
\description{
Obtain a Word Count from a PDF
}
\details{
This is useful for obtaining a word count for a LaTeX-compiled PDF. Counting words in the tex source is a likely undercount (due to missing citations, cross-references, and parenthetical citations). Counting words from the PDF is likely an overcount (due to hyphenation issues, URLs, ligatures, tables and figures, and various other things). This function tries to obtain a word count from the PDF while accounting for some of the sources of overcounting.
It is often desirable to have word counts excluding tables and figures. A solution on TeX StackExchange (\url{https://tex.stackexchange.com/a/352394/30039}) provides guidance on how to exclude tables and figures (or any arbitrary LaTeX environment) from a compiled document, which may be useful before attempting to word count the PDF.
}
\examples{
\dontrun{
# "R-intro.pdf" manual
rintro <- file.path(Sys.getenv("R_HOME"), "doc", "manual", "R-intro.pdf")
# Online service at http://www.montereylanguages.com/pdf-word-count-online-free-tool.html
# claims the word count to be 36,530 words
# Microsoft Word (PDF conversion) word count is 36,869 words
word_count(rintro) # all pages (105 pages, 37870 words)
word_count(rintro, 1:3) # pages 1-3
word_count(rintro, -1) # skip first page
}
}
\author{
Thomas J. Leeper <thosjleeper@gmail.com>
}
|
cc170e27cd09474c706ee9555ff8984c459dcd6b | 7f6b92db5250b7939431fbd073cb9e40cbc7c5ad | /tests/testthat.R | 49606af1ccbe2d11ae9ec2e3d7f9fe8f79305f45 | [] | no_license | kevinbenac/clusterExperiment | 30af008e8680907d7bd6095f4a8b535cc3ca1668 | fb70816bfea3f0eda13508c38d4a3caedfb993e9 | refs/heads/master | 2020-03-11T16:51:29.042619 | 2018-04-17T15:09:08 | 2018-04-17T15:09:08 | 130,130,171 | 0 | 0 | null | 2018-04-18T22:45:40 | 2018-04-18T22:45:40 | null | UTF-8 | R | false | false | 78 | r | testthat.R | library(testthat)
library(clusterExperiment)
test_check("clusterExperiment")
|
20e43ee5d46173d1fd0ad536cd51b7daf03df2bc | 4fac49c14a9d87097d63ae3c0c42495f9bb98b46 | /main/ICML_diagram.R | 2f8f82b4e167b47b335f0d6f651787ea9b3c8cd9 | [
"MIT"
] | permissive | Mr8ND/TC-prediction-bands | 2765a5a3ea6240db100e9220c785237364b32acc | 92d469264c18c5c122e1dc1fe3440fccd2dd6aa4 | refs/heads/master | 2021-06-04T19:09:02.482740 | 2020-03-06T21:31:51 | 2020-03-06T21:31:51 | 69,694,073 | 6 | 1 | MIT | 2020-03-05T00:20:35 | 2016-09-30T19:02:23 | R | UTF-8 | R | false | false | 2,948 | r | ICML_diagram.R | library(TCpredictionbands)
library(gridExtra)
library(tidyverse)
image_path <- "/Users/benjaminleroy/Documents/CMU/research/TC-prediction-bands/report/images/" # note: redefined with a relative path further below
my_pipeline <- TCpredictionbands::sample_output_pipeline
my_sim <- TCpredictionbands::sample_sim
my_tc_length <- nrow(TCpredictionbands::sample_tc)
my_tc <- TCpredictionbands::sample_tc %>%
mutate(col = factor(c(rep(0,3), rep(1, my_tc_length - 3))))
my_tc_name <- TCpredictionbands::sample_tc_name
# base map
sim_process <- TCpredictionbands::data_plot_paths_basic(test_list = my_sim)
latrange <- range(sim_process$lat, my_tc$lat) + c(-4,4)
longrange <- range(sim_process$long, my_tc$long) + c(-4,4)
longrange[2] <- -20
latrange[2] <- 65
ocean <- c(left = longrange[1], bottom = latrange[1],
right = longrange[2], top = latrange[2])
map <- ggmap::get_stamenmap(ocean, zoom = 4, maptype = "toner-lite")
base_graph <- ggmap::ggmap(map)
# zoom base
latrange_z <- range(my_tc$lat[1:3]) + c(-1.9,1.9)
longrange_z <- range(my_tc$long[1:3]) + c(-1.9,1.9)
ocean_z <- c(left = longrange_z[1], bottom = latrange_z[1],
right = longrange_z[2], top = latrange_z[2])
map_z <- ggmap::get_stamenmap(ocean_z, zoom = 8, maptype = "toner-lite")
base_graph_zoom <- ggmap::ggmap(map_z)
# visual true TC
first <- base_graph +
ggplot2::geom_path(data = my_tc,
ggplot2::aes_string(
x = 'long', y = 'lat',
linetype = 'col', color = 'col')) +
ggplot2::scale_color_manual(values = c("red", "black"))
first_zoom <- base_graph_zoom +
ggplot2::geom_path(data = my_tc,
ggplot2::aes_string(
x = 'long', y = 'lat',
linetype = 'col', color = 'col'),
size = 1.5) +
ggplot2::scale_color_manual(values = c("red", "black"))
# visual simulated curves
second <- ggvis_paths(TCpredictionbands::sample_sim,
base_graph = base_graph,
alpha = .05)
db_pb <- TCpredictionbands::delta_structure(data_list = sample_sim,
alpha = .1,
verbose = TRUE)
#names(db_pb$structure) <- c("long", "lat", "idx")
third <- ggvis_delta_ball_contour(db_pb$structure,base_graph = first,
color = "purple")
third <- third + aes(size = "a") + scale_size_manual(values = 1.5)
noleg <- theme(legend.position = "none")
image_path <- "report/images/"
ggsave(first_zoom + noleg + labs(x ="",y=""),
filename = paste0(image_path,"pipeline1_bigger.png"),
width = 6, height = 6)
ggsave(second + noleg + labs(x ="",y=""),
filename = paste0(image_path,"pipeline2.png"),
width = 6, height = 6)
ggsave(third + noleg + labs(x ="",y=""),
filename = paste0(image_path,"pipeline3_bigger.png"),
width = 6, height = 6)
# first 15 days of AL032009
|
d17bb87499dda29a48e395675f2a23ae0450290c | fde9c70b67e2ea092f0a3966c8b098b08ad0ffcc | /man/labZone0.Rd | b82dbdacc39dcf9c7b14056c4e33f1aea0bbfb01 | [] | no_license | hazaeljones/geozoning | 41d215022b34e6944e4ba7395dc0778c2c49ba48 | c8310ca97a775c4d55807eb3ac3ab1ae73da5334 | refs/heads/master | 2021-01-20T12:37:46.187798 | 2018-02-23T09:44:47 | 2018-02-23T09:44:47 | 90,385,766 | 3 | 0 | null | null | null | null | UTF-8 | R | false | true | 934 | rd | labZone0.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/calNei.R
\name{labZone0}
\alias{labZone0}
\title{labZone0}
\usage{
labZone0(K, qProb, dataF)
}
\arguments{
\item{K}{zoning object, as returned by the calNei function}
\item{qProb}{probability vector used to generate quantile values for Z}
\item{dataF}{data used to generate labels and zoning}
}
\value{
a zoning object with labelled zones in lab component
}
\description{
labZone0
}
\details{
assigns a class label (integer) to a zone depending on the zone mean value
and on the quantile values. Default label is 1, corresponding to a mean value smaller than or equal to the first quantile. For k ordered quantile values, if the mean value is greater than quantile k plus 10% of the data range, the zone label is k.
}
\examples{
data(mapTest)
dataF=mapTest$krigGrid
data(resZTest)
K=resZTest
p = K$qProb
geozoning:::labZone0(K,p,dataF)
# not run
}
\keyword{internal}
|
c50c6d8a6b85b5278910ebc6964e2d8cc7cee8e7 | d37bbaff9ade08b1faa20b02d901a5ed33fa2c11 | /gitlabCI/build.R | 02b6295e9c5192df20fb65a256a806646fce0690 | [] | no_license | linogaliana/pocker | ecdeafd0e95a54fd7cd56e1fb7f7ed8bee9a79d4 | ea9f2ce499c02c54c09d4c8a464579ec12f76658 | refs/heads/master | 2020-06-16T08:22:48.583644 | 2019-07-11T09:05:35 | 2019-07-11T09:05:35 | 195,522,642 | 11 | 1 | null | null | null | null | UTF-8 | R | false | false | 1,390 | r | build.R |
print("=============================================")
print("CHECKING RETICULATE WORKS FINE")
# First, check it works in R files
# ---------------------------------------
Sys.setenv(RETICULATE_PYTHON = "/opt/conda/bin/python") # should point at the interpreter itself, not its directory
print(" ---------- PYTHON PATH IN RSESSION:")
print(Sys.which("python"))
print(reticulate::py_config())
### METHOD 1: INTERACTIVE SESSION
print(" ---------- CHECK 1: OPEN PYTHON INTERPRETER INSIDE RSESSION")
reticulate::repl_python()
import numpy as np
import datetime
import statistics
def test(n = 1000):
start = datetime.datetime.now()
x = np.random.uniform(0,1,n)
for i in range(1,n):
x[i] += x[i-1]
end = datetime.datetime.now()
c = end - start
return(c.microseconds/1000)
exec_time_python = [test(100000) for k in range(100)]
print("Time for a cumulative sum over a vector of size 1000 (milliseconds):")
print(statistics.median(exec_time_python))
quit
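# For comparison: the same cumulative sum is vectorised in R via cumsum();
# this timing line is just a rough sketch, not part of the original CI checks.
exec_time_r <- system.time(cumsum(runif(100000)))["elapsed"] * 1000
print(paste("R cumulative sum over 100000 values (milliseconds):", exec_time_r))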
### METHOD 2: SOURCING FILE
print(" ---------- CHECK 2: SOURCE PYTHON FILES INSIDE R")
reticulate::source_python("./gitlabCI/scripts/simple_script.py")
### METHOD 3: RMARKDOWN
print(" ---------- CHECK 3: EXECUTE PYTHON INSIDE RMARKDOWNS")
f = list.files(getwd(), 'Rmd$', full.names = TRUE, recursive = TRUE)
o = sapply(f, function(f) rmarkdown::render(f, output_options = list(self_contained = TRUE)))
dir.create('html')
copied = file.copy(o, 'html')
stopifnot(all(copied)) |
22fa9b210d2b54b7c6edf373804af51721804186 | 77b6871601794e5cc2909cd56b043bca381585a3 | /man/baseSVM.Rd | e02b8c5275bc2c481d767356ec19691e526e1ddc | [] | no_license | transbioZI/BioMM | 6fda6ca052b129f52af6d89bd23fb520807be830 | 2a020af6b477ac590d7a49e08339b03f52dd9364 | refs/heads/master | 2023-01-08T01:56:14.996462 | 2023-01-03T01:56:22 | 2023-01-03T01:56:22 | 164,238,983 | 19 | 1 | null | null | null | null | UTF-8 | R | false | true | 2,560 | rd | baseSVM.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/BioMM.R
\name{baseSVM}
\alias{baseSVM}
\title{Prediction by SVM}
\usage{
baseSVM(
trainData,
testData,
predMode = c("classification", "probability", "regression"),
paramlist = list(tuneP = TRUE, kernel = "radial", gamma = 10^(-3:-1), cost =
10^(-2:2))
)
}
\arguments{
\item{trainData}{The input training dataset. The first column
is the label or the output. For binary classes,
0 and 1 are used to indicate the class member.}
\item{testData}{The input test dataset. The first column
is the label or the output. For binary classes,
0 and 1 are used to indicate the class member.}
\item{predMode}{The prediction mode. Available options are
c('classification', 'probability', 'regression').}
\item{paramlist}{A set of model parameters defined in an R list object.
The valid option: list(kernel, gamma, cost, tuneP).
\enumerate{
\item 'tuneP': a logical value indicating if hyperparameter tuning
should be conducted or not. The default is FALSE.
\item 'kernel': options are c('linear', 'polynomial', 'radial',
'sigmoid'). The default is 'radial'.
\item 'gamma': the parameter needed for all kernels except 'linear'.
If tuneP is TRUE, more than one value is suggested.
\item 'cost': the cost of constraint violation.
If tuneP is TRUE, more than one value is suggested.
}}
}
\value{
The predicted output for the test data.
}
\description{
Prediction by support vector machine (SVM) with two different settings:
'classification' and 'regression'.
}
\details{
Hyperparameter tuning is recommended in many biological data
mining applications. The best parameters can be determined via an internal
cross validation.
}
\examples{
## Load data
methylfile <- system.file('extdata', 'methylData.rds', package='BioMM')
methylData <- readRDS(methylfile)
dataY <- methylData[,1]
## select a subset of genome-wide methylation data at random
methylSub <- data.frame(label=dataY, methylData[,c(2:2001)])
trainIndex <- sample(nrow(methylSub), 12)
trainData = methylSub[trainIndex,]
testData = methylSub[-trainIndex,]
library(e1071)
predY <- baseSVM(trainData, testData,
predMode='classification',
paramlist=list(tuneP=FALSE, kernel='radial',
gamma=10^(-3:-1), cost=10^(-3:1)))
testY <- testData[,1]
accuracy <- classifiACC(dataY=testY, predY=predY)
print(accuracy)
}
\seealso{
\code{\link[e1071]{svm}}
}
\author{
Junfang Chen
}
|
efdd398684db01f852da19d0a049d1e1eeb6b64b | 7d05922c40fe074a089bc88e5a16878825e0c6ce | /caret.R | b611cc4d1aaf74be5b47168f845f1774e42f57a7 | [
"MIT"
] | permissive | prem-pandian/algos | 79c273782273fc744d0e89e6c23cdffa331a3c05 | 4bdddfc8f83aae7438eae9e13d5aa47970a28711 | refs/heads/master | 2021-01-15T22:28:56.686522 | 2014-07-08T21:58:47 | 2014-07-08T21:58:47 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,296 | r | caret.R | +-+-+-+-+ +-+-+-+-+-+-+-+
# |P|r|e|m| |P|a|n|d|i|a|n|
# +-+-+-+-+ +-+-+-+-+-+-+-+
# ::::::::::::::::::::::::::::::::::::::::
#   Classification Models using the caret package
# ::::::::::::::::::::::::::::::::::::::::
# Six Different Models
# Models:
#  1. KNN3
#  2. LDA
#  3. NNET
#  4. NB
#  5. RPART
#  6. SVM
# Package: caret
# ::::::::::::::::::::::::::::::::::::::::
#Install Packages
install.packages(c("doMC","caret", "klaR", "nnet", "rpart", "e1071"))
#Load Libraries
library(doMC)
library(caret)
library(MASS)
library(klaR)
library(nnet)
library(e1071)
library(rpart)  # rpart() is called below, so this needs to be loaded
#register multi-core backend
registerDoMC(cores=4)
#how many workers do we have?
getDoParWorkers()
# read in the merged training data set
qTrain <- read.csv(file="TrainMerged_out.csv")
#Factor Variables
qTrainMerged <- qTrain[,c(1,2,4,5,6,7,8,9,10,11,12,13,15,16,17,18,3)]
qTrainMerged$Store <- as.factor(qTrainMerged$Store)
qTrainMerged$Dept <- as.factor(qTrainMerged$Dept)
qTrainMerged$IsHoliday <- as.factor(qTrainMerged$IsHoliday)
qTrainMerged$Month <- as.factor(qTrainMerged$Month)
qTrainMerged$Week <- as.factor(qTrainMerged$Week)
x <- qTrainMerged
#helper function to calculate the missclassification rate
posteriorToClass <- function(predicted) {
colnames(predicted$posterior)[apply(predicted$posterior,
MARGIN=1, FUN=function(x) which.max(x))]
}
missclassRate <- function(predicted, true) {
confusionM <- table(true, predicted)
n <- length(true)
tp <- sum(diag(confusionM))
(n - tp)/n
}
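# Quick hand-checked example of missclassRate(): with true = c("a","a","b")
# and predicted = c("a","b","b"), the confusion matrix has 2 correct
# predictions on its diagonal, so the rate is (3 - 2) / 3 = 1/3.
stopifnot(all.equal(missclassRate(predicted=c("a","b","b"), true=c("a","a","b")), 1/3))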
#evaluation function which randomly selects 10% for testing
#and the rest for training and then creates and evaluates
#all models.
evaluation <- function() {
#10% for testing
testSize <- floor(nrow(x) * 10/100)
test <- sample(1:nrow(x), testSize)
train_data <- x[-test,]
test_data <- x[test, -5]   # assumes the class label is in column 5 (true for iris; check before reusing with other data)
test_class <- x[test, 5]
#create model
model_knn3 <- knn3(Species~., data=train_data)
model_lda <- lda(Species~., data=train_data)
model_nnet <- nnet(Species~., data=train_data, size=10)
model_nb <- NaiveBayes(Species~., data=train_data)
model_svm <- svm(Species~., data=train_data) # response kept consistent with the other models in this template
model_rpart <- rpart(Species~., data=train_data)
#prediction
predicted_knn3 <- predict(model_knn3 , test_data, type="class")
predicted_lda <- posteriorToClass(predict(model_lda , test_data))
predicted_nnet <- predict(model_nnet, test_data, type="class")
predicted_nb <- posteriorToClass(predict(model_nb, test_data))
predicted_svm <- predict(model_svm, test_data)
predicted_rpart <- predict(model_rpart, test_data, type="class")
predicted <- list(knn3=predicted_knn3, lda=predicted_lda,
nnet=predicted_nnet, nb=predicted_nb, svm=predicted_svm,
rpart=predicted_rpart)
#calculate missclassifiaction rate
sapply(predicted, FUN=
function(x) missclassRate(true= test_class, predicted=x))
}
#now we run the evaluation
runs <- 10 # several runs so that foreach(.combine=rbind) returns a matrix for colMeans() below
#run parallel on all cores (with %dopar%)
ptime <- system.time({
pr <- foreach(1:runs, .combine = rbind) %dopar% evaluation()
})
#compare results
r <- rbind(parallel=colMeans(pr))
#plot results
cols <- gray(c(.4,.8))
barplot(r, beside=TRUE, main="Avg. Miss-Classification Rate", col=cols)
legend("topleft", rownames(r), col=cols, pch=15)
|
66e0b3a3cb1d638c89e4671b93b8f4385c075004 | 8b9df50a6c903733c53592cf43b8a4f38d4c4338 | /04_Tasks_4.3.r | c2eb6abc40bccbbff446391c50c9a7184253a3b8 | [] | no_license | GeorgiMinkov/SEM-Practicum-R | cbf3a1d30cd089adc651e5315a6a5f58fed7cee7 | 6903a4e2cd29c893e9006ef1e12aee07016c2aac | refs/heads/master | 2021-01-18T16:19:07.918236 | 2017-06-09T06:41:55 | 2017-06-09T06:41:55 | 86,735,090 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 109 | r | 04_Tasks_4.3.r | data("mammals")
attach(mammals)
head(mammals)
plot(body, brain)
plot(log(body), log(brain))
cor(body, brain)
|
191c628c4856fbd703dd47b4592371723eba09ce | 4958fcfba9cf8bd5ef2840a3d1ba89119932a4b8 | /man/queryRegions.Rd | 52424f6005f4e05d92a1f4ead994db01fa0846b7 | [] | no_license | BIMSBbioinfo/RCAS | 25375c1b62a2624a6b21190e79ac2a6b5b890756 | d6dc8f86cc650df287deceefa8aeead5670db4d9 | refs/heads/master | 2021-07-23T10:11:46.557463 | 2021-05-19T16:21:54 | 2021-05-19T16:21:54 | 43,009,681 | 4 | 4 | null | 2017-10-19T23:28:43 | 2015-09-23T15:29:13 | R | UTF-8 | R | false | true | 725 | rd | queryRegions.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{queryRegions}
\alias{queryRegions}
\title{Sample BED file imported as a GRanges object}
\format{
GRanges object with 10000 ranges and 2 metadata columns
}
\source{
\url{http://dorina.mdc-berlin.de/regulators}
}
\usage{
queryRegions
}
\value{
A GRanges object
}
\description{
This dataset contains a randomly selected sample of human LIN28A protein
binding sites detected by HITS-CLIP analysis downloaded from DoRina database
(LIN28A HITS-CLIP hESCs (Wilbert 2012)). The BED file is imported via the
\code{importBed} function and a subset of the data is selected by randomly
choosing 10000 regions.
}
\keyword{datasets}
|
dce417d7aa974378979c6f6e278497725abf90e0 | d42081597e6832e2a8ff873abaeb8b2e124daf12 | /HIFLD Geocode Prisons for Lat Long.R | 436aec90ae0099e7644cc81797f4bd1f140aaec2 | [] | no_license | benmillam/epa-facilities | ffe18484e0264913ee16faa262e1bb776763e71e | 83a10283882506b2bb3bd4b0c794bbb61f768764 | refs/heads/master | 2021-08-25T18:07:09.238131 | 2021-08-15T17:47:13 | 2021-08-15T17:47:13 | 215,188,007 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 2,077 | r | HIFLD Geocode Prisons for Lat Long.R | #---
#title: HIFLD Geocode Prisons for Lat Long
#author: Ben Millam
#date: November 20, 2019
#description: reads in GIS shapefiles of prison boundaries, geocodes, and exports metadata csv with lat/long added;
# shapefiles downloaded from same source as HIFLD metadata csv we've been working with and include all the
# same info: https://hifld-geoplatform.opendata.arcgis.com/datasets/prison-boundaries
#---
library(sf) #for working with shape files
library(tidyverse)
setwd("C:/hack-ca-local/epa-facilities")
#read shapefile, returns class "sf" "data.frame", dfs with extra info for sf package
prisons <- st_read("Prison_Boundaries_Shapefiles/Prison_Boundaries.shp", stringsAsFactors=FALSE)
#calculate centroids of prison boundaries
prisons <- st_transform(prisons, crs = 32617) %>% #convert to utm for calculating centroids
st_centroid() %>% #centroids from original multipolygons
st_transform(crs = 4326) #back to 4326 for long/lat coordinates
#add columns for coordinates
prisons$longitude <- st_coordinates(prisons)[,1] #st_coordinates returns a matrix of long/lat
prisons$latitude <- st_coordinates(prisons)[,2]
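# Rough sanity check on the computed coordinates (assumption: US facilities,
# so longitudes should be negative and latitudes positive):
summary(prisons$longitude)
summary(prisons$latitude)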
#drop 'geometry' column for csv output, now redundant
prisons <- as.data.frame(prisons) # drop the "sf" class, which otherwise blocks removing a column / logical indexing in the next line
prisons <- prisons[!(names(prisons) %in% "geometry")]
#write csv
write_csv(prisons, 'hifld-prison_boundaries-geocoded-from-shapefiles-by-ucd-team.csv')
#the following references were helpful at some point, path through them not linear!
#https://gis.stackexchange.com/questions/296170/r-shapefile-transform-longitude-latitude-coordinates
#comment in https://github.com/r-spatial/sf/issues/75
#https://stackoverflow.com/questions/46176660/how-to-calculate-centroid-of-polygon-using-sfst-centroid
#https://gis.stackexchange.com/questions/43543/how-to-calculate-polygon-centroids-in-r-for-non-contiguous-shapes
#https://ryanpeek.github.io/2017-08-03-converting-XY-data-with-sf-package/ for converting from utm back to long lat |
37fbbbd828c31b76936eeea61e04bb60fef90553 | 2abd33ed5fb7048bde5f7715c2d404bdb31406d0 | /Week 9/Studio 9/studio9solns.R | 409775d0d2eb9303dcfaeb96906bc24b29060c17 | [] | no_license | theRealAndyYang/FIT2086-Modelling-for-Data-Analysis | ba609c3b7a63f414d5e19968f9d6650864590e5c | 81fa3a4a2ffe47dadb9702ae77115203766094a0 | refs/heads/master | 2022-11-26T03:57:22.673076 | 2020-08-05T11:13:53 | 2020-08-05T11:13:53 | 285,262,206 | 9 | 3 | null | null | null | null | UTF-8 | R | false | false | 13,803 | r | studio9solns.R | ####################################################################################
# Script-file: studio9.solns.R
# Project: FIT2086 - Studio 9
#
# Purpose: Solutions for questions in Studio 9
####################################################################################
# ----------------------------------------------------------------------------------
# Question 2
# ----------------------------------------------------------------------------------
rm(list=ls())
# ------------------------------------------------------------------------------------------------------
# 2.1
setwd("C:/Users/yjb13/OneDrive/Monash Uni/Year 2/FIT2086 Modelling for Data Analysis/Week 9/Studio 9")
source("my.prediction.stats.R")
source("wrappers.R")
library(glmnet)
library(rpart)
library(randomForest)
library(kknn)
diabetes.train = read.csv("diabetes.train.csv")
diabetes.test = read.csv("diabetes.test.csv")
summary(diabetes.train)
# We can see that all variables except sex are numeric
# 2.2
tree.diabetes = rpart(Y ~ ., diabetes.train)
# 2.3
tree.diabetes
# To interpret this output, note that each line represents a node in the tree.
#
# The first piece of information is the "node number", which labels each node uniquely
#
# The second piece of information is the condition that must be satisfied to reach this node
# The initial node in the tree is the "root" and contains all 354 individuals in the data.
#
# Node 2, for example, is reached by having a BMI < 27.75, and contains 238
# individuals. Node 4 is reached by having first a BMI < 27.75, and then an S5 < 4.61005,
# and 148 individuals satisfy these two conditions, and so on.
#
# The fourth number is a measure of "goodness-of-fit", and is not that important for our purposes
#
# The fifth number is the predicted value of diabetes progression (Y) for that node.
# Every node has a prediction associated with it. For example, the root node prediction of 148.3 is
# equal to mean(diabetes.train$Y) as it is the mean for all individuals, while the
# prediction for node 2) is mean(diabetes.train$Y[diabetes.train$BMI<27.75]) = 120.1, and so on.
#
# For classification problems, the predicted value will be the most likely class.
# This will then be followed by the probabilities of being in each of the possible target classes.
#
# The most important nodes for prediction are the ones with "*" next to them -- these are the
# leaf (terminal) nodes in the tree. Counting the rows marked with "*" in the
# output, there are 14 leaf (terminal) nodes in this tree.
#
# The variables used are: BMI, BP, S1, S2, S3, S5, S6
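# As a quick sanity check of where the node predictions come from (using the
# expressions quoted above; exact values depend on the training data):
mean(diabetes.train$Y) # root node prediction (~148.3)
mean(diabetes.train$Y[diabetes.train$BMI < 27.75]) # node 2 prediction (~120.1)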
# 2.4
plot(tree.diabetes)
text(tree.diabetes, digits=3)
#
# Note that a tree may split on the same numeric variable more than once.
# This allows the tree to have splits like:
#
# (-infinity < S5 < 4.167), (4.167 < S5 < 4.61), (4.61 < S5 < 4.883), (4.883 < S5 < infinity)
#
# as can be seen in the left sub-tree of our decision tree.
# Remember that any tree can be represented by a binary tree, so we do not
# lose any generality by only splitting numeric predictors in a binary fashion
# as long as we allow a numeric predictor to be split more than once.
#
# For categorical predictors, several categories may lead to the same branch when splitting --
# that is, the tree does not necessarily split a K-category variable into K leaves; several
# different category values may lead to the same leaf.
# 2.4a
# If BMI = 28, BP = 96 and S6 = 110, we take the right-hand subtree at the first split,
# then the left-hand branch, then the right-hand branch, and arrive at the leaf with
# a predicted diabetes progression score of 244
#
# 2.4b
# If BMI = 20.1, S5 = 4.7 and S3 = 38, we take the left-hand subtree at the first split,
# then the right-hand branch, then the left-hand branch, then finally the right-hand branch
# to arrive at the leaf node with a prediction score of 161
#
# 2.4c
# The highest predicted diabetes progression score is 262, which is in the right-most leaf
# in the tree. To get to this leaf, we note that we need a BMI > 31.35 and a BP > 101.5
# So having a very high BMI is (unsurprisingly) a strong predictor of poor diabetes progression.
#
# You can find the same information by traversing the tree displayed in the R console in Q2.3
# Try this yourself to make sure you understand the ideas.
# 2.5
tree.diabetes$variable.importance
tree.diabetes$variable.importance / max(tree.diabetes$variable.importance)
# BMI, S5 and BP are the three most important. BMI seems by far the most important as it
# has around 1.5 times the importance score of S5.
# 2.6
sqrt(mean((predict(tree.diabetes, diabetes.test) - diabetes.test$Y)^2))
#
# RMSE = root-mean-squared error, i.e., the square root of the average squared prediction
# error made by our model on future data.
#
# The RMSE can be interpreted as follows: if we were to randomly draw a new individual from the
# population we built our tree on, predict their diabetes progression score using the particular
# values of their predictors, and compute the squared difference between the predicted
# score and their actual score, then the square root of the average of these squared
# errors would be (roughly) equal to the RMSE score calculated above.
#
# So it is (the square root of) the average squared error we would expect when predicting
# on new data using our tree.
# The smaller this score, the better our predictions.
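# Since this same RMSE calculation is repeated for every model below, it can be
# wrapped in a small helper function (a convenience, equivalent to the one-liner above):
rmse <- function(model, newdata) {
  sqrt(mean((predict(model, newdata) - newdata$Y)^2))
}
rmse(tree.diabetes, diabetes.test) # same value as the line above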
# 2.7
cv = learn.tree.cv(Y~.,data=diabetes.train,nfolds=10,m=1000)
plot.tree.cv(cv)
#
# The CV plot shows that the best tree size is around 7. This is the number
# of leaf nodes the tree should be grown to have. The initial tree we learned
# used a simple heuristic to decide when to stop growing, and can overfit.
# CV lets us reduce the number of leaves by trying to minimise prediction error on future data.
#
# As CV is a random process, sometimes it can produce a slightly different
# estimate of the number of leaf nodes to use, but they will generally be around
# 7. Taking m=1000 reduces the chance of a different size tree than 7 having the
# best CV score, but the CV process therefore takes longer as we are doing more CV iterations
# In general you want to make m as large as reasonably possible to reduce randomness
#
# The curve shows that for 7,8,9 leaf nodes the CV errors are very similar
# and it would be difficult to decide exactly which size tree is best.
prune.rpart(tree=tree.diabetes, cp = cv$best.cp)
# This prunes the tree using the optimal pruning parameter "best.cp" (look at wrappers.R, this
# is what is done in learn.tree.cv() to get our pruned tree).
#
# *Note: We never need to prune by hand -- the best tree is returned in cv$best.tree*
# 2.8
cv$best.tree
#
# The CV pruned tree has removed three of the predictors (S1, S2 and S3)
# and has less leaf nodes (7)
sqrt(mean((predict(cv$best.tree, diabetes.test) - diabetes.test$Y)^2))
#
# The RMSE is actually a *little* worse than the tree before, but
# the tree is a lot simpler. Simplifying a tree using cross-validation will not be
# guaranteed to always improve prediction scores, but it will generally do
# at most only a little worse than the full tree, and can do a lot better
# if the full tree is overfitting. Additionally, it will always produce a
# more interpretable model as the tree will be simpler.
# We can look at the new tree in the console
cv$best.tree
# We can also plot the tree as before
plot(cv$best.tree)
text(cv$best.tree)
# The characteristics that lead to the worst diabetes progression are now having
# a BMI > 27.75 (as above), but a BP < 101.5 and an S6 > 101.5
#
# This shows the potential instability of tree models.
# In our original tree we needed a BP > 101.5 to get to the leaf with the highest
# diabetes progression -- now we need a BP < 101.5. So simplifying the tree
# has modified its behaviour somewhat.
#
# However, note that if we have BMI > 27.75 and BP > 101.5, we still predict a
# diabetes progression of 242.9 which is only slightly lower than 243.8.
# So the tree can have similar behaviour but a different structure.
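# One way to see this similarity in behaviour directly is to plot the two trees'
# predictions on the test set against each other (points on the dashed 1:1 line
# indicate identical predictions):
plot(predict(tree.diabetes, diabetes.test), predict(cv$best.tree, diabetes.test),
     xlab = "full tree prediction", ylab = "CV-pruned tree prediction")
abline(0, 1, lty = 2)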
# 2.9
lasso.fit = cv.glmnet.f(Y ~ ., data=diabetes.train)
glmnet.tidy.coef(lasso.fit)
sqrt(mean((predict.glmnet.f(lasso.fit,diabetes.test)-diabetes.test$Y)^2))
# In this case the linear model selects (in general -- if not rerun it a few times)
# the predictors BMI, BP, S3, S5, which are predictors deemed important
# in the trees above. The linear model outperforms our decision tree
# in prediction error, which suggests that the underlying relationship between
# diabetes progression and these variables is at least somewhat linear.
#
# It is always important to compare against a standard linear model fit using
# something like lasso, as linear/logistic models serve as an excellent
# benchmark method.
# ----------------------------------------------------------------------------------
# Question 3
# ----------------------------------------------------------------------------------
# 3.1
rf.diabetes = randomForest(Y ~ ., data=diabetes.train)
# 3.2
rf.diabetes
# We see from this that our forest contains 500 trees
# The number of variables selected at random to try splitting on
# at each step of the forwards tree growing algorithm is 3 (in this case,
# it is heuristically set by the random forest software depending on the number of predictors
# available).
#
# The mean of squared residuals is the average squared error of our
# random forest when predicting on the TRAINING data.
#
# The % Var explained is essentially equal to 100 times the "R^2" value of the
# random forest (remember, R^2 ranges from 0 (no ability to explain the data) to
# 1 (perfect explanation of the data))
#
# So in this case, our random forest has an (approximate) R^2 of about 0.435 (43.5ish/100)
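# The same kind of statistic can be computed by hand on the test data as
# 1 - RSS/TSS, using the random forest's test-set predictions:
yhat.rf <- predict(rf.diabetes, diabetes.test)
1 - sum((diabetes.test$Y - yhat.rf)^2) / sum((diabetes.test$Y - mean(diabetes.test$Y))^2)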
# 3.3
sqrt(mean((predict(rf.diabetes, diabetes.test) - diabetes.test$Y)^2))
# The random forest does substantially better than the single "best" decision tree,
# and a little better than the linear model. Suggests that there may be some nonlinear relationship
# between diabetes progression and the predictors
#
# However, it gains this small increase in predictive performance at the expense
# of being a model you cannot understand. Obviously no one can make sense of 500 trees while
# the linear model is very easy to interpret. From the linear model
# it is straightforward to see how BMI and BP, for example, affect our predicted diabetes progression
# 3.4
rf.diabetes = randomForest(Y ~ ., data=diabetes.train, importance=TRUE, ntree=5000)
sqrt(mean((predict(rf.diabetes, diabetes.test) - diabetes.test$Y)^2))
# Takes longer to run, but in this case (small number of predictors, p = 10)
# it makes very little difference to the predictions.
# If the number of predictors was larger it may have a bigger impact
# 3.5
round( importance( rf.diabetes ), 2)
#
# The %IncMSE measure is an indication of how much the random forest software believes
# our mean-squared prediction error on new data would increase if we omitted the variable.
#
# For example, if we did not allow the random forest to use the BMI variable, we would expect
# a 99ish% increase (doubling) of our mean-squared error.
#
# Using this, it seems that BMI, BP and S5 are the stand-out important variables as omitting
# them would lead to large increases in prediction error. This matches the top three
# most important variables as suggested by a single tree using rpart.
#
# The IncNodePurity score can be interpreted in a similar way (larger = more important variable)
# It indicates how much purer the leaf nodes in the tree will be if we include a variable, so
# variables with a high score (BMI, BP, S5, S6) seem important by this measure.
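# randomForest also provides a built-in dot-chart of both importance measures,
# which is often easier to scan than the raw table:
varImpPlot(rf.diabetes)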
# ----------------------------------------------------------------------------------
# Question 4
# ----------------------------------------------------------------------------------
# 4.1
ytest.hat = fitted( kknn(Y ~ ., diabetes.train, diabetes.test) )
sqrt(mean((ytest.hat - diabetes.test$Y)^2))
# With default settings for the number of neighbours to use, the kNN method does a bit better
# than the decision tree, but worse than the linear model and the random forest.
#
# Note that we do not learn a model -- we just use the training data (diabetes.train)
# to predict the diabetes progression for the people in diabetes.test
#
# 4.2
kernels = c("rectangular","triangular","epanechnikov","gaussian","rank","optimal")
knn = train.kknn(Y ~ ., data = diabetes.train, kmax=25, kernel=kernels)
ytest.hat = fitted( kknn(Y ~ ., diabetes.train, diabetes.test,
kernel = knn$best.parameters$kernel, k = knn$best.parameters$k) )
knn$best.parameters$kernel
knn$best.parameters$k
sqrt(mean((ytest.hat - diabetes.test$Y)^2))
# This code tries six different types of kernels, and neighbourhood sizes k=1 through to 25
# and chooses the combination that minimises the cross-validation error
#
# The "kernel" is a function that decides how the target values of the k closest
# points in the training data are combined. "Rectangular" kernel is equivalent to just
# using the average of all the target values of the k nearest neighbours, while
# the others weight the target values inversely proportionally to their distance from
# the point we are trying to predict.
#
# The resulting model using knn$best.parameters$kernel ("gaussian", in this case) and
# knn$best.parameters$k (24, in this case) is about as good as the linear model but
# a little worse than the random forest.
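# For example, a plain unweighted k-nearest-neighbour average corresponds to the
# rectangular kernel; it can be requested explicitly like so (k = 5 is an
# arbitrary illustrative choice):
ytest.rect = fitted( kknn(Y ~ ., diabetes.train, diabetes.test,
                          kernel = "rectangular", k = 5) )
sqrt(mean((ytest.rect - diabetes.test$Y)^2))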

# ==== R/uic.marginal.R (repo: yutakaos/rUIC) ====
#' Wrapper function for computing unified information-theoretic causality
#' for the marginal embedding dimension.
#'
#' \code{uic.marginal} returns model statistics computed from given multiple time
#' series using simplex projection and cross mapping. This function computes UICs
#' by a model averaging technique (i.e., marginalizing \code{E}). The users
#' do not have to determine the optimal \code{E} by themselves.
#'
#' @inheritParams uic
#'
#' @return
#' A data.frame where each row represents model statistics computed from a parameter set.
#' See the details in Value section of \code{uic}.
#'
#' @seealso \link{simplex}, \link{uic}, \link{uic.optimal}
#'
#' @examples
#' # simulate logistic map
#' tl <- 400 # time length
#' x <- y <- rep(NA, tl)
#' x[1] <- 0.4
#' y[1] <- 0.2
#' for (t in 1:(tl - 1)) { # causality : x -> y
#' x[t+1] = x[t] * (3.8 - 3.8 * x[t] - 0.0 * y[t])
#' y[t+1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])
#' }
#' block <- data.frame(t=1:tl, x=x, y=y)
#'
#' # UIC
#' out0 <- uic.marginal(block, lib_var="x", tar_var="y", E=0:8, tau=1, tp=-4:0)
#' out1 <- uic.marginal(block, lib_var="y", tar_var="x", E=0:8, tau=1, tp=-4:0)
#' par(mfrow=c(2, 1))
#' with(out0, plot(tp, te, type="b", pch=c(1,16)[1+(pval<0.05)]))
#' with(out1, plot(tp, te, type="b", pch=c(1,16)[1+(pval<0.05)]))
#'
uic.marginal = function (
block, lib = c(1, NROW(block)), pred = lib, group = NULL,
lib_var = 1, tar_var = 2, cond_var = NULL,
norm = 2, E = 1, tau = 1, tp = 0, nn = "e+1", num_surr = 1000,
exclusion_radius = NULL, epsilon = NULL,
is_naive = FALSE, knn_method = c("KD","BF"))
{
out <- lapply(tau, function (x) {
# compute weights for specified dimensions
simp <- simplex(
block, lib, pred, group, lib_var, c(tar_var, cond_var), norm,
E=E-1, tau=x, tp=x, nn, 0, NULL, exclusion_radius, epsilon,
is_naive, knn_method)
simp$weight = with(simp, exp(-log(rmse) - n_lib / n_pred))
simp$weight = with(simp, weight / sum(weight))
# compute UICs
outx <- lapply(tp, function (y) {
model <- uic(
block, lib, pred, group, lib_var, tar_var, cond_var, norm,
simp$E+1, tau=x, tp=y, nn, num_surr, exclusion_radius, epsilon,
is_naive, knn_method)
data.frame(rbind(apply(model * simp$weight, 2, sum)))
})
do.call(rbind, outx)
})
out <- do.call(rbind, out)
par_int <- c("E","E0","tau","tp","nn","n_lib","n_pred","n_surr")
out[,par_int] <- round(out[,par_int], 2)
out
}
# End

% ==== man/update_labels.Rd (repo: cran/datamaps) ====
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/proxies.R
\name{update_labels}
\alias{update_labels}
\title{Dynamically update labels}
\usage{
update_labels(proxy, label.color = "#000", line.width = 1, font.size = 10,
font.family = "Verdana", ...)
}
\arguments{
\item{proxy}{a proxy as returned by \code{\link{datamapsProxy}}.}
\item{label.color}{color of label.}
\item{line.width}{with of line.}
\item{font.size}{size of font label.}
\item{font.family}{family of font label.}
\item{...}{any other option.}
}
\description{
Dynamically update labels using Shiny
}
\examples{
\dontrun{
library(shiny)
ui <- fluidPage(
actionButton(
"update",
"update labels"
),
datamapsOutput("map")
)
server <- function(input, output){
states <- data.frame(st = c("AR", "NY", "CA", "IL", "CO", "MT", "TX"),
val = c(10, 5, 3, 8, 6, 7, 2))
output$map <- renderDatamaps({
states \%>\%
datamaps(scope = "usa", default = "lightgray") \%>\%
add_choropleth(st, val) \%>\%
add_labels()
})
observeEvent(input$update, {
datamapsProxy("map") \%>\%
update_labels(sample(c("blue", "red", "orange", "green", "white"), 1)) # update
})
}
shinyApp(ui, server)
}
}

# ==== runscript.R (repo: mscrawford/PPA) ====
# Perfect Plasticity Approximation model (Strigul et al. 2008)
# Adapted from Rüger et al. 2020 (code written by Caroline Farrior, cfarrior@gmail.com; https://github.com/cfarrior/Ruger_etal_2020)
# TODO
# Parameterize based on
# Initial community files
# Species files
# * Parameterization will have to account for instances where different species files have also different initial communities
# run_parallel
# Libraries ---------------------------------------------------------------
library(tictoc)
# Global mutable parameters -----------------------------------------------
DEBUG <- TRUE
USE_INITIAL_COMMUNITY <- TRUE
CALCULATE_INTERNAL_SEED_RAIN <- TRUE
CALCULATE_EXTERNAL_SEED_RAIN <- FALSE
# Directories
base_directory <- dirname(rstudioapi::getActiveDocumentContext()$path)
output_directory <- paste0(base_directory, "/output/")
# Files
species_file <- paste0(base_directory, "/input/PPA_FG5_filtered.csv")
initComm_file <- paste0(base_directory, "/input/PPA_initial_state_fg5_secondary.csv")
# Scripts
PPA_script <- paste0(base_directory, "/PPA.R")
postprocessing_script <- paste0(base_directory, "/postprocessing.R")
plotting_script <- paste0(base_directory, "/plot.R")
source(PPA_script)
# run ---------------------------------------------------------------------
run <- function()
{
parameterization <- parameterize()
results <- run_serial(parameterization)
saveRDS(results[[1]], file = paste0(output_directory, "/PPA_output_raw_cohort.rds"))
saveRDS(results[[2]], file = paste0(output_directory, "/PPA_output_raw_cohort_mortality.rds"))
}
# run_serial --------------------------------------------------------------
run_serial <- function(parameterization)
{
spVitals_list <- parameterization$spVitals_list
initComm <- parameterization$initComm
results <- run_simulation(spVitals_list, initComm)
return(results)
}
# run_parallel ------------------------------------------------------------
run_parallel <- function(parameterization)
{
}
# parameterize ------------------------------------------------------------
parameterize = function()
{
# Define the species present in the simulation
spVitals <- read.table(species_file, sep = ",", header = TRUE)
spVitals_list <- disaggregateSpeciesVitals(spVitals)
# Define the initial community composition
initComm <- NULL
if (USE_INITIAL_COMMUNITY)
{
initComm <- read.table(initComm_file, sep = "\t", header = FALSE)
}
return(list(spVitals_list = spVitals_list, initComm = initComm))
}
# disaggregateSpeciesVitals --------------------------------------------------
# Disaggregates the `spVitals` dataframe into its component parts, with each documenting one
# aspect of the species' vital rates. These parameters can then be exported to the
# environment of the calling function. This reduces the amount of code needed to incorporate
# a variable number of layers within the simulation.
#
# This function assumes that there is a growth rate and mortality rate for each layer,
# and that they are ordered decreasing with crown class.
disaggregateSpeciesVitals <- function(spVitals)
{
N <- nrow(spVitals) # community size (integer)
ID <- spVitals %>% select(SpeciesID) %>% pull() # species ID (vector)
G <- spVitals %>% select(contains("G")) # growth rates (matrix)
mu <- spVitals %>% select(contains("mu")) # mortality rates (matrix)
Fec <- spVitals %>% select(contains("F")) %>% pull() # fecundity rate (vector)
wd <- spVitals %>% select(contains("wd")) %>% pull() # wood density (vector)
# Set the number of layers defined within the input species data.frame
nLayers <- ncol(G) + 1
if (DEBUG)
{
assertthat::assert_that(!is.null(nLayers) & nLayers > 0)
assertthat::assert_that(ncol(G) == ncol(mu))
columnOrder <- as.numeric(substring(colnames(G), 2))
assertthat::assert_that(!is.unsorted(columnOrder) & columnOrder[length(columnOrder)] > columnOrder[1])
}
return(list(nLayers = nLayers, N = N, ID = ID, G = G, mu = mu, Fec = Fec, wd = wd))
}
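# A toy illustration of the input format disaggregateSpeciesVitals() expects
# (made-up values; real runs read species_file instead). Columns follow the
# naming pattern assumed above -- G<layer>, mu<layer>, plus F and wd:
# toy <- data.frame(SpeciesID = 1:2,
#                   G1 = c(0.5, 0.3), G2 = c(0.2, 0.1),       # growth rates, layers 1-2
#                   mu1 = c(0.02, 0.04), mu2 = c(0.05, 0.08), # mortality rates, layers 1-2
#                   F = c(10, 20), wd = c(0.4, 0.6))          # fecundity and wood density
# str(disaggregateSpeciesVitals(toy))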
# Run ---------------------------------------------------------------------
tic(); run(); toc()
source(postprocessing_script)
source(plotting_script)

# ==== ui.R (repo: ppolowsk/difference-test-calculator) ====
library(shiny)
#
shinyUI(fluidPage(
# title
titlePanel("Difference Test Calculator"),
# Sidebar with a slider input for the number of bins
sidebarLayout(
sidebarPanel(
sliderInput("n", "No. of Participants:",
min=0, max=100, value=30),
sliderInput("x", "No. correct:",
min=0, max=100, value=15),
selectInput("test", "Type of Test", c("Triangle"=1/3, "Paired/Duo-trio"=1/2)),
checkboxInput("sim", "Test for Similarity?"),
conditionalPanel(
condition = "input.sim == true",
sliderInput("pd", "Proportion of Distinguishers",
min=0, max=100, post=" %", value=50)),
helpText("Adapted from the Excel sheet described in Sensory Evaluation Techniques, Fifth Edition (2016) by Civille & Carr")
),
# Show a plot of the generated distribution
mainPanel(
conditionalPanel(
condition = "input.sim == false",
plotOutput("nullplot"),
hr(),
h4("Difference Test Output"),
strong(textOutput("pvalue")),
p(textOutput("explan")),
hr()
),
conditionalPanel(
condition = "input.sim == true",
plotOutput("nullplotsim"),
hr(),
h4("Similarity Test Output"),
strong(textOutput("pvaluesim")),
strong(textOutput("power")),
p(textOutput("explansim")),
hr()
)
#p("And a quick reminder as to what alpha and beta signify..."),
#img(src="reminder.png")
)
)
))

# ==== R/loadReport.R (repo: katakagi/rloadest_test) ====
#' Create Load Report
#'
#' Create a 2-page pdf file report of a rating-curve load model. The
#'report contains the text output and 6 diagnostic plots.
#'
#' @param x the load model.
#' @param file the output file base name; the .pdf suffix
#'is appended to make the actual file name. if missing, then the
#'name of \code{x} is used as the base name.
#' @return The actual file name is returned invisibly.
#' @export
loadReport <- function(x, file) {
## Coding history:
## 2013Jul29 DLLorenz Original version from S+ library
##
if(missing(file))
file <- deparse(substitute(x))
retval <- setPDF(basename=file)
plot.new()
## Draw the text
par(mar=c(0,0,0,0), usr=c(0,1,0,1))
txt <- capture.output(x)
text(0, 1, paste(txt, collapse="\n"), family="mono", adj=c(0,1))
## 6 diagnostic plots
AA.lo <- setLayout(num.cols=2L, num.rows=3L)
for(i in seq(6)) {
setGraph(i, AA.lo)
plot(x, which=i, set.up=FALSE)
}
## All done, close the graph
dev.off(retval[[1]])
invisible(paste(retval[[2]], ".pdf", sep=""))
}

# ==== demo_code/ui_backup.R (repo: scy-phy/wadac) ====
library(shiny)
library(shinythemes)
shinyUI(fluidPage(navbarPage(theme=shinytheme("flatly"),tags$b("WADAC"),br(),
tabPanel("Anomaly Detector",
sidebarLayout(
sidebarPanel(div(style="display:inline-block",actionButton("start","Get Packets"), style="float:right"),
div(style="display:inline-block",actionButton("stop","STOP!"), style="float:right")
),
mainPanel(h4("MSE vs Time plot",align="center"),
plotOutput("anomaly_detector")
)
)),
tabPanel("Feature Analysis",sidebarLayout(
sidebarPanel(),
mainPanel(h4("Feature Analysis",align="center"),
plotOutput("af1"),
plotOutput("af2"),
plotOutput("af3")
)
)),
tabPanel("Attack Classification",sidebarLayout(
sidebarPanel(),
mainPanel(h4("Attack Classification",align = "center"),
plotOutput("attack_class"))
)),
tabPanel("Attack Features",sidebarLayout(
sidebarPanel(),
mainPanel(h4("Attack Features",align = "center"),
plotOutput("ac1"),
plotOutput("ac2"),
plotOutput("ac3"),
plotOutput("ac4")
)
))
)
))

# ==== plotNo2.R (repo: SamselTomasz/ExData_Plotting2) ====
##----------------------------------------------------------------------
## this part of code is same for all plot source codes
## all ifs are there, as i was running these files number of times
## and didn't want the datasets duplicate if they are already there
## nor the file to be downloaded every each time
## lets download, unzip and load data, make sure plyr library is
## loaded
library(plyr)
if (!dir.exists("Data")){
dir.create("Data")
}
if (!file.exists("Data/data.zip")){
theLink <- "https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2FNEI_data.zip"
download.file(url=theLink, destfile="Data/data.zip", method="curl")
}
if (!file.exists("Data/Source_Classification_Code.rds")){
unzip("Data/data.zip", exdir="Data")
}
if (!exists("scc")){
scc <- data.frame()
scc <- readRDS(file ="Data/Source_Classification_Code.rds");
}
if (!exists("pm25")){
pm25 <- data.frame()
    pm25 <- readRDS(file ="Data/summarySCC_PM25.rds") # PM2.5 emissions summary from the NEI zip
}
##----------------------------------------------------------------------
## subset the dataframe for emissions from Baltimore City only
baltimore <- pm25[pm25["fips"] == "24510",]
## now lets subset the dataframe, apply sum function and combine result
## into new dataframe
newDataFrame <- ddply(baltimore, "year", summarise, total=sum(Emissions))
## and do the plot
png(filename="plotNo2.png")
plot(x=newDataFrame$year,
y=newDataFrame$total/1000,
type="l",
col="red",
ylab="Emissions in Baltimore (kilotons)",
xlab="Year"
)
dev.off()
## clean up
rm(theLink)
rm(newDataFrame)
rm(baltimore)

# ==== 5 - Contour lines of the Gaussian distribution PDF/shiny/ui.r (repo: shadowusr/ml-course) ====
library(shiny)
library(matrixcalc)
library(plotly)
ui <- fluidPage(
titlePanel("Gaussian PDF 3D demo"),
sidebarLayout(
sidebarPanel(
h6("Expected value, μ"),
fluidRow(
column(6,
numericInput("mu1", value = 0, label = "μ1")
),
column(6,
numericInput("mu2", value = 0, label = "μ2")
)
),
h6("Covariance matrix, Σ"),
fluidRow(
column(6,
numericInput("E1", value = 2, label = "Σ (1,1)")
),
column(6,
numericInput("E2", value = 0.9, label = "Σ (1,2)")
)
),
fluidRow(
column(6,
numericInput("E3", value = 0.9, label = "Σ (2,1)")
),
column(6,
numericInput("E4", value = 2, label = "Σ (2,2)")
)
),
textOutput("err")
),
mainPanel(plotlyOutput(outputId = "p"))
)
)

# ==== VERSIONS/LatticeKrig.OLD/R/LKrig.normalize.basis.R (repo: NCAR/LatticeKrig) ====
LKrig.normalize.basis.fast<- function(Level, LKinfo, x ){
if( !(LKinfo$fastNormalization)){
stop("Not setup for fast normalization")}
# convert the locations to the integer scale of the lattice
# at the level == Level
mxLevel<- LKinfo$mx[Level]
myLevel<- LKinfo$my[Level]
gridStuff<- LKinfo$grid[[Level]]
xmin<- gridStuff$x[1]
ymin<- gridStuff$y[1]
dx<- gridStuff$x[2]- gridStuff$x[1]
dy<- gridStuff$y[2]- gridStuff$y[1]
xLocation<- scale( x, center= c( xmin, ymin), scale= c( dx, dy)) + 1
nLocation<- nrow( xLocation)
setupList<- LKinfo$NormalizeList[[Level]]
return(
.Fortran("findNorm",
mx = as.integer(mxLevel),
my = as.integer(myLevel),
offset = as.double(LKinfo$overlap),
Ux = as.double(setupList$Ux),
Dx = as.double(setupList$Dx),
Uy = as.double(setupList$Uy),
Dy = as.double(setupList$Dy),
nLocation = as.integer(nLocation),
xLocation = as.double( xLocation),
weights = as.double( rep(-1,nLocation) ),
Z = matrix(as.double(0),mxLevel,myLevel)
)$weights
)
}
LKrig.make.Normalization<- function(mx,my, a.wght){
out<- list()
nlevel<- length( a.wght)
for ( l in 1:nlevel){
out<- c(out, list(LKrigMRFDecomposition( mx[l], my[l], a.wght[[l]] )) )
}
return(out)
}
LKrigMRFDecomposition<- function( mx,my,a.wght){
Ax<- diag( a.wght/2, mx)
Ax[ cbind( 2:mx, 1:(mx-1)) ] <- -1
Ax[ cbind( 1:(mx-1), 2:mx) ] <- -1
#
Ay<- diag( a.wght/2, my)
Ay[ cbind( 2:my, 1:(my-1)) ] <- -1
Ay[ cbind( 1:(my-1), 2:my) ] <- -1
eigen( Ax, symmetric=TRUE) -> hold
Ux<- hold$vectors
Dx<- hold$values
eigen( Ay, symmetric=TRUE) -> hold
Uy<- hold$vectors
Dy<- hold$values
#
return( list( Ux=Ux, Uy=Uy, Dx=Dx, Dy=Dy ))
}
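## Illustrative check (added sketch, not part of the package): the pieces
## returned above are eigendecompositions of the tridiagonal 1-D MRF
## matrices, so Ux %*% diag(Dx) %*% t(Ux) should reconstruct Ax up to
## rounding error. Left commented out so nothing runs at package load.
# mxToy <- 4; aToy <- 4.5
# AxToy <- diag(aToy/2, mxToy)
# AxToy[cbind(2:mxToy, 1:(mxToy - 1))] <- -1
# AxToy[cbind(1:(mxToy - 1), 2:mxToy)] <- -1
# edToy <- eigen(AxToy, symmetric = TRUE)
# max(abs(edToy$vectors %*% diag(edToy$values) %*% t(edToy$vectors) - AxToy)) # effectively zero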
LKrig.normalize.basis <- function(Level, LKinfo, PHI){
tempB <- LKrig.MRF.precision(LKinfo$mx[Level], LKinfo$my[Level],
a.wght = (LKinfo$a.wght)[[Level]],
edge = LKinfo$edge,
distance.type = LKinfo$distance.type)
tempB <- LKrig.spind2spam(tempB)
Q.level <- t(tempB) %*% tempB
wght <- LKrig.quadraticform(Q.level, PHI)
return(wght)
}
fc1f170218a32ca346cade9bcf00ab9ea01ab725 9d885c222181014158e4cb269fdd7f96e7a40e41 /1-PARAMETERISATION/Coral_postsettlement/Juveniles_survival.R d0a35922e9d734ae54efea6e2e310f9ef2aa9ea9 [] no_license ymbozec/REEFMOD.6.4_GBR_HINDCAST_ANALYSES c920d1f0e36ccb0e9a8fd7397a26b25a1bfd54cb 923853d17b73c2539f45e6f8d1b9f9465a88d14b refs/heads/main 2023-04-10T16:10:04.479436 2021-07-29T13:20:17 2021-07-29T13:20:17 390,718,397 2 0 null null null null UTF-8 R false false 2,485 r Juveniles_survival.R
#__________________________________________________________________________
#
# EFFECTS OF SSC ON THE SURVIVAL OF JUVENILES
#
# Yves-Marie Bozec, y.bozec@uq.edu.au, 03/2018
#
# Uses experimental data extracted from:
# Humanes, A., A. Fink, B. L. Willis, K. E. Fabricius, D. de Beer, and A. P. Negri. 2017.
# Effects of suspended sediments and nutrient enrichment on juvenile corals.
# Marine Pollution Bulletin 125:166–175.
#__________________________________________________________________________
rm(list=ls())
humanes=data.frame(TSS=c(0,10,30,100,0,10,30,100),
S180d=c(1.000, 0.736, 0.860, 0.529, 1.000, 0.869, 0.753, 0.283),
SPECIES=c('ten','ten','ten','ten','mil','mil','mil','mil'))
select_ten = c(1,2,3,4)
select_mil = c(5,6,7,8)
# humanes$S = humanes$S180d
humanes$S = humanes$S180d^(40/180) # transform back to 40 days (duration of experiment)
# humanes$S = humanes$S180d^(1/180) # transform to daily mortality
## Test the relationship
# M1 = lm(humanes$S~ humanes$TSS)
#
# R2 = summary(M1)$r.squared
# slope = round(summary(M1)$coefficients[2],3)
# intercept = round(summary(M1)$coefficients[1],3)
# ( survival after 6mo = -0.005*TSS + 0.941 )
# Model with intercept forced to 1
M2 = lm(humanes$S-1 ~ 0+humanes$TSS)
R2 = summary(M2)$r.squared
# Don't extract slope and intercept from summary(M2) because no intercept
# Use predict instead (predict(M2)+1)
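# Toy illustration of the offset trick above (synthetic numbers, not the
# Humanes et al. data): fitting y - 1 ~ 0 + x forces the line through (0,1);
# the slope is coef(fit)[1] and fitted survival is predict(fit) + 1.
x.toy = c(0, 10, 30, 100)
y.toy = 1 - 0.004*x.toy
fit.toy = lm(y.toy - 1 ~ 0 + x.toy)
coef(fit.toy) # slope, exactly -0.004 for these noise-free numbers
predict(fit.toy) + 1 # back on the survival scale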
## Plot relationship between suspended sediment and juvenile survival
svg("FIG_juvenile_survival_with_SSC.svg", height = 4, width = 4, bg="transparent") # 3inches ~ 6cm
par(mar=c(6,6,2,2), mfrow=c(1,1), pty="m", tck=-0.015, lwd=1, las=1) #mar=c(bottom, left, top, right)’
# plot(humanes$TSS[select_ten],humanes$S[select_ten],pch=19,cex=1.5,xlim=c(0,100),
# ylim=c(0,1),xaxt='n',yaxt='n', xlab='Suspended sediment (mg.L-1)',ylab='Survival fraction after 6 month')
plot(humanes$TSS[select_ten],humanes$S[select_ten],pch=19,cex=1.5,xlim=c(0,100),
ylim=c(0.6,1),xaxt='n',yaxt='n', xlab='Suspended sediment (mg.L-1)',ylab='Survival fraction after 40 days')
points(humanes$TSS[select_mil],humanes$S[select_mil],pch=21,cex=1.5)
# lines(humanes$TSS, predict(M1),lwd=1.5)
# lines(humanes$TSS, predict(M2)+1,lwd=1.5)
lines(seq(0,100,by=20), summary(M2)$coefficients[1]*seq(0,100,by=20)+1,lwd=1.5)
axis(1,lwd=0,lwd.ticks=1,line=NA,las = 1)
axis(2,lwd=0,lwd.ticks=1,line=NA,las = 1)
dev.off()
## Model with daily survival
M2 = lm((humanes$S)^(1/180)~humanes$TSS)
summary(M2)
0f999f5fb3e35876271ca7b1d56463b944223f57 4a699e4db896c62432c5a56e837244f9c33b5003 /r/laggraphs.R 455d78b16276ff42745364f6bd2f2f8277e29a6b [] no_license sdonoso23/geospatial-ita-unem c84675782866f986c73667fc78cb920d85c2160c bc2452d0c7c7f9e855c07a56a127978d8e1d203e refs/heads/master 2020-12-30T15:55:33.162204 2017-05-17T17:59:50 2017-05-17T17:59:50 90,537,960 0 0 null null null null UTF-8 R false false 886 r laggraphs.R
library(maptools)
library(spdep)
library(tidyverse)
library(tseries)
library(lmtest)
library(ggplot2)
library(ggrepel)
library(sphet)
library(GGally)
library(spgwr)
lag.graphics<-function(dataset,wmatrix,columnid){
lista<-list()
originaldataset<-dataset
ids<-which(colnames(originaldataset)==columnid)
numeric<-sapply(dataset,is.numeric)
dataset<-dataset[,numeric]
colnames<-colnames(dataset)
colnameslag<-paste(colnames,"LAG",sep="")
n<-length(dataset)
for(i in 1:n){
lista[[i]]<-lag.listw(wmatrix,dataset[,i])
}
names(lista)<-colnameslag
  for(i in 1:n){ # loop over all numeric columns (n is computed above)
print(ggplot(mapping=aes(dataset[,i],lista[[i]]))+geom_point()+
geom_smooth(method ="lm",se=F)+geom_text_repel(aes(label=originaldataset[,ids]))+
labs(x=names(dataset)[i],y=names(lista)[i]))
}
}
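## Hypothetical usage sketch (toy data; the object names below are made up,
## not from this project): build row-standardised weights for a 3x3 lattice
## with spdep and plot every numeric column against its spatial lag.
# w.toy<-nb2listw(cell2nb(3,3),style="W")
# d.toy<-data.frame(id=paste0("cell",1:9),
#                   matrix(rnorm(9*7),nrow=9,
#                          dimnames=list(NULL,paste0("v",1:7))))
# lag.graphics(d.toy,w.toy,columnid="id")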
ce1cc7cc1155a5129ad430381cabe9fcb9b983f9 0a5381274bb0ec673502740f1f063fadf8afab60 /R/scale_make.R 2ffeb6f59d2f80e603d906bd12539fefe44d3d09 [] no_license JonWayland/uncoveR b55dba7c8683e05d8ab4bde3cd7875047bf2e634 37d1b15d96aa5134cdb076d47c5e856cc3a1542d refs/heads/master 2022-06-02T11:39:22.172612 2022-05-19T15:10:49 2022-05-19T15:10:49 216,682,749 2 0 null null null null UTF-8 R false false 1,492 r scale_make.R
#' Making the scales on new data
#'
#' @param dat Dataframe with continuous variables intended for scaling (uses distribution from training data)
#' @param scale_data The object created from calling `scale_form`
#' @param scaler Method for scaling. Options include: 'standard', 'minmax'
#'
#' @return Returns the original dataset `dat` with the desired numeric features scaled using the `scaler` method.
#' @export
#'
#' @examples
#' std.fit <- scale_form(iris, remove.me = c("Sepal.Width"))
#' iris2 <- scale_make(iris, std.fit, scaler = "standard")
#' head(iris2)
scale_make <- function(dat, scale_data = trainScales, scaler = NA){
if(!scaler %in% c("minmax", "standard")){
writeLines("No valid method selected. Using standardization (scaler = 'standard').")
warning("If you would like to use a minmax scaler then set scaler = 'minmax' ")
    scaler <- "standard" # explicit fallback: the NA default would otherwise make if(scaler == ...) below error
  }
# Loop through all of trainScales
for(i in 1:nrow(scale_data)){
for(j in 1:ncol(dat)){
# Get the name of the variable from dat
if(names(dat)[j] == scale_data$COL_NAME[i]){
if(scaler == "standard"){
dat[,j] <- (dat[,j] - scale_data$COL_MEAN[i]) / scale_data$COL_SD[i]
}
if(scaler == "minmax"){
dat[,j] <- (dat[,j] - scale_data$COL_MIN[i]) / (scale_data$COL_MAX[i] - scale_data$COL_MIN[i])
}
if(!scaler %in% c("standard", "minmax")){
dat[,j] <- (dat[,j] - scale_data$COL_MEAN[i]) / scale_data$COL_SD[i]
}
}
}
}
return(dat)
}
3e6e425e9a53784cccf4ef7934a714cbf97b9d8d 771c05fa7b58f8f2dab7938da389e9e72b3cf3d4 /Rvasp/man/plot.atoms.add.Rd 3a7e172364649af59d99b5b1a6d13fc4846c8b1e ["MIT"] permissive gokhansurucu/Rvasp 56a75b10daa606768791935530bd108204d92f4f 8983440a96ca8acf017f47af8dbfd3f32faaad22 refs/heads/master 2020-04-08T21:14:33.155967 2014-03-14T20:08:59 2014-03-14T20:08:59 null 0 0 null null null null UTF-8 R false false 450 rd plot.atoms.add.Rd
\name{plot.atoms.add}
\alias{plot.atoms.add}
\title{Adds atoms to existing plot}
\usage{
plot.atoms.add(atoms, basis = NULL, direction = 3,
col = "white", cex = 3, lwd = 1, lty = 1, ...)
}
\arguments{
\item{atoms}{dataframe of atoms}
\item{direction}{of projection}
\item{basis}{basis if atoms are in direct coordinates}
\item{...}{further plotting parameters}
}
\description{
\code{plot.atoms.add} adds atoms to existing plot.
}
9f15cbb79f615537f726addec0d45097ca6763f8 42f2b611512e953590d70626d61a480317e2339b /deseq2.r 15a4e26d333fc93c01a97e1ca2b970c0990bd829 [] no_license alexpmagalhaes/DESEQ2_script 5836d452eafb6a752178761f56b3db179a46d946 796b1a5fc73e46189bff4372344b2c6622eeb371 refs/heads/master 2021-05-31T02:06:11.808729 2016-02-05T12:45:23 2016-02-05T12:45:23 null 0 0 null null null null UTF-8 R false false 3,731 r deseq2.r
################################################################################
### R script for DESeq2 ##
### APMagalhaes ##
### September 22nd, 2015 ##
################################################################################
################################################################################
### parameters: to be modified by the user ###
################################################################################
rm(list=ls()) # remove all the objects from the R session
workDir <- "/Users/Alex/Desktop/AthRNAseq_GA_COLD/Results/Results/GA regulated Transcriptome/T21h4VsGAT21h4" # working directory for the R session
projectName <- "RNAseq_GA_Cold-Ath" # name of the project
author <- "APMagalhaes" # author of the statistical analysis/report
treated<-"T21h4" # group of interest
untreated<-"GAT21h4" # group to be used as control
################################################################################
### running script ###
################################################################################
setwd(workDir)
library('DESeq2')
#setup the folder and file structure
directory<-workDir
sampleFiles <- grep("_",list.files(directory),value=TRUE)
sampleFiles
#setup the experimental design
sampleCondition<-c(treated,treated,treated,untreated,untreated,untreated)
sampleCondition
#load the tables
sampleTable<-data.frame(sampleName=sampleFiles, fileName=sampleFiles, condition=sampleCondition)
sampleTable
#metadata for the experiment
ddsHTSeq<-DESeqDataSetFromHTSeqCount(sampleTable=sampleTable, directory=directory, design=~condition)
ddsHTSeq
colData(ddsHTSeq)$condition<-factor(colData(ddsHTSeq)$condition, levels=c(untreated,treated))
#DEseq2 analysis
dds<-DESeq(ddsHTSeq)
res<-results(dds)
res<-res[order(res$padj),]
head(res)
#MAPlot
pdf("MAPlot.pdf",width=7,height=7)
plotMA(dds,ylim=c(-2,2),main="DESeq2")
dev.off()
#output DataFrame
mcols(res,use.names=TRUE)
write.csv(as.data.frame(res),file="resultsDESeq2.csv")
#rlog transformation
rld <- rlogTransformation(dds, blind=TRUE)
write.table(as.data.frame(assay(rld)), file='DATE-DESeq2-rlog-transformed-counts.txt', sep='\t')
#heatmap
library("RColorBrewer")
library("gplots")
select <- order(rowMeans(counts(dds,normalized=TRUE)),decreasing=TRUE)[1:30]
hmcol <- colorRampPalette(brewer.pal(9, "GnBu"))(100)
pdf("rawHeatmap.pdf",width=7,height=7)
heatmap.2(counts(dds,normalized=TRUE)[select,], col = hmcol,
Rowv = FALSE, Colv = FALSE, scale="none",
dendrogram="none", trace="none", margin=c(20,12))
dev.off()
pdf("rlogHeatmap.pdf",width=7,height=7)
heatmap.2(assay(rld)[select,], col = hmcol,
Rowv = FALSE, Colv = FALSE, scale="none",
dendrogram="none", trace="none", margin=c(20, 12))
dev.off()
#Sample Clustering
pdf("SampleClstr.pdf",width=7,height=7)
distsRL <- dist(t(assay(rld)))
mat <- as.matrix(distsRL)
rownames(mat) <- colnames(mat) <- with(colData(dds),
paste(condition,sampleFiles , sep=" : "))
hc <- hclust(distsRL)
heatmap.2(mat, Rowv=as.dendrogram(hc),
symm=TRUE, trace="none",
col = rev(hmcol), margin=c(20, 20))
dev.off()
#PCA
pdf("PCAPlot.pdf",width=7,height=7)
print(plotPCA(rld, intgroup=c("condition")))
dev.off()
#Dispertion plot
pdf("DsprtnPlot.pdf",width=7,height=7)
plotDispEsts(dds)
dev.off()
# Time-course analysis template (from the DESeq2 vignette); it needs 'strain'
# and 'minute' columns in the sample table before it can run:
# ddsTC <- DESeq(ddsHTSeq, ~ strain + minute + strain:minute)
ba14b7e7a38f635c5ab7dbd401b2118e3612642d 7c829273af50abf7d98abe78c862abcc7d67d3ae /Breakout2-Track1/applications/app.R c81e01d07fb7b6c8dcc9fd6a82615bdac9e26c5 [] no_license rnorthconference/2021Talks 283087d4a0fd558ecb4f2d13b00e7a42f319f734 cbb6826039a9fa3bdcb5523a1c9ddf6261e754be refs/heads/main 2023-09-05T09:30:54.554492 2021-09-30T15:52:43 2021-09-30T15:52:43 336,825,349 6 9 null 2021-09-30T15:52:43 2021-02-07T15:51:49 HTML UTF-8 R false false 2,521 r app.R
# Libraries
library(shiny)
library(tidyr)
library(ggplot2)
library(dplyr)
library(shinydashboard)
library(writexl)
############################################################################
# #
# Source R files from a folder - may be elsewhere - server #
# Anything that you can write as an R function (reactive or non-reactive) #
# In my files : datamaker/cleaner #
# #
############################################################################
src_files <- list.files('R', full.names = TRUE)
for(source_file in c(src_files)){
source(source_file,
local = TRUE)
}
######
# UI #
######
ui <-
fluidPage(
# Title of application
titlePanel(title = 'Customer Reviews'),
# Create columns within rows, columns determined by width
# Note the use of commas between elements
# Input is variables from the ui
# Output are placeholders for variables from the sever
fluidRow(
column(width = 3,
selectizeInput(inputId = 'overlay', # Every input will have an Id variable
label = 'Choose the plot overlay', # Label
choices = c("Salesperson",
"Product",
"Type of Client"))),
# Need output here to show plot
column(width = 9,
plotOutput(outputId = 'dot_plot'))
)
)
##########
# SERVER #
##########
server <- function(input, output, session){
# Render plot creates the plot
# the data is stored in the output variable
# Note the ({ }) on functions in the server
output$dot_plot <- renderPlot({
ggplot(data = Flodger_paper_co, aes(x = Salesperson,
fill = .data[[input$overlay]], #data masking
y = Review,
color = .data[[input$overlay]])) +
geom_dotplot(binaxis = "y",
stackdir = "center",
dotsize = 0.7) +
theme_minimal() +
theme(text = element_text(size = 20))
})
}
shinyApp(ui, server)
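# Standalone illustration (not part of the conference app) of the data
# masking pattern used in renderPlot above: .data[[col]] lets a character
# string (here standing in for input$overlay) select the aesthetic column.
# col <- "Species"
# ggplot(iris, aes(Sepal.Length, Sepal.Width, color = .data[[col]])) +
#   geom_point()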
# Simple app
# library(shiny)
#
# ui <- fluidPage(
# "Hello, world!"
# )
#
# server <-
# function(input, output, session){
# #empty
# }
#
# shinyApp(ui, server)
052a02add99f8653c6c4dee7a2d6a2af45770b8c d060fad3c33325ba3e6ab2e42ac9df2ff2a5abf0 /R/check.types.R ff027a218f89fdfb41b786fb66f5da9d59617f38 [] no_license cran/bnpa 0eb35f0d18850e4b3556c7b08c469e04aab97538 3f27ca031b7f6fbf30264d8582970f505f8b76ea refs/heads/master 2021-01-11T21:51:55.114628 2019-08-01T22:20:02 2019-08-01T22:20:02 78,866,497 0 0 null null null null UTF-8 R false false 4,657 r check.types.R
#'Verify types of variable
#'
#'This function receives a data set as parameter and check each type of variable returning a number indicating the type of variables in the whole data set.
#'The variables can be 1=integer, 2=numeric, 3=factor, 4=integer and numeric, 5=integer and factor, 6=numeric and factor, 7=integer,
#'numeric and factor, 8=character.
#'@param data.to.work is a data set containing the variables to be verified.
#'@param show.message is a parameter indicating if the function will or not show a message.
#'@return A variable with the code indicating the type of variable and a message (or not)
#'@author Elias Carvalho
#'@references GUJARATI, Damodar N. Basic econometrics. Tata McGraw-Hill Education, 2009.
#'@examples
#'# Clean environment
#'closeAllConnections()
#'rm(list=ls())
#'# Set environment
#'# setwd("to your working directory")
#'# Load packages
#'library(bnpa)
#'# Use working data sets from package
#'data(dataQuantC)
#'# Show first lines of data set
#'head(dataQuantC)
#'# Check and return a numeric value
#'show.message <- 1
#'bnpa::check.types(dataQuantC, show.message)
#'# Adding random data to dataQuantC, function will return TRUE
#'dataQuantC$Z <- round(runif(500, min=0, max=1000),2)
#'# Converting the numeric variable into factor
#'dataQuantC$Z <- factor(dataQuantC$Z)
#'# Check and return a numeric value correspondig to: 1=integer, 2=numeric, 3=factor, 4=integer and
#'# numeric, 5=integer and factor, 6=numeric and factor or 7=integer, numeric and factor.
#'show.message <- 1
#'bnpa::check.types(dataQuantC, show.message)
#'# Supressing the message
#'show.message <- 0
#'bnpa::check.types(dataQuantC, show.message)
#'@export
check.types <- function(data.to.work, show.message=0)
{
# creates a vector of 4 position to identify types: position 1=1 is character,
# 2=1 is integer, 3=1 is numeric, 4=1 is factor.
typeVars <- vector(mode = "numeric", length = 4)
# verify character
typeVars[1] <- length(sapply(data.to.work, is.character)[sapply(data.to.work, is.character)==TRUE])
# verify integer
typeVars[2] <- length(sapply(data.to.work, is.integer)[sapply(data.to.work, is.integer)==TRUE])
# verify numeric
typeVars[3] <- (length(sapply(data.to.work, is.numeric)[sapply(data.to.work, is.numeric)==TRUE]) -
length(sapply(data.to.work, is.integer)[sapply(data.to.work, is.integer)==TRUE]))
# verify factor
typeVars[4] <- length(sapply(data.to.work, is.factor)[sapply(data.to.work, is.factor)==TRUE])
# create a variable to register what type of variable we have
result.type <- 0
# check what we have
  if (typeVars[1] > 0) # variables are character only
{
if (show.message==1) cat("\n Your data set has variables of type character\n\n")
result.type <- 8 # character
} else if (typeVars[2] > 0 && # only integer
typeVars[3] == 0 &&
typeVars[4] == 0
)
{
if (show.message==1) cat("\n Your data set has variables of type integer\n\n")
result.type <- 1 # only integer
} else if (typeVars[2] == 0 &&
typeVars[3] > 0 && # only numeric
typeVars[4] == 0
)
{
if (show.message==1) cat("\n Your data set has variables of type numeric\n\n")
result.type <- 2 # only numeric
} else if (typeVars[2] == 0 &&
typeVars[3] == 0 &&
typeVars[4] > 0) # only factor
{
if (show.message==1) cat("\n Your data set has variables of type factor\n\n")
result.type <- 3 # only factor
} else if (typeVars[2] > 0 && # integer and numeric
typeVars[3] > 0 &&
typeVars[4] == 0)
{
if (show.message==1) cat("\n Your data set has variables of type integer and numeric\n\n")
result.type <- 4 # integer and numeric
} else if (typeVars[2] > 0 && # integer and factor
typeVars[3] == 0 &&
typeVars[4] > 0)
{
if (show.message==1) cat("\n Your data set has variables of type integer and factor\n\n")
result.type <- 5 # integer and factor
} else if (typeVars[2] == 0 && # numeric and factor
typeVars[3] > 0 &&
typeVars[4] > 0)
{
if (show.message==1) cat("\n Your data set has variables of type numeric and factor\n\n")
result.type <- 6 # numeric and factor
} else if (typeVars[2] > 0 && # integer, numeric and factor
typeVars[3] > 0 &&
typeVars[4] > 0)
{
if (show.message==1) cat("\n Your data set has variables of type integer, numeric and factor\n\n")
result.type <- 7 # integer, numeric and factor
}
# Return the type of variable
return(result.type)
} # check.types <-function(data.to.work)
865a91830b4e36e14df2d887961e735cd23c6c6f 8bfe6bcfeae5aee36d41d3a0453b445129a8ca23 /static/data_manage.R 6624d8c50804e55837031febca8992d53a35a62d [] no_license ck2136/reproducible_research 26f2c2d3491d8e1b0138aecb5dafde755a6cc2ab 9e7147cb26fa95ee37fea1403a39de71d31d9ebc refs/heads/master 2020-03-28T12:07:17.359486 2018-10-04T20:52:26 2018-10-04T20:52:26 148,270,830 0 0 null null null null UTF-8 R false false 1,007 r data_manage.R
# data_manage code
# yes all of the below packages will be required so plz install
library(tidyverse)
library(RNHANES)
library(weights)
library(ggsci)
library(ggthemes)
# The RNHANES package enables the data starting from 1999
d99 = nhanes_load_data("DEMO", "1999-2000") %>%
select(SEQN, cycle, RIAGENDR, RIDAGEYR, RIDRETH1, RIDEXPRG, INDFMINC, WTINT2YR, WTMEC2YR) %>%
transmute(SEQN=SEQN, wave=cycle, RIAGENDR, RIDAGEYR, RIDRETH1, RIDEXPRG, INDFMINC, WTINT2YR, WTMEC2YR) %>%
left_join(nhanes_load_data("BMX", "1999-2000"), by="SEQN") %>%
select(SEQN, wave, RIAGENDR, RIDAGEYR, RIDRETH1, RIDEXPRG, INDFMINC, WTINT2YR, WTMEC2YR, BMXBMI) %>%
left_join(nhanes_load_data("WHQ", "1999-2000"), by="SEQN") %>%
select(SEQN, wave, RIAGENDR, RIDAGEYR, RIDRETH1, RIDEXPRG, INDFMINC, WTINT2YR, WTMEC2YR, BMXBMI, WHQ070)
# Something entirely different from what I've usually had.
# I don't know if this will go well in terms of the merging process
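# Minimal left_join sketch (toy data, illustration only): every row of the
# left table is kept and matches are attached by SEQN, with NA where there
# is no match; that is the behaviour the NHANES merges above rely on.
lj.x = data.frame(SEQN = 1:3, a = c(10, 20, 30))
lj.y = data.frame(SEQN = c(1, 3), b = c("u", "v"))
dplyr::left_join(lj.x, lj.y, by = "SEQN") # the row with SEQN 2 gets b = NA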
A = list()
b = matrix()
c = c('a','b','c')
15ea51e6631b2b04445078027f355b98091e180c 40458078168c2577b22dee7b1d04abd5c5d4d87c /man-roxygen/roxlate-object-experiment.R 04671794bf2d569d81b699f8dd1aeb8552fffe27 [] no_license muratmaga/tfestimators edfd89dd942dd3290d07ff0950aa762177397dca d035c96576c69eb907006afbb37466d5547e6cac refs/heads/master 2022-12-24T07:40:00.947716 2020-09-29T18:56:32 2020-09-29T18:56:32 null 0 0 null null null null UTF-8 R false false 42 r roxlate-object-experiment.R
#' @param object A TensorFlow experiment.
f25516fec5ee7225eab00ce7196163bb441828eb 6503d73f5659231aed739133c7343e3b0f918849 /cachematrix.R 9ef6ebe35575d8efd7429e95dab3481fcf65b4cc [] no_license jcfrench/ProgrammingAssignment2 4c3dd048c2d610bbd27ea1670ca6fe8e15751d24 eaaa01e6683768b92572fd0edcde24aaeb56c835 refs/heads/master 2021-01-15T11:57:54.168098 2015-01-21T16:46:54 2015-01-21T16:46:54 29,548,358 0 0 null 2015-01-20T19:41:35 2015-01-20T19:41:35 null UTF-8 R false false 3,696 r cachematrix.R
## cachematrix.R for RDPeng's R-Programming Section 010: Class Assignment #2
## Author/Student: JCFrench - 01/21/2015
## Two functions, makeCacheMatrix and cacheSolve, demonstrate data caching
## and lexical scoping.
## This function creates a special "makeCacheMatrix" object that contains a
## copy of a base matrix, and may contain a cached copy of an inverse matrix.
## "makeCacheMatrix" is limited in that it does not ensure internal consistency
## between the base matrix "x", and the inverse matrix "m". Instead, consistency
## management is deferred to the "cacheSolve" function.
## 4 utility functions handle the base & inverse matrix:
## makeCacheMatrix.set() initializes the makeCacheMatrix object with a base
### matrix and clears any previously cached inverse matrix.
## makeCacheMatrix.get() returns the base matrix
## makeCacheMatrix.setinv() stores the inverse matrix
## makeCacheMatrix.getinv() returns the inverse matrix
makeCacheMatrix <- function(x = matrix()) {
m <- NULL
set <- function(base) {
x <<- base
m <<- NULL
}
get <- function() x
setinv <- function(inverse) m <<- inverse
getinv <- function() m
list(set = set, get = get, setinv = setinv, getinv = getinv)
}
## cacheSolve provides state control for the makeCacheMatrix object. When the
## "cacheSolve" function is called with a "makeCacheMatrix" object, "cacheSolve"
## retrieves the inverse matrix. If the inverse matrix has been calculated,
## it is returned. If the inverse matrix is NULL, "cacheSolve" calculates the
## inverse matrix, "solve(data,...)", stores the inverse with "setinv()"
## function, and then returns the inverse matrix.
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
m <- x$getinv()
if(!is.null(m)) {
message("getting cached inverse matrix")
return(m)
}
data <- x$get()
m <- solve(data, ...)
x$setinv(m)
m
}
## testCacheMatrix calculates inverse matrices, identity matrices
## This function test the above functions by calculating a inverse matrices
## with caching functions, & then performs matrix multiplication to demo
## an identity matrix calculation.
##Note that "makeCacheMatrix.set()" is not tested or called in this example.
testCacheMatrix <- function(){
## first, basic demonstration of matrix multiplication
a <- matrix(c(2,3,5,7,11,13,17,19,23),nrow=3,ncol=3)
print("Base Matrix a:")
print(a)
print(" ")
a.inv <- solve(a)
print("Inverse of Matrix a (a'):")
print(a.inv)
print(" ")
print("Identity Matrix = Matrix product of a * a':")
i1 <- a %*% a.inv
print(i1)
print("Note the rounding errors in numeric calculations,")
print("several entries in the identity matrix are almost zero.")
print(" ")
## second, demonstration of makeCacheMatrix & cacheMatrix functionality
b = makeCacheMatrix(a)
print("Base Matrix from makeCacheMatrix b:")
print(b$get())
print(" ")
print("Calculating Inverse Matrix from b with cacheSolve(b):")
print("Inverse Matrix from b:")
print(cacheSolve(b))
print(" ")
print("Re-using Inv Matrix from b cacheSolve to calculate identity matrix:")
print("Identity Matrix = Matrix product of b$get() * cacheSolve(b):")
i2 <- b$get() %*% cacheSolve(b)
print(i2)
print(" ")
print("Note: The identity matrix in each example is identical: ")
print(identical(i1,i2))
}
21f764478650df03e03d259cef7f43ee37f14821 0b4c74473a3e93685f1edd5616fb656755e6ad51 /R/Analysis/descStats.R 5c207e6e3379b3c5d1a387eae36ffd4f6779907f [] no_license s7minhas/Rhodium f6dd805ce99d7af50dd10c2d7f6a0eeae51f3239 0627acabab9e5302e13844de2c117325cdf0f7ea refs/heads/master 2020-04-10T03:59:46.069362 2016-11-02T20:21:57 2016-11-02T20:21:57 8,080,455 1 0 null null null null UTF-8 R false false 3,556 r descStats.R
# Workspace
if(Sys.info()["user"]=="janus829" | Sys.info()["user"]=="s7m"){source('~/Research/Rhodium/R/setup.R')}
if(Sys.info()["user"]=="Ben"){source('/Users/Ben/Github/Rhodium/R/setup.R')}
######################################################################
# Aggregate GDP Growth and Distance
setwd(pathData)
load('combinedData.rda')
# Create bucketed version of mindist and find averag GDP gr
yData$cityDistCat=contToCat(yData$minDist.min, .25)
yData$capDistCat=contToCat(yData$capDist.min, .25)
# Conf intervals using summarySE:
gdpByCityDist=na.omit( summarySE(data=yData, measurevar='gdpGr_l0',
groupvars='cityDistCat', na.rm=TRUE) )
gdpByCapDist=na.omit( summarySE(data=yData, measurevar='gdpGr_l0',
groupvars='capDistCat', na.rm=TRUE) )
names(gdpByCityDist)[1]='distCat'; names(gdpByCapDist)[1]='distCat'
ggData=cbind(
rbind(gdpByCityDist, gdpByCapDist),
type=c(rep('Min. City Dist.$_{t-1}$',nrow(gdpByCityDist)),
rep('Min. Capital Dist.$_{t-1}$',nrow(gdpByCapDist)) ),
cut=rep( c('0-25$^{th}$','26-50$^{th}$','51-75$^{th}$','76-100$^{th}$'), 2 )
)
tmp=ggplot(ggData, aes(x=cut, y=gdpGr_l0))
tmp=tmp + geom_bar(stat='identity', fill='grey')
tmp=tmp + geom_errorbar(aes(ymin=gdpGr_l0-se, ymax=gdpGr_l0+se),width=.2)
tmp=tmp + ylab("\\% $\\Delta$ GDP$_{t}$")+xlab('')
tmp=tmp + facet_wrap(~type)
tmp=tmp + theme( axis.title.y=element_text(vjust=1),
axis.ticks=element_blank(),legend.position='none',
panel.grid.major=element_blank(), panel.grid.minor=element_blank() )
tmp
setwd(pathGraphics)
tikz(file='distGdp.tex', width=7, height=4, standAlone=FALSE)
tmp
dev.off()
######################################################################
########################################################################
# World map of cities and conflicts
setwd(pathData)
load("cityTotPopLatLongvFinal.rda")
setwd(paste0(pathData,'/PRIO - Conflict Site Data'))
prioData=read.csv("ConflictSite 4-2010_v3 Dataset.csv")
prioData$Conflict.territory=charSM(prioData$Conflict.territory)
prioData$Conflict.territory[prioData$Conflict.territory=='Yugoslavia']='Serbia'
prioData$Conflict.territory[prioData$Conflict.territory=='DRC']='Democratic Republic of Congo'
prioData$cname=countrycode(prioData$Conflict.territory, 'country.name','country.name')
cntries=unique(prioData$cname)
# Color Non-Conflict countries
worldmap=cshp(as.Date('2000-1-1'))
worldmap$CNTRY_NAME=charSM(worldmap$CNTRY_NAME)
worldmap$CNTRY_NAME[worldmap$CNTRY_NAME=='Congo, DRC']='Congo, Democratic Republic of'
Wcntries=worldmap$CNTRY_NAME
Wcntries=panel$cname[match(Wcntries, panel$CNTRY_NAME)]
noConfCntries=setdiff(Wcntries, cntries)
mapColors=rep('white',length(Wcntries))
mapColors[which(Wcntries %in% noConfCntries)] = 'grey'
setwd(pathGraphics)
# pdf(file='CityConfMap.pdf', width=12, height=6)
plot(worldmap, col=mapColors)
points(fYrCty$cleanLong, fYrCty$cleanLat, col='blue', pch=18, cex=0.5)
points(prioData$Longitude,prioData$Latitude, col='red', pch=16,cex=0.5)
# dev.off()
########################################################################
########################################################################
# Some stats on the city data
setwd(pathData)
load("cityTotPopLatLongvFinal.rda")
# Average cities listed by cntry and year
fYrCty$temp=1
cityStats=summaryBy(temp ~ Country + YearAlmanac, data=fYrCty, FUN=sum)
cityGraph=summaryBy(temp.sum ~ YearAlmanac, data=cityStats, FUN=mean)
temp=ggplot(cityGraph, aes(x=YearAlmanac, y=temp.sum.mean)) + geom_line()
temp
########################################################################
a10086983cfb7e089755dadab210103c2a27ae67 46547f39fd96c9c01d9c4933248d2b3834a6d868 /inst/tinytest/test-annotation_ticks.R 52435d6fd45d083f359e27eb1f48d7caee555187 [] no_license csdaw/ggprism 4251e624ef052e22b5d211ebc32ecd988aff1e41 0e411f4f186346d13834ed2d5187355cf549cbd8 refs/heads/master 2023-09-01T00:28:17.072265 2022-11-04T13:16:52 2022-11-04T13:16:52 251,058,802 130 18 null 2023-07-02T13:17:36 2020-03-29T14:59:24 R UTF-8 R false false 2,335 r test-annotation_ticks.R
#### Setup ---------------------------------------------------------------------
## load libraries
library(ggplot2)
p <- ggplot(msleep, aes(bodywt, brainwt)) + geom_point(na.rm = TRUE)
#### Tests ---------------------------------------------------------------------
# test that the function with default arguments works
g <- p + annotation_ticks()
expect_silent(ggplotGrob(g))
# test that the function recognises the sides argument
g <- p + annotation_ticks(sides = "trbl")
expect_silent(ggplotGrob(g))
expect_equal(length(layer_grob(g, 2L)[[1]]$children), 4)
expect_error(p + annotation_ticks(sides = "banana"))
# test that the type argument works
g1 <- p + annotation_ticks(type = "both")
g2 <- p + annotation_ticks(type = "major")
g3 <- p + annotation_ticks(type = "minor")
expect_silent(ggplotGrob(g1))
expect_silent(ggplotGrob(g2))
expect_silent(ggplotGrob(g3))
expect_equal(length(layer_grob(g1, 2L)[[1]]$children[[1]]$x0), 8)
expect_equal(length(layer_grob(g2, 2L)[[1]]$children[[1]]$x0), 5)
expect_equal(length(layer_grob(g3, 2L)[[1]]$children[[1]]$x0), 3)
expect_error(p + annotation_ticks(type = "banana"))
# test that ticks can go outside
g <- p + annotation_ticks(outside = TRUE) +
coord_cartesian(clip = "off")
expect_silent(ggplotGrob(g))
ticks <- layer_grob(g, 2L)[[1]]$children[[1]]$y1
expect_equal(
grid::convertUnit(ticks, "pt", valueOnly = TRUE),
c(rep(-4.8, 5), rep(-2.4, 3))
)
# test that tick lengths can be set
g <- p + annotation_ticks(
type = "both",
tick.length = unit(20, "pt"),
minor.length = unit(10, "pt")
)
expect_silent(ggplotGrob(g))
layer_grob(g, 2L)
expect_identical(layer_grob(g, 2L)[[1]]$children[[1]]$y1[1], unit(20, "pt"))
expect_identical(layer_grob(g, 2L)[[1]]$children[[1]]$y1[8], unit(10, "pt"))
# test that you can set the colour with both spellings
g1 <- p + annotation_ticks(colour = "red")
g2 <- p + annotation_ticks(color = "red")
expect_silent(ggplotGrob(g1))
expect_identical(layer_grob(g1, 2L)[[1]]$children[[1]]$gp$col, "#FF0000FF")
expect_silent(ggplotGrob(g2))
expect_identical(layer_grob(g2, 2L)[[1]]$children[[1]]$gp$col, "#FF0000FF")
#### Sanity checks -------------------------------------------------------------
# test that warning occurs if both colour and color are set
expect_warning(p + annotation_ticks(colour = "red", color = "blue"))
a4d9a899e0e04cb9129071e798ee4d7860a37183 e774c9643d704db75f1f22b86bb889e47f874ceb /first-commits/IwoComment.R 9d902193b72d12f852b76056ef2b3836ab80ff8 [] no_license annamtucker/quant-club ff21e1c04dac18e6b8b2d33dcc280e8dd394488e ac2d38b553a1f8dd0a151c42b5d12e5deec7ebe1 refs/heads/master 2021-04-29T16:04:22.651522 2019-04-19T17:29:15 2019-04-19T17:29:15 121,807,841 0 3 null 2018-06-22T18:32:27 2018-02-16T22:31:31 R UTF-8 R false false 292 r IwoComment.R
# This is IWo (Evo) Gross
#Still working out the kinks
#PhD Biology, past research focus was neonatal copperhead dispersal ecology within managed landscapes
#Presently interested in developing my coding and stats skillsets, and applying those to evo-eco and conservation-related questions.
808d62b36ad039d92d9f744db170b62477c8e40f 27230338d721bad7418c5622f45fb8ec33a37e91 /R/estimate_co_parlikar.R 225c7d604e353991f1ae8789db68f7ee51c1aa76 ["MIT"] permissive gkovaig/cardiac.output.R ce69d60b77905d20494d69bf718d9342cca3eaaf f5816ff7c77d87d424817c92a6da64b8c8b46ea7 refs/heads/master 2022-01-12T05:47:16.156914 2019-06-20T23:01:15 2019-06-20T23:01:15 null 0 0 null null null null UTF-8 R false false 812 r estimate_co_parlikar.R
#'
#' ESTIMATE_CO_PARLIKAR
#'
#' @param abp Arterial blood pressure waveform (125 Hz, mmHg)
#' @param feat Features computed using abpfeature()
#' @param onsets Beat onsets computed using wabp()
#' @param window.radius Window under which we compute least-squares estimate of the time constant
#' @return Vector of uncalibrated beat-to-beat estimates of cardiac output
#' @export
estimate.co.parlikar = function(abp,feat,onsets,window.radius) {
deltaP = abp[onsets[-1]] - abp[onsets[-length(onsets)]]
tau.raw = (2 * (feat[,6]-feat[,4]) - deltaP) / feat[,7]
tau.indices = lapply(1:dim(feat)[1], function(x) max(x-window.radius,1):min(x+window.radius,dim(feat)[1]))
tau = sapply(tau.indices, function(x) sum(feat[x,6]*feat[x,6])/sum(feat[x,6]*tau.raw[x]))
return(deltaP/feat[,7]+feat[,6]/tau)
}
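
A minimal usage sketch with synthetic inputs. The real `abp`, `feat`, and `onsets` would come from a 125 Hz pressure waveform, `abpfeature()`, and `wabp()`; everything below is made-up illustration data, `window.radius = 2` is an arbitrary choice, and the sketch assumes `estimate.co.parlikar()` as defined above has been sourced:

```r
# Synthetic stand-ins: a fake pulsatile waveform, 10 evenly spaced beat
# onsets, and a random 9 x 7 matrix in place of the abpfeature() output.
set.seed(42)
abp <- 80 + 40 * abs(sin(seq(0, 8 * pi, length.out = 1000)))
onsets <- seq(1, 1000, by = 100)
feat <- matrix(runif(9 * 7, min = 50, max = 120), nrow = 9, ncol = 7)

co <- estimate.co.parlikar(abp, feat, onsets, window.radius = 2)
length(co)  # one uncalibrated CO estimate per beat interval: 9
```

With 10 onsets there are 9 beat-to-beat intervals, so the function returns 9 estimates; with random `feat` values the numbers themselves are meaningless and only illustrate the shapes involved.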
|
7080ec8788dc19b209ededc7df7e84e64ab90bbc | 5f0cfcec5194f11137db76056ef2b3836ab80ff8 | /man/ThreeD.ABCplots.Rd | 6bdc403b3cabdc459ffacdc5113d12cfbb6773d8 | [] | no_license | JakeJing/treevo | 54d341655f1e6ddac5ab73df38c890be557e7d17 | 3429ba37e8dc7c79cf441361d07c000f07423b6e | refs/heads/master | 2021-01-12T01:20:10.296046 | 2016-10-03T01:09:15 | 2016-10-03T01:09:15 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,638 | rd | ThreeD.ABCplots.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ThreeD.ABCplots.R
\name{ThreeD.ABCplots}
\alias{ThreeD.ABCplots}
\title{3D ABCplots}
\usage{
ThreeD.ABCplots(particleDataFrame, parameter, show.particles = "none",
plot.parent = FALSE, realParam = FALSE, realParamValues = NA)
}
\arguments{
\item{particleDataFrame}{particleDataFrame output from doRun}
\item{parameter}{column number of parameter of interest from
particleDataFrame}
\item{show.particles}{option to show particles on 3d plot as "none" or as a
function of "weights" or "distance"}
\item{plot.parent}{option to plot lines on the floor of the 3d plot to show
particle parantage}
\item{realParam}{option to display real parameter value as a solid line,
also must give actual value for this (realParamValues). Note: this should
only be done with simulated data where real param values are recorded}
\item{realParamValues}{Value for (realParam)}
}
\description{
Plot posterior density distribution for each generation in 3d plot window
}
\details{
This opens a new interactive 3d plotting window and plots the posterior
density distribution of accepted particles from each generation. Several
options are available to add to the plot: plotting particles by weight or
distance, plotting particle parentage, and plotting the real parameter
values (if known).
}
\examples{
#data(simRun)
#ThreeD.ABCplots(particleDataFrame=results$particleDataFrame, parameter=7, show.particles="none", plot.parent=FALSE, realParam=FALSE, realParamValues=NA)
}
\author{
Barb Banbury
}
\references{
O'Meara and Banbury, unpublished
}
\keyword{ThreeD.ABCplots}
|
e205784ae9f420b4216549ad2e8fd4219cf8d9e0 | 0d74c6026340636cb7a73da2b53fe9a80cd4d5a5 | /SupportingDocs/Examples/Version05mx/ex25/ex25.R | 29d4f777f0326607ec613ef475245206d1ab7595 | [] | no_license | simsem/simsem | 941875bec2bbb898f7e90914dc04b3da146954b9 | f2038cca482158ec854a248fa2c54043b1320dc7 | refs/heads/master | 2023-05-27T07:13:55.754257 | 2023-05-12T11:56:45 | 2023-05-12T11:56:45 | 4,298,998 | 42 | 23 | null | 2015-06-02T03:50:52 | 2012-05-11T16:11:35 | R | UTF-8 | R | false | false | 4,737 | r | ex25.R | library(simsem)
library(OpenMx)
Avalues <- matrix(0, 12, 12)
Avalues[1:3, 10] <- c(1, 0.8, 1.2)
Avalues[4:6, 11] <- c(1, 0.8, 1.2)
Avalues[7:9, 12] <- c(1, 0.8, 1.2)
Afree <- matrix(FALSE, 12, 12)
Afree[1:3, 10] <- c(FALSE, TRUE, TRUE)
Afree[4:6, 11] <- c(FALSE, TRUE, TRUE)
Afree[7:9, 12] <- c(FALSE, TRUE, TRUE)
Alabels <- matrix(NA, 12, 12)
Alabels[1:3, 10] <- c(NA, "con1", "con2")
Alabels[4:6, 11] <- c(NA, "con1", "con2")
Alabels[7:9, 12] <- c(NA, "con1", "con2")
Svalues <- diag(c(rep(0.4, 9), 1, 1.2, 1.4))
Svalues[10, 11] <- Svalues[11, 10] <- 0.77
Svalues[10, 12] <- Svalues[12, 10] <- 0.58
Svalues[11, 12] <- Svalues[12, 11] <- 0.91
Svalues[1, 4] <- Svalues[4, 1] <- 0.08
Svalues[2, 5] <- Svalues[5, 2] <- 0.08
Svalues[3, 6] <- Svalues[6, 3] <- 0.08
Svalues[4, 7] <- Svalues[7, 4] <- 0.08
Svalues[5, 8] <- Svalues[8, 5] <- 0.08
Svalues[6, 9] <- Svalues[9, 6] <- 0.08
Svalues[1, 7] <- Svalues[7, 1] <- 0.016
Svalues[2, 8] <- Svalues[8, 2] <- 0.016
Svalues[3, 9] <- Svalues[9, 3] <- 0.016
Sfree <- matrix(FALSE, 12, 12)
diag(Sfree) <- TRUE
Sfree[10, 11] <- Sfree[11, 10] <- TRUE
Sfree[10, 12] <- Sfree[12, 10] <- TRUE
Sfree[11, 12] <- Sfree[12, 11] <- TRUE
Sfree[1, 4] <- Sfree[4, 1] <- TRUE
Sfree[2, 5] <- Sfree[5, 2] <- TRUE
Sfree[3, 6] <- Sfree[6, 3] <- TRUE
Sfree[4, 7] <- Sfree[7, 4] <- TRUE
Sfree[5, 8] <- Sfree[8, 5] <- TRUE
Sfree[6, 9] <- Sfree[9, 6] <- TRUE
Sfree[1, 7] <- Sfree[7, 1] <- TRUE
Sfree[2, 8] <- Sfree[8, 2] <- TRUE
Sfree[3, 9] <- Sfree[9, 3] <- TRUE
Fvalues <- cbind(diag(9), matrix(0, 9, 3))
MvaluesNested <- c(rep(c(0, -0.5, 0.5), 3), 0, 0.5, 1)
MfreeNested <- c(rep(c(FALSE, TRUE, TRUE), 3), rep(TRUE, 3))
MlabelsNested <- c(rep(c(NA, "con3", "con4"), 3), rep(NA, 3))
popNested <- mxModel("Strong Invariance Model",
type="RAM",
mxMatrix(type="Full", nrow=12, ncol=12, values=Avalues, free=Afree, labels=Alabels, byrow=TRUE, name="A"),
mxMatrix(type="Symm", nrow=12, ncol=12, values=Svalues, free=Sfree, byrow=TRUE, name="S"),
mxMatrix(type="Full", nrow=9, ncol=12, free=FALSE, values=Fvalues, byrow=TRUE, name="F"),
mxMatrix(type="Full", nrow=1, ncol=12, values=MvaluesNested, free=MfreeNested, labels=MlabelsNested, name="M"),
mxExpectationRAM("A","S","F","M", dimnames=c(paste0("y", 1:9), "f1", "f2", "f3"))
)
MvaluesParent <- c(0, -0.5, 0.5, 0, 0, 0, 0, 0.5, -0.5, 0, 0.5, 1)
MfreeParent <- c(rep(c(FALSE, TRUE, TRUE), 3), rep(TRUE, 3))
popParent <- mxModel("Weak Invariance Model",
type="RAM",
mxMatrix(type="Full", nrow=12, ncol=12, values=Avalues, free=Afree, labels=Alabels, byrow=TRUE, name="A"),
mxMatrix(type="Symm", nrow=12, ncol=12, values=Svalues, free=Sfree, byrow=TRUE, name="S"),
mxMatrix(type="Full", nrow=9, ncol=12, free=FALSE, values=Fvalues, byrow=TRUE, name="F"),
mxMatrix(type="Full", nrow=1, ncol=12, values=MvaluesParent, free=MfreeParent, name="M"),
mxExpectationRAM("A","S","F","M", dimnames=c(paste0("y", 1:9), "f1", "f2", "f3"))
)
outDatNestedModNested <- sim(NULL, n = 50:500, popNested, generate = popNested, mxFit=TRUE)
outDatNestedModParent <- sim(NULL, n = 50:500, popParent, generate = popNested, mxFit=TRUE)
anova(outDatNestedModNested, outDatNestedModParent)
cutoff <- getCutoffNested(outDatNestedModNested, outDatNestedModParent, nVal = 250)
plotCutoffNested(outDatNestedModNested, outDatNestedModParent, alpha = 0.05)
outDatParentModNested <- sim(NULL, n = 50:500, popNested, generate = popParent, mxFit=TRUE)
outDatParentModParent <- sim(NULL, n = 50:500, popParent, generate = popParent, mxFit=TRUE)
anova(outDatParentModNested, outDatParentModParent)
getPowerFitNested(outDatParentModNested, outDatParentModParent, nullNested=outDatNestedModNested, nullParent=outDatNestedModParent, nVal=250)
getPowerFitNested(outDatParentModNested, outDatParentModParent, cutoff=cutoff, nVal=250)
plotPowerFitNested(outDatParentModNested, outDatParentModParent, nullNested=outDatNestedModNested, nullParent=outDatNestedModParent)
plotPowerFitNested(outDatParentModNested, outDatParentModParent, nullNested=outDatNestedModNested, nullParent=outDatNestedModParent, usedFit="RMSEA")
plotPowerFitNested(outDatParentModNested, outDatParentModParent, nullNested=outDatNestedModNested, nullParent=outDatNestedModParent, logistic=FALSE)
cutoff2 <- c(Chi=3.84, CFI=-0.01)
getPowerFitNested(outDatParentModNested, outDatParentModParent, cutoff=cutoff2, nVal=250, condCutoff=FALSE)
plotPowerFitNested(outDatParentModNested, outDatParentModParent, cutoff=cutoff2)
plotPowerFitNested(outDatParentModNested, outDatParentModParent, cutoff=cutoff2, logistic=FALSE)
plotPowerFitNested(outDatParentModNested, outDatParentModParent, nullNested=outDatNestedModNested, nullParent=outDatNestedModParent, cutoff=cutoff2, logistic=FALSE)
|
6c98f7fbbf403da908ed5c78784af31de438a0a4 | ea524efd69aaa01a698112d4eb3ee4bf0db35988 | /R/praise.R | a6da4ee796b6c00503ef811f6f28be633169bb0c | [
"MIT"
] | permissive | r-lib/testthat | 92f317432e9e8097a5e5c21455f67563c923765f | 29018e067f87b07805e55178f387d2a04ff8311f | refs/heads/main | 2023-08-31T02:50:55.045661 | 2023-08-08T12:17:23 | 2023-08-08T12:17:23 | 295,311 | 452 | 217 | NOASSERTION | 2023-08-29T10:51:30 | 2009-09-02T12:51:44 | R | UTF-8 | R | false | false | 1,588 | r | praise.R | # nocov start
praise <- function() {
plain <- c(
"You rock!",
"You are a coding rockstar!",
"Keep up the good work.",
"Woot!",
"Way to go!",
"Nice code.",
praise::praise("Your tests are ${adjective}!"),
praise::praise("${EXCLAMATION} - ${adjective} code.")
)
utf8 <- c(
"\U0001f600", # smile
"\U0001f973", # party face
"\U0001f638", # cat grin
paste0(strrep("\U0001f389\U0001f38a", 5), "\U0001f389"),
"\U0001f485 Your tests are beautiful \U0001f485",
"\U0001f947 Your tests deserve a gold medal \U0001f947",
"\U0001f308 Your tests are over the rainbow \U0001f308",
"\U0001f9ff Your tests look perfect \U0001f9ff",
"\U0001f3af Your tests hit the mark \U0001f3af",
"\U0001f41d Your tests are the bee's knees \U0001f41d",
"\U0001f4a3 Your tests are da bomb \U0001f4a3",
"\U0001f525 Your tests are lit \U0001f525"
)
x <- if (cli::is_utf8_output()) c(plain, utf8) else plain
sample(x, 1)
}
praise_emoji <- function() {
if (!cli::is_utf8_output()) {
return("")
}
emoji <- c(
"\U0001f600", # smile
"\U0001f973", # party face
"\U0001f638", # cat grin
"\U0001f308", # rainbow
"\U0001f947", # gold medal
"\U0001f389", # party popper
"\U0001f38a" # confetti ball
)
sample(emoji, 1)
}
encourage <- function() {
x <- c(
"Keep trying!",
"Don't worry, you'll get it.",
"No one is perfect!",
"No one gets it right on their first try",
"Frustration is a natural part of programming :)",
"I believe in you!"
)
sample(x, 1)
}
# nocov end
|
1af07751b5a90b90325b22c7aec3b539a55aee51 | a37122475660395c7306c661f8baa33421228a75 | /man/PIErrors.Rd | c4e9b6baecaec7df3d97cc9863e2a4f610345f9a | [
"Apache-2.0"
] | permissive | eddyrene/PI-Web-API-Client-R | 726b1edbea0a73bf28fe9b2f44259972ddecd718 | 7eb66c08f91e4a1c3a479a5fa37388951b3979b6 | refs/heads/master | 2020-04-17T01:01:27.260251 | 2018-11-14T10:48:46 | 2018-11-14T10:48:46 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 378 | rd | PIErrors.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/PIErrors.r
\name{PIErrors}
\alias{PIErrors}
\title{Generate an instance of the PIErrors PI Web API class}
\usage{
PIErrors(errors = NULL)
}
\arguments{
\item{errors}{(array)}
}
\value{
PIErrors
}
\description{
Generate an instance of the PIErrors PI Web API class
}
|
6581fc09e48f1bcd5cd847a624ed6bb08253ec08 | 2747ba02d6dec332e3f4baf4213a6069cc181494 | /crawl.R | 191dd741f996b3aecb00ba346e4c173eec1c2cdb | [] | no_license | Ilia-Kosenkov/FitsCrawler | ffa355475ac3ed279b3ac783cf9852ce21d022c6 | 418ffb465182cbe1ef3f4ad6a47b56a89c7b6f28 | refs/heads/master | 2023-09-03T23:15:40.339439 | 2021-10-26T09:15:28 | 2021-10-26T09:15:28 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 813 | r | crawl.R | box::use(
fs[fs_path = path, dir_ls = dir_ls],
fits_keys = ./fits_keys[...],
keys_info = ./keys_info[...],
file_data = ./file_data[...],
dplyr[mutate, slice],
# purrr[map_dfr = map_dfr],
furrr[map_dfr = future_map_dfr],
future[ft_plan = plan, ft_session = multisession, ft_seq = sequential],
tictoc[tic, toc],
readr[write_csv],
glue[glue]
)
box::reload(keys_info)
#ft_plan(ft_cluster(workers = 4L))
ft_plan(ft_session(workers = 6L))
proc_file <- function(path) {
path |>
fits_keys$get() |>
keys_info$get()
}
tic()
root <- fs_path(NULL) # Update path
filter <- "B"
fs_path(root, filter) |>
dir_ls(glob = "*.fits", recurse = TRUE) |>
file_data$get() |>
mutate(map_dfr(path, proc_file, .progress = TRUE)) |>
match_type() |>
write_csv(glue("log{filter}.csv"))
toc()
|
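
For reference, the parallel-map pattern used in the crawl script above can also be written without box-style imports. This is a standalone sketch assuming only that the future and furrr packages are installed; the toy worker function is hypothetical:

```r
library(future)
library(furrr)

plan(multisession, workers = 2L)
# future_map_dfr() splits the input across workers and row-binds the results
res <- future_map_dfr(1:4, function(i) data.frame(n = i, square = i^2))
plan(sequential)  # release the workers

nrow(res)  # 4
```

`plan(sequential)` at the end mirrors good practice of shutting the background sessions down once the map is finished.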
399f821d5c770dd8c1579cadcba4bd574fcc005c | db6e078591f08ae6f64404f2a833c466637e067f | /run_analysis.R | fa87e029f138d430501ee5a4e4df68e7317de90e | [] | no_license | rdsanas/GettingAndCleaningData | 73e824d32f92e30142638e1ea668ea2fa57d1c20 | 35c31e773f19cbbcdfbdfabe2fb337478b822603 | refs/heads/master | 2020-03-07T21:44:55.197054 | 2018-04-02T10:10:54 | 2018-04-02T10:10:54 | 127,735,848 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,676 | r | run_analysis.R | library(plyr)
#set the working directory
setwd("c:/R/Coursera")
# Ctrl+L on Windows to clear the console
cat("\014")
# console log messages to identify the start and end of the process
print("Process started")
# Step 1
# Merge the training and test sets to create one data set
###############################################################################
xTraining <- read.table("UCI HAR Dataset/train/X_train.txt")
yTraining <- read.table("UCI HAR Dataset/train/y_train.txt")
subjectTraining <- read.table("UCI HAR Dataset/train/subject_train.txt")
xTest <- read.table("UCI HAR Dataset/test/X_test.txt")
yTest <- read.table("UCI HAR Dataset/test/y_test.txt")
subjectTest <- read.table("UCI HAR Dataset/test/subject_test.txt")
# Create 'x' data set
xData <- rbind(xTraining, xTest)
# Create 'y' data set
yData <- rbind(yTraining, yTest)
# Create 'subject' data set
subjectData <- rbind(subjectTraining, subjectTest)
# Step 2
# Extract only the measurements on the mean and standard deviation for each measurement
###############################################################################
features <- read.table("UCI HAR Dataset/features.txt")
# get only columns with mean() or std() in their names
meanAndStdFeatures <- grep("-(mean|std)\\(\\)", features[, 2])
# subset the desired columns
xData <- xData[, meanAndStdFeatures]
# correct the column names
names(xData) <- features[meanAndStdFeatures, 2]
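
As a quick sanity check of the regex used above, here is how it behaves on a few UCI HAR-style feature names (the exact names are illustrative); `meanFreq()` and `angle(...)` variables are deliberately excluded because they are not plain `mean()`/`std()` measurements:

```r
nms <- c("tBodyAcc-mean()-X", "tBodyAcc-std()-Y",
         "tBodyAcc-meanFreq()-Z", "angle(X,gravityMean)")
grep("-(mean|std)\\(\\)", nms)  # returns 1 2 -- only the plain mean()/std() columns
```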
# Step 3
# Use descriptive activity names to name the activities in the data set
###############################################################################
activities <- read.table("UCI HAR Dataset/activity_labels.txt")
# update values with correct activity names
yData[, 1] <- activities[yData[, 1], 2]
# correct column name
names(yData) <- "activity"
# Step 4
# Appropriately label the data set with descriptive variable names
###############################################################################
# correct column name
names(subjectData) <- "subject"
# bind all the data in a single data set
allData <- cbind(xData, yData, subjectData)
# Step 5
# Create a second, independent tidy data set with the average of each variable
# for each activity and each subject
###############################################################################
# 66 <- 68 columns but last two (activity & subject)
averagesData <- ddply(allData, .(subject, activity), function(x) colMeans(x[, 1:66]))
write.table(averages_data, "averages_data.txt", row.name=FALSE)
# console log messages to identify the start and end of the process
print("Process completed")
|