blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 2 327 | content_id stringlengths 40 40 | detected_licenses listlengths 0 91 | license_type stringclasses 2 values | repo_name stringlengths 5 134 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 46 values | visit_date timestamp[us]date 2016-08-02 22:44:29 2023-09-06 08:39:28 | revision_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | committer_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | github_id int64 19.4k 671M ⌀ | star_events_count int64 0 40k | fork_events_count int64 0 32.4k | gha_license_id stringclasses 14 values | gha_event_created_at timestamp[us]date 2012-06-21 16:39:19 2023-09-14 21:52:42 ⌀ | gha_created_at timestamp[us]date 2008-05-25 01:21:32 2023-06-28 13:19:12 ⌀ | gha_language stringclasses 60 values | src_encoding stringclasses 24 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 7 9.18M | extension stringclasses 20 values | filename stringlengths 1 141 | content stringlengths 7 9.18M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3254abb57e899464f6d03db6fe35a47389c8bbe4 | 33f92c3305ffb0bef2c43777b86d492787e8f9dd | /Classification/Logistic Regression/Lab.R | e41dc1cd2faa6a4957088f74b7c27a8626b2bc88 | [] | no_license | EarlMacalam/Machine-Learning-with-R | cd280bf75c6afaa192ef04d8a9f35b2a7fe2fbde | 0a20590e6f9db22bb4a6aee3663f0c5689563d6a | refs/heads/master | 2023-03-04T07:44:24.563605 | 2021-02-09T08:37:25 | 2021-02-09T08:37:25 | 320,460,993 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,014 | r | Lab.R |
# Data --------------------------------------------------------------------
library(ISLR)
names(Smarket)
dim(Smarket)
summary(Smarket)
cor(Smarket[,-9]) # cor() errors if a qualitative variable is included, so drop Direction (column 9)
# As one would expect, the correlations between the lag variables and today’s
# returns are close to zero. In other words, there appears to be little
# correlation between today’s returns and previous days’ returns. The only
# substantial correlation is between Year and Volume. By plotting the data
# we see that Volume is increasing over time. In other words, the average
# number of shares traded daily increased from 2001 to 2005.
attach(Smarket)
plot(Volume)
# Logistic Regression -----------------------------------------------------
glm.fit=glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume,
            data=Smarket, family=binomial)
summary(glm.fit)
# The smallest p-value here is associated with Lag1. The negative coefficient
# for this predictor suggests that if the market had a positive return
# yesterday, then it is less likely to go up today. However, at a value
# of 0.15, the p-value is still relatively large, and so there is no clear
# evidence of a real association between Lag1 and Direction.
# Prediction --------------------------------------------------------------
# The type="response" option tells R to output probabilities
# of the form P(Y = 1|X)
glm.probs=predict(glm.fit,type="response")
glm.probs[1:10]
contrasts(Direction)
# Here we have printed only the first ten probabilities. We know that
# these values correspond to the probability of the market going up,
# rather than down, because the contrasts() function indicates that R
# has created a dummy variable with a 1 for Up.
glm.pred=rep("Down",1250)
glm.pred[glm.probs >.5]="Up"
# The first command creates a vector of 1,250 Down elements. The second line
# transforms to Up all of the elements for which the predicted probability of
# a market increase exceeds 0.5.
# Given these predictions, the table() function
# can be used to produce a confusion matrix in order to determine how many
# observations were correctly or incorrectly classified.
table(glm.pred,Direction)
mean(glm.pred==Direction)
# The diagonal elements of the confusion matrix indicate correct predictions,
# while the off-diagonals represent incorrect predictions. Hence our model
# correctly predicted that the market would go up on 507 days and that
# it would go down on 145 days, for a total of 507 + 145 = 652 correct
# predictions. The mean() function can be used to compute the fraction of
# days for which the prediction was correct. In this case, logistic regression
# correctly predicted the movement of the market 52.2 % of the time.
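The 52.2% figure can be recomputed directly from the confusion-matrix counts quoted above:

```r
# Correct predictions lie on the diagonal of table(glm.pred, Direction):
# 507 "Up" days + 145 "Down" days out of 1250 observations.
(507 + 145) / 1250  # 0.5216, i.e. the 52.2% training accuracy
```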
# Assessing the model by splitting into test and training -----------------
# Splitting the data
train = (Year < 2005)
Smarket.2005 = Smarket[!train, ]
dim(Smarket.2005)
Direction.2005 = Direction[!train]
## Model ##
# fit a logistic regression model using only the subset of the observations
# that correspond to dates before 2005, using the subset argument.
glm.fit=glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume,
            data=Smarket, family=binomial, subset=train)
# obtain predicted probabilities of the stock market going up for each of
# the days in our test set—that is, for the days in 2005.
glm.probs=predict(glm.fit,Smarket.2005,type="response")
# Notice that we have trained and tested our model on two completely separate
# data sets: training was performed using only the dates before 2005,
# and testing was performed using only the dates in 2005.
# Finally, we compute the predictions for 2005 and compare them to the
# actual movements of the market over that time period.
glm.pred=rep("Down",252)
glm.pred[glm.probs >.5]="Up"
table(glm.pred,Direction.2005)
mean(glm.pred==Direction.2005)
mean(glm.pred!=Direction.2005)
# The != notation means not equal to, and so the last command computes the
# test set error rate. The results are rather disappointing: the test error
# rate is 52%, which is worse than random guessing!
# We recall that the logistic regression model had very underwhelming p-values
# associated with all of the predictors, and that the smallest p-value,
# though not very small, corresponded to Lag1. Perhaps by removing the
# variables that appear not to be helpful in predicting Direction, we can
# obtain a more effective model. After all, using predictors that have no
# relationship with the response tends to cause a deterioration in the test
# error rate (since such predictors cause an increase in variance without
# a corresponding decrease in bias), and so removing such predictors may
# in turn yield an improvement. Below we have refit the logistic regression
# using just Lag1 and Lag2, which seemed to have the highest predictive power
# in the original logistic regression model.
glm.fit=glm(Direction~Lag1+Lag2, data=Smarket, family=binomial, subset=train)
glm.probs=predict(glm.fit,Smarket.2005,type="response")
glm.pred=rep("Down",252)
glm.pred[glm.probs >.5]="Up"
table(glm.pred,Direction.2005)
mean(glm.pred==Direction.2005)
106/(106+76) # accuracy on days the model predicts "Up": 106/182, about 58%
# Now the results appear to be a little better: 56% of the daily movements have
# been correctly predicted. It is worth noting that in this case, a much
# simpler strategy of predicting that the market will increase every day
# will also be correct 56% of the time! Hence, in terms of overall error rate,
# the logistic regression method is no better than the naive approach.
# However, the confusion matrix shows that on days when logistic regression
# predicts an increase in the market, it has a 58% accuracy rate. This
# suggests a possible trading strategy of buying on days when the model
# predicts an increasing market, and avoiding trades on days when a decrease
# is predicted.
predict(glm.fit,newdata=data.frame(Lag1=c(1.2,1.5), Lag2=c(1.1,-0.8)),
type="response")
# the probability that the direction is up is 0.4791462 and 0.4960939. |
e90a23cd39d360cce318f270301c35ad8a9490d4 | c2ed8635deac21bb431ef27f02b28acc44b958d6 | /ranking_script.R | efdc73cdebf0271650ebf97ae99ca634d1d9ff19 | [] | no_license | David-B-A/Ranking-attriibutes-by-entropy | 53f3b347bc7fbe27a0e498224855695881f5c3f7 | 07eaf2ccef19433f3b3140018ddb7eca91733eaf | refs/heads/master | 2021-04-26T23:10:25.821417 | 2018-03-05T10:54:26 | 2018-03-05T10:54:26 | 123,940,502 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,950 | r | ranking_script.R | entropia <- function(s){
e=0
if(s==0){
e=0
}else if(s==1){
e=0
}else{
e = (s*log(s)) + ((1-s)*log(1-s))
}
return(e)
}
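The `entropia` helper computes a similarity-based entropy term per pair of records. For comparison, a self-contained sketch of the conventional form (with the leading minus sign; the function name here is illustrative, not part of the script):

```r
# E(s) = -(s*log(s) + (1-s)*log(1-s)); defined as 0 at s = 0 and s = 1.
sim_entropy <- function(s) {
  if (s <= 0 || s >= 1) return(0)
  -(s * log(s) + (1 - s) * log(1 - s))
}
sim_entropy(0.5)   # log(2) ~ 0.693: maximal disorder for half-similar pairs
sim_entropy(0.95)  # near 0: a clearly similar pair contributes little
```

Because the ranking only compares absolute differences in total entropy, a consistent sign convention across all calls is what matters, not the sign itself.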
entropiaTotal <- function(data,tipos){
et=0
nregistros = nrow(data)
nvariables = ncol(data)
for (i in 1:(nregistros-1)){
for (j in (i+1):nregistros){
sumaSim=0
for (k in 1:nvariables){
if(tipos[k]==0){
if(data[i,k]==data[j,k]){
sumaSim = sumaSim+1
}
} else{
sumaSim = sumaSim + (1-abs(data[i,k]-data[j,k])/(max(data[,k])-min(data[,k])))
}
}
simPromedio = sumaSim/nvariables
et = et + entropia(simPromedio)
}
}
return(et)
}
#The data should be in a matrix
#It is necessary to define the attribute type, and put it in a vector (ordered by the variable number) where 1 = continuous, 0 = discrete.
#Returns a vector
variableQueMenosAporta <- function(matrixData, tipos){
entropiaTotalDatos <- entropiaTotal(matrixData, tipos)
nregistros = nrow(matrixData)
nvariables = ncol(matrixData)
entropiaVector = vector(length = nvariables)
entropiaVector[] = 0
for(variable in 1:nvariables){
dataTemp <- matrixData[,-variable]
tiposTemp <- tipos[-variable]
nvariablesTemp <- nvariables-1
matrizSim <- matrix(nrow=nregistros,ncol=nregistros)
entropiaQuitandoX1 = 0;
for (i in 1:(nregistros-1)){
for (j in (i+1):nregistros){
sumaSim=0
if(nvariablesTemp>=2){
for (k in 1:nvariablesTemp){
if(tiposTemp[k]==0){
if(dataTemp[i,k]==dataTemp[j,k]){
sumaSim = sumaSim+1
}
} else{
sumaSim = sumaSim + (1-abs(dataTemp[i,k]-dataTemp[j,k])/(max(dataTemp[,k])-min(dataTemp[,k])))
}
}
}
else{
for (k in 1:nvariablesTemp){
if(tiposTemp[k]==0){
if(dataTemp[i]==dataTemp[j]){
sumaSim = sumaSim+1
}
} else{
sumaSim = sumaSim + (1-abs(dataTemp[i]-dataTemp[j])/(max(dataTemp)-min(dataTemp)))
}
}
}
simPromedio = sumaSim/nvariablesTemp
entropiaVector[variable] = entropiaVector[variable] + entropia(simPromedio)
}
}
}
difentropia = abs(entropiaVector-entropiaTotalDatos)
resultado = which(difentropia==min(difentropia))
return(resultado)
}
rankingEntropy <- function(data,tipos){
rankingVariables <- variableQueMenosAporta(data,tipos)
nvariables = ncol(data)
ndata = data
ntipos = tipos
for(variable in 2:(nvariables-1)){
ndata = ndata[,-rankingVariables[variable-1]]
ntipos = ntipos[-rankingVariables[variable-1]]
temp <- variableQueMenosAporta(ndata,ntipos)
for(a in 1:length(rankingVariables)){
if (rankingVariables[a]<=temp){
temp = temp+1
}
}
rankingVariables[variable] <- temp
}
return(rankingVariables)
} |
ca23d6751ac1a07fb0372a2852335efb344f084f | 060f07fee1aa8e10b9fa83ead06de6e7e4b2b9d8 | /Visualization of car.R | a7f763fd7b8807ebbb81feec9e16477f6b450090 | [] | no_license | ktnchikhalkar/R-code-master | 26cad8d38a64886abf4dd9a29961bb459a66cc3d | 670c84dc9f77efc55d4a3c30b48d1ccd923ece9a | refs/heads/main | 2023-06-08T23:35:31.644966 | 2021-06-26T05:28:03 | 2021-06-26T05:28:03 | 380,420,354 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,596 | r | Visualization of car.R | ###Perform Basic Visualizations for all the columns(numerical data only) on any
# data set from the data-set folder; make sure it has plenty of data
# so we can make better inferences from the visualizations (boxplot, histogram)
cars<- read.csv("C:/Users/Ketan/Data science/Assigments/BAsic Statistics 1/Cars.csv")
summary(cars)
boxplot(cars)
# In the boxplot, HP has many outliers at its upper extreme
# MPG has no outliers
# VOL has only 2 outliers: one at the upper extreme and one at the lower
# SP has many outliers
# WT also has only 2 outliers: one at the upper extreme and one at the lower
library(moments)
#For HP
mean(cars$HP)
median(cars$HP) # mean > median
skewness(cars$HP) # skewness is positive, so HP is right-skewed
kurtosis(cars$HP) # value is greater than 3, so the distribution is leptokurtic
hist(cars$HP,xlab = "HP",ylab = "Frequency") # right skew, with a sharp peak and wide tail
#For MPG
mean(cars$MPG)
median(cars$MPG) # mean < median
skewness(cars$MPG) # skewness is negative, so MPG is left-skewed
kurtosis(cars$MPG) # value is less than 3, so the distribution is platykurtic
hist(cars$MPG,xlab = "mpg",ylab = "Frequency") # left skew, with a broad peak and thin tail
#For VOL
mean(cars$VOL)
median(cars$VOL) # mean < median
skewness(cars$VOL) # skewness is negative, so VOL is left-skewed
kurtosis(cars$VOL) # value is greater than 3, so the distribution is leptokurtic
hist(cars$VOL,xlab = "vol",ylab = "Frequency") # left skew, with a sharp peak and wide tail
#For SP
mean(cars$SP)
median(cars$SP) # mean > median
skewness(cars$SP) # skewness is positive, so SP is right-skewed
kurtosis(cars$SP) # value is greater than 3, so the distribution is leptokurtic
hist(cars$SP,xlab = "sp",ylab = "Frequency") # right skew, with a sharp peak and wide tail
#For WT
mean(cars$WT)
median(cars$WT) # mean < median
skewness(cars$WT) # skewness is negative, so WT is left-skewed
kurtosis(cars$WT) # value is greater than 3, so the distribution is leptokurtic
hist(cars$WT,xlab = "wt",ylab = "Frequency") # left skew, with a sharp peak and wide tail
library(lattice)
# Dot plots
dotplot(cars$HP,main="Dot plot",col="dodgerblue4")
dotplot(cars$MPG,col="red")
dotplot(cars$VOL,col="pink")
dotplot(cars$SP,col="purple")
dotplot(cars$WT,col="orange")
# Scatter plots
plot(cars$HP,main="Scatter plot",col="dodgerblue4")
plot(cars$MPG,col="red")
plot(cars$VOL,col="green")
plot(cars$SP,col="purple")
plot(cars$WT,col="orange")
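The five per-variable blocks above repeat the same mean/median/skewness/kurtosis/hist pattern; an equivalent loop (illustrative, reusing the `cars` data frame and the already-loaded `moments` package) is more compact:

```r
library(moments)
for (v in c("HP", "MPG", "VOL", "SP", "WT")) {
  x <- cars[[v]]
  cat(v, ": mean =", round(mean(x), 2),
      "; median =", round(median(x), 2),
      "; skewness =", round(skewness(x), 2),
      "; kurtosis =", round(kurtosis(x), 2), "\n")
  hist(x, xlab = v, ylab = "Frequency", main = paste("Histogram of", v))
}
```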
|
5f620737b61b546b726f6c2a4785af818e77418b | 5c5d84b1fe9558ddb35aec40aad8dd70b00696d1 | /man/default_compute_backend.Rd | 8c27afd11983e01ed7090b65adcb0bf46892aba2 | [] | no_license | returnString/cloudburst | 5aadba8bc6b7374524ee9ef3765a8464c665aa4a | ada9ed69099098911d7b79ef0d12afb334ee4544 | refs/heads/master | 2022-01-30T15:16:07.681891 | 2019-08-08T21:27:38 | 2019-08-08T21:27:38 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 223 | rd | default_compute_backend.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/project.R
\name{default_compute_backend}
\alias{default_compute_backend}
\title{Title}
\usage{
default_compute_backend()
}
\description{
Title
}
|
4a1dac9578b1c1c8b3bb858d45ba0f1e0a01b559 | 804019e5175b5637b7cd16cd8ff5914671a8fe4e | /scenario2b.R | 67e055bb14e16fcddc4392d5dee8490c8459a3b0 | [] | no_license | jejoenje/gmse_vary | dde8573adeff9d9c6736a7a834a0a8942c5fcc5d | 057a36896b3b77a0a3e56d1436abe7a45e38cc2d | refs/heads/master | 2022-11-28T11:31:54.294640 | 2020-08-03T10:12:42 | 2020-08-03T10:12:42 | 197,375,284 | 0 | 0 | null | 2019-12-10T21:08:45 | 2019-07-17T11:21:27 | HTML | UTF-8 | R | false | false | 3,009 | r | scenario2b.R | rm(list=ls())
### Set and create output folder for scenario run:
scenario_name = "scenario2b"
out_path = paste0("./sims/",scenario_name)
library(GMSE)
library(scales)
library(RColorBrewer)
library(doParallel)
library(truncnorm)
source('helpers.R')
source('gmse_apply_helpers.R')
source("build_para_grid.R")
### Override basic parameters according to scenario:
# gmse_paras$res_move_type = 0
# gmse_paras$res_move_to_yield = TRUE
# Create the root output dir for scenario_name if it does not already exist:
if(!dir.exists(out_path)) {
dir.create(out_path)
}
n_years = gmse_paras$n_years
# Initialise simulation run with first time step (i.e. time = 0)
sim_old = init_sims(gmse_paras)
# Reset user budgets (truncated normal):
start_budgets = rtruncnorm(gmse_paras$stakeholders,
a = 0.8*gmse_paras$user_budget,
b = 1.2*gmse_paras$user_budget,
mean = gmse_paras$user_budget,
sd = gmse_paras$user_budget/10)
sim_old$AGENTS[2:nrow(sim_old$AGENTS),17] = start_budgets
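The truncated-normal draw above bounds every starting budget to within ±20% of the nominal `user_budget`; a quick standalone check with illustrative values (1000 and sd 100 are placeholders, not the project's settings):

```r
library(truncnorm)
b <- rtruncnorm(1000, a = 0.8 * 1000, b = 1.2 * 1000, mean = 1000, sd = 100)
range(b)  # all draws fall inside [800, 1200] by construction
```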
# Set manager budget according to user budgets:
sim_old$AGENTS[1,17] = set_man_budget(u_buds = start_budgets, type = "prop", p = gmse_paras$man_bud_prop)
# Initialise output for simulation run:
yr_res = init_sim_out(sim_old)
# Loop through number of years
for(i in 1:n_years) {
#print(sprintf("Time step %d", i))
### Move resources according to yield
#sim_old = move_res(sim_old, gmse_paras)
### Set next time step's user budgets
new_b = set_budgets(cur = sim_old, type = "2020", yield_type = gmse_paras$yield_type, yv = gmse_paras$yield_value, scale = FALSE)
new_b[new_b>10000] = 10000
new_b[new_b<gmse_paras$minimum_cost] = gmse_paras$minimum_cost
sim_old$AGENTS[2:nrow(sim_old$AGENTS),17] = new_b
### Set next time step's manager's budget, according to new user budgets
sim_old$AGENTS[1,17] = set_man_budget(u_buds = new_b, type = "prop", p = gmse_paras$man_bud_prop)
### Try to run next time step
sim_new = try({gmse_apply(get_res = "Full", old_list = sim_old)}, silent = T)
### Check output of next time step; if there are errors (extinctions or no obs), skip to the next sim.
### (The following function call should call break() if class(sim_new) == "try-error").
check_ext = check_gmse_extinction(sim_new, silent = T)
### So this should only happen if "check_gmse_extinction(sim_new)" has not called break().
### So if NOT extinct, append output and reset time step:
if(check_ext == "ok") {
yr_res = append_output(sim_new, yr_res)
sim_old = sim_new
rm(sim_new)
} else {
break()
}
}
# Add parameters to output list:
yr_res$par = gmse_paras
# Save output list:
# Create millisecond timestamp with overprecision:
tstamp = format(Sys.time(), "%Y%m%d%H%M%OS6")
tstamp = sub("\\.","",tstamp)
saveRDS(yr_res, file = paste0(out_path,"/",tstamp,".Rds"))
# To run 30 of this script in parallel:
# seq 100 | xargs -I{} -P 6 /usr/bin/Rscript scenario2.R
|
d1162415e0f3879208c4547e49a56abfe7c96b91 | 26cac7181fcd9cb719a6af31193b141a7028b8b0 | /GCDP/cnv_3.r | 7a2652d6483b1fdb3f1b449b932c0a4309aa6e25 | [] | no_license | shubhambasu/Genome_Compression_Detection_Pipeline | 4f521097e2f6fb6a8905f4b62a20c5ee29bcb3c9 | 9992e0e2b2067481e5b93d0679654aa126b95427 | refs/heads/master | 2021-08-24T05:04:43.490780 | 2017-12-07T19:00:47 | 2017-12-07T19:00:47 | 103,571,387 | 0 | 1 | null | 2017-12-04T20:35:24 | 2017-09-14T19:09:45 | Shell | UTF-8 | R | false | false | 1,307 | r | cnv_3.r | #!/usr/bin/env Rscript
args = commandArgs(trailingOnly=TRUE)
a=read.table(args[1], as.is=T, header=F)
threshold_0=mean(a$V5)+sd(a$V5)*3 # mean + 3 SD
y=paste(toString(args[2]),"_single_genes_read_covg_stat_additional.txt",sep="")
write(c("First Mean=",toString(mean(a$V5)),"First SD=",toString(sd(a$V5)), "First Threshold=",toString(threshold_0)), file=y,ncolumns =5,append = TRUE, sep = "\t")
#nrow(a)
#threshold_0
a2=a[a$V6<=quantile(a$V6,0.25),]
threshold_1=mean(a2$V5)+sd(a2$V5)*3 # mean + 3 SD of the filtered subset
write(c("Second Mean=" , toString(mean(a2$V5)) , "Second SD=" , toString(sd(a2$V5)) , "Second Threshold=" , toString(threshold_1)) , file=y,ncolumns =5,append = TRUE, sep = "\t")
#nrow(a2)
#threshold_1
a3=a2[a2$V9<=quantile(a2$V9,0.25),]
#nrow(a3)
threshold_2=mean(a3$V5)+sd(a3$V5)*3 # mean + 3 SD of the twice-filtered subset
write(c("Third Mean=",toString(mean(a3$V5)),"Third SD=",toString(sd(a3$V5)),"Third Threshold=",toString(threshold_2)), file=y,ncolumns =5,append = TRUE, sep = "\t")
x=paste(toString(args[2]),"_single_genes_read_covg_stat.txt",sep="")
#threshold_2
#mean(a3$V5)
#sd(a3$V5)
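Each threshold above is mean + 3 SD of the per-gene coverage statistic, recomputed on progressively filtered subsets; a toy illustration with made-up coverage values (the data here are hypothetical, not from the pipeline):

```r
set.seed(1)
cov <- c(rnorm(99, mean = 10, sd = 1), 100)  # hypothetical coverage; one extreme gene
thr <- mean(cov) + sd(cov) * 3
cov[cov > thr]                               # only the extreme gene exceeds mean + 3 SD
```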
if (nrow(a3)<10) {
if (nrow(a2)<10){write(c(threshold_0),file=x,ncolumns =1,append = FALSE, sep = "\t")}
else {write(c(threshold_1),file=x,ncolumns =1,append = FALSE, sep = "\t")}
}else {write(c(threshold_2),file=x,ncolumns =1,append = FALSE, sep = "\t")
}
|
43e5aaa93cf89c33b5bb936068dad7474b57fd24 | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/RNeXML/examples/add_namespaces.Rd.R | 21c5e7e2909199f775c312c70997be5a972d4f20 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,021 | r | add_namespaces.Rd.R | library(RNeXML)
### Name: add_namespaces
### Title: add namespaces
### Aliases: add_namespaces
### ** Examples
## Create a new nexml object with a single metadata element:
modified <- meta(property = "prism:modificationDate", content = "2013-10-04")
nex <- add_meta(modified) # Note: 'prism' is defined in nexml_namespaces by default.
## Write multiple metadata elements, including a new namespace:
website <- meta(href = "http://carlboettiger.info",
rel = "foaf:homepage") # meta can be link-style metadata
nex <- add_meta(list(modified, website),
namespaces = c(foaf = "http://xmlns.com/foaf/0.1/"))
## Append more metadata, and specify a level:
history <- meta(property = "skos:historyNote",
content = "Mapped from the bird.orders data in the ape package using RNeXML")
nex <- add_meta(history,
nexml = nex,
level = "trees",
namespaces = c(skos = "http://www.w3.org/2004/02/skos/core#"))
|
300b519dd30415057574a4e9ff743d894dfef9ab | 7f2a011b2b2c313468646ac57226ce627d58552d | /code/CompareAgesFossils.r | 9d5b6c93411ba2887646d99225c66c5dd5663a5d | [
"MIT"
] | permissive | spiritu-santi/angiosperm-time-tree-2.0 | 9b707ba380e693dd93e2e3c738f103b3ce785073 | d1a8a14edadd15834ccba052991886853f4c852a | refs/heads/master | 2021-06-17T13:24:28.798500 | 2021-04-27T01:42:59 | 2021-04-27T01:42:59 | 194,142,746 | 1 | 1 | MIT | 2020-02-03T04:55:23 | 2019-06-27T18:13:37 | R | UTF-8 | R | false | false | 5,605 | r | CompareAgesFossils.r | # Compares mean stem and crown ages for dating analyses implementing different fossil calibration sets.
# - SA = Constrained Calibration (CC)
# - SB = Relaxed Calibration (RC)
# - SD = Unconstrained Calibration (UC)
# - phylo = conservative set
# - complete = complete set.
# Uses additional files.
# NOTE TO SELF: this can easily be automated! Currently needs to be edited with the names for each of the three dating analyses with different calibration strategies.
# NOTE TO SELF: probably also need to homogenize strategies' codes.
rich<-read.table("SPP.RICHNESS.csv",heade=T,sep=",") # file in data folder
resSTRICT<-read.table("RES_SBphylo_v2/2.Ages_complete.csv",sep=",",header=T) # generated by the preceding scripts
resRAW<-read.table("RES_SBcomplete_v2/2.Ages_complete.csv",sep=",",header=T) # generated by the preceding scripts
pdf("1.ComparisonFossils_RC.pdf")
rich_ord<-rich[which(rich$Family==""),]; dim(rich_ord)
resSTRICT$Order_richness<-rich_ord[match(resSTRICT$Order,rich_ord$Order),"Spp_richness"]
rich_fams<-rich[which(rich$Subfamily==""),]; dim(rich_fams)
rich_fams<-rich_fams[which(rich_fams$Family!=""),]; dim(rich_fams)
resSTRICT$Family_richness<-rich_fams[match(resSTRICT$Family,rich_fams$Family),"Spp_richness"]
rich_subfams<-rich[which(rich$Subfamily!=""),]; dim(rich_subfams)
resSTRICT$Subfamily_richness<-rich_subfams[match(resSTRICT$Subfamily,rich_subfams$Subfamily),"Spp_richness"]
non<-which(rownames(resSTRICT) %in% c("Aphloiaceae", "Daphniphyllaceae",
"Brunelliaceae", "Cynomoriaceae", "Mayacaceae",
"Mitrastemonaceae", "Oncothecaceae"))
resSTRICT[non,"CG_age_Acal"]<-NA
resSTRICT$Total_Richness <- rep(NA,dim(resSTRICT)[1])
for (i in 1:dim(resSTRICT)[1]){
if(!is.na(resSTRICT$Subfamily_richness[i])){resSTRICT$Total_Richness[i] <- resSTRICT$Subfamily_richness[i];next}
if(!is.na(resSTRICT$Family_richness[i])){resSTRICT$Total_Richness[i] <- resSTRICT$Family_richness[i];next}
resSTRICT$Total_Richness[i] <- resSTRICT$Order_richness[i]}
out.group<-which(resSTRICT$Order%in%c("Pinales","Gnetales","Ginkgoales","Cycadales"))
resSTRICT<-resSTRICT[-out.group,]
resRAW$Order_richness<-rich_ord[match(resRAW$Order,rich_ord$Order),"Spp_richness"]
rich_fams<-rich[which(rich$Subfamily==""),]; dim(rich_fams)
rich_fams<-rich_fams[which(rich_fams$Family!=""),]; dim(rich_fams)
resRAW$Family_richness<-rich_fams[match(resRAW$Family,rich_fams$Family),"Spp_richness"]
rich_subfams<-rich[which(rich$Subfamily!=""),]; dim(rich_subfams)
resRAW$Subfamily_richness<-rich_subfams[match(resRAW$Subfamily,rich_subfams$Subfamily),"Spp_richness"]
non<-which(rownames(resRAW) %in% c("Aphloiaceae", "Daphniphyllaceae",
"Brunelliaceae", "Cynomoriaceae", "Mayacaceae",
"Mitrastemonaceae", "Oncothecaceae"))
resRAW[non,"CG_age_Acal"]<-NA
resRAW$Total_Richness <- rep(NA,dim(resRAW)[1])
for (i in 1:dim(resRAW)[1]){
if(!is.na(resRAW$Subfamily_richness[i])){resRAW$Total_Richness[i] <- resRAW$Subfamily_richness[i];next}
if(!is.na(resRAW$Family_richness[i])){resRAW$Total_Richness[i] <- resRAW$Family_richness[i];next}
resRAW$Total_Richness[i] <- resRAW$Order_richness[i]}
out.group<-which(resRAW$Order%in%c("Pinales","Gnetales","Ginkgoales","Cycadales"))
resRAW<-resRAW[-out.group,]
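The resSTRICT and resRAW blocks above run identical richness-matching logic; a helper function (an illustrative refactor, not part of the original script) would do the same in one place:

```r
add_richness <- function(res, rich) {
  rich_ord     <- rich[rich$Family == "", ]
  rich_fams    <- rich[rich$Subfamily == "" & rich$Family != "", ]
  rich_subfams <- rich[rich$Subfamily != "", ]
  res$Order_richness     <- rich_ord[match(res$Order, rich_ord$Order), "Spp_richness"]
  res$Family_richness    <- rich_fams[match(res$Family, rich_fams$Family), "Spp_richness"]
  res$Subfamily_richness <- rich_subfams[match(res$Subfamily, rich_subfams$Subfamily), "Spp_richness"]
  res
}
# e.g. resSTRICT <- add_richness(resSTRICT, rich); resRAW <- add_richness(resRAW, rich)
```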
orders<-1:64
fams<-65:500
subfams<-501:dim(resRAW)[1]
print(unique(rownames(resRAW)==rownames(resSTRICT)))
resRAW<-resRAW[fams,]
resSTRICT<-resSTRICT[fams,]
plot(resRAW$SG_age_Acal,resSTRICT$SG_age_Acal,type="n",ylim=c(0,200),xlim=c(0,200),
xlab="Stem ages (complete set)",ylab="Stem ages (phylo-only set)");abline(a=0,b=1)
title("Stem age estimates (families)")
for (i in 1:length(resSTRICT$SG_age_Acal)) segments(y0=resRAW$SG_minHPD[i],y1=resRAW$SG_maxHPD[i],
x0=resSTRICT$SG_age_Acal[i],x1=resSTRICT$SG_age_Acal[i],lwd = 0.5,col="grey40")
for (i in 1:length(resSTRICT$SG_age_Acal)) segments(x0=resSTRICT$SG_minHPD[i],x1=resSTRICT$SG_maxHPD[i],
y0=resRAW$SG_age_Acal[i],y1=resRAW$SG_age_Acal[i],lwd = 0.5,col="grey40")
points(resSTRICT$SG_age_Acal,resRAW$SG_age_Acal,pch=21,cex=1,bg=rgb(1,0,0,0.8))
plot(resSTRICT$CG_age_Acal,resRAW$CG_age_Acal,type="n",ylim=c(0,200),xlim=c(0,200),
xlab="Crown ages (complete set)",ylab="Crown ages (phylo-only set)");abline(a=0,b=1)
title("Crown age estimates (families)")
for (i in 1:length(resSTRICT$SG_age_Acal)) segments(y0=resRAW$CG_minHPD[i],y1=resRAW$CG_maxHPD[i],
x0=resSTRICT$CG_age_Acal[i],x1=resSTRICT$CG_age_Acal[i],lwd = 0.5,col="grey40")
for (i in 1:length(resSTRICT$SG_age_Acal)) segments(x0=resSTRICT$CG_minHPD[i],x1=resSTRICT$CG_maxHPD[i],
y0=resRAW$CG_age_Acal[i],y1=resRAW$CG_age_Acal[i],lwd = 0.5,col="grey40")
points(resSTRICT$CG_age_Acal,resRAW$CG_age_Acal,pch=21,cex=1,bg=rgb(1,0,0,0.8))
HPD_complete<-resRAW$CG_maxHPD-resRAW$CG_minHPD
HPD_strict<-resSTRICT$CG_maxHPD-resSTRICT$CG_minHPD
HPDsC<-cbind(HPD_complete,HPD_strict)
boxplot(HPDsC,col=c("grey80","grey30"),pch=19,cex=0.6,
        ylab="95% HPD crown ages")
title("HPD crown age estimates")
HPD_complete<-resRAW$SG_maxHPD-resRAW$SG_minHPD
HPD_strict<-resSTRICT$SG_maxHPD-resSTRICT$SG_minHPD
HPDsC<-cbind(HPD_complete,HPD_strict)
boxplot(HPDsC,col=c("grey80","grey30"),pch=19,cex=0.6,
        ylab="95% HPD stem ages")
title("HPD stem age estimates")
dev.off()
|
4b5efdb7f9714bb9d8712344573dc48b9feea476 | bbfe62824b5a017bef3c8d29078a1e26b77c7442 | /plot3.R | 94c1ee10e45c4bc4bfc4cf4edda7de97fdcdfbbd | [] | no_license | Minimalia/ExData_Plotting1 | b77fbfee4849164c936a542869c7f5e1aa01f5c8 | 627468127a8b40a3ace3377960b1aa3cf07d7451 | refs/heads/master | 2020-12-11T06:11:25.695803 | 2016-03-05T00:06:59 | 2016-03-05T00:06:59 | 53,169,809 | 0 | 0 | null | 2016-03-04T22:32:30 | 2016-03-04T22:32:29 | null | UTF-8 | R | false | false | 1,314 | r | plot3.R | # This assignment uses data from the UC Irvine Machine Learning Repository,
# "Individual household electric power consumption Data Set" for dates:
# 1/2/2007: From line 66638 to 68077
# 2/2/2007: From line 68078 to 69517
# Need to Skip=66637 lines and read nrows = 2880
# Load needed libraries
library(plyr)
# Read data
data <- read.table("household_power_consumption.txt",sep=";",skip=66637, nrows= 2880,na.strings="?")
# Read header
header <- read.table("household_power_consumption.txt",header= TRUE,sep=";",nrows= 0)
# Names the data
names(data)<-names(header)
# Modify Date column as Date and Time as Date - Time column
data <-mutate(data,Time = strptime(paste(data$Date,data$Time),format="%d/%m/%Y %H:%M:%S"))
data <-mutate(data,Date = as.Date(Date))
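The `strptime` call above parses the pasted day/month/year and time strings into POSIXlt date-times; for example:

```r
strptime(paste("1/2/2007", "18:30:00"), "%d/%m/%Y %H:%M:%S")
# parses as 2007-02-01 18:30:00 in the local time zone
```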
# Set graphic device as PNG file
png("plot3.png", width = 480, height = 480, bg = "white")
# Plot:
with(data,plot(Time,Sub_metering_1,type ="l",xlab="",ylab=""))
# lines() draws on the existing plot, so par(new=TRUE) is unnecessary
with(data,lines(Time,Sub_metering_2,type ="l",col="red"))
with(data,lines(Time,Sub_metering_3,type ="l",col="blue"))
legend("topright",lty=c(1,1,1),lwd=2,col=c("black","red","blue"),legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
title(xlab="",ylab="Energy sub metering")
dev.off() |
c0cde1941a1e91bbe7cc9947d74c199a4751fe3a | 7b662d432f1170f066a3b486f9da0fd967935d16 | /plot4.R | 544d20c6873ae6d091507904a47a7ab88e3535b5 | [] | no_license | RuneChess/ExData_Plotting1 | 9ee87d15bd5af65dbc14557837e62d4cd8433b53 | 6e1960639b590b5a1abf84b865360473d3f3892d | refs/heads/master | 2020-12-03T05:31:49.905198 | 2015-05-10T22:09:28 | 2015-05-10T22:09:28 | 33,731,364 | 0 | 0 | null | 2015-04-10T13:59:29 | 2015-04-10T13:59:29 | null | UTF-8 | R | false | false | 996 | r | plot4.R | library(data.table)
data <- fread("household_power_consumption.txt", colClasses="character", sep=";", na.strings = "?")
data <- subset(data, Date == "1/2/2007" | Date == "2/2/2007")
data <- cbind(data, strptime(paste(data$Date, data$Time), "%d/%m/%Y %H:%M:%S"))
setnames(data, 10, "DateTime")
png(file="plot4.png", width=480, height=480)
par(mfrow=c(2,2))
plot(data$DateTime, data$Global_active_power, type="l", xlab="", ylab="Global Active Power")
plot(data$DateTime, data$Voltage, type="l", xlab="datetime", ylab="Voltage")
plot(data$DateTime, data$Sub_metering_1, type="l", col="black", xlab="", ylab="Energy sub metering")
lines(data$DateTime, data$Sub_metering_2, type="l", col="red")
lines(data$DateTime, data$Sub_metering_3, type="l", col="blue")
legend("topright", lty=c(1,1), col=c("black","red", "blue"), legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"))
plot(data$DateTime, data$Global_reactive_power, type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off()
|
ac9f57c0a150ea38553a96a1642ca048c0ba5352 | 97edcb6746069b6c6c7facbe82ed9bb4482d6e22 | /eSVD/man/dot-absolute_threshold.Rd | 8a34a4c96f241f79c7cc6f9dc78436f129c84a3b | [
"MIT"
] | permissive | linnykos/esvd | 7e59c5fc50f6e0bd23fcb85f1fa6aa66ea782a60 | 0b9f4d38ed20d74f1288c97f96322347ba68d08d | refs/heads/master | 2023-02-24T23:33:32.483635 | 2021-01-31T21:06:41 | 2021-01-31T21:06:41 | 129,167,224 | 2 | 2 | null | null | null | null | UTF-8 | R | false | true | 715 | rd | dot-absolute_threshold.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/initialization.R
\name{.absolute_threshold}
\alias{.absolute_threshold}
\title{Hard threshold all the entries in a matrix}
\usage{
.absolute_threshold(mat, direction, max_val = NA, tol2 = 0.001)
}
\arguments{
\item{mat}{numeric matrix with \code{n} rows and \code{p} columns}
\item{direction}{character either \code{"<="} or \code{">="} or \code{NA}}
\item{max_val}{maximum magnitude of each entry in the approximate}
\item{tol2}{numeric}
}
\value{
\code{n} by \code{d} matrix
}
\description{
While simple conceptually, this function is a bit janky-looking in its
implementation due to numeric instabilities previously encountered.
}
|
9bc16854d647939129e6594ae7d6ab0cb2cd9081 | 9bf1d95769a27f4f6e2c56e75f232369f5bca637 | /cw1.R | 23f8569a977578d71d48981a1bd4503d44fc832d | [] | no_license | wiktorbalaban/ElementyStatystki | 981dbed8b4b0adfdda67801e0da57d9b30461b66 | 7c770c03eb32316ae59a0a678c9fae7220043358 | refs/heads/master | 2020-04-05T17:40:13.044433 | 2018-11-19T17:14:21 | 2018-11-19T17:14:21 | 157,070,850 | 0 | 0 | null | null | null | null | WINDOWS-1250 | R | false | false | 1,326 | r | cw1.R | setwd("j://Desktop/elementy statystyki/ElementyStatystki")
#z1
x <- rep(c(TRUE,FALSE,TRUE,FALSE), c(3,4,2,5))
x
#z2
natur <- 1:1000
natur[seq(2, 1000, by = 2)] <- natur[seq(2, 1000, by = 2)]^(-1)
#z3
palind <- c(1:20,rep(0,10),seq(102,140,by=2),
seq(140,102,by=-2),rep(0,10),20:1)
#z4
MojaLista <- list(
imieINazwisko=c('Wiktor','Bałaban'),
PI=pi,
wekt=seq(0.02,1,by=0.02)
)
#z5
temp <- read.table('temp.txt')
tempNyC <- round((temp$NY_F - 32)/1.8, 2)
temp <- cbind(temp,tempNyC)
save(temp,file = 'Miasta.RData')
#z6
Cities <- read.csv2('Cities.csv')
Cities$MONTH <- NULL
Atlanta_C <- (Cities$ATLANTA - 32)/1.8
Phoenix_C <- (Cities$PHOENIX - 32)/1.8
SanDiego_C <- (Cities$SANDIEGO - 32)/1.8
load('Miasta.RData')
temp <- cbind(temp,Atlanta_C)
temp <- cbind(temp,Phoenix_C)
temp <- cbind(temp,SanDiego_C)
colnames(temp)[2] <- 'NY_C'
temp$Atlanta_C=round(temp$Atlanta_C,2)
temp$Phoenix_C=round(temp$Phoenix_C,2)
temp$SanDiego_C=round(temp$SanDiego_C,2)
temp$NY_F <- NULL
colnames(temp) <- c('Nowy York','Atlanta','Phoenix','San Diego')
save(temp,file = 'Miasta1.RData')
?legend
matplot(1:12,temp,type = 'b',lwd=2,pch=1:4,xlab='Miesiac',
ylab='Temperatura (w stopniach C)')
legend(x='topleft',y=0,legend=colnames(temp),col=1:4,lty=1:4,lwd=2,
pch=1:4)
|
e2965613eb91501540f8410ab985308443509424 | a29832f97abdaafd1490ec4e7a38e85505ed3790 | /man/plotSmoothsComparison.Rd | 2e5e6cc1b78d0fad50c086a375b6a901d5caf6e7 | [] | no_license | cran/growthPheno | edfd87b031ff311e8c0bd47da986a7f2ddd2bded | 6bb455e5dac33deb46536a3162238da06e30a508 | refs/heads/master | 2023-08-31T22:58:01.201522 | 2023-08-22T16:00:02 | 2023-08-22T18:31:00 | 196,967,533 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 11,611 | rd | plotSmoothsComparison.Rd | \name{plotSmoothsComparison}
\alias{plotSmoothsComparison}
\title{Plots several sets of smoothed values for a response, possibly along with growth rates and optionally including the unsmoothed values, as well as deviations boxplots.}
\description{Plots the smoothed values for an observed \code{response} and, optionally, the unsmoothed
observed \code{response} using \code{\link{plotProfiles}}. Depending on the setting of
\code{trait.types} (\code{response}, \code{AGR} or \code{RGR}), the computed traits of the
Absolute Growth Rates (AGR) and/or the Relative Growth Rates (RGR) are plotted. This
function will also calculate and produce, using \code{\link{plotDeviationsBoxes}}, boxplots
of the deviations of the supplied smoothed values from the observed response values for the
traits and for combinations of the different smoothing parameters and for subsets of
non-smoothing-\code{\link{factor}} combinations. The observed and smoothed values are
supplied in long format i.e. with the values for each set of smoothing parameters stacked
one under the other in the supplied \code{\link{smooths.frame}}. Such data can be generated
using \code{\link{probeSmooths}}; to prevent \code{\link{probeSmooths}} producing the
plots, which it is does using \code{plotSmoothsComparison}, \code{\link{plotDeviationsBoxes}}
and \code{\link{plotSmoothsMedianDevns}}, set \code{which.plots} to \code{none}.
The smoothing parameters include \code{spline.types}, \code{df}, \code{lambdas} and
\code{smoothing.methods} (see \code{\link{probeSmooths}}).
Multiple plots, possibly each having multiple facets, are produced using \code{ggplot2}.
The layout of these plots is controlled via the arguments \code{plots.by},
\code{facet.x} and \code{facet.y}. The basic principle is that the number of levels
combinations of the smoothing-parameter \code{\link{factor}}s \code{Type}, \code{TunePar},
\code{TuneVal}, \code{Tuning} (the combination of (\code{TunePar} and \code{TuneVal}), and
\code{Method} that are included in \code{plots.by}, \code{facet.x} and
\code{facet.y} must be the same as those covered by the combinations of the values
supplied to \code{spline.types}, \code{df}, \code{lambdas} and \code{Method} and incorporated
into the \code{\link{smooths.frame}} input to \code{plotSmoothsComparison} via the
\code{data} argument. This ensures that smooths from different parameter sets are not
pooled into the same plot. The \code{\link{factor}}s other than the smoothing-parameter
\code{\link{factor}}s can be supplied to the \code{plots.by} and \code{facet} arguments.
The following profiles plots can be produced: (i) separate plots of the
smoothed traits for each combination of the smoothing parameters
(include \code{Type}, \code{Tuning} and \code{Method} in \code{plots.by});
(ii) as for (i), with the corresponding plot for the unsmoothed trait
preceeding the plots for the smoothed trait (also set \code{include.raw} to
\code{alone}); (iii) profiles plots that compare a smoothed trait for all
combinations of the values of the smoothing parameters, arranging the plots
side-by-side or one above the other (include \code{Type}, \code{Tuning} and
\code{Method} in \code{facet.x} and/or \code{facet.y} - to include the
unsmoothed trait set \code{include.raw} to one of \code{facet.x} or
\code{facet.y}; (iv) as for (iii), except that separate plots are
produced for each combination of the levels of the \code{\link{factor}}s
in \code{plot.by} and each plot compares the smoothed traits for the
smoothing-parameter \code{\link{factor}}s included in \code{facet.x}
and/or \code{facet.y} (set both \code{plots.by} and one or more of
\code{facet.x} and \code{facet.y}).
}
\usage{
plotSmoothsComparison(data, response, response.smoothed = NULL,
individuals = "Snapshot.ID.Tag", times = "DAP",
trait.types = c("response", "AGR", "RGR"),
x.title = NULL, y.titles = NULL,
profile.plot.args =
args4profile_plot(plots.by = NULL,
facet.x = ".", facet.y = ".",
include.raw = "no"),
printPlot = TRUE, ...)
}
\arguments{
\item{data}{A \code{\link{smooths.frame}}, such as is produced by
\code{\link{probeSmooths}} and that contains the data resulting from
smoothing a response over time for a set of \code{individuals}, the data
being arranged in long format both with respect to the
times and the smoothing-parameter values used in the smoothing. That is,
each response occupies a single column. The unsmoothed \code{response} and
the \code{response.smoothed} are to be plotted for different sets of values
for the smoothing parameters. The \code{\link{smooths.frame}} must include
the columns \code{Type}, \code{TunePar}, \code{TuneVal}, \code{Tuning} and
\code{Method}, and the columns nominated using the arguments
\code{individuals}, \code{times}, \code{plots.by}, \code{facet.x}, \code{facet.y},
\code{response}, \code{response.smoothed}, and, if requested,
the AGR and the RGR of the \code{response} and \code{response.smoothed}.
The names of the growth rates should be formed from \code{response} and
\code{response.smoothed} by adding \code{.AGR} and \code{.RGR} to both of them.}
\item{response}{A \code{\link{character}} specifying the response variable for which the
observed values are supplied.}
\item{response.smoothed}{A \code{\link{character}} specifying the name of the column
containing the values of the smoothed response variable, corresponding
to \code{response} and obtained for the combinations of
\code{smoothing.methods} and \code{df}, usually using smoothing splines.
If \code{response.smoothed} is \code{NULL}, then
\code{response.smoothed} is set to the \code{response} to which is added
the prefix \code{s}. }
\item{times}{A \code{\link{character}} giving the name of the column in
\code{data} containing the times at which the data was
collected, either as a \code{\link{numeric}}, \code{\link{factor}}, or
\code{\link{character}}. It will be used to provide the values to be plotted
on the x-axis. If a \code{\link{factor}} or \code{\link{character}},
the values should be numerics stored as characters.}
\item{individuals}{A \code{\link{character}} giving the name of the
\code{\link{factor}} that defines the subsets of the \code{data}
for which each subset corresponds to the \code{response} values for
an individual (e.g. plant, pot, cart, plot or unit).}
\item{trait.types}{A \code{\link{character}} giving the \code{trait.types} that
are to be plotted when \code{which.plots} is \code{profiles}.
Irrespective of the setting of \code{get.rates}, the nominated traits
are plotted. If \code{all}, each of \code{response}, \code{AGR} and
\code{RGR} is plotted.}
\item{x.title}{Title for the x-axis, used for all plots. If \code{NULL} then set to
\code{times}.}
\item{y.titles}{A \code{\link{character}} giving the titles for the y-axis,
one for each trait specified by \code{trait.types} and used for all plots.
If \code{NULL}, then set to the traits derived for \code{response}
from \code{trait.types}.}
\item{profile.plot.args}{A named \code{\link{list}} that is most easily
generated using \code{\link{args4profile_plot}}, it documenting the
options available for varying profile plots and boxplots. \emph{Note
that if \code{\link{args4profile_plot}} is to be called to change
from the default settings given in the default
\code{plotSmoothsComparison} call
and some of those settings are to be retained, then the arguments
whose settings are to be retained must also be included in the call
to \code{\link{args4profile_plot}}; be aware that if you call
\code{\link{args4profile_plot}}, then the defaults for this call are
those for \code{\link{args4profile_plot}}, \bold{NOT} the call to
\code{\link{args4profile_plot}} shown as the default for
\code{plotSmoothsComparison}.}}
\item{printPlot}{A \code{\link{logical}} indicating whether or not to print any
plots.}
\item{...}{allows passing of arguments to \code{\link{plotProfiles}}.}
}
\value{A multilevel \code{\link{list}} that contains the \code{\link{ggplot}}
objects for the plots produced. The first-level \code{\link{list}}
has a component for each \code{trait.types} and each of these is a
second-level \code{\link{list}} that contains the trait
profile plots and for a \code{trait}. It may contain components labelled
\code{Unsmoothed}, \code{all} or for one of the levels of the
\code{\link{factor}}s in \code{plots.by}; each of these third-level
\code{\link{list}}s contains a \code{\link{ggplot}} object that can
be plotted using \code{print}.
}
\author{Chris Brien}
\seealso{\code{\link{traitSmooth}}, \code{\link{probeSmooths}}, \code{\link{args4profile_plot}}, \code{\link{plotDeviationsBoxes}}, \code{\link{plotSmoothsMedianDevns}}, \code{\link{ggplot}}.}
\examples{
data(exampleData)
vline <- list(ggplot2::geom_vline(xintercept=29, linetype="longdash", size=1))
traits <- probeSmooths(data = longi.dat,
response = "PSA", response.smoothed = "sPSA",
times = "DAP",
#only df is changed from the probeSmooth default
smoothing.args =
args4smoothing(smoothing.methods = "direct",
spline.types = "NCSS",
df = c(4,7), lambdas = NULL),
which.plots = "none")
plotSmoothsComparison(data = traits,
response = "PSA", response.smoothed = "sPSA",
times = "DAP", x.title = "DAP",
#only facet.x is changed from the probeSmooth default
profile.plot.args =
args4profile_plot(plots.by = NULL,
facet.x = "Tuning", facet.y = ".",
include.raw = "no",
ggplotFuncs = vline))
}
\keyword{hplot}
\keyword{manip} |
3b42135a20856d5639cf434d0866c33f895b8750 | 44cf65e7ab4c487535d8ba91086b66b0b9523af6 | /data/Newspapers/2001.08.21.editorial.64745.0697.r | 2daf8e441d10440f0052a750bed84dc75e7d5c9f | [] | no_license | narcis96/decrypting-alpha | f14a746ca47088ec3182d610bfb68d0d4d3b504e | 5c665107017922d0f74106c13d097bfca0516e66 | refs/heads/master | 2021-08-22T07:27:31.764027 | 2017-11-29T12:00:20 | 2017-11-29T12:00:20 | 111,142,761 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 3,304 | r | 2001.08.21.editorial.64745.0697.r | Adrian Nastase , pe urmele lui Emil Constantinescu Adrian Nastase , pe urmele lui Emil Constantinescu CORNEL NISTORESCU Marti , 21 August 2001 Concordia e un vis frumos al politicienilor romani . Sa ne iubim cu totii si sa nu mai aratam strainilor dezbinare , mincatorie , cirteala si combinatii .
pornind de la Apelul de la Snagov , trecind prin reconcilierea cu Regele , prin celebra scrisoare zice - se semnata de presedinte , Rege si patriarh si pina la initiativa lui Adrian Nastase de a pune de un armistitiu politic , toate sint strabatute de aceeasi idee .
sa ne aratam strainatatii solidari , strins uniti in jurul idealurilor nationale . Intr - un anume fel , ideea suna bine .
si cancelariile straine s - au plictisit de mincatoria romaneasca , recunoscuta deja ca o valoare traditionala .
initiativa premierului Nastase da bine si la publicul larg .
multi romani se intreaba in fiecare zi de ce nu putem fi si noi uniti cum sint ungurii , de exemplu !
si nimeni nu le poate da un raspuns .
exista , asadar , o anume disponibilitate pentru aceasta impacare - armistitiu in vederea atingerii unui scop politic major : admiterea in NATO la Summit - ul din 2002 de la Praga .
inainte de a purcede la negocierea acestui armistitiu cred ca ar trebui evaluate si consecintele sale .
nu e vorba de cele de imagine , ci de cele practice .
sa nu uitam ca atitudinea nu e noua . In 1997 , o politica asemanatoare a dus si Emil Constantinescu .
guvernul a fost dadacit indeaproape pentru a nu lua decizii care sa genereze evenimente si turbulente sociale .
si atunci , strategia a fost sa ne prezentam la Madrid cu o imagine frumoasa , unitara , credibila , un pilon de stabilitate in zona .
Victor Ciorbea a tras frina reformelor , in economie nu s - a intimplat mare lucru , si toata lumea a asteptat cu sufletul la gura decizia de la Madrid .
coalitia de atunci a pierdut astfel prima perioada de guvernare fara sa se atinga de viciile adinci din mecanismul economic .
de la Madrid ne - am ales cu o nominalizare , importanta ca semnal politic de pus la butoniera , dar care n - a fost suficienta pentru a da guvernului de atunci puterea sa faca unele schimbari obligatorii .
si , astfel , cabinetul Ciorbea a pierdut aproape totul doar pentru o liniste de parada .
ce avantaje poate avea armistitiul politic propus acum de Adrian Nastase ?
le - am amintit .
plus avantajul , pentru PSD , de a cistiga o perioada in care adversarii politici , si asa pirpirii , sa nu - i puna vreo problema . O intelegere cu sindicatele ar oferi PSD un an fara dureri de cap .
ce am pierde ? Protejate de acest armistitiu , partidul de guvernamint si executivul pot intra intr - o perioada de trai nineaca .
pot amina privatizari dificile , pot inteti aranjamentele si combinatiile in interes personal si pot lasa mai toate problemele mari ale societatii romanesti pentru 2003 , cind e oricum prea tirziu pentru taieturi dureroase , la radacina raului .
Emil Constantinescu si Victor Ciorbea au pierdut in 1997 cea mai propice perioada a lor printr - un demers similar celui initiat acum de Adrian Nastase . Aici e riscul .
sa pierdem si NATO si sa ne trezim , peste un an si jumatate , ca timpul a trecut si de data aceasta fara vreun folos .
Copyright 1996 - 2003 Evenimentul Zilei Online .
|
87ba4497fecb2f3114bb4d86c87f6ca8d8fac40b | 594a5c780c6bf31da16e0c29270c74ba6737e842 | /man/JAGS_marglik_priors.Rd | 0079e3dd99411ac18baa894056d8d394aa641f0d | [] | no_license | FBartos/BayesTools | 55faad16c0b3558d519161c33b7cf448a34ca739 | 98fa230cf191b9093fb23f878799367d940b19d1 | refs/heads/master | 2023-07-19T08:40:23.734725 | 2023-07-11T13:16:49 | 2023-07-11T13:16:49 | 363,232,801 | 8 | 1 | null | 2023-07-11T13:16:50 | 2021-04-30T18:57:23 | R | UTF-8 | R | false | true | 1,094 | rd | JAGS_marglik_priors.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/JAGS-marglik.R
\name{JAGS_marglik_priors}
\alias{JAGS_marglik_priors}
\alias{JAGS_marglik_priors_formula}
\title{Compute marginal likelihood for 'JAGS' priors}
\usage{
JAGS_marglik_priors(samples, prior_list)
JAGS_marglik_priors_formula(samples, formula_prior_list)
}
\arguments{
\item{samples}{samples provided by bridgesampling
function}
\item{prior_list}{named list of prior distribution
(names correspond to the parameter names) of parameters not specified within the
\code{formula_list}}
\item{formula_prior_list}{named list of named lists of prior distributions
(names of the lists correspond to the parameter name created by each of the formula and
the names of the prior distribution correspond to the parameter names) of parameters specified
within the \code{formula}}
}
\value{
\code{JAGS_marglik_priors} returns a numeric value
of likelihood evaluated at the current posterior sample.
}
\description{
Computes marginal likelihood for the
prior part of a 'JAGS' model within 'bridgesampling'
function
}
|
3de07d41e5b94dcd07e241bdd1d71bf8c9af92dc | 7019f53c9e7e6705fef0d904338fd0d2f2909823 | /man/KSJMetadata_code.Rd | 1203fc29dd0dbc86c8cf6f03f342825e9fcfbce2 | [] | no_license | natsuapo/kokudosuuchi | 0923d696de49306ed256f262de4f9fbbfaf1f0fa | 0a134ee7348dcd121000f1cc2f82d5af147fdc5c | refs/heads/master | 2020-04-08T10:09:06.790305 | 2018-02-28T13:52:31 | 2018-02-28T13:52:31 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 711 | rd | KSJMetadata_code.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{KSJMetadata_code}
\alias{KSJMetadata_code}
\alias{KSJMetadata_code_correspondence_tables}
\alias{KSJMetadata_code_year_cols}
\title{Corresponding Table Of Shapefile Properties}
\format{A data.frame with code and names}
\source{
\url{http://nlftp.mlit.go.jp/ksj/index.html}, \url{http://nlftp.mlit.go.jp/ksj/gml/shape_property_table.xls}
\url{http://nlftp.mlit.go.jp/ksj/index.html}
\url{http://nlftp.mlit.go.jp/ksj/index.html}
}
\usage{
KSJMetadata_code
KSJMetadata_code_correspondence_tables
KSJMetadata_code_year_cols
}
\description{
Corresponding Table Of Shapefile Properties
}
\keyword{datasets}
|
58bde3ca74bfb8df6fe5dad21d17bcc1b8473967 | 0a906cf8b1b7da2aea87de958e3662870df49727 | /biwavelet/inst/testfiles/rcpp_row_quantile/libFuzzer_rcpp_row_quantile/rcpp_row_quantile_valgrind_files/1610555749-test.R | d21323eaa6a48f64f3374955364fe3f9811c9ab8 | [] | no_license | akhikolla/updated-only-Issues | a85c887f0e1aae8a8dc358717d55b21678d04660 | 7d74489dfc7ddfec3955ae7891f15e920cad2e0c | refs/heads/master | 2023-04-13T08:22:15.699449 | 2021-04-21T16:25:35 | 2021-04-21T16:25:35 | 360,232,775 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,253 | r | 1610555749-test.R | testlist <- list(data = structure(c(7.00072806654748e-304, 3.52953696509973e+30, 3.52983194407979e+30, 3.52998361619794e+30, 3.52981852653518e+30, 3.52548331169398e+30, 1.00891825394848e-309, 4.94065645841247e-324, 4.94065645841247e-324, 4.94065645841247e-324, 9.18962101264719e-322, 3.52953630161737e+30, 1.13963354584561e-149, 7.06327445644526e-304, 4.2102729862467e-309, 5.07578878677679e-299, 4.17880723252526e-320, 2.64663131185989e-260, 1.23311460881773e-309, 2.84998694239143e-306, 6.80564733831973e+38, 2.85574051412322e-306, 3.49284541243824e+30, 1.97345617793363e-312, 3.52953806518976e+30, 3.52952113075679e+30, 3.94108692949116e-312, 5.28366675403904e-294, 3.5295369721567e+30, 3.52981852653518e+30, 3.52548331169398e+30, 1.00891825419764e-309, 2.64220905890758e-260, 3.89585164253465e-315, 2.42088035805596e-305, 0, 3.81751933333032e-310, 2.44801582990602e-307, 2.67356514137184e+29, 2.6453397382981e+29, 3.52953696534145e+30, 1.62406169103884e-302, 3.52936943248072e+30, 1.59096294155332e-149, 1.30971821677825e-134, 7.12673166917663e+33, 1.6259745431392e-260, 9.61518815692645e-310, 5.57354928129533e-308), .Dim = c(7L, 7L)), q = -1.85984411296219e-35)
result <- do.call(biwavelet:::rcpp_row_quantile,testlist)
str(result) |
2b6e9c49dea79d2e3cc78faf8b90350565509d8b | f69138fa69d215b67d8f5ca61a260a19549bf6d4 | /subgroupVecDifferences.r | a5aaf93cef4db1b90de7df1e02b8d3c403a81aec | [
"MIT"
] | permissive | JamesRekow/Canine_Cohort_GLV_Model | 302a6de25897dfd6f038f88def159c8bc4480db3 | b7a3b167650471d0ae5356d1d2a036bde771778c | refs/heads/master | 2021-09-10T06:43:39.086540 | 2018-03-21T19:07:41 | 2018-03-21T19:07:41 | 105,334,062 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,736 | r | subgroupVecDifferences.r | # James Rekow
subgroupVecDifferences = function(subgroupVec1, subgroupVec2){
# ARGS: subgroupVec1 - a partition of a numeric vector of the form 1:n
# subgroupVec2 - another partition of the same numeric vector
#
# RETURNS: numDiffs - number of elements in subgroupVec2 that are placed in a different subgroup
  #          than in subgroupVec1
  # identify the number of elements in the set being partitioned
  numElements = length(unlist(subgroupVec1))
# identify the set which the partitions divide
baseSet = 1:numElements
computeSubgroupIxVec = function(inputPartition){
# ARGS: inputPartition - partition
#
# RETURNS: subgroupIxVec - numeric vector whose ith element is the index of the subgroup to which
# the ith element of the base set belongs to in the input partition
# identify number of subgroups in input partition
numSubgroups = length(inputPartition)
    # initialize the vector of subgroup indices
subgroupIxVec = baseSet
for(ii in 1:numSubgroups){
# identify elements of baseSet in subgroup ii
subgroupElements = inputPartition[[ii]]
      # record the index of the subgroup containing those elements
subgroupIxVec[subgroupElements] = ii
} # end for
return(subgroupIxVec)
  } # end computeSubgroupIxVec
# compute subgroup ix vector for each partition
  subgroupIxVec1 = computeSubgroupIxVec(subgroupVec1)
  subgroupIxVec2 = computeSubgroupIxVec(subgroupVec2)
# count the number of elements that are in different subgroups in the two partitions
numDiffs = sum(subgroupIxVec1 != subgroupIxVec2)
return(numDiffs)
} # end subgroupVecDifferences function
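
# A minimal usage sketch of the function above. The two partitions below are
# hypothetical illustrative data (not from the original repo); each is a list of
# subgroups jointly covering the base set 1:5, as the ARGS comment describes.
partitionA = list(c(1, 2), c(3, 4, 5))
partitionB = list(c(1, 2, 3), c(4, 5))
# Subgroup index vectors are c(1,1,2,2,2) and c(1,1,1,2,2): only element 3
# is assigned to a different subgroup, so one difference is counted
subgroupVecDifferences(partitionA, partitionB)  # expected: 1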
|
4eae9612ddd6d261c1745332968ee1d2139d9c10 | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/PLMIX/examples/make_complete.Rd.R | ef534b6ef39d1474b3087a7e517030115a90753f | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 554 | r | make_complete.Rd.R | library(PLMIX)
### Name: make_complete
### Title: Completion of partial rankings/orderings
### Aliases: make_complete
### ** Examples
# Completion based on the top item frequencies
data(d_dublinwest)
head(d_dublinwest)
top_item_freq <- rank_summaries(data=d_dublinwest, format="ordering", mean_rank=FALSE,
pc=FALSE)$marginals["Rank_1",]
d_dublinwest_compl <- make_complete(data=d_dublinwest, format="ordering",
probitems=top_item_freq)
head(d_dublinwest_compl$completedata)
|
cae60482a5743eb108fad5f9060235ee76d5afe8 | b8180cbb66ddc84d8960b63c0605d9e57403a5b1 | /package/R/NAMESPACE.R | ea90b12cf81b27b9d4b1fe977dd4709082a439c0 | [] | no_license | compStat/programming-course | 12b41e362391d069914557f1f610af386ab49aae | cf50f8cd69430b071773fd565a0836e460bddafe | refs/heads/master | 2021-01-20T06:43:07.083209 | 2017-05-03T15:10:54 | 2017-05-03T15:10:54 | 89,914,758 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 110 | r | NAMESPACE.R | # Imports:
#' @importFrom magrittr %>%
#' @importFrom purrr map flatten_chr
#' @importFrom dplyr mutate_
NULL |
23918f0f9266515b38a20e6a84168347e1c7ae1d | 23317d77e1557fde63be28f42f4e9e91887a08d5 | /read_ceiling_catch_CheckClb.R | 0a2c5ec3cd62ecd16ad919a484d531b2646f65a3 | [] | no_license | CTC-PSC/ReadCEICatches | fd6990f375810cc62295c012e5815b1f2d883514 | eeb46e8ba6394d23b91c40dc8e5e6f3863117326 | refs/heads/master | 2021-01-18T18:05:17.733462 | 2017-03-31T17:17:03 | 2017-03-31T17:17:03 | 86,841,193 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,799 | r | read_ceiling_catch_CheckClb.R | ##########################################################################################
## ##
## This script extracts the observed and estimated ceiling fisheries catches from the ##
## Check Clb Out. The data frame "x" contains the ceiling catches ##
## ##
##########################################################################################
# ----------------------------------------------------------------------------------------
readCatchCCO <- function(directory, file) {
lines <- readLines(paste0(directory, "\\", file))
endLine <- grep("TERMINAL RUN COMPARISONS", lines) - 3
lines <- lines[1:endLine]
fisheryLines <- grep("CEILINGS SPECIFIED FOR", lines)
startLines <- fisheryLines + 6
endLines <- startLines + 37
x <- NULL
for(i in 1:length(startLines)) {
fishery <- lines[fisheryLines[i]]
fishery <- substr(fishery, 24, nchar(fishery))
catches <- lines[startLines[i]:endLines[i]]
catches <- substr(catches, 1, 20)
year <- as.numeric(substr(catches, 1,4))
obs <- as.numeric(substr(catches, 7,12))
est <- as.numeric(substr(catches, 15,20))
temp <- data.frame(year, fishery, obs, est)
    x <- rbind(x, temp)
}
return(x)
}
# --------------------------------------------------------------------------------------------------------
# Test
# wd <- "C:\\Users\\gart\\Desktop\\zzz\\ctc\\annual run old base\\2017\\evaluate cei fit"
# readCatchCCO(wd, "1701BCheckClb.out")
# -----------------------------------------------------------------------------------------
|
1fa5433bb52f144aaddb74e039badb9e5db5b05d | 9df0855a52738d76c2f57337f90060e804e47c21 | /04_exploratory_data_analysis/plot2.R | 4a7cb9ee7a95d10dd4b176c48e0756857c9a4f20 | [] | no_license | J-McNamara/mcnamara_coursera_data_science_code_portfolio | 6ae11698e8a7310dd49f3674494dd4d00106f03c | eab6c8044e079b64ba138e9511f3c40804b6a88a | refs/heads/master | 2020-07-22T23:31:25.847930 | 2019-09-09T18:14:03 | 2019-09-09T18:14:03 | 207,367,446 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 726 | r | plot2.R | # Read in data
df <-read.csv('household_power_consumption.txt',stringsAsFactors=FALSE,sep=';')
keep <-as.Date(df$Date,'%d/%m/%Y') %in% c(as.Date('01/02/2007','%d/%m/%Y'),as.Date('02/02/2007','%d/%m/%Y'))
df <- df[keep,]
# Fix classes
df$Date <- strptime(paste(df$Date, df$Time, sep=' '), '%d/%m/%Y %H:%M:%S')
df$Global_active_power <- as.numeric(df$Global_active_power)
# Construct plot 2
options(device='RStudioGD')
# Write to file
png(filename = 'plot2.png', width = 480, height = 480, units = 'px')
plot(df$Date, df$Global_active_power, type='l', ylab='Global Active Power (kilowatts)', xlab='')
# axis(1, at = c(0,1500,2900),labels = c('Thu','Fri','Sat'))
# axis(2,at=c(0,2,4,6))
box()
dev.off()
|
c60b51b1a7546eea1a5d4203746de60bbdc1b667 | 8dd3464c72997d1287947c2a5ca96078d93cd7d8 | /ripleys/ripleys_lizards.R | fecf2e36c6afde125048afc0170093c6e47161b7 | [] | no_license | ian-flores/Spatial-Heterogeneity | a6c3e5be39e2da272940f04c15c4540a90730224 | 3c6d42c18f0f77c74d94887cd70e6ceba71d8005 | refs/heads/master | 2021-10-09T15:47:52.064668 | 2018-12-31T09:58:48 | 2018-12-31T09:58:48 | 108,680,046 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,388 | r | ripleys_lizards.R | #Spatial Heterogeneity Ripley's K LIZARDS
#Sets the working directory (Monophyllus)
setwd("C:/Users/Ian/Dropbox/Spatial Heterogeneity Anoles/Data")
#Sets the working directory (Mac)
setwd("~/Dropbox/Spatial Heterogeneity Anoles/Data")
#loads spatstat
library(spatstat)
#loads the data
anolis <- read.csv("SH CSV.csv", header=T)
#plots the data
plot(anolis$xcoord, anolis$ycoord,
main="A. gundlachi spatial distribution",
xlab="X", ylab="Y")
#creates a matrix with the coordinates
lagartijos <- cbind(anolis$xcoord, anolis$ycoord)
#creates the window in which the values are calculated
window_anolis <- owin(xrange=c(min(anolis$xcoord, na.rm=T),
max(anolis$xcoord, na.rm=T)),
yrange=c(min(anolis$ycoord, na.rm=T),
max(anolis$ycoord, na.rm=T)))
# Transforms the data to a ppp object
anolisppp <- as.ppp(lagartijos, W=window_anolis)
#Transforms the data to a ppp object, using SVL as a covariate
anolis_grandes <- as.ppp(lagartijos, W=window_anolis, marks=anolis$SVL)
# Estimates the K value for each value in the radius
# No se puede expandir el r??
K <- Kest(anolisppp, correction="best")
#Creates a window to show both plots in a single image
par(mfrow=c(1,2))
z <- envelope(anolisppp, fun=Kest, nsim=1000)
# Plots the K values, without the covariate SVL
plot(z, main="Distribucion Espacial de los Lagartijos" ,
xlab="Distancia (m)", ylab="K(r)", legend=F)
legend(0,600, c("K Observado", "K Teorico",
"Intervalo de Confianza"),
fill=c("black", "red", "grey"), box.col="white")
dev.copy2pdf(file="Distribucion Espacial de los Lagartijos PDF")
# Plots the K values, with covariate SVL
plot(envelope(anolis_grandes, fun=Kest, nsim=1000),
main="K Function for A. gundlachi",
xlab="Distance (m)", ylab="K(r)", sub="Including SVL")
# Plots the L transformation, without the covariate SVL
plot(envelope(anolisppp, fun=Kest, nsim=1000,
transform = expression(sqrt(./pi))),
main="L Function for A. gundlachi", ylab="L(r)", sub="Without SVL")
# Plots the L transformation, with covariate SVL
plot(envelope(anolis_grandes, fun=Kest, nsim=1000,
transform = expression(sqrt(./pi))),
main="L Function for A. gundlachi", ylab="L(r)", sub="Including SVL")
|
4a4c69e84db3986868c02eb2aa973526ba1cfae1 | c555092c911699a657b961a007636208ddfa7b1b | /man/aes_.Rd | 9bc6fc7a0ecf0cc421f9a054dc0345e8dd3effac | [] | no_license | cran/ggplot2 | e724eda7c05dc8e0dc6bb1a8af7346a25908965c | e1b29e4025de863b86ae136594f51041b3b8ec0b | refs/heads/master | 2023-08-30T12:24:48.220095 | 2023-08-14T11:20:02 | 2023-08-14T12:45:10 | 17,696,391 | 3 | 3 | null | null | null | null | UTF-8 | R | false | true | 2,156 | rd | aes_.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/aes.R
\name{aes_}
\alias{aes_}
\alias{aes_string}
\alias{aes_q}
\title{Define aesthetic mappings programmatically}
\usage{
aes_(x, y, ...)
aes_string(x, y, ...)
aes_q(x, y, ...)
}
\arguments{
\item{x, y, ...}{List of name value pairs. Elements must be either
quoted calls, strings, one-sided formulas or constants.}
}
\description{
\ifelse{html}{\href{https://lifecycle.r-lib.org/articles/stages.html#deprecated}{\figure{lifecycle-deprecated.svg}{options: alt='[Deprecated]'}}}{\strong{[Deprecated]}}
Aesthetic mappings describe how variables in the data are mapped to visual
properties (aesthetics) of geoms. \code{\link[=aes]{aes()}} uses non-standard
evaluation to capture the variable names. \code{aes_()} and \code{aes_string()}
require you to explicitly quote the inputs either with \code{""} for
\code{aes_string()}, or with \code{quote} or \code{~} for \code{aes_()}.
(\code{aes_q()} is an alias to \code{aes_()}). This makes \code{aes_()} and
\code{aes_string()} easy to program with.
\code{aes_string()} and \code{aes_()} are particularly useful when writing
functions that create plots because you can use strings or quoted
names/calls to define the aesthetic mappings, rather than having to use
\code{\link[=substitute]{substitute()}} to generate a call to \code{aes()}.
I recommend using \code{aes_()}, because creating the equivalents of
\code{aes(colour = "my colour")} or \code{aes(x = `X$1`)}
with \code{aes_string()} is quite clunky.
}
\section{Life cycle}{
All these functions are soft-deprecated. Please use tidy evaluation idioms
instead. Regarding \code{aes_string()}, you can replace it with \code{.data} pronoun.
For example, the following code can achieve the same mapping as
\code{aes_string(x_var, y_var)}.
\if{html}{\out{<div class="sourceCode r">}}\preformatted{x_var <- "foo"
y_var <- "bar"
aes(.data[[x_var]], .data[[y_var]])
}\if{html}{\out{</div>}}
For more details, please see \code{vignette("ggplot2-in-packages")}.
}
\seealso{
\code{\link[=aes]{aes()}}
}
\keyword{internal}
|
71d73c1ca4411a930b33346a73a702e328fb46dd | 431e56dbee349b1ef75b13b65afd25a9c3b75e69 | /thaime_analysis_code.R | d991b4994dadce9144f2071438bcfa50528b4c43 | [] | no_license | chaiyasitbunnag/thaime_analysis | c6bb8450ec9db47c9809aae9264c462594d7b4b9 | 891f03639506adaa3b049d99dec8444fa6f2900e | refs/heads/master | 2022-11-28T23:00:21.062968 | 2020-07-20T06:29:21 | 2020-07-20T06:29:21 | 280,832,731 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 11,029 | r | thaime_analysis_code.R | library(dplyr)
library(ggplot2)
library(xml2)
library(httr)
library(magrittr)
library(jsonlite)
library(stringr)
library(tidyr)
library(readr)
library(reshape2)
library(ggrepel)
extrafont::loadfonts(device = "win")
# --- set local settings --- #
options(scipen = 999)
Sys.setlocale("LC_ALL", "Thai")
# --- test json to df --- #
x <- "http://nscr.nesdb.go.th/wp-admin/admin-ajax.php?action=wp_ajax_ninja_tables_public_action&table_id=12653&target_action=get-all-data&default_sorting=old_first&skip_rows=0&limit_rows=0&chunk_number=0"
x <- fromJSON(x)
x <- x[2]
# store to df
data.frame(project_name = x$value[2] %>% unname(),
budget = x$value[3] %>% unname(),
agency_name = x$value[4] %>% unname(),
ministry_name = x$value[5] %>% unname(),
province = x$value[6] %>% unname()) %>%
tibble()
# --- get all data --- #
# tables stored in ajax (urls found in Network tab)
# create chunk urls 0-15 (16 chunks) to loop over
chunk_urls <- paste0("http://nscr.nesdb.go.th/wp-admin/admin-ajax.php?action=wp_ajax_ninja_tables_public_action&table_id=12653&target_action=get-all-data&default_sorting=old_first&skip_rows=0&limit_rows=0&chunk_number=",0:15)
# create empty table to store data after loop is finished
collected_tables <- data.frame()
for(i in 1:length(chunk_urls)) {
print(i)
x <- chunk_urls[i]
x <- fromJSON(x)
x <- x[2]
data_temp <- data.frame(project_name = x$value[2] %>% unname(),
budget = x$value[3] %>% unname(),
agency_name = x$value[4] %>% unname(),
ministry_name = x$value[5] %>% unname(),
province = x$value[6] %>% unname())
collected_tables <- rbind(collected_tables, data_temp)
}
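The loop above grows `collected_tables` with `rbind()` on every iteration, which re-copies all accumulated rows each time. An alternative pattern collects the chunks in a list and binds once at the end. A minimal sketch, where the hypothetical `make_chunk()` stands in for the `fromJSON()` fetch so the snippet runs without network access:

```r
# Sketch: collect per-chunk data frames in a list, bind once at the end.
# make_chunk() is a mock stand-in for fetching and reshaping one chunk URL.
make_chunk <- function(i) {
  data.frame(project_name = paste0("project_", i, "_", 1:3),
             budget       = c(100, 200, 300) * i,
             stringsAsFactors = FALSE)
}

chunks <- lapply(1:16, make_chunk)   # one data.frame per chunk
collected <- do.call(rbind, chunks)  # single bind instead of repeated rbind

nrow(collected)  # 48 rows (16 chunks x 3 rows each)
```

This avoids the quadratic copying cost of growing a data frame inside a loop while producing the same stacked result.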
# --- clean data --- #
# assign to new var
collected_tables_2 <- unique(collected_tables)
# names
# 1. ministry
collected_tables_2$ministry_name <- trimws(collected_tables_2$ministry_name)
collected_tables_2$ministry_name <- gsub("กระทรวง", "", collected_tables_2$ministry_name)
collected_tables_2 <- collected_tables_2[collected_tables_2$ministry_name != "", ]
# 2. agency
collected_tables_2$agency_name <- trimws(collected_tables_2$agency_name)
collected_tables_2$agency_name[str_detect(collected_tables_2$agency_name, "ท้องถิ่น$") & !str_detect(collected_tables_2$agency_name, "\\/")] <- "กรมส่งเสริมการปกครองท้องถิ่น"
# 3. province
collected_tables_2$province[str_detect(collected_tables_2$province, "^จังหวัด")] <- gsub("^จังหวัด", "", collected_tables_2$province)
collected_tables_2$province <- trimws(collected_tables_2$province)
collected_tables_2$province[collected_tables_2$province == "กรุงเทพมหานคร"] <- "กรุงเทพฯ"
# 4. budget
collected_tables_2$budget <- trimws(collected_tables_2$budget)
collected_tables_2$budget <- gsub(",", "", collected_tables_2$budget)
collected_tables_2$budget <- parse_number(collected_tables_2$budget)
# top proposed ministry
n_m <-
collected_tables_2 %>%
count(ministry_name, sort = T) %>%
head(20)
# plot top proposed ministry
collected_tables_2 %>%
count(ministry_name, sort = T) %>%
head(20) %>%
ggplot(., aes(x = reorder(ministry_name, n), y = n))+
geom_col(width = 0.7, fill = "#03e8fc")+
geom_text(aes(label = n), size = 3, family = "Consolas", fontface = "bold")+
coord_flip()+
theme_minimal()+
theme(panel.grid = element_blank(),
axis.title = element_text(family = "Consolas", face = "bold"),
axis.text.x = element_blank())+
labs(x = "Ministry Name", y = "Number of Project Briefs")
# 1. top ten proposed ministry by agency
top10_ministry <-
collected_tables_2 %>%
count(ministry_name, sort = T) %>%
head(5)
top10_ministry_data <- collected_tables_2[collected_tables_2$ministry_name %in% top10_ministry$ministry_name, ]
preprocessed <-
top10_ministry_data %>%
dplyr::group_by(ministry_name, agency_name) %>%
dplyr::summarise(n = n()) %>%
top_n(10, n) %>%
ungroup() %>%
dplyr::arrange(ministry_name, n) %>%
dplyr::mutate(r = row_number())
# Plot the data
ggplot(preprocessed, aes(x = r, y = n)) +
geom_segment(aes(xend = r, yend = 0)) +
geom_col(width = 0.5, fill = "#03e8fc")+
geom_text(aes(label = n), size = 3, family = "Consolas", fontface = "bold", hjust = 0.1)+
scale_x_continuous(breaks = preprocessed$r, labels = preprocessed$agency_name) +
facet_wrap(~ ministry_name, scales = "free_y", nrow = 10) +
coord_flip()+
theme_minimal()+
theme(panel.grid = element_blank(),
axis.title.x = element_text(family = "Consolas",
face = "bold"),
axis.text.x = element_blank())+
labs(y = "Number of Projects")
# 2. top provinces (budget) from local (dla)
collected_tables_2 %>%
count(province, sort = T) %>%
View()
collected_tables_2$province[str_detect(collected_tables_2$province, "^จังหวัด")] <- gsub("^จังหวัด", "", collected_tables_2$province)
collected_tables_2$province <- trimws(collected_tables_2$province)
dla <- collected_tables_2 %>% filter(agency_name == "กรมส่งเสริมการปกครองท้องถิ่น")
dla_top20_province <-
dla %>%
group_by(province) %>%
summarise(budget = sum(budget)/1000000) %>%
arrange(-budget) %>%
head(20)
ggplot(dla_top20_province, aes(x = reorder(province, budget), y = budget)) +
geom_col(width = 0.5, fill = "#03e8fc") +
coord_flip() +
theme_minimal() +
theme(panel.grid = element_blank(),
axis.title.x = element_text(family = "Consolas",
face = "bold"),
axis.title.y = element_blank()) +
labs(y = "Total Budget Proposed to DLA (million)")
# 3. Bangkok
collected_tables_2 %>%
filter(str_detect(province, "กรุงเทพ")) %>%
View()
collected_tables_2$province[collected_tables_2$province == "กรุงเทพมหานคร"] <- "กรุงเทพฯ"
collected_tables_2$province[collected_tables_2$province == "กทม."] <- "กรุงเทพฯ"
collected_tables_2$province[collected_tables_2$province == "กทม"] <- "กรุงเทพฯ"
collected_tables_2$province[collected_tables_2$province == "กรุงเทพ"] <- "กรุงเทพฯ"
collected_tables_2$province[collected_tables_2$province == "กรุงเทพมหานครและปริมณฑล"] <- "กรุงเทพฯ และปริมณฑล"
bkk <- collected_tables_2[str_detect(collected_tables_2$province, "กรุงเทพ"), ]
bkk$budget <- round(bkk$budget / 1000000, 2)
bkk$project_name <- gsub("โครงการ", "", bkk$project_name)
bkk %>% group_by(province, project_name) %>% summarise(budget = sum(budget)) %>% arrange(-budget) %>%
ggplot(., aes(x = reorder(project_name, budget), y = budget)) +
geom_col(width = 0.3, fill = "#03e8fc") +
coord_flip() +
scale_y_continuous(breaks = seq(0, 1000, 100)) +
geom_text(aes(label = budget, col = province), size = 4, family = "Consolas", fontface = "bold", hjust = 0.2)+
#facet_wrap(~ province, scales = "free_y") +
theme_minimal() +
theme(panel.grid = element_blank(),
axis.title.x = element_text(family = "Consolas",
face = "bold"),
axis.title.y = element_blank(),
axis.text.x = element_blank(),
legend.title = element_blank()) +
labs(y = "Project Budget Proposed for Bangkok (million)")
# 4. top proposed budget project from all
# summary statistic
collected_tables_2 %>%
mutate(budget = round(budget/1000000, 0)) %>%
summarise(min_budget = min(budget, na.rm = T),
max_budget = max(budget, na.rm = T),
mean_budget = mean(budget, na.rm = T),
median_budget = median(budget, na.rm = T))
# there is project budgeted 240 billion
# see the max budget details
collected_tables_2[which.max(collected_tables_2$budget), ] %>%
mutate(budget = round(budget/1000000,0)) %>%
melt(id.vars = c("project_name")) %>%
View()
# check how many >= 100 billions
collected_tables_2 %>%
filter(budget >= 10^11) %>%
mutate(budget = round(budget/1000000, 0)) %>%
melt(id.vars = c("project_name")) %>%
arrange(project_name) %>%
View()
# check how many >= 10 billions
mega <-
collected_tables_2 %>%
filter(budget >= 10^10) %>%
mutate(budget = round(budget/1000000, 0))
# top budget ex mega
collected_tables_2 %>%
filter(budget <= 10^9) %>%
mutate(budget = round(budget/1000000, 0)) %>%
ggplot(., aes(x = budget)) +
geom_histogram(bins = 40, binwidth = 10) +
xlim(c(0, 300)) +
ylim(c(0, 10000))
#scale_x_continuous(breaks = seq(0, 300, 50))
# bucketing budgets
collected_tables_2 %>%
filter(!is.na(budget), budget > 0) %>%
mutate(budget_bin = case_when(budget < 10^7 ~ "< 10 Millions",
budget >= 10^7 & budget < 10^8 ~ "10 - 99.99 Millions",
budget >= 10^8 & budget < 10^9 ~ "100 - 999.99 Millions",
budget >= 10^9 & budget < 10^10 ~ "1 - 9.9 Billions",
budget >= 10^10 & budget < 10^11 ~ "10 - 99.99 Billions",
budget >= 10^11 ~ ">= 100 Billions",
TRUE ~ "Others")) %>%
count(budget_bin, sort = T)
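The `case_when()` bucketing above can equivalently be expressed with base R `cut()`, which guarantees ordered, non-overlapping bins from a single vector of breaks. A sketch on a toy budget vector, using the same thresholds:

```r
# Sketch: the same budget bins via cut() with right-open intervals,
# matching the thresholds in the case_when() above.
budgets <- c(5e6, 5e7, 5e8, 5e9, 5e10, 2e11)
bins <- cut(budgets,
            breaks = c(0, 1e7, 1e8, 1e9, 1e10, 1e11, Inf),
            labels = c("< 10 Millions", "10 - 99.99 Millions",
                       "100 - 999.99 Millions", "1 - 9.9 Billions",
                       "10 - 99.99 Billions", ">= 100 Billions"),
            right = FALSE)
table(bins)  # one toy project per bin
```

With `right = FALSE` each interval is [lower, upper), so a budget of exactly 10^7 lands in the "10 - 99.99 Millions" bin, as in the `case_when()` version.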
# save data #
write.csv(collected_tables_2, "project_briefs_data.csv", row.names = F)
# import data
#x <- read.csv("https://raw.githubusercontent.com/chaiyasitbunnag/thaime_analysis/master/project_briefs_data.csv", header = T, stringsAsFactors = F)
# distribution by ministry
m_name <- read_lines("ministry_name.csv")
m_name <- m_name[-1]
m_name <- paste(m_name, collapse = "|")
m_dist_data <- collected_tables_2[grep(m_name, collected_tables_2$ministry_name), ]
g <-
m_dist_data %>%
filter(!is.na(budget)) %>%
mutate(ministry_name = reorder(ministry_name, budget, FUN = max)) %>%
group_by(ministry_name) %>%
mutate(budget = round(budget/1000000, 0))
ggplot(data = g, aes(x = ministry_name, y = budget)) +
geom_point(aes(col = ministry_name), alpha = 0.3) +
coord_flip() +
annotate("text", x = "สำนักนายกรัฐมนตรี",y =110000, label = "SME", col = "pink", fontface = "bold", family = "Consolas", size= 3) +
annotate("text", x = "เกษตรและสหกรณ์",y =220000, label = "One Stop\nService", col = "salmon", fontface = "bold", family = "Consolas", size = 3) +
theme_minimal() +
theme(legend.position = "none",
panel.grid = element_blank(),
axis.title = element_text(family = "Consolas", face = "bold"),
axis.text.x = element_text(family = "Consolas", size = 7)) +
labs(y = "Budget Proposed (Millions)", x = "Ministry Name")
|
e2c3403be69d4dd08b5e0eab06a866f4600ee091 | 150ddbd54cf97ddf83f614e956f9f7133e9778c0 | /man/svgCubicTo.Rd | 8d0c1c96306825dcafd16c36b2ce197dcccef450 | [
"CC-BY-4.0"
] | permissive | debruine/webmorphR | 1119fd3bdca5be4049e8793075b409b7caa61aad | f46a9c8e1f1b5ecd89e8ca68bb6378f83f2e41cb | refs/heads/master | 2023-04-14T22:37:58.281172 | 2022-08-14T12:26:57 | 2022-08-14T12:26:57 | 357,819,230 | 6 | 4 | CC-BY-4.0 | 2023-02-23T04:56:01 | 2021-04-14T07:47:17 | R | UTF-8 | R | false | true | 460 | rd | svgCubicTo.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/svg.R
\name{svgCubicTo}
\alias{svgCubicTo}
\title{SVG Path Cubic Curve}
\usage{
svgCubicTo(x1, y1, x2, y2, x, y, digits = 2)
}
\arguments{
\item{x1, y1, x2, y2}{coordinates of control points}
\item{x, y}{coordinates of main point}
\item{digits}{number of digits to round to}
}
\value{
string with C path component
}
\description{
SVG Path Cubic Curve
}
\keyword{internal}
|
eb15564ca8be9b15900f5776a27610cf81b6cc15 | 4dabda63fe893a6de1bc3f216838e43c41996e21 | /F statistics.R | 704fe61886313f13f271e0a38d2d5d962c6dff54 | [] | no_license | L-Ippel/Methodology | 3f9520b4330da4891d1de332d2ff954aefc1668f | f8773aacc898fb26b6c8022d607515c08ce538fe | refs/heads/master | 2021-01-20T20:42:45.603982 | 2017-05-26T08:53:16 | 2017-05-26T08:53:16 | 62,639,329 | 3 | 3 | null | 2016-11-18T17:04:58 | 2016-07-05T13:33:35 | R | UTF-8 | R | false | false | 2,021 | r | F statistics.R | #########################
## F statistic - ANOVA
#########################
#########################
##generate data
#########################
n_groups <- 3 # number of groups
size_group <- 50 # sample size per group
averages <- runif(n_groups, 0,5) # average per group
std <- 2 # error standard deviation
data <- matrix(0, ncol=2, nrow=0) # first we initialize the object data, which we will append the data to later
for(k in 1:n_groups)
{
data <- rbind(data,cbind(k,y=rnorm(size_group, averages[k],std))) # for each group we generate data given the group mean
}
new.order <- sample(n_groups*size_group) # we do not want the data of a single group to arrive in a block so we randomly order the data
data <- data[new.order,]
#########################
##set starting values
#########################
average <- 0
SSt <- 0.00001 # small offset to prevent dividing by zero at the beginning of the stream
SSw <- 0.00001
n <- 0
F <- NA # F statistic; stays NA until enough data have arrived to compute it
group_parameters <- data.frame(k=numeric(), count=numeric(),average=numeric())
K <- 0 #number of groups in the data stream
#########################
##run through the data
#########################
for(i in 1:nrow(data))
{
select_group <- group_parameters[as.numeric(data[i,1]),]
if(is.na(select_group[1]))
{
select_group<- c(data[i,1],0,0) # add a new group when this is the first time that we observe this group
K <- K+1
}
n <- n+1
select_group[2]<- select_group[2]+1
d <- data[i,2]-average
dk <- data[i,2]-select_group[3]
average <- average+(data[i,2]-average)/n
select_group[3] <- select_group[3]+(data[i,2]-select_group[3])/select_group[2]
group_parameters[as.numeric(data[i,1]),]<-select_group
SSt <- SSt+ d*(data[i,2]-average)
SSw <- SSw+ dk*(data[i,2]-select_group[3])
  if(n>K & K>1) #these conditions prevent dividing by zero early in the stream
{
F <- ((SSt-SSw)/(K-1))/(SSw/(n-K))
}
print(paste("F=",as.numeric(F), "df1 =",(K-1), "df2 =",(n-K)))
}
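The loop above maintains Welford-style running sums, so the streaming F statistic should agree with the batch ANOVA to floating-point precision. A self-contained sanity check, condensing the same updates and comparing against `anova(lm(...))` from base `stats`:

```r
# Sketch: verify the streaming F against batch ANOVA on a small dataset.
set.seed(1)
y <- rnorm(30); g <- rep(1:3, each = 10)
avg <- 0; SSt <- 0; SSw <- 0; n <- 0
gr_n <- numeric(3); gr_avg <- numeric(3)
for (i in seq_along(y)) {
  k <- g[i]; n <- n + 1; gr_n[k] <- gr_n[k] + 1
  d  <- y[i] - avg;        avg       <- avg       + d  / n        # overall mean update
  dk <- y[i] - gr_avg[k];  gr_avg[k] <- gr_avg[k] + dk / gr_n[k]  # group mean update
  SSt <- SSt + d  * (y[i] - avg)        # total sum of squares
  SSw <- SSw + dk * (y[i] - gr_avg[k])  # within-group sum of squares
}
K <- 3
F_stream <- ((SSt - SSw) / (K - 1)) / (SSw / (n - K))
F_batch  <- anova(lm(y ~ factor(g)))[["F value"]][1]
all.equal(F_stream, F_batch)  # TRUE
```

Unlike the main loop, this check starts `SSt`/`SSw` at exactly zero, since division only happens after the `n > K` condition is met.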
|
3aa59a6fdb7ba39dbb1dc498bfdc810d76514775 | 6af24a8bf64b970b0ac42a94575379c22f0dff9d | /plot4.R | 715ba3816427fb881bd66fee2491a40e71220f75 | [] | no_license | shubh25/ExData_Plotting1 | 1ea5ad876ec81a1539444dc03161edc3961344ae | e91cacac9fdecef073c913b1a03f8e7bb2509c61 | refs/heads/master | 2021-01-24T21:47:41.471384 | 2015-02-08T15:07:14 | 2015-02-08T15:07:14 | 29,101,559 | 0 | 0 | null | 2015-01-11T18:38:16 | 2015-01-11T18:38:14 | null | UTF-8 | R | false | false | 1,209 | r | plot4.R | a<-read.table("household_power_consumption.txt",header=TRUE,sep=";",colClasses="character")
a$Date<-as.Date(a$Date,"%d/%m/%Y")
b<-subset(a,Date >= "2007-02-01" & Date <= "2007-02-02",select=c(Date:Sub_metering_3))
b$Global_active_power<-as.numeric(b$Global_active_power)
b$Global_reactive_power<-as.numeric(b$Global_reactive_power)
b$Voltage<-as.numeric(b$Voltage)
b$Global_intensity <-as.numeric(b$Global_intensity)
b$Sub_metering_1 <-as.numeric(b$Sub_metering_1)
b$Sub_metering_2 <-as.numeric(b$Sub_metering_2)
b$Sub_metering_3 <-as.numeric(b$Sub_metering_3)
png("plot4.png",width=480,height=480)
par(mfrow=c(2,2))
plot(b$concat,b$Global_active_power,type="l",xlab="",ylab="Global Active Power")
plot(b$concat,b$Voltage,xlab="datetime",ylab="Voltage",type="l")
plot(b$concat,b$Sub_metering_1,xlab="",ylab="Energy Sub Metering",type="l")
lines(b$concat,b$Sub_metering_2,xlab="",ylab="Energy Sub Metering",type="l",col="red")
lines(b$concat,b$Sub_metering_3,xlab="",ylab="Energy Sub Metering",type="l",col="blue")
legend("topright",legend=colnames(b[,7:9]),lty=1,bty="n",col=c("black","red","blue"),cex=1)
plot(b$concat,b$Global_reactive_power,xlab="datetime",ylab="Global_reactive_power",type="l")
dev.off()
|
3fc035c4f17d461a2b65d5b34e759d27708b96f2 | c66e9f146c5426947dce20bf597357905942d098 | /man/fo_plot_par.Rd | 4a205bf92b8f4ec4bfa23a6c1b1647eedcc51ec2 | [
"MIT"
] | permissive | spacea/projekt.2019.pacocha | 296d7780d917a4f953ebd2036709660cb1173db2 | 10dd111b0581398db5f6e312ea486e44567a2976 | refs/heads/master | 2021-06-23T09:58:08.883195 | 2021-01-11T13:28:29 | 2021-01-11T13:28:29 | 185,676,363 | 2 | 5 | NOASSERTION | 2019-05-27T23:25:21 | 2019-05-08T20:39:55 | R | UTF-8 | R | false | true | 622 | rd | fo_plot_par.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fo_plotper.R
\name{fo_plot_par}
\alias{fo_plot_par}
\title{Parallelogram plot}
\usage{
fo_plot_par(xs, ys, r1, r2, alpha, beta)
}
\arguments{
\item{xs}{Lower-left apex on the X axis.}
\item{ys}{Lower-left apex on the Y axis.}
\item{r1}{Length of the base.}
\item{r2}{Length of the side.}
\item{alpha}{Angle of rotation.}
\item{beta}{Angle between the base and the left side of
the parallelogram.}
}
\value{
Plot
}
\description{
Function which plots a parallelogram.
}
\examples{
fo_plot_par(0,0,3,3,0,30)
}
|
09bf08f00a4a7307965bb4b3ba1e781a1f96b297 | 5f4e6a5cc136cba38e703b4e30055d35a74bb61b | /man/FundCode.Rd | f3b5a040a0dbfcb15731dff555968b6ba729320a | [] | no_license | tcspears/t1Fundamentals | 849deee8e8a68f73de90e350681483482527e610 | fcebd7d0a1ed3ec6b6f2203e916ccae34fc8928f | refs/heads/master | 2020-12-11T08:10:42.633760 | 2015-07-16T11:03:22 | 2015-07-16T11:03:22 | 37,289,528 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 411 | rd | FundCode.Rd | % Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/FundCode.R
\name{FundCode}
\alias{FundCode}
\title{FundCode}
\usage{
FundCode(fundamental.names)
}
\arguments{
\item{fundamental.names}{A character vector of names of fundamentals (e.g. 'Total Revenue')}
}
\value{
A character vector of codes.
}
\description{
Maps names of common fundamentals to their four digit T1 codes
}
|
faf640e6a1915ad9517117fa1a566c476fad0d0d | 6b915ba9db1de8d26bec39589c77fd5d225e1fce | /de_mcmc/profile.R | 3e78ea50e7e159f17e2e2e923e0e79451f006bf9 | [
"Apache-2.0"
] | permissive | bjsmith/reversallearning | c9179f8cbdfdbbd96405f603e63e4b13dfc233af | 023304731d41c3109bacbfd49d4c850a92353978 | refs/heads/master | 2021-07-09T23:42:22.550367 | 2018-12-08T01:40:54 | 2018-12-08T01:40:54 | 100,329,149 | 0 | 2 | null | 2017-09-05T04:27:41 | 2017-08-15T02:21:25 | R | UTF-8 | R | false | false | 5,501 | r | profile.R | version="h_m1"
#our first hierarchical model for the reversal learning dataset.
verbose=TRUE
source('de_mcmc/main_m1_setup.R')
############################################## generate data
source("de_mcmc/raw_data_reward_only.R")
############################################## initialize
#data[[1]]
#NAMING CONVENTION
#start with the name of the parameter itself (e.g., "alpha", "beta", and so on)
#For bottom levels, ELIMINATE the specific level it refers to. So somewhat paradoxically, for the alpha value for each subject, do NOT append "s", but append any other values (e.g., "run_mu")
#For next level up, specify which distribution it takes an average of (e.g., "alpha_s") then the hyper parameter it is (e.g., mu, sigma)w
#if we treat the levels as LISTS rather than ARRAYS then we can associate arbitrary dimensions to each parameter
#NAMING CONVENTION 2
#start with the name of the parameter itself (e.g., "alpha", "beta", and so on)
#always refer to the specific level, e.g., "run" if this is for the run
#if it's an average of something within a group, then specify both
#e.g., the average across runs for a subject is alpha_r_mu_s
#if it's the conjugate, can eliminate the lower-level values, so that the group-level parameter is "alpha_s_mu"
#otherwise keep, so that we have alpha_r_sigma_s generated by alpha_r_sigma_s_{sigma hypers}
#we might end up with nothing called simply "alpha" but that's OK.
#call the bottom level transformed variable "f_alpha_s" for 'function of alpha_s'
level1.par.names=c("alpha_s",#"beta",
"thresh_s","tau_s")
level2.par.names<-c(paste0(level1.par.names, "_mu"),paste0(level1.par.names, "_sigma"))
par.names<-c(level1.par.names,level2.par.names)
n.pars=length(par.names)
n.chains=24
nmc=5000
burnin=1000
thin=1
keep.samples=seq(burnin,nmc,thin)
print(length(keep.samples)*n.chains)
use.optim=TRUE
optim.gamma=TRUE
migrate.prob=.1
migrate.duration=round(burnin*.25)+1
b=.001
cores=8
data<-data[seq(1,161,80)]
S=length(data)
#defining initial values
#because this time, these variables will be normal distributions, means can be zero, SDs can be ones.
#MAY NEED TO ADJUST THESE TAU VALUES IN LIGHT OF PUTTING THIS IN NORMAL DISTRIBUTION SPACE
x.init<- list()
x.init[[level1.par.names[[1]]]]<-rep(0,S)#alpha
x.init[[level1.par.names[[2]]]]<-rep(0.5,S)#thresh
x.init[[level1.par.names[3]]]=.6*(sapply(data,function(x)min(x$rt,na.rm=TRUE))) #is this the best we can do?
#mu values
for (pn in level2.par.names[1:2]){
x.init[[pn]]<-0
}
x.init[[level2.par.names[3]]]=.6*mean(sapply(data,function(x)min(x$rt,na.rm=TRUE))) #is this the best we can do?
#sigma values
for (pn in level2.par.names[4:5]){
x.init[[pn]]<-1
}
x.init[[level2.par.names[6]]]=.6*sd(sapply(data,function(x)min(x$rt,na.rm=TRUE))) #is this the best we can do?
n.parinstances<-length(unlist(x.init))#the list of parameter instances, counting each element of the vector parameters
############################################## set prior
prior=NULL
# upper and lower boundaries for the concentration parameters
prior$lower=0
prior$upper=1
########################################## run it
source(paste0("de_mcmc/model_configs/de_",version,"_config.R"))
source(paste0("de_mcmc/de_",version,"_run.R"))
#######################################################################################################################################
#run_env<-de_mcmc_execute(log.dens.like.h.m1,log.dens.prior.h.m1)#log.dens.like.f<-log.dens.like.h.m1;log.dens.prior.f<-log.dens.prior.h.m1
#log.dens.like.f<-log.dens.like.h.m1;log.dens.prior.f<-log.dens.prior.h.m1
mainDir<-getwd()
subDir=""
sfInit(parallel=TRUE, cpus=cores, type="SOCK")
#printv("setting up cluster...")
sfClusterSetupRNG()
ptm=proc.time()[3]
#printv ("running the model...")
de_m1_run(log.dens.like.f=log.dens.like.f,
log.dens.prior.f=log.dens.prior.f)
proc.time()[3]-ptm
log.dens.prior.f<-log.dens.prior.h.m1
#######################################################################################################################################
log.dens.post=function(x,use.data,prior)log.dens.prior.f(x,prior) + log.dens.like.f(x,use.data)
########################################### initialize the chains
printv("intializing chains...")
theta<<-array(NA,c(n.chains,n.pars,S))
weight<<-array(-Inf,c(n.chains,S))
colnames(theta) <- par.names
printv("exporting...")
sfExportAll(except=list("theta","weight"))
print("Optimizing...(this may take some time)")
init.pars=matrix(1,n.pars)
#I don't loop through because we no longer have a single matrix of everything
#we have a list, some of which are across-subject values, some of which are not.
x=unlist(x.init)
temp.weight=log.dens.like.f(x,use.data=data)
new.x=x
while(temp.weight==-Inf){
stop("I think this hasn't been set up properly. Not only are the values probably wrong but the function takes far too long to run!")
#is this appropriate for all the variables? I'm not sure.
new.x=rtnorm(n.parinstances,.1,0,inf.vals) #BJS to BT: why does this use a truncated normal distribution?
warning("We use this truncated normal distribution to estimate new x values, but I'm not sure that this is appropriate for all variables.")
#shouldn't we be getting the priors here?
temp.weight=log.dens.like.f(new.x,use.data=data)
}
if(use.optim==TRUE)init.pars=optim(new.x,function(x,...)-log.dens.like.f(x,...),use.data=data,control=list("maxit"=1000))$par
if(use.optim==FALSE)init.pars=new.x
print(paste("Optimization Complete."))
#}
|
ad866fa430ede4610ba8afd30baf048625412324 | a98d60bf79067a37feea7dc68f8ec64266dbc6eb | /R/shinyDataFilter_df.R | 87d6c68f89a091bded2fe6e15b9e72168e7a73aa | [
"MIT"
] | permissive | dgkf/shinyDataFilter | 468e0ef002a75daa63f6f61a6337d85626537284 | 68d20f554211e722aafbb8272ccbf8342a0aad10 | refs/heads/master | 2022-07-30T21:06:46.242421 | 2022-07-12T21:46:04 | 2022-07-12T21:46:04 | 248,626,043 | 21 | 13 | NOASSERTION | 2022-07-12T18:09:17 | 2020-03-19T23:31:39 | R | UTF-8 | R | false | false | 310 | r | shinyDataFilter_df.R | #' Handler for retrieving static code for a shinyDataFilter_df
#'
#' A function for scriptgloss::getInitializationCode to dispatch to
#'
#' @param obj shinyDataFilter_df object
#' @param name unused
#'
#' @export
getInitializationCode.shinyDataFilter_df <- function(obj, name = NULL) {
attr(obj, "code")
} |
d71f3c61d35142bec4e4967a624e21f7771d7b70 | a4021a064ad356d5b261ae116eec8d96e74eefb9 | /man/apg_families.Rd | 56cf9a26943262db3c5e20202487918b5406a9e9 | [
"CC0-1.0"
] | permissive | dlebauer/taxize_ | dc6e809e2d3ba00a52b140310b5d0f416f24b9fd | 9e831c964f40910dccc49e629922d240b987d033 | refs/heads/master | 2020-12-24T23:28:49.722694 | 2014-04-01T18:22:17 | 2014-04-01T18:22:17 | 9,173,897 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 170 | rd | apg_families.Rd | \docType{data}
\name{apg_families}
\alias{apg_families}
\title{Lookup-table for APGIII family names}
\description{
Lookup-table for APGIII family names
}
\keyword{data}
|
370ff64556c00fe00f9c3ebd2fe304474166fc87 | 38addc5be370965a7bc881b69a57001869d5e8a0 | /man/dot-runDiscretization.Rd | 52cab7304cc6516634dea315d5f0c4415be6c3e6 | [] | no_license | BMEngineeR/IRISCEM | bcce0310f192917ee7376c900861d1dd351efea9 | 4bded90bb79bd6e3542d5b21f19af9c7ef00fc91 | refs/heads/master | 2022-12-02T17:27:42.841036 | 2020-08-13T04:47:43 | 2020-08-13T04:47:43 | 287,161,759 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 311 | rd | dot-runDiscretization.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Bicluster.R
\name{.runDiscretization}
\alias{.runDiscretization}
\title{run discretization}
\usage{
.runDiscretization(object = NULL, q = 0.06, LogTransformation = FALSE)
}
\arguments{
\item{q}{}
}
\description{
run discretization
}
|
bd7e0517ce3123812b8b31d5d30f871f22523f01 | 2a25d249b634a4bae6e82a23d8e1b7744afadfba | /makeCacheMatrix.r | 7aa062d5657cdb2288c0be4092428a984c8e0284 | [] | no_license | Gavrilo-Princip/datasciencecoursera | 090ce3ede13f2fe203ccbfd0e4c17d861e89c53c | 78a9a397dbe122f19fb45990cb8986629c183485 | refs/heads/master | 2016-09-05T16:04:05.099830 | 2014-12-22T00:22:32 | 2014-12-22T00:22:32 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 867 | r | makeCacheMatrix.r | ## A pair of functions that cache the inverse of a matrix
## Creates a special matrix object that can cache its inverse
makeCacheMatrix <- function( m = matrix() ) {
## Initialize the inverse property
i <- NULL
## Method to set the matrix
set <- function( matrix ) {
m <<- matrix
i <<- NULL
}
## Method the get the matrix
get <- function() {
## Return the matrix
m
}
## Method to set the inverse of the matrix
setInverse <- function(inverse) {
i <<- inverse
}
## Method to get the inverse of the matrix
getInverse <- function() {
## Return the inverse property
i
}
## Return a list of the methods
list(set = set, get = get,
setInverse = setInverse,
getInverse = getInverse)
}
## Compute the inverse of the special matrix returned by "makeCacheMatrix"
## above. If the inverse has already been calculated (and the matrix has not
## changed), then the inverse is retrieved from the cache.
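The trailing comment describes the companion function whose body is missing from this file. A minimal sketch — the name `cacheSolve` follows the classic course assignment and is an assumption here, and it assumes the stored matrix is square and invertible (a condensed copy of `makeCacheMatrix` is included so the snippet runs on its own):

```r
# Condensed copy of makeCacheMatrix (same methods as the full version above)
makeCacheMatrix <- function(m = matrix()) {
  i <- NULL
  list(get        = function() m,
       set        = function(matrix) { m <<- matrix; i <<- NULL },
       getInverse = function() i,
       setInverse = function(inverse) i <<- inverse)
}

# Sketch: return the inverse of the special "matrix", using the cache when
# it is already populated. Assumes x$get() is an invertible square matrix.
cacheSolve <- function(x, ...) {
  i <- x$getInverse()
  if (!is.null(i)) {
    message("getting cached data")
    return(i)
  }
  i <- solve(x$get(), ...)  # compute the inverse once
  x$setInverse(i)           # cache it for subsequent calls
  i
}

cm <- makeCacheMatrix(matrix(c(2, 0, 0, 2), 2, 2))
cacheSolve(cm)  # computes the inverse
cacheSolve(cm)  # second call returns the cached inverse
```

The second call skips `solve()` entirely, which is the point of storing the inverse alongside the matrix.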
|
5aa12c6e1f660b97e63f1df56c4ed7fcc04470af | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/IAPWS95/examples/DTs.Rd.R | 44947b470accc9ec9f18fa7c5fc069ef35b889bd | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 176 | r | DTs.Rd.R | library(IAPWS95)
### Name: DTs
### Title: Density, Function of Temperature and Entropy
### Aliases: DTs
### ** Examples
T <- 500.
s <- 2.56690919
D_Ts <- DTs(T,s)
D_Ts
|
1ed4f3a185c1b377022da3acc66d5346b0a7709d | 7993cc4cdd2d47e2ac627b390482896cb6107c94 | /prelim_analysis.R | 06dc6fd37209d055ce24485b36c49ef9948a2690 | [
"MIT"
] | permissive | iRT-CS/SMDataAnalaysis | edff9c06bdf656e08b12f29ae79e866776bb322e | 5eb4e9a4c2cc8b9f4c07d1f4e247189fbaa4e672 | refs/heads/master | 2021-07-07T22:46:04.052342 | 2021-01-05T20:53:35 | 2021-01-05T20:53:35 | 221,302,691 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,494 | r | prelim_analysis.R | require(ggplot2)
require(tidyverse)
require("mongolite")
#
m <- mongo(collection = "Experiments", db = "ShallowMind", url = "mongodb://rkapur2021:okay4kench635shallowmind.pingry.org:27017")
# datasets <- read.csv(file="C:/Users/emmet/Desktop/NeuralNetworks/datasets.csv")
experiments <- read.csv(file="C:/Users/emmet/Desktop/NeuralNetworks/experiments.csv")
# neural_nets <- read.csv(file="C:/Users/emmet/Desktop/NeuralNetworks/neuralNets.csv")
# example. structure [1,1,1] so cubic
# structure = experiments[7,3]
# training_over_time = as.character(experiments[7,8])
# training_over_time = gsub("\\[|\\]", "", training_over_time)
# training_over_time = 1 - (as.numeric(strsplit(training_over_time, ",")[[1]]))
convert <- function(x) {
training_over_time = as.character(x)
training_over_time = gsub("\\[|\\]", "", training_over_time)
training_over_time = 1 - (as.numeric(strsplit(training_over_time, ",")[[1]]))
abs(training_over_time)
}
convertShapes <- function(x) {
training_over_time = as.character(x)
training_over_time = gsub("\\[|\\]", "", training_over_time)
training_over_time = as.numeric(strsplit(training_over_time, ",")[[1]])
abs(training_over_time)
}
experiments$trainingAccuracyOverTime <- lapply(experiments$trainingLossOverTime, convert)
experiments$validationAccuracyOverTime <- lapply(experiments$validationLossOverTime, convert)
experiments$neuralNetHiddenStructure <- lapply(experiments$neuralNetHiddenStructure, convertShapes)
experiments$inputShape <- as.numeric(experiments$inputShape)
experiments$outputShape <- as.numeric(experiments$outputShape)
row <- 1110
training <- data.frame(epoch = seq(1,length(experiments$trainingAccuracyOverTime[[row]]), 1), trainAcc = experiments$trainingAccuracyOverTime[[row]])
validation <- data.frame(epoch = seq(1,length(experiments$validationAccuracyOverTime[[row]]), 1), valAcc = experiments$validationAccuracyOverTime[[row]])
graph <- ggplot() +
geom_line(data = training, aes(x = epoch, y = trainAcc), color = "blue") +
geom_line(data = validation, aes(x = epoch, y = valAcc), color = "red") +
  labs(x = "Epoch", y = "Accuracy") # training (blue) vs. validation (red) accuracy per epoch
graph
epochToAccuracy <- function(epoch, shape) {
rowIndex <- Position(function(x) identical(x, shape), experiments$neuralNetHiddenStructure)
accuracyList <- experiments$trainingAccuracyOverTime[[rowIndex]]
if (epoch > 0 & epoch <= length(accuracyList)) {
return(accuracyList[epoch])
}
else {
return(-1)
}
}
mean_connectivity <- function(struct) {
copy <- sapply(struct, function(i) i)
struct <- struct[-1]
copy <- copy[-length(copy)]
a <- mean(copy*struct)
return(a)
}
mean_diff_over_time <- function(vector) {
vector <- unlist(vector)
diffs <- vector[-1] - head(vector, -1)
mean(diffs)
}
std_diff_over_time <- function(vector){
vector <- unlist(vector)
diffs <- vector[-1] - head(vector, -1)
sd(diffs)
}
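The two diff-based summaries above reduce a learning curve to its average per-epoch gain and the volatility of those gains. A quick self-contained check on a toy accuracy curve (the functions are repeated here so the snippet runs on its own):

```r
# Self-contained copies of the diff-based summaries defined above.
mean_diff_over_time <- function(v) { v <- unlist(v); mean(v[-1] - head(v, -1)) }
std_diff_over_time  <- function(v) { v <- unlist(v); sd(v[-1] - head(v, -1)) }

curve <- c(0.50, 0.60, 0.80)   # per-epoch accuracies; diffs are 0.1 and 0.2
mean_diff_over_time(curve)     # 0.15   -- average gain per epoch
std_diff_over_time(curve)      # ~0.0707 -- sd of the per-epoch gains
```

Note that, unlike the versions used on `experiments`, these toy copies skip the `abs()`/sign handling since the curve is monotone.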
experiments$avg_training_rate <- lapply(experiments$trainingAccuracyOverTime, mean_diff_over_time)
experiments$training_volatility <- lapply(experiments$trainingAccuracyOverTime, std_diff_over_time)
# Coefficient of overfitting for an entire neural net
# Returns list of 2 elements: first, coefficient of overfitting for neural net over the course of the epochs, and then, average coefficient of overfitting (so average over the epochs)
coeff_overfit_nn <- function(shape) {
rowIndex <- Position(function(x) identical(x, shape), experiments$neuralNetHiddenStructure)
training_accuracy_per_row <- experiments$trainingAccuracyOverTime[[rowIndex]]
validation_accuracy_per_row <- experiments$validationAccuracyOverTime[[rowIndex]]
coeff_over_epochs <- training_accuracy_per_row - validation_accuracy_per_row
return(list(coeff_over_epochs, mean(coeff_over_epochs)))
}
coeff_overfit_epoch <- function(epoch, shape) {
rowIndex <- Position(function(x) identical(x, shape), experiments$neuralNetHiddenStructure)
training_accuracy_per_row <- experiments$trainingAccuracyOverTime[[rowIndex]]
validation_accuracy_per_row <- experiments$validationAccuracyOverTime[[rowIndex]]
overfit = training_accuracy_per_row - validation_accuracy_per_row
  abs(overfit[epoch])
}
# coeff_overfit_nn(c(1,0))
experiments$overfitting <- lapply(experiments$neuralNetHiddenStructure, coeff_overfit_nn)
# graph overfitting vs. complexity of neural net (increasing structures)
# mean training rate vs. connectivity
# plot different functions vs. connectivity
|
37c7a598ca929178ec469bb81d1b66bf1d3a04e9 | 2458226660b0629592674eefb2a595bc3f3a9fbe | /R_script/Prediction_analysis_CA.R | 8d0c78f5afe4120097849627a9a084175a293499 | [] | no_license | amsmith8/A-quantitative-assessment-of-site-level-factors-in-influencing-Chukar-introduction-outcomes | 36db24e2e82456a6986f141a2ff798c3d38ca980 | c7d204e17882d6668d68a42850ae83f33f2ccc7d | refs/heads/master | 2023-02-23T18:45:00.575424 | 2021-02-02T23:46:11 | 2021-02-02T23:46:11 | 253,641,121 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 16,822 | r | Prediction_analysis_CA.R | # Compute consion mtrix statistics by comparing rasters
library("raster")
library("RColorBrewer")
library("dismo")
library("maptools")
library("raster")
library("rgdal")
library("rgeos")
library("sp")
# Import necesary data
# Step 2 : Prepare data and functions
####### ######## ######## ####### ######## ########
ann_tiffs <- list.files(path = "./Predictions_CA/model_ann_folds_CA", full.names= TRUE, pattern = ".tif")
gbm_tiffs <- list.files(path = "./Predictions_CA/model_gbm_folds_CA", full.names= TRUE, pattern = ".tif")
maxent_tiffs <- list.files(path = "./Predictions_CA/model_maxent_folds_CA", full.names= TRUE, pattern = ".tif")
rf_tiffs <- list.files(path = "./Predictions_CA/model_rf_folds_CA", full.names= TRUE, pattern = ".tif")
svm_tiffs <- list.files(path = "./Predictions_CA/model_svm_folds_CA", full.names= TRUE, pattern = ".tif")
source( "./R_script/Functions.R" )
source( "./R_script/eBird_data_cleaning.R" )
#compile tiffs to stacks
model_stack_svm <- listToStack(svm_tiffs)
model_stack_gbm <- listToStack(gbm_tiffs)
model_stack_maxent <- listToStack(maxent_tiffs)
model_stack_rf <- listToStack(rf_tiffs)
model_stack_ann <- listToStack(ann_tiffs)
#Chukar range polygon file
#Ac_poly <- rgdal::readOGR("/Users/austinsmith/Documents/SDM_spatial_data/Bird_life_galliformes_fgip/Alectoris_chukar/Alectoris_chukar.shp") # via Bird Life
#crs(Ac_poly) <- crs(model_stack_rf$rfFold1)
### Separate the polygon into native and naturalized regions
#Native <- subset(Ac_poly, Ac_poly$OBJECTID == 36 ) # historical native range for A. chukar. Similar to Christensen 1970
#Naturalized <- subset(Ac_poly, Ac_poly$OBJECTID != 36 ) # recognized regions of naturalized populations
naturalized <- circles(us_pts, d = d , dissolve=TRUE, lonlat=TRUE) #49km is the average distance recorded in Bohl 1957
naturalized <- polygons(naturalized )
# limit to contiguous US states
states <- rgdal::readOGR("/Users/austinsmith/Downloads/ne_110m_admin_1_states_provinces/ne_110m_admin_1_states_provinces.shp")
crs(states) <- crs(model_stack_rf$rfFold1)
contiguous_us <- states[states$name != "Alaska",] # remove Alaska
contiguous_us <- contiguous_us[contiguous_us$name != "Hawaii",] # remove Hawaii
plot(contiguous_us)
# Plot colors
#palette <-colorRampPalette( c( "navy" , "goldenrod1" , "red" ) )
#palette2 <- colorRampPalette( c( "skyblue2" , "firebrick" ) )
# Step 3 : Accuracy - function,s images, plots
####### ######## ######## ####### ######## ########
# MESS Model if wanting to compare novel locations - not needed for analysis 01/17/20
# us_mess <- raster( "Predictions/mess_us.tif" )
# us_mess <- crop(us_mess, naturalized )
# us_mess <- mask(us_mess, naturalized)
# us_mess <- us_mess >0
# plot(us_mess)
#Ac_raster <-mosaic(us_mess , absent_space, fun = sum) # used if wanting to replace other model
#plot(Ac_raster)
# Create raster of chukar range model
absent_space <- model_stack_rf$rfFold1 # can be any raster - just need it to convert background space to 0
absent_space [absent_space < 1] <- 0
#plot(absent_space)
# create raster with chukar range = 1, absent = 0
Ac_raster <-rasterize(naturalized , absent_space, field=1)
#plot(Ac_raster)
true_value_chukar <- merge( Ac_raster, absent_space )
#plot(true_value_chukar)
contiguous_us_chukar <- crop(true_value_chukar, contiguous_us )
#plot(contiguous_us_chukar, main = "Chukar range")
#plot(contiguous_us, add = T)
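Before the raster-level calls, it may help to see the confusion-matrix quantities that `compute_raster_cm`/`cm_raster` presumably reduce to once rasters are unrolled into 0/1 vectors (those helpers live in the sourced `Functions.R`, so this is a hedged base-R sketch on toy presence/absence vectors):

```r
truth <- c(1, 1, 0, 0, 1, 0)   # toy "true" chukar range cells
pred  <- c(1, 0, 0, 1, 1, 0)   # toy thresholded model prediction

tp <- sum(pred == 1 & truth == 1)  # true positives
tn <- sum(pred == 0 & truth == 0)  # true negatives
fp <- sum(pred == 1 & truth == 0)  # false positives
fn <- sum(pred == 0 & truth == 1)  # false negatives

acc  <- (tp + tn) / length(truth)  # accuracy
sens <- tp / (tp + fn)             # sensitivity
spec <- tn / (tn + fp)             # specificity
```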
# CALCULATIONS
# import thresholds
auc_scores <- readRDS("./RDS_objects/CA/auc_scores_CA.rds")
sensSpec_scores <- readRDS("./RDS_objects/CA/sensSpec_scores_CA.rds")
AUC_mean <- apply( auc_scores[,-1] , 1, mean , na.rm = TRUE )
AUC_sd <- apply( auc_scores[,-1] , 1 , sd , na.rm = TRUE )
auc_summary <- round ( data.frame( AUC_mean, AUC_sd ), 4 )
row.names(auc_summary) <- c( "ANN" , "GBM" , "MaxEnt" , "RF" , "SVM" )
auc_summary
# ANN
ann_cm_results <- rbind(
compute_raster_cm (model_stack_ann$annFold1 , sensSpec_scores[ 1 , 2 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_ann$annFold2 , sensSpec_scores[ 1 , 3 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_ann$annFold3 , sensSpec_scores[ 1 , 4 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_ann$annFold4 , sensSpec_scores[ 1 , 5 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_ann$annFold5 , sensSpec_scores[ 1 , 6 ] , contiguous_us_chukar )
)
ann_cm_results <-data.frame( ann_cm_results)
row.names(ann_cm_results) <- c( "ANN_Fold_1", "ANN_Fold_2" , "ANN_Fold_3" , "ANN_Fold_4" , "ANN_Fold_5" )
ann_cm_results$Model <- c( rep("ANN", 5 ) )
ann_summary <- c(mean(ann_cm_results[,1]), sd(ann_cm_results[,1]),
mean(ann_cm_results[,2]), sd(ann_cm_results[,2]),
mean(ann_cm_results[,3]), sd(ann_cm_results[,3])
)
# GBM
gbm_cm_results <- rbind(
compute_raster_cm (model_stack_gbm$gbmFold1 , sensSpec_scores[ 2 , 2 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_gbm$gbmFold2 , sensSpec_scores[ 2 , 3 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_gbm$gbmFold3 , sensSpec_scores[ 2 , 4 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_gbm$gbmFold4 , sensSpec_scores[ 2 , 5 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_gbm$gbmFold5 , sensSpec_scores[ 2 , 6 ] , contiguous_us_chukar )
)
gbm_cm_results <-data.frame( gbm_cm_results)
row.names(gbm_cm_results) <- c( "GBM_Fold_1", "GBM_Fold_2" , "GBM_Fold_3" , "GBM_Fold_4" , "GBM_Fold_5" )
gbm_cm_results$Model <- c( rep("GBM", 5 ) )
gbm_summary <- c(mean(gbm_cm_results[,1]), sd(gbm_cm_results[,1]),
mean(gbm_cm_results[,2]), sd(gbm_cm_results[,2]),
mean(gbm_cm_results[,3]), sd(gbm_cm_results[,3])
)
# MaxEnt
maxent_cm_results <- rbind(
compute_raster_cm (model_stack_maxent$maxentFold1 , sensSpec_scores[ 3 , 2 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_maxent$maxentFold2 , sensSpec_scores[ 3 , 3 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_maxent$maxentFold3 , sensSpec_scores[ 3 , 4 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_maxent$maxentFold4 , sensSpec_scores[ 3 , 5 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_maxent$maxentFold5 , sensSpec_scores[ 3 , 6 ] , contiguous_us_chukar )
)
maxent_cm_results <-data.frame( maxent_cm_results)
row.names(maxent_cm_results) <- c( "MaxEnt_Fold_1", "MaxEnt_Fold_2" , "MaxEnt_Fold_3" , "MaxEnt_Fold_4" , "MaxEnt_Fold_5" )
maxent_cm_results$Model <- c( rep("MaxEnt", 5 ) )
maxent_summary <- c(mean(maxent_cm_results[,1]), sd(maxent_cm_results[,1]),
mean(maxent_cm_results[,2]), sd(maxent_cm_results[,2]),
mean(maxent_cm_results[,3]), sd(maxent_cm_results[,3])
)
# RF
rf_cm_results <- rbind(
compute_raster_cm (model_stack_rf$rfFold1 , sensSpec_scores[ 4 , 2 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_rf$rfFold2 , sensSpec_scores[ 4 , 3 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_rf$rfFold3 , sensSpec_scores[ 4 , 4 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_rf$rfFold4 , sensSpec_scores[ 4 , 5 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_rf$rfFold5 , sensSpec_scores[ 4 , 6 ] , contiguous_us_chukar )
)
rf_cm_results <-data.frame( rf_cm_results)
row.names(rf_cm_results) <- c( "RF_Fold_1", "RF_Fold_2" , "RF_Fold_3" , "RF_Fold_4" , "RF_Fold_5" )
rf_cm_results$Model <- c( rep("RF", 5 ) )
rf_summary <- c(mean(rf_cm_results[,1]), sd(rf_cm_results[,1]),
mean(rf_cm_results[,2]), sd(rf_cm_results[,2]),
mean(rf_cm_results[,3]), sd(rf_cm_results[,3])
)
# SVM
svm_cm_results <- rbind(
compute_raster_cm (model_stack_svm$svmFold1 , sensSpec_scores[ 5 , 2 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_svm$svmFold2 , sensSpec_scores[ 5 , 3 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_svm$svmFold3 , sensSpec_scores[ 5 , 4 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_svm$svmFold4 , sensSpec_scores[ 5 , 5 ] , contiguous_us_chukar ),
compute_raster_cm (model_stack_svm$svmFold5 , sensSpec_scores[ 5 , 6 ] , contiguous_us_chukar )
)
svm_cm_results <-data.frame( svm_cm_results)
row.names(svm_cm_results) <- c( "SVM_Fold_1", "SVM_Fold_2" , "SVM_Fold_3" , "SVM_Fold_4" , "SVM_Fold_5" )
svm_cm_results$Model <- c( rep("SVM", 5 ) )
svm_summary <- c(mean(svm_cm_results[,1]), sd(svm_cm_results[,1]),
mean(svm_cm_results[,2]), sd(svm_cm_results[,2]),
mean(svm_cm_results[,3]), sd(svm_cm_results[,3])
)
prediction_summary <- data.frame(round(rbind(ann_summary,gbm_summary,maxent_summary,rf_summary,svm_summary),4))
row.names(prediction_summary) <- c( "ANN" , "GBM" , "MaxEnt" , "RF" , "SVM" )
colnames(prediction_summary) <- c( "Acc_mean" , "Acc_sd" , "Sens_mean" , "Sens_sd" , "Spec_mean" , "Spec_sd" )
prediction_summary
###
# Statistic summary
statistics_summary <- cbind( auc_summary, prediction_summary)
###
all_cm_results <- do.call("rbind", list( ann_cm_results,
gbm_cm_results,
maxent_cm_results,
rf_cm_results,
svm_cm_results
)
)
# Reconfigure for plotting
all_cm_gglist <- reshape2::melt( all_cm_results , id.var='Model' )
all_cm_gglist$Stage <- "Prediction"
# Reconfigure the auc matrix to match the spatial statistics
auc_melt <- reshape2::melt( auc_scores , id.var='model' )
colnames(auc_melt)[1] <- "Model"
auc_melt[,2] <- "AUC"
auc_melt$Stage <- "Evaluation"
# All data
final_table <- rbind( auc_melt , all_cm_gglist )
library(ggpubr)
plot1 <- ggbarplot(final_table, x = "variable", y = "value", position = position_dodge(0.8),
title = "Model statistics - California points",
xlab = "",
ylab = "Score",
fill = "Model",
palette = brewer.pal(n = 5, name = "Reds"),
legend = "bottom",
facet.by = "Stage",
add = c("mean_sd"),
rremove("x.text")
)
plot1 + theme(plot.title = element_text(hjust = 0.5) ) + facet_grid(~Stage, space = "free_x", scales = "free_x")
saveRDS( all_cm_results , "./RDS_objects/CA/all_cm_results_CA.rds" )
saveRDS( statistics_summary , "./RDS_objects/CA/statistics_summary_CA.rds" )
# ENSEMBLES
# Average ann
ann_mean <- mean(model_stack_ann)
ann_mean_sensSpec <- rowMeans(sensSpec_scores[1,-1])
# ANN - Voting
ann_sum <- sum(model_stack_ann$annFold1 > sensSpec_scores[1,2],
model_stack_ann$annFold2 > sensSpec_scores[1,3],
model_stack_ann$annFold3 > sensSpec_scores[1,4],
model_stack_ann$annFold4 > sensSpec_scores[1,5],
model_stack_ann$annFold5 > sensSpec_scores[1,6])
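The sum-of-thresholded-layers trick above implements ensemble voting: each fold casts a 0/1 vote per cell, so `ann_sum >= 3` is a majority vote and `ann_sum == 5` requires unanimity. A toy base-R version with 5 models voting over 3 cells:

```r
votes <- rbind(c(1, 0, 1),   # model 1
               c(1, 1, 0),   # model 2
               c(0, 1, 1),   # model 3
               c(1, 0, 0),   # model 4
               c(1, 1, 1))   # model 5

vote_sum  <- colSums(votes)            # votes received per cell
majority  <- as.numeric(vote_sum >= 3) # plurality-vote ensemble
unanimous <- as.numeric(vote_sum == 5) # unanimous-decision ensemble
```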
# CM Results
ann_ensembles <- data.frame( rbind( compute_raster_cm (ann_mean , ann_mean_sensSpec , contiguous_us_chukar ) ,
cm_raster( crop( ann_sum >= 3 , contiguous_us_chukar) , contiguous_us_chukar ),
cm_raster( crop( ann_sum == 5 , contiguous_us_chukar) , contiguous_us_chukar )
)
)
row.names( ann_ensembles ) <- c("mean", "n >= 3", "n = 5" )
ann_ensembles$Model <- c( rep("ANN", 3 ) )
ann_ensembles$Method<- c("Mean", "PV", "UD" )
#
# Average gbm
gbm_mean <- mean(model_stack_gbm)
gbm_mean_sensSpec <- rowMeans(sensSpec_scores[2,-1])
# gbm - Voting
gbm_sum <- sum(model_stack_gbm$gbmFold1 > sensSpec_scores[2,2],
model_stack_gbm$gbmFold2 > sensSpec_scores[2,3],
model_stack_gbm$gbmFold3 > sensSpec_scores[2,4],
model_stack_gbm$gbmFold4 > sensSpec_scores[2,5],
model_stack_gbm$gbmFold5 > sensSpec_scores[2,6])
# CM Results
gbm_ensembles <- data.frame( rbind( compute_raster_cm (gbm_mean , gbm_mean_sensSpec , contiguous_us_chukar ) ,
cm_raster( crop( gbm_sum >= 3 , contiguous_us_chukar) , contiguous_us_chukar ),
cm_raster( crop( gbm_sum == 5 , contiguous_us_chukar) , contiguous_us_chukar )
)
)
row.names( gbm_ensembles ) <- c( "mean", "n >= 3", "n = 5" )
gbm_ensembles$Model <- c( rep("GBM", 3 ) )
gbm_ensembles$Method<- c("Mean", "PV", "UD" )
# Average maxent
maxent_mean <- mean(model_stack_maxent)
maxent_mean_sensSpec <- rowMeans(sensSpec_scores[3,-1])
# maxent - Voting
maxent_sum <- sum(model_stack_maxent$maxentFold1 > sensSpec_scores[3,2],
model_stack_maxent$maxentFold2 > sensSpec_scores[3,3],
model_stack_maxent$maxentFold3 > sensSpec_scores[3,4],
model_stack_maxent$maxentFold4 > sensSpec_scores[3,5],
model_stack_maxent$maxentFold5 > sensSpec_scores[3,6])
# CM Results
maxent_ensembles <- data.frame( rbind( compute_raster_cm (maxent_mean , maxent_mean_sensSpec , contiguous_us_chukar ) ,
cm_raster( crop( maxent_sum >= 3 , contiguous_us_chukar) , contiguous_us_chukar ),
cm_raster( crop( maxent_sum == 5 , contiguous_us_chukar) , contiguous_us_chukar )
)
)
row.names( maxent_ensembles ) <- c( "mean", "n >= 3", "n = 5" )
maxent_ensembles$Model <- c( rep("MaxEnt", 3 ) )
maxent_ensembles$Method<- c("Mean", "PV", "UD" )
# Average rf
rf_mean <- mean(model_stack_rf)
rf_mean_sensSpec <- rowMeans(sensSpec_scores[4,-1])
# rf - Voting
rf_sum <- sum(model_stack_rf$rfFold1 > sensSpec_scores[4,2],
model_stack_rf$rfFold2 > sensSpec_scores[4,3],
model_stack_rf$rfFold3 > sensSpec_scores[4,4],
model_stack_rf$rfFold4 > sensSpec_scores[4,5],
model_stack_rf$rfFold5 > sensSpec_scores[4,6])
# CM Results
rf_ensembles <- data.frame( rbind( compute_raster_cm (rf_mean , rf_mean_sensSpec , contiguous_us_chukar ),
cm_raster( crop( rf_sum >= 3 , contiguous_us_chukar) , contiguous_us_chukar ),
cm_raster( crop( rf_sum == 5 , contiguous_us_chukar) , contiguous_us_chukar )
)
)
row.names( rf_ensembles ) <- c( "mean", "n >= 3", "n = 5" )
rf_ensembles$Model <- c( rep("RF", 3 ) )
rf_ensembles$Method<- c("Mean", "PV", "UD" )
# Average svm
svm_mean <- mean(model_stack_svm)
svm_mean_sensSpec <- rowMeans(sensSpec_scores[5,-1])
svm_mean_cm <- compute_raster_cm (svm_mean , svm_mean_sensSpec , contiguous_us_chukar )
# svm - Voting
svm_sum <- sum(model_stack_svm$svmFold1 > sensSpec_scores[5,2],
model_stack_svm$svmFold2 > sensSpec_scores[5,3],
model_stack_svm$svmFold3 > sensSpec_scores[5,4],
model_stack_svm$svmFold4 > sensSpec_scores[5,5],
model_stack_svm$svmFold5 > sensSpec_scores[5,6])
# CM Results
svm_ensembles <- data.frame( rbind( svm_mean_cm, # CM of the averaged model, consistent with the "mean" row label below
cm_raster( crop( svm_sum >= 3 , contiguous_us_chukar) , contiguous_us_chukar ),
cm_raster( crop( svm_sum == 5 , contiguous_us_chukar) , contiguous_us_chukar )
)
)
row.names( svm_ensembles ) <- c( "mean", "n >= 3", "n = 5" )
svm_ensembles$Model <- c( rep("SVM", 3 ) )
svm_ensembles$Method<- c("Mean", "PV", "UD" )
alg_ensembles <- rbind( ann_ensembles , gbm_ensembles , maxent_ensembles , rf_ensembles , svm_ensembles )
all_ensembles_gglist <- reshape2::melt( alg_ensembles , id.var=c('Method',"Model" ))
plot2 <- ggbarplot(all_ensembles_gglist , x = "variable", y = "value", position = position_dodge(0.8),
title = "Ensembles",
xlab = "",
ylab = "Score",
fill = "Model",
palette = brewer.pal(n = 5, name = "Reds"),
legend = "bottom",
facet.by = "Method",
add = c("mean_sd"),
rremove("x.text"),
                   x.text.angle = 45
)
plot2 + theme(plot.title = element_text(hjust = 0.5) )
# TOTAL ENSEMBLE
# All algorithms - voting
complete_sum <- sum(ann_sum , gbm_sum , maxent_sum , rf_sum , svm_sum )
# CM -
complete_uv_cm <- rbind( cm_raster( crop( complete_sum >= 13 , contiguous_us_chukar) , contiguous_us_chukar ),
cm_raster( crop( complete_sum == 25 , contiguous_us_chukar) , contiguous_us_chukar )
)
row.names(complete_uv_cm) <- c( "n >= 13", "n = 25" )
plot( crop( complete_sum >= 13 , contiguous_us_chukar ) )
plot(states, add = T)
saveRDS( alg_ensembles , "./RDS_objects/CA/alg_ensembles_CA.rds" )
saveRDS( all_ensembles_gglist , "./RDS_objects/CA/all_ensembles_gglist_CA.rds" )
saveRDS( complete_uv_cm , "./RDS_objects/CA/complete_uv_cm_CA.rds" )
# ---- File: Data and Analysis Simulations/Code/Generate Simulated Education Data - v1.3 - Refocused on just data creation (the data dirtying process for CIM data camp has been separated).r (repo: nsmader/Integrated-Evaluation-Code-and-Sim-Data) ----
# #
# Develop Simulation Dataset for Analysis Mock-ups in YMCA-Chapin Collaboration #
# #
# Author: Nicholas Mader <nmader@chapinhall.org> #
# #
####################################################################################
# Clear the work space
rm(list=ls())
#--------------------#
# LIST OF PRIORITIES #
#--------------------#
# 1. Dosage in treatment
# 2. School attendance outcome
# 3. Multiple interventions
# 4. Geocoding
#---------------------------------#
# SPECIFY DATA GENERATING PROCESS #
#---------------------------------#
# Data is ~loosely~ reflective of a 9th grade class in CPS
MyDir <- "C:/Users/nmader/Google Drive/Chapin Hall - YMCA Project Collaboration/Data and Analysis Simulations/Code"
setwd(MyDir)
set.seed(60637) # The seed value could be any number. I'm just picking an auspicious one.
library("MASS")
library("plyr")
library("ggplot2")
library("foreign")
"%&%" <- function(...){paste(..., sep = "")}
comment <- function(...){}
nKids <- 24268 # To make the sample equivalent to a year of HS enrollment in CPS
nSchools <- 106
sTrtName <- "After School Programming"
# Set skeleton for the data draws
sDataFeatures <- c("PretestDraw", "StudFactor", "TreatmentDraw" , "BpiDraw", "RaceDraw", "SchDraw", "GenderDraw") #
nDataFeatures <- length(sDataFeatures)
# Generate a variance-Covariance matrix for all data features. These features will be transformed below into new distributions and magnitudes.
mVCV <- matrix(nrow = nDataFeatures, ncol = nDataFeatures, dimnames = list(sDataFeatures, sDataFeatures))
for (i in 1:nDataFeatures) {
for (j in 1:i) {
if (i == j) mVCV[i, j] <- 1 else {
      rho <- max(min(rnorm(1, sd = 0.2), 1), -1) # This will result in correlations generally close to 0, truncated within [-1, 1]
mVCV[i,j] <- rho
mVCV[j,i] <- rho # These assignments ensure that the variance covariance matrix is symmetric
}
}
}
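A matrix built this way is symmetric with unit diagonal by construction, but it is only a valid correlation matrix if it is also positive definite; with off-diagonals drawn at sd = 0.2 that almost always holds, and an eigenvalue check (sketched here on a fresh random matrix, not on mVCV itself) is a cheap safeguard before handing mVCV to mvrnorm:

```r
set.seed(1)
n   <- 4
raw <- matrix(rnorm(n * n, sd = 0.2), n, n)
vcv <- (raw + t(raw)) / 2   # symmetrize
diag(vcv) <- 1

# smallest eigenvalue > 0 means the matrix is positive definite and safe for mvrnorm()
min_eig <- min(eigen(vcv, symmetric = TRUE, only.values = TRUE)$values)
```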
# Ensure that we have the relationships that we want
FlipSigns <- function(x1, x2, mult) {
# Note: the "<<-" assignment operator ensures that the assignment is applied to mVCV in the global environment (i.e. that it persists after the function is run)
mVCV[x1, x2] <<- mult*abs(mVCV[x1, x2])
mVCV[x2, x1] <<- mult*abs(mVCV[x2, x1])
}
FlipSigns("TreatmentDraw", "PretestDraw", -1)
FlipSigns("TreatmentDraw", "BpiDraw", +2) # note: mult = 2 doubles the correlation magnitude as well as setting its sign
FlipSigns("PretestDraw", "BpiDraw", -1)
#--------------------------------------------------#
# DRAW SAMPLE AND GENERATE STUDENT CHARACTERISTICS #
#--------------------------------------------------#
#### Draw a sample of kids from the above
vMu = as.vector(rep(0, nDataFeatures))
KidData <- mvrnorm(n = nKids, mu = vMu, Sigma = mVCV)
colnames(KidData) <- sDataFeatures
StudId <- 1:nKids
dfKidData <- data.frame(StudId, KidData)
attach(dfKidData)
#### Set values and scales for each variable
# Create Gender
bFemale <- as.numeric(pnorm(GenderDraw) <= 0.51) # Without the "as.numeric()" function, bGender would be a series of "true/false" values. Another way to accomplish the "true/false" to "1/0" conversion is to multiply by 1.
cGender <- ifelse(1==bFemale, "Treble", "Bass") # Convert the gender distinction into an abstract one
# Create Race
cRaceDraw <- cut(pnorm(RaceDraw), breaks = c(0.0, 0.4, 0.5, 0.6, 1.0), include.lowest = TRUE)
cRace <- factor(cRaceDraw, labels = c("Sour", "Salty", "Bitter", "Sweet"))
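The cut-then-factor idiom above discretizes a uniform draw into labeled categories, with the break points controlling the category shares (40/10/10/40 here). A self-contained miniature:

```r
u   <- c(0.05, 0.45, 0.55, 0.95)
grp <- cut(u, breaks = c(0, 0.4, 0.5, 0.6, 1), include.lowest = TRUE)
grp <- factor(grp, labels = c("A", "B", "C", "D"))  # relabel the interval levels
as.character(grp)   # "A" "B" "C" "D"
```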
# Rescale Pretest to Mean 100, and broader standard deviation
Pretest <- round(PretestDraw*20 + 100)
#hist(PretestScaled)
# Rescale Bpi
Bpi <- round(exp(BpiDraw/1.5 + 2)/3) # NSM: this is messing with things to get something relatively flat, with an interesting tail
#hist(Bpi, breaks = 0:ceiling(max(Bpi)))
#table(Bpi)
# Assign to treatment
bTreated <- as.numeric(pnorm(TreatmentDraw)<=0.2)
#---------------------------------------------------------------#
# GENERATE SCHOOL CHARACTERISTICS AND COMBINE WITH STUDENT DATA #
#---------------------------------------------------------------#
# Construct names for schools (comes from http://www.fanfiction.net/s/7739576/1/106-Stories-of-Us)
sSchNamesData <- read.csv2(file = paste(MyDir, "/Raw Data/Random Words for School Names.csv", sep=""), sep = ",", header = TRUE)
attach(sSchNamesData) # NSM: although I like attaching tons of stuff to the working space (because I don't know any better), R doesn't seem to like it much. Thoughts on cleaner practices?
SchTypeDraw <- cut(runif(nSchools), breaks = c(0.0, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0), include.lowest = TRUE)
sSchType <- as.character(factor(SchTypeDraw, labels = c("Academy", "School", "High School", "Preparatory", "Charter", "International")))
sSchName <- paste(SchNamesList, sSchType)
# Combine information
cSchYinYang <- factor(runif(nSchools)<.2, labels = c("Yin", "Yang"))
dfSchData <- data.frame(1:nSchools, sSchName, (rnorm(nSchools, sd = sqrt(10)) + ifelse(cSchYinYang=="Yin",0,4)), as.character(cSchYinYang))
colnames(dfSchData) <- c("SchNum", "SchName", "SchEffect", "SchType")
# Assign Kids to Schools
AssignedSchNum <- cut(pnorm(SchDraw), nSchools, labels=1:nSchools)
# Merge Student and School Data
dfKidData_Aug <- data.frame(dfKidData, bFemale, cGender, cRace, Pretest, Bpi, AssignedSchNum)
dfMyData <- merge(x = dfKidData_Aug, y = dfSchData, by.x = "AssignedSchNum", by.y = "SchNum")
attach(dfMyData)
#-----------------------------#
# CONSTRUCT OUTCOME VARIABLES #
#-----------------------------#
# Draw errors to get a random component
e <- data.frame(mvrnorm(n = nKids, mu = c(0, 0, 0, 0), Sigma = matrix(c(1.0, 0.3, 0.2, 0.2,
0.3, 1.0, 0.2, 0.2,
0.2, 0.2, 1.0, 0.2,
0.2, 0.2, 0.2, 1.0), nrow = 4)))
colnames(e) <- c("e1", "e2", "e3", "e4")
attach(e)
# Generate several years of post-test data, where the Data Generating Process is the same except for increasing mean and increasing error variance
Posttest1 <- 50 + 0.90*Pretest + (-2.5)*Bpi + 3.0*bFemale + SchEffect + 15*StudFactor + 20*bTreated + 3.0*(bTreated*Bpi) + e2*30 # + (-0.0008)*Pretest^2
Posttest2 <- 60 + 0.90*Posttest1 + (-2.5)*Bpi + 3.0*bFemale + SchEffect + 15*StudFactor + 20*bTreated + 3.0*(bTreated*Bpi) + e3*40 # + (-0.0008)*Pretest^2
Posttest1 <- round(Posttest1)
Posttest2 <- round(Posttest2)
#summary(Posttest1)
#hist(Posttest1)
# Generate binary outcome, with instrument that can be used for selection. Interpretation is dropping out, shocked by ... pregnancy?
ystar <- 0.5 + (-0.03)*Pretest + 0.5*Bpi + ifelse(cRace == "Sour", 1.0, 0) + e2
DroppedOut <- as.numeric(ystar > 0)
mean(DroppedOut)
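This is a standard latent-index (probit-style) construction: the binary outcome is 1 exactly when the latent ystar crosses 0. In miniature, with fixed numbers in place of the drawn shocks:

```r
xb <- c(0.3, -0.4, -0.2, 0.6)   # systematic part of the latent index
e  <- c(-0.5, 0.2, 1.4, -1.1)   # shocks

ystar   <- xb + e
dropped <- as.numeric(ystar > 0)   # 0 0 1 0
```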
# Generate continuous outcome observed conditional on binary outcome
# Interpretation is Act conditional on reaching an age where student would apply to college.
ActTrue <- 15 + 0.05*Pretest + (-0.5)*Bpi + e3
Act <- ActTrue
is.na(Act) <- (DroppedOut == 1)
summary(Act)
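The `is.na(Act) <- ...` replacement form used above is the base-R way to mask values in place, which is what makes the test score a selectively observed outcome (a Heckman-style setup). For example:

```r
act     <- c(21, 24, 18, 30)
dropped <- c(0, 1, 0, 0)
is.na(act) <- dropped == 1   # scores unobserved for dropouts
act                          # 21 NA 18 30
```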
#----------------------------------------#
# INSPECT THE PROPERTIES OF THE DATA SET #
#----------------------------------------#
# # # Construct conditional averages and descriptives # # #
aggregate(x = cbind(Pretest, Posttest1, Posttest2, Bpi, SchEffect, bTreated), by = list(cRace), FUN = "mean")
aggregate(x = cbind(Pretest, Posttest1, Posttest2, Bpi, SchEffect, bTreated), by = list(cGender), FUN = "mean")
cor(cbind(Pretest, Posttest1, Posttest2, Bpi, bTreated))
var(SchEffect); var(Posttest1); var(Posttest2)
# # # Draw quantile curve of how pretests, Bpi, and different racial composition varies across schools # # #
# ....
# # # Can we recover the true parameters? # # #
# NSM: I need to return to this section. There are more visualizations to explore here, and I've changed the data structure a decent amount since
# first writing this section up
Pretest2 <- Pretest^2
dTreated.Bpi <- bTreated * Bpi
SchInds <- model.matrix(~AssignedSchNum)
MyReg1 <- lm(Posttest1 ~ Pretest + Bpi + bFemale + bTreated + SchInds)
summary(MyReg1)
MyReg2 <- lm(Posttest2 ~ Posttest1 + Bpi + bFemale + bTreated + SchInds)
summary(MyReg2)
# Inspect bias if Bpi is not controlled for (since we may initially withhold this from the Data Campers)
MyReg <- lm(Posttest1 ~ Pretest + bFemale + bTreated + SchInds)
summary(MyReg)
#---------------------#
# Leftovers for Later #
#---------------------#
# double the sample and reverse only the signs of the errors to ensure that regressions give exact parameters back (in linear regression)
# continuous treatment effect
# multiple treatments
# endogeneity of the treatment, which would come with instruments
# count outcome
# floor effects for tests (for application of tobit)
# ---- File: Lessons/Lesson5/DU5.R (repo: Lulu631/GeneralInsurance_Class) ----
install.packages("lubridate")
library(lubridate)
library(ggplot2)
knitr::opts_chunk$set(echo = TRUE)
setwd("C:/Users/userPC/Documents/GeneralInsurance_Class/Data")
dt_Claims <- read.csv("lesson5_Claims.csv") %>% distinct(NrClaim, .keep_all = TRUE)
# NOTE: dt_Policy (the policy table) is assumed to be loaded already; it is not read in this script
dt_pol_w_claims <- left_join(dt_Policy,
dt_Claims,
by = c("NrPolicy", "NrObject")
)
dt_pol_w_claims <-
dt_pol_w_claims %>% mutate(Time_Exposure = lubridate::dmy(Dt_Exp_End) - lubridate::dmy(Dt_Exp_Start))
dt_pol_w_claims <-
dt_pol_w_claims %>%
mutate(Ult_Loss = Paid + Reserves,
Burning_Cost = ifelse(is.na(Ult_Loss), 0, Ult_Loss / as.integer(Time_Exposure))
)
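The burning cost defined above is simply ultimate loss per day of exposure, with claim-free policies set to 0. A minimal numeric sketch (invented numbers):

```r
exp_start <- as.Date("2020-01-01")
exp_end   <- as.Date("2020-12-31")
exposure_days <- as.integer(exp_end - exp_start)   # 365

ult_loss <- 730   # hypothetical Paid + Reserves
bc <- ifelse(is.na(ult_loss), 0, ult_loss / exposure_days)   # 2 per exposure day
```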
########### First, we focus on the Customer_Type variable
dt_pol_w_claims %>%
ggplot(aes(y = Burning_Cost, x = Customer_Type)) +
geom_jitter()
# We can see that type C has one outlier and also a larger dispersion than type S, so we expect type C to have a larger influence on BC
dt_pol_w_claims %>%
group_by(Customer_Type) %>%
summarise(BC_avg = mean(Burning_Cost, na.rm = TRUE),
BC_median = median(Burning_Cost, na.rm = TRUE),
cnt = n()) %>%
arrange(desc(BC_avg))
# Type C also has a higher average BC than type S, so we expect it to have a larger influence on BC
dt_pol_w_claims %>%
ggplot(aes(y = Burning_Cost, x = Customer_Type)) +
geom_boxplot() +
ylim(0, 100)
# C also has higher BC than S, and S has fewer high BC values
model_ct <- glm(data = dt_pol_w_claims %>% filter(Burning_Cost != 0, Burning_Cost < 100),
formula = Burning_Cost ~ Customer_Type,
family = Gamma())
summary(model_ct)
# apparently only type S enters the regression, but it is not significant
# So based on this model we cannot predict the level of BC
########### Next we analyze the Veh_type1 variable - vehicle types by "use"
dt_pol_w_claims %>%
ggplot(aes(y = Burning_Cost, x = Veh_type1)) +
geom_jitter()
# since we have many types, the plot is not very readable, but we can see that "private car", "commercial car <3100 kg" and "commercial car <3500 kg" have larger dispersion => they will probably have more influence
dt_pol_w_claims %>%
group_by(Veh_type1) %>%
summarise(BC_avg = mean(Burning_Cost, na.rm = TRUE),
BC_median = median(Burning_Cost, na.rm = TRUE),
cnt = n()) %>%
arrange(desc(BC_avg))
# taxi has the highest mean, but that may be caused by the small number of observations
dt_pol_w_claims %>%
ggplot(aes(y = Burning_Cost, x = Veh_type1)) +
geom_boxplot() +
ylim(0, 100)
# we can see that private car, commercial car <3100 kg, commercial car <3500 kg, and possibly articulated vehicle and driving school car, will influence BC substantially
model_vt <- glm(data = dt_pol_w_claims %>% filter(Burning_Cost != 0, Burning_Cost < 100),
formula = Burning_Cost ~ Veh_type1,
family = Gamma())
summary(model_vt)
# our hunch was confirmed: both commercial car types, private car and driving school car are significant, and even taxi category A => they influence BC
# Based on this model we can predict the level of BC for the individual vehicle types well
########### Let's combine both variables
model <- glm(data = dt_pol_w_claims %>% filter(Burning_Cost != 0, Burning_Cost < 100),
formula = Burning_Cost ~ Customer_Type + Veh_type1,
family = Gamma())
summary(model)
# Combining both variables gives the same conclusion: Customer_Type is not significant, while the commercial car types, private car, driving school car and taxi category A are
# Better models of BC would combine several of the factors (possibly all of them) and then compare them using one of the model-comparison criteria
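The Gamma GLM comparison above can be reproduced end to end on simulated data; this sketch uses a log link for numerical stability (the script relies on the default inverse link) and invented group means:

```r
set.seed(42)
grp <- factor(rep(c("C", "S"), each = 50))
mu  <- ifelse(grp == "C", 8, 4)                # true group means
y   <- rgamma(100, shape = 2, rate = 2 / mu)   # Gamma draws with mean mu

fit <- glm(y ~ grp, family = Gamma(link = "log"))
coef(fit)   # intercept for the reference group C, offset for S
```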
# ---- File: R/summary_dv.R (repo: rbcarvin/demoPackage) ----
#' @title summary_dv
#'
#' @description Calculate min max day of year
#'
#' @importFrom dataRetrieval readNWISdv
#' @importFrom dataRetrieval renameNWISColumns
#' @importFrom dplyr mutate group_by summarise
#'
#' @export
#' @param site character USGS site ID
summary_dv <- function(site){
doy <- Flow <- Date <- ".dply" # dummy assignments that silence the R CMD check NOTE about undefined global variables (these columns are used via non-standard evaluation)
dv_data <- readNWISdv(siteNumbers=site,
parameterCd = "00060", startDate = "", endDate = "")
dv_summ <- renameNWISColumns(dv_data)
dv_summ <- mutate(dv_summ, doy = as.numeric(strftime(Date, format = "%j")))
dv_summ <- group_by(dv_summ, doy)
dv_summ <- summarise(dv_summ,
max = max(Flow, na.rm = TRUE),
min = min(Flow, na.rm = TRUE))
return(dv_summ)
}
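Stripped of the dataRetrieval and dplyr machinery, the core computation is a per-day-of-year min/max, which base R can do with strftime plus tapply; a toy version on made-up flows:

```r
dates <- as.Date(c("2020-01-01", "2021-01-01", "2020-06-01"))
flow  <- c(10, 4, 7)

doy <- as.numeric(strftime(dates, format = "%j"))   # 1 1 153
mx  <- tapply(flow, doy, max)   # max flow per day of year
mn  <- tapply(flow, doy, min)   # min flow per day of year
```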
.onAttach <- function(libname, pkgname) {
packageStartupMessage("Although this software program has been used by the U.S. Geological Survey (USGS), no warranty, expressed or implied, is made by the USGS or the U.S. Government as to the accuracy and functioning of the program and related program material nor shall the fact of distribution constitute any such warranty, and no responsibility is assumed by the USGS in connection therewith.")
}
# ---- File: R/calcPCor.R (repo: arcolombo/junk) ----
#' A function to calculate the Variance Inflation Factor (VIF) for each of the gene sets input in geneSets.
#' This function depends on which method you used to calculate the variance originally. If you assumed pooled
#' variance, then the variance will be calculated using LIMMA's "interGeneCorrelation" method (see Wu and Smyth,
#' Nucleic Acids Res. 2012). Otherwise, the method will calculate the VIF separately for each group and return
#' the average of each group's vif.
#' @param eset a matrix of log2(expression values). This must be the same dataset that was used to create geneResults
#' @param geneResults A QSarray object, as generated by either makeComparison or aggregateGeneSet
#' @param useCAMERA The method used to calculate variance. See the description for more details. By default, it uses the parameter stored in the geneResults
#' @param useAllData Boolean parameter determining whether to use all data in eset to calculated the VIF, or whether to only use data from the groups being contrasted. Only used if useCAMERA==FALSE
#' @import limma
#' @export
#' @return correlation of vif between gene sets
calcPCor = function(eset, ##a matrix of log2(expression values). This must be the same dataset that was used to create geneResults
geneResults, ##A QSarray object, as generated by either makeComparison or aggregateGeneSet
useCAMERA = geneResults$var.method=="Pooled", ##The method used to calculate variance. See the description for more details. By default, it uses the parameter stored in the geneResults
# geneSets=NULL, ##a list of pathways calculate the vif for, each item in the list is a vector of names that correspond to the gene names from Baseline/PostTreatment
useAllData = TRUE ##Boolean parameter determining whether to use all data in eset to calculated the VIF, or whether to only use data from the groups being contrasted. Only used if useCAMERA==FALSE
){
if(is.null(geneResults$pathways)){stop("Pathway Information not found. Please provide a list of gene sets.")}
geneSets = geneResults$pathways
  if(!inherits(geneResults, "QSarray")){stop("geneResults must be a QSarray object, as created by makeComparison")}
##create design matrix
if(useCAMERA){
labels = geneResults$labels
paired=F
if(!is.null(geneResults$pairVector)){paired=T; pairVector = geneResults$pairVector}
f = "~0+labels"
designNames = levels(labels)
if(paired){
f = paste(f,"+pairVector",sep="")
designNames = c(designNames, paste("P",levels(pairVector)[-1],sep=""))
}
design <- model.matrix(formula(f))
colnames(design) <- designNames
}
Cor = sapply(names(geneSets),function(i){
GNames<-names(geneResults$mean)[geneSets[[i]]]
gs.i = which(rownames(eset)%in%GNames)
# gs.i = geneSets[[i]]
if(length(gs.i)<2){warning("GeneSet '",i,"' contains one or zero overlapping genes. NAs produced.");return(NA)}
if(useCAMERA){
return(interGeneCorrelation(eset[gs.i,],design)$vif)
}
else{
##pooled covariance matrix
grps = split(1:ncol(eset),geneResults$labels)
if(!useAllData){
toInclude = sub("\\s","",strsplit(geneResults$contrast,"-")[[1]]) ##only calc vif for the groups that were compared
grps = grps[toInclude]
}
covar.mat = cov(t(eset[gs.i,grps[[1]]])) * (length(grps[[1]])-1)
if(length(grps)>1){
        for(g in 2:length(grps)){  ## 'g' avoids shadowing the sapply index 'i'
          covar.mat = covar.mat + ( cov(t(eset[gs.i,grps[[g]]])) * (length(grps[[g]])-1) )
}
}
covar.mat = covar.mat / (ncol(eset) - length(grps))
##multiply matrix by the sd.alpha vectors
if(!is.null(geneResults$sd.alpha)){
a = geneResults$sd.alpha[rownames(eset)[gs.i]]
covar.mat = t(covar.mat*a)*a
}
Cor = (covar.mat)
Cor = sapply(1:ncol(Cor),function(i)Cor[i,]/sqrt(covar.mat[i,i]))
Cor = sapply(1:ncol(Cor),function(i)Cor[,i]/sqrt(covar.mat[i,i]))
Cor[!is.finite(Cor)] = 0
return(Cor)
}
})
return(Cor)
}
|
9eb88b7bac180b26bb4041a65168cd0885d865e5 | 4697f98924cc3a8c1b2e5b92656d36374ee2274e | /01_data_preprocessing/data_preprocessing.R | 51c24887fafcc371622ae77ec7aeb134b040ca28 | [] | no_license | rails-camp/machine-learning-immersive | 1362ddf2f481913db73bee4ef7960db38345184a | a89306e76e5a3f23f45350db799ef01230f87bcf | refs/heads/master | 2020-12-02T11:29:33.518558 | 2017-07-13T21:41:03 | 2017-07-13T21:41:03 | 96,644,126 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 410 | r | data_preprocessing.R | # Import the CSV data
raw_data = read.csv('raw_data.csv')
# Replace missing values with the column mean (mean imputation)
raw_data$Age = ifelse(is.na(raw_data$Age),
ave(raw_data$Age, FUN = function(x) mean(x, na.rm = TRUE)),
raw_data$Age)
raw_data$Income = ifelse(is.na(raw_data$Income),
ave(raw_data$Income, FUN = function(x) mean(x, na.rm = TRUE)),
raw_data$Income) |
d8971e909438e68fba2a0a7606593be007f69af0 | b725c869e75d12a159e53fbfb45e983a008b6b71 | /cachematrix.R | bd510730ca926e29b093e6dbfbe9a25cc9716f53 | [] | no_license | StSebastian/ProgrammingAssignment2 | 05be8bb95d3df314c39d7b28e24d7c3d0b48d034 | ed0b3d40cfe94301654fc171724e33eeb83dfb86 | refs/heads/master | 2021-01-18T06:32:21.160731 | 2014-05-24T13:31:08 | 2014-05-24T13:31:08 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,989 | r | cachematrix.R | ## Function 'makeCacheMatrix' takes a matrix as an input argument
## and returns an object of type 'list'. In addition it stores the matrix in
## the variable 'x' and its inverse in the variable 'inv'. The list object
## which is returned by the function is actually a list of four other functions.
## This four functions can be used to retrieve and set the values of 'x' (the
## matrix) and 'inv' (its inverse).
## Function 'cacheSolve' takes an object 'z' created by the function makeCacheMatrix
## as input argument. If the inverse 'inv' hasn't been calculated and stored in object 'z')
## it will be computed, returned and stored in the variable 'inv' of object 'z'.
## Otherwise the message 'getting cached data' will be returned together with the stored
## inverse 'inv'.
## When using 'makeCacheMatrix' it should be assigned to a variable 'DEMO' which
## can be used to:
## retrieve the matrix 'x' --> DEMO$get()
## retrieve its inverse 'inv' --> DEMO$getinv()
## set the matrix 'x' and delete the inverse 'inv' --> DEMO$set(new matrix)
## set the inverse 'inv' --> DEMO$setinv(inverse matrix)
makeCacheMatrix <- function(x = matrix()) {
inv <- NULL
set <- function (y) {
x <<- y
inv <<- NULL
}
get <- function() x
setinv <- function(inverse) inv <<- (inverse)
getinv <- function() inv
list(set=set, get=get, setinv=setinv, getinv=getinv)
}
## Function 'cacheSolve' takes an object 'z' created by the function makeCacheMatrix as
## input argument. If the inverse 'inv' hasn't been calculated yet (and stored in object 'z')
## the inverse 'inv' of matrix 'x' (of object 'z') will be computed, returned and stored
## in the variable 'inv' of object 'z'. Otherwise the message 'getting cached data' will
## be returned followed by the stored inverse 'inv'.
cacheSolve <- function(z, ...) {
inv <- z$getinv()
if(!is.null(inv)) {
message("getting cached data")
return(inv)
}
data <- z$get()
    inv <- solve(data, ...)
    z$setinv(inv)
    inv
}
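## Example run of the workflow described in the comments above (illustrative;
## the object name 'DEMO' and the 2x2 matrix are arbitrary choices, kept
## commented out so that sourcing this file only defines the two functions):
## DEMO <- makeCacheMatrix(matrix(c(2, 0, 0, 4), nrow = 2))
## cacheSolve(DEMO)   # computes, caches and returns the inverse
## cacheSolve(DEMO)   # prints "getting cached data" and returns the cache
## DEMO$set(matrix(c(1, 0, 0, 1), nrow = 2))   # setting a new matrix clears 'inv'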
|
762bbbc6bd945ea1be0352fb7d8888d6561ccbd1 | a9d8be148a0fcc0d2e950e9bae7c6316a37d5072 | /inchcape.R | 29801148c786ff70562eb773dde9493c03171da6 | [] | no_license | strwyn/AWS | ee3961d4d758dd1eb64d3c1f65bfe7a671148eec | 72295b1207b9e4feae5c26794a3380b73c6532ec | refs/heads/master | 2020-09-12T09:40:18.392390 | 2019-11-18T07:20:54 | 2019-11-18T07:20:54 | 222,383,890 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 35,378 | r | inchcape.R | #read.csv("Inchcape SGS cleanup_0811.csv")
library(plyr)
# NOTE: helper functions psql(), NotePaste(), GetTable(), CountTable(),
# DropTable(), DropTables(), GetColAgg(), GetPos() and inRange(), as well as
# the temporary-table names tname.cmdtycode and tname.cmdtypoi, are not
# defined in this file and are assumed to be sourced/created beforehand.
no.enum<-c('port', 'ports', '[0-9]+', '[ivx]+')
CheckGradeRaw<-function(){
}
CheckPortCityCode<-function(){
#country and city code?
}
CheckQtDrft<-function(){
}
CheckCmdty<-function(){
}
plural<-function(str){
  len<-nchar(str)
  str.end<-substring(str, len)
  str.pre<-substr(str, 1, len -1)
  if (str.end == 'y' && !grepl('[aeiou]y$', str)){
    # consonant + y -> "ies" ("discrepancy" -> "discrepancies")
    return (paste0(str.pre, 'ies'))
  }else if( str.end == 'x' ||
            str.end == 's' ||
            str.end == 'z'){
    return (paste0(str,'es'))
  }else{
    # default, including vowel + y ("day" -> "days")
    return (paste0(str, 's'))
  }
}
PluralPhrase<-function(str, num){
  if (abs(num) != 1){  # zero takes the plural too: "0 discrepancies"
    return (paste(num, plural(str)))
  }else{
    return (paste(num, str))
  }
}
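# Illustrative examples (commented out; values are arbitrary):
# PluralPhrase("discrepancy", 1) # "1 discrepancy"
# PluralPhrase("discrepancy", 3) # "3 discrepancies"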
maccb<-function(data){
cb<-pipe('pbcopy', 'w')
cat(data, file = cb)
close(cb)
}
tosqlv<-function(acol, link = !tovec, tovec = F){
init.line<- ""
if (link){
if (nrow(acol) > 1){
sapply(1:(nrow(acol) -1), function(i){
init.line<<-paste0(init.line,
"('",
paste(acol[i, ],
                                 collapse = "','"),
                           "'), ")
})
}
}else{
if (tovec){
return(sapply(1:nrow(acol), function(i){
paste0("('",
paste(acol[i, ],
collapse = "','"),
"')")
}))
}else{
if (nrow(acol) > 1){
sapply(1:(nrow(acol) -1), function(i){
init.line<<-paste(init.line, '"' ,
"('",
paste(acol[i, ],
collapse = "','"),
"'), ", '",\n ' ,sep = "")
})
}
return(cat(paste(init.line,
"('", paste(acol[nrow(acol),],
collapse = "','"),
"')" , '"',sep = '')))
}
}
}
wincb<-function(data){
writeClipboard(data)
}
is.empty.df<-function(dfr){
if(!is.data.frame(dfr)){
stop(deparse(substitute(dfr)), " is not a data frame.")
}else if (!nrow(dfr) && !ncol(dfr)){
return (TRUE)
}else{return (FALSE)}
}
GetDate<-function(line,
up=25,
down = 25,
delistart = 'startdate',
deliend = 'enddate',
bargo = 'bargo'){
link = as.character(as.matrix(line[bargo]))
startpos<-regexpr(delistart, link)
endpos<-regexpr(deliend, link)
startDate<-as.Date(substr(link,
startpos + 1 + nchar(delistart),
startpos + 10 + nchar(delistart)))-up
endDate<-as.Date(substr(link,
endpos + 1 + nchar(deliend),
endpos + 10 + nchar(deliend)))+down
http<-paste0(substr(link, 1, startpos - 1),
delistart, '=' ,startDate, '&',
deliend, '=', endDate,
substr(link, endpos + 11 + nchar(deliend), nchar(link)))
#writeClipboard(http) #windows method.
browseURL(GetPort(line))
browseURL(http)
print(http)
print(line[c('imo',
'grade',
'grade_raw',
'operation_date',
'operation',
'direction',
'port')])
}
GetPort<-function(line){
paste0("https://www.google.com/maps/place/",
as.character(as.matrix(line['port'])))
}
getDateWindow<-function(date_arrive,
date_depart,
window.radius){
if (!is.na(date_arrive) && !is.na(date_depart)) {
return(paste0("((date_arrive >= date'",
date_arrive, "' - ", window.radius,
" and date_arrive <= date'",
date_arrive,"' + ", window.radius,
") or (date_depart >= date'",
date_depart, "' - ", window.radius,
" and date_depart <= date'",
date_depart,"' + ", window.radius, "))"))
}else if (!is.na(date_arrive)){
return(paste0("(date_arrive >= date'",
date_arrive, "' - ", window.radius,
" and date_arrive <= date'",
date_arrive,"' + ", window.radius,
")"))
}else if (!is.na(date_depart)){
return(paste0("(date_depart >= date'",
date_depart, "' - ", window.radius,
" and date_depart <= date'",
date_depart,"' + ", window.radius,
")"))
}else{
return("")
}
}
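# Illustrative example (commented out; dates are arbitrary). With only an
# arrival date, a symmetric +/- 3-day SQL window clause is produced:
# getDateWindow(as.Date("2016-01-10"), NA, 3)
# # "(date_arrive >= date'2016-01-10' - 3 and date_arrive <= date'2016-01-10' + 3)"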
str.ambi<-function(name){
return(
paste0("%",
paste(strsplit(name, ' ')[[1]],
collapse = '%'),
"%"))
}
lagflag<-function(timediff){
if (timediff > 0){
return ("later")
}else if (timediff < 0){
return ("earlier")
}else{
return ("exactly")
}
}
ismatchdir<-function(loadunl, dgh.arr, dgh.dpt){
if ((loadunl == 'U' && dgh.dpt - dgh.arr < 0) ||
(loadunl == 'L' && dgh.dpt - dgh.arr > 0) ){
return (TRUE)
}else{return (FALSE)}
}
isemptydir<-function(loadunl){
if (loadunl == ' '){
return (TRUE)
}else{return (FALSE)}
}
isemptydgh<-function(dgh){
if (is.na(dgh)){
return (TRUE)
}else{
return (FALSE)
}
}
ismatchdir.loadunl<-function(rec, loadunl){
if ((rec == 'Load' && loadunl == 'L') ||
(rec == 'Discharge' && loadunl == 'U')){
return (TRUE)
}else{return (FALSE)}
}
ismatchdir.dgh<-function(rec, dghdiff){
if ((rec == 'Load' && dghdiff >= 0) ||
(rec == 'Discharge' && dghdiff <= 0)){
return (TRUE)
}else{
return (FALSE)
}
}
ismatchpoi.cmdty<-function(cmdty, poi.cmdty){
if (grepl(cmdty, poi.cmdty)){
return (TRUE);
}else{
return (FALSE)
}
}
GetCmdty <- function(dir,
dir.load = 'Load',
dir.unl = 'Discharge',
load.col = 'load_cmdty',
unl.col = 'unl_cmdty'){
if (dir == dir.load) {
return (load.col)
}
if (dir == dir.unl){
return (unl.col)
}
return (NA)
}
daystonote<-function(day,
has.star = TRUE,
reverse = FALSE,
lower = 10,
upper = 60,
equal.out ="Exact match."){
if (reverse){
day <- -day
}
if (!is.null(day) && !is.na(day)){
if (day > 0){
if (day == 1){
return (paste0(day, " day later"))
}else{
raw.note<-paste0(day, " days later")
if (inRange(day, lower, upper) &&
has.star){
return (paste0("*", raw.note))
}else if (day > upper &&
has.star){
return (paste0("**", raw.note))
}
return (raw.note)
}
}else if (day == 0){
return (equal.out)
}else{
if (day == -1){
return (paste0(abs(day), " day earlier"))
}else{
raw.note<-paste0(abs(day), " days earlier")
if (inRange(day, -lower, -upper) &&
has.star){
return (paste0("*", raw.note))
}else if (day < -upper &&
has.star){
return (paste0("**", raw.note))
}
return (raw.note)
}
}
}else{
return (NA)
}
}
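# Illustrative examples (commented out; note that inRange() must be defined
# elsewhere for the starred 10-60 day branch to run):
# daystonote(0)  # "Exact match."
# daystonote(-1) # "1 day earlier"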
NameEnum<-function(name,
sep = '%',
spl = ' ',
ends = sep,
has.head = (ends == sep)){
ele<-strsplit(name, spl)[[1]]
ele<-ele[ele != '']
if (length(ele) == 1){
if (has.head){
return(c(paste0(sep, name, ends), ends))
}
return(c(paste0(ends, name, ends), ends))
}else{
rest<-paste(ele[-1], collapse = spl)
if (has.head){
return(c(paste0(sep,
ele[1],
NameEnum(rest, sep, spl, ends, has.head)),
paste0(NameEnum(rest, sep, spl, ends, has.head))))
}
return(c(paste0(ends,
ele[1],
NameEnum(rest, sep, spl, ends, has.head = T)),
paste0(NameEnum(rest, sep, spl, ends, has.head = F))))
}
}
GetNameEnum<-function(name,
sep = "%",
ends = sep,
spl = ' ',
has.head = (ends == sep),
rmv = no.enum){
#add name breaking down parenthesis here
#if (grepl('\\(', name) && grepl('\\)', name)){
#
#}
#
enum.raw<-NameEnum(name, sep, spl, ends, has.head)
enum.raw<-enum.raw[order(nchar(enum.raw), decreasing = T)]
rmv<- paste(paste0("^", sep, rmv, sep, "$") ,collapse = "|")
#print(rmv)
enum.raw<-enum.raw[!grepl(rmv, tolower(enum.raw))]
return(enum.raw[-length(enum.raw)])
}
GetNameQuery<-function(subnames,
table.alias = '' ,
colname = 'name',
method = 'ilike',
logic = 'or'){
#requires 'subnames' to be vector of character.
logic <- paste0(' ', logic, ' ')
subnames<-paste0("'", GetNameEnum(subnames), "'")
if (table.alias == ''){
query.keyword<-paste0(colname, ' ', method)
}else{
query.keyword<-paste0(table.alias, '.', colname, ' ', method)
}
return(paste0('(',
paste(paste(query.keyword,
subnames),
collapse = logic),
')'))
}
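# Illustrative example (commented out; the name is arbitrary). Sub-phrases that
# reduce to entries of no.enum ('port'/'ports', numbers, roman numerals) are
# dropped from the enumeration:
# GetNameQuery("Port Said", "p")
# # "(p.name ilike '%Port%Said%' or p.name ilike '%Said%')"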
CheckImoVessel<-function(
csv,
tname.vessel
){
#tname.vessel<-"ves"
#print(csv)
is.emptyvessel<-F
tmp.notes <- ""
DropTable(tname.vessel)
psql("select vessel, a.imo, a.name
into temporary ", tname.vessel,
" from tanker a, live.as_vessel_exp b where a.imo = ", csv[,'imo'],
" and a.imo = b.imo")
#table.vessel<-GetTable(tname.vessel)
if (!CountTable(tname.vessel)){
tmp.notes<-paste0(tmp.notes,
"No imo = ",
csv[,'imo'] ,
" found in live.as_vessel_exp")
table.imo<-psql("select * from tanker where imo = ", csv[,'imo'])
if (!nrow(table.imo)){
tmp.notes<-paste0(tmp.notes, " (Nor in table 'tanker'.)")
DropTable(tname.vessel)
#print(GetTable(tname.vessel))
psql("select vessel, a.imo, a.name into temporary ",
tname.vessel,
" from tanker a, live.as_vessel_exp b where name ilike '",
csv[, 'vessel'], "' and a.imo = b.imo")
#table.liketanker<-GetTable(tname.vessel)
if (!CountTable(tname.vessel)) {
DropTable(tname.vessel)
psql("select vessel, a.imo, a.name into temporary ",
tname.vessel,
" from tanker a, live.as_vessel_exp b where ",
GetNameQuery(csv[, 'vessel'], 'a'),
" and a.imo = b.imo")
#table.liketanker<-GetTable(tname.vessel)
if (!CountTable(tname.vessel)) {
#DropTable(tname.vessel)
tmp.notes<-paste0(tmp.notes,
"\nAnd no vessel named '",
csv[, 'vessel'],
"' or alike found in tanker/live.as_vessel_exp.")
is.emptyvessel <-T
}else{
# tmp.notes<-paste0(tmp.notes,
# "\nAnd only similar vessel(imo) ",
# GetColAgg(tname.vessel,
# "name || '(' || imo || ')'"),
# ' found in tanker/live.as_vessel_exp.')
}
}else{
#print(1)
# tmp.notes<-paste0(tmp.notes,
# "\nVessel ", csv[, 'vessel'], " found in tanker, imo(s): ",
# GetColAgg(tname.vessel, 'imo'))
}
# notes<-NotePaste("No imo = ",
# csv[,'imo'] ,
# " found in live.as_vessel_exp.",
# notes, note.pos = 0)
# return (data.frame(notes = notes))
}else{
#assume imo is unique.
if (toupper(table.imo[,'name']) == toupper(csv[,'vessel'])){
tmp.notes<-paste0(tmp.notes, " (But found in tanker, name matched: ",
table.imo[,'name'],")")
}else{
tmp.notes<-paste0(tmp.notes, " (But found in tanker, name NOT matched: ",
table.imo[,'name'], ")")
}
}
}
return (list(tmp.notes, is.emptyvessel))
}
CheckArr<-function(csv,
closest.arr,
tname.vessel,
tname.cmdtycode,
tname.cmdtypoi,
notes){
num.dscpc <- 0
num.unkwn <- 0
#check vessel name
closest.vessel<-psql("select name, imo from ",
tname.vessel,
" where vessel = ",closest.arr[, 'vessel'])[1,]
if (toupper(closest.vessel[, 'name']) != toupper(csv[, 'vessel'])){
notes<-NotePaste(notes,
"only similar vessel(imo) ",
closest.vessel[,"name"],
" (", closest.vessel[,'imo'], ")",
' found in tanker/live.as_vessel_exp.')
num.dscpc <- num.dscpc + 1
if (closest.vessel[, 'imo'] != csv[, 'imo']){
notes<-NotePaste(notes,
"Imo not matched either.", sep = ' ')
}
}else if (closest.vessel[, 'imo'] != csv[, 'imo']){
notes<-NotePaste(notes,
"Vessel ", closest.vessel[, 'name'], " found with different imo(s): ",
closest.vessel[, 'imo'])
num.dscpc <- num.dscpc + 1
}
#check cmdty
grade.cmdty<- psql("select cmdty from ",
tname.cmdtycode,
" where code = '",
csv[, 'grade'],
"';")
if (!nrow(grade.cmdty) || !any(!is.na(grade.cmdty[,'cmdty']))){
notes<-NotePaste(notes,
"No cmdty code(s) found for grade ",
csv[, 'grade'],
". (Maybe not our focus?)")
num.unkwn <- num.unkwn + 1
}else{
grade.cmdty <- na.omit(grade.cmdty)
if(nrow(grade.cmdty)>1){
notes<-NotePaste(notes,
"WARNING: Found >1 crude/product_codes for grade ",
csv[, 'grade'],
".")
}
if (tolower(csv[, 'direction']) == 'transfer'){
notes<-NotePaste(notes, "Direction is 'Transfer': ignored.")
}else{
poi.cmdty <- psql("select ", GetCmdty(csv[, 'direction']),
" from ", tname.cmdtypoi,
" where poi = ",
closest.arr[, 'poi'],
";")
isnot.inpoicmdty <- sapply(grade.cmdty[,'cmdty'],
function(i){
return(!grepl(i, poi.cmdty))
})
if (any(isnot.inpoicmdty)){
#print(csv)
notes<-NotePaste(notes,
'Poi ', closest.arr[,'poi'],
' may be able to ',
tolower(csv[, 'direction']),' ',
grade.cmdty[isnot.inpoicmdty,
'cmdty'])
num.dscpc <- num.dscpc + 1
}
# print(grade.cmdty[sapply(grade.cmdty[,'cmdty'],
# function(i){
# return(!grepl(i, poi.cmdty))
# }),
# 'cmdty'])
}
}
#check draught
is.completedgh <-
!isemptydgh(closest.arr[,'draught_arrive']) &&
!isemptydgh(closest.arr[,'draught_depart'])
if (is.completedgh &&
!isemptydir(closest.arr[,'loadunl'])) {
dghdiff<- round(closest.arr[,'draught_depart'] -
closest.arr[,'draught_arrive'], 1)
if (ismatchdir(closest.arr[,'loadunl'],
closest.arr[,'draught_arrive'],
closest.arr[,'draught_depart'])){
if (!ismatchdir.loadunl(csv[, 'direction'],
closest.arr[,'loadunl'])) {
notes<-NotePaste(notes, 'Direction not matched: ',
closest.arr[,'loadunl'],
' in asvt_arrival.')
num.dscpc <- num.dscpc + 1
#has.othererror<- TRUE
}
}else{
#print (csv)
notes<-NotePaste(notes,
'Draught change (',
dghdiff ,
') and loadunl (',
closest.arr[,'loadunl'],
') in asvt_arrival not matched: agent showed ',
csv[, 'direction'])
num.dscpc <- num.dscpc + 1
#has.othererror <- TRUE
}
}else if (!is.completedgh &&
!isemptydir(closest.arr[,'loadunl'])){
if (!ismatchdir.loadunl(csv[, 'direction'],
closest.arr[,'loadunl'])) {
notes<-NotePaste(notes,
'Direction not matched: ',
closest.arr[,'loadunl'],
' in asvt_arrival.')
num.dscpc <- num.dscpc + 1
}else{
notes<-NotePaste(notes,
'(Direction matches.)')
}
notes<- NotePaste(notes,
'Draught change not found in asvt_arrival.')
num.unkwn <- num.unkwn + 1
#has.othererror <- TRUE
}else if (is.completedgh &&
isemptydir(closest.arr[,'loadunl'])){
dghdiff<- round(closest.arr[,'draught_depart'] -
closest.arr[,'draught_arrive'], 1)
if (!ismatchdir.dgh(csv[, 'direction'], dghdiff)){
notes<-NotePaste(notes,
'Direction not matched: ',
'Draught change ',
dghdiff,
' in asvt_arrival.')
num.dscpc <- num.dscpc + 1
}else{
notes<-NotePaste(notes,
'(Draught change ',
dghdiff, ' which matches.)')
}
notes<- NotePaste(notes,
'Load direction not found in asvt_arrival.')
num.unkwn <- num.unkwn + 1
#has.othererror <- TRUE
}else{
notes<- NotePaste(notes,
'No draught records or loadunl found in asvt_arrival, direction not checked.')
num.unkwn <- num.unkwn + 2
#has.othererror <- TRUE
}
#check date, destination_arrive
closest.poiportname<-psql("select port.name from as_poi, port where as_poi.port = port.code and poi = ",
closest.arr[, 'poi'])[1,1]
ismatch.destination_arrive <-
!is.na(closest.arr[,'destination_arrive']) &&
(toupper(closest.arr[,'destination_arrive']) == toupper(csv[,'port']))
ismatch.poiportname<-
toupper(closest.poiportname) == toupper(csv[, 'port'])
if (!closest.arr[,'diff']){
if (!ismatch.destination_arrive && !ismatch.poiportname){
print(closest.poiportname)
print(csv[,'port'])
if (any(sapply(GetNameEnum(toupper(csv[,'port']),
sep = ' ',
ends = ''),
function(i){grepl(i,
toupper(closest.poiportname))}))){
notes<-NotePaste(paste0(csv[, 'operation'],
": date_arrive matches operation_date by dubious port '",
closest.poiportname, "' (poi ",
closest.arr[,'poi'],
") not matching port ",
csv[,'port'], " (",
PluralPhrase("discrepancy", num.dscpc + 1), ", ",
num.unkwn, " unknown. See below if any)."),
notes, note.pos = 0)
}else if (any(sapply(GetNameEnum(toupper(csv[,'port']),
sep = ' ',
ends = ''),
function(i){grepl(i,
toupper(closest.arr[,'destination_arrive']))}))){
notes<-NotePaste(paste0(csv[, 'operation'],
": date_arrive matches operation_date by dubious destination_arrive '",
closest.arr[,'destination_arrive'], "', while (poi ",
closest.arr[,'poi'],", port '",
closest.poiportname,
"') not matching port ",
csv[,'port'], " (",
PluralPhrase("discrepancy", num.dscpc + 2), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}else{
notes<-NotePaste("(will further check destination_arrive and port: ",
closest.arr[,'destination_arrive'],', ',
closest.arr[,'poi'],' at ',
closest.poiportname,")",
notes, note.pos = 0)
}
}else if(!ismatch.destination_arrive && ismatch.poiportname){
notes<-NotePaste(paste0(csv[, 'operation'],
": date_arrive matches operation_date (",
PluralPhrase("discrepancy", num.dscpc), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}else if(ismatch.destination_arrive && !ismatch.poiportname){
notes<-NotePaste(paste0(csv[, 'operation'],
": date_arrive matches operation_date by destination_arrive, while (poi ",
closest.arr[,'poi'],", port '",
closest.poiportname,
"') not matching port ",
csv[,'port'], " (",
PluralPhrase("discrepancy", num.dscpc + 1), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}else{
notes<-NotePaste(paste0(csv[, 'operation'],
": date_arrive matches operation_date (",
PluralPhrase("discrepancy", num.dscpc), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}
}else{
if (!ismatch.destination_arrive && !ismatch.poiportname){
if (any(sapply(GetNameEnum(toupper(csv[,'port'])),
function(i){grepl(i,
toupper(closest.poiportname))}))){
notes<-NotePaste(paste0('Dubious ',
daystonote(closest.arr[ ,'diff']),' ',
csv[, 'operation'],
" found by similar port '",
closest.poiportname, "' (poi ",
closest.arr[,'poi'],
") not matching port ",
csv[,'port'], " (",
PluralPhrase("discrepancy", num.dscpc + 2), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}else if (any(sapply(GetNameEnum(toupper(csv[,'port'])),
function(i){grepl(i,
toupper(closest.arr[,'destination_arrive']))}))){
notes<-NotePaste(paste0('Dubious ',
daystonote(closest.arr[ ,'diff']),' ',
csv[, 'operation'],
" found by similar destination_arrive '",
closest.arr[,'destination_arrive'], "', while (poi ",
closest.arr[,'poi'],", port '",
closest.poiportname,
"') not matching port ",
csv[,'port'], " (",
PluralPhrase("discrepancy", num.dscpc + 3), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}else{
notes<-NotePaste("(will further check destination_arrive and port: ",
closest.arr[,'destination_arrive'],', ',
closest.arr[,'poi'],' at ',
                       closest.poiportname,") and ",
csv[,'port'],
notes, note.pos = 0)
}
}else if(!ismatch.destination_arrive && ismatch.poiportname){
notes<-NotePaste(paste0('Dubious ',
daystonote(closest.arr[ ,'diff']),' ',
csv[, 'operation'],
" found (",
PluralPhrase("discrepancy", num.dscpc + 1), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}else if(ismatch.destination_arrive && !ismatch.poiportname){
notes<-NotePaste(paste0('Dubious ',
daystonote(closest.arr[ ,'diff']),' ',
csv[, 'operation'],
" found by destination_arrive, while (poi ",
                                  closest.arr[,'poi'],", port '",
closest.poiportname,
"') not matching port ",
csv[,'port'], " (",
PluralPhrase("discrepancy", num.dscpc + 2), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}else{
notes<-NotePaste(paste0('Dubious ',
daystonote(closest.arr[ ,'diff']),' ',
csv[, 'operation'],
" (",
PluralPhrase("discrepancy", num.dscpc + 1), ", ",
num.unkwn,
" unknown. See below if any)."),
notes, note.pos = 0)
}
#else{
# notes<-NotePaste(paste0(csv[,'operation'],
# ' date is ',
# daystonote(table.arrall[1,'diff'], F),
# '.'),
# notes)
# }
}
return(notes)
}
#vessel-imo: live.as_vessel_exp
IterInchcape<-function(csv){
notes<-""
#check vessel
tname.vessel<-"ves"
#DropTable(tname.vessel)
# psql("select vessel
# into temporary ", tname.vessel,
# " from live.as_vessel_exp where imo = ", csv[,'imo'])
# #table.vessel<-GetTable(tname.vessel)
# if (!CountTable(tname.vessel)){
# notes<-NotePaste("No imo = ",
# csv[,'imo'] ,
# " found in live.as_vessel_exp",
# notes, note.pos = 0)
# table.imo<-psql("select * from tanker where imo = ", csv[,'imo'])
# if (!nrow(table.imo)){
# notes<-NotePaste(notes, " (Nor in table 'tanker'.)", sep = '')
# return (data.frame(notes = notes))
# }else{
# #assume imo is unique.
# if (toupper(table.imo[,'name']) == toupper(csv[,'vessel'])){
# notes<-NotePaste(notes, " (But found in tanker, name matched: ",
# table.imo[,'name'],")",
# sep = '')
# }else{
# notes<-NotePaste(notes, " (But found in tanker, name NOT matched: ",
# table.imo[,'name'],")",
# sep = '')
# }
# }
# }
result.checkimo<-CheckImoVessel(csv, tname.vessel)
notes<-NotePaste(result.checkimo[[1]],
notes, note.pos = 0)
if (result.checkimo[[2]]){
#DropTable(tname.vessel)
return (data.frame(notes = notes, stringsAsFactors = F))
}
#check ports
tname.portpoi<-'port_poi'
DropTable(tname.portpoi)
psql("select as_poi.poi, port.name into temporary ",
tname.portpoi, " from as_poi, port
where as_poi.port = port.code
and port.name ilike '", csv[,'port'],"'")
#table.portpoi<-GetTable(tname.portpoi)#!nrow(table.portpoi)
#is.similarports<-FALSE
if (!CountTable(tname.portpoi)){
DropTable(tname.portpoi)
psql("select as_poi.poi, port.name into temporary ",
tname.portpoi, " from as_poi, port
where as_poi.port = port.code
and ", GetNameQuery(csv[,'port'], 'port'))
#table.portpoi<-GetTable(tname.portpoi)
if (!CountTable(tname.portpoi)){
notes<-NotePaste(notes, "No port '",csv[,'port'],"' or alike found.")
}else{
notes<-NotePaste(notes, "Only similar ports like '",csv[,'port'],"' found.")
#is.similarports<-TRUE
}
}
#check arrival from port, poi
tname.arrpoi<-'arr_poi'
DropTable(tname.arrpoi)
psql("select
* into temporary ", tname.arrpoi, "
from
asvt_arrival
where
poi in (select poi from ", tname.portpoi,")
and vessel in (select vessel from ", tname.vessel,")")
tname.arrdest<-'arr_dest'
DropTable(tname.arrdest)
# psql("select
# * into temporary ", tname.arrdest, "
# from
# asvt_arrival
# where
# destination_arrive ilike ", csv[,'port'],
# "and vessel in (select vessel from ", tname.vessel, ")")
psql("select
* into temporary ", tname.arrdest, "
from
asvt_arrival
where ",
GetNameQuery(csv[,'port'], '' ,'destination_arrive'),
"and vessel in (select vessel from ", tname.vessel, ")")
if (!CountTable(tname.arrpoi) && CountTable(tname.arrdest) > 0){
notes<-NotePaste(notes,
"Poi ",
psql("select string_agg(distinct(poi)::text, ', ') from ",
tname.arrdest)[1,1],
" seems not in port ",
csv[,'port'],
". (maybe in ",
psql("select string_agg(distinct(name)::text, ', ') from as_poi, port where as_poi.port = port.code and as_poi.poi in (select poi from ",
tname.arrdest,
")")[1,1],
" ?)")
}
tname.arrall<-'arr_all'
DropTable(tname.arrall)
psql("select
abs(date_arrive::date - date '", csv[,'operation_date'],"'),
(date_arrive::date - date '", csv[,'operation_date'],"') as diff,
* into temporary ", tname.arrall, "
from(
select * from ", tname.arrpoi,
" union
select * from ", tname.arrdest,
") t
order by abs")
closest.arr <- GetTable(tname.arrall, 1)
if (!nrow(closest.arr)){
if (CountTable(tname.vessel) > 0){
found.vessel<-psql("select name, imo from ",
tname.vessel)
#print(found.vessel)
if (any(toupper(found.vessel[, 'name']) !=
toupper(csv[, 'vessel']))){
notes<-NotePaste(notes,
"similar vessel(imo) ",
paste0(found.vessel[,"name"],
"(", found.vessel[,'imo'], ")",
collapse = ', '),
' found in tanker/live.as_vessel_exp.')
# num.dscpc <- num.dscpc + 1
# if (closest.vessel[, 'imo'] != csv[, 'imo']){
# notes<-NotePaste(notes,
# "Imo not matched either.", sep = ' ')
# }
}else if (any(found.vessel[, 'imo'] != csv[, 'imo'])){
notes<-NotePaste(notes,
"Vessel ", found.vessel[1, 'name'], " found with different imo(s): ",
paste0(found.vessel[, 'imo'], collapse = ','))
#num.dscpc <- num.dscpc + 1
}
}
notes<-NotePaste(notes, "No records found in asvt_arrival.")
}else{
#print(table.arrall)
#closest.arr <- table.arrall[1,]
# if (!closest.arr[,'diff']){
# if (!is.na(closest.arr[,'destination_arrive'])
# && (toupper(closest.arr[,'destination_arrive'])
# != toupper(csv[,'port']))){
# notes<-NotePaste(paste0(csv[, 'operation'],
# " operation_date matches date_arrive, but destination_arrive '",
# closest.arr[,'destination_arrive'],
# "' does not match. (see below if any)"),
# notes, note.pos = 0)
# }else{
# notes<-NotePaste(paste0(csv[, 'operation'],
# " operation_date matches date_arrive. (see below if any)"),
# notes, note.pos = 0)
# }
# }else{
notes<-CheckArr(csv,
closest.arr,
tname.vessel,
tname.cmdtycode,
tname.cmdtypoi,
notes)
if (closest.arr[,'diff'] < 0){
later.arr <- psql("select * from ",
tname.arrall,
" where diff > 0 limit 1")
#need is.empty.df(.df) method if .df = SOMEEMPTYDF[1,].
#it has zero cols but 1 row.
if (nrow(later.arr) > 0){
notes<- NotePaste(notes,
"Also found: ",
CheckArr(csv,
later.arr,
tname.vessel,
tname.cmdtycode,
tname.cmdtypoi,
""))
}else{
notes<-NotePaste(notes, 'No later arrivals found.')
}
}
# }
}
# DropTables(tname.vessel,
# tname.portpoi,
# tname.arrdest,
# tname.arrpoi)
return (data.frame(notes = notes, stringsAsFactors = F))
}
walkinchcape<-function(start = 1,
end = nrow(csv),
csv=ccsv,
erase = (start == 1),
var){
#csv<-data.frame(csv, stringsAsFactors = F)
if (erase){
eval(substitute(
var <- data.frame(stringsAsFactors = F)
),
envir = .GlobalEnv)
#inchcapecheck<<-data.frame()
}
for (i in start:end){
print(i)
eval(substitute(
var<-rbind.fill(var, IterInchcape(csv[i,]))
),
envir = .GlobalEnv)
#inchcapecheck<<-rbind.fill(inchcapecheck, IterInchcape(csv[i,]))
}
}
insertinchcape<-function(pos = GetPos(pat, vars),
start = pos[1],
end = pos[length(pos)],
csv,
pat = 'No imo = ',
vars = inchcapecheck.noimo
){
#notes <- ""
for (i in pos[pos >= start & pos <= end]){
print(i)
#notes<-c(notes, IterInchcape(csv[i,]))
#print(check)
eval(substitute(vars[i,]<- IterInchcape(csv[i,])), envir = .GlobalEnv)
#inchcapecheck.noimo[i,]<<- IterInchcape(csv[i,])
#print(inchcapecheck.noimo[i,])
}
#eval(substitute(var[pos,]<- ), envir = .GlobalEnv)
} |
9da4345ba274d1a831215424250e209bca0bc3f3 | ef1dba9a14358ff3866b69f9cdab7f00abdefbfc | /rcode/tabular/create_tables.R | 6044d17f495fef7d2eb9283166283ced49f8e287 | [
"MIT"
] | permissive | LindaNab/me_neo_motivatingexample | 52edc72dd62703abda6c09fefb365a91f554c479 | 09fd81f8a24600729615bbfe1f64f637ff0f5435 | refs/heads/master | 2023-04-08T12:58:16.066590 | 2021-03-30T21:53:03 | 2021-03-30T21:53:03 | 253,829,214 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,455 | r | create_tables.R | #############################################################
## Internal validation sampling strategies for exposure
## measurement error correction
## Motivating Example
##
## Tabularise summaries
## lindanab4@gmail.com - 20200303
#############################################################
##############################
# 0 - Load librairies + source code
##############################
library(xtable)
library(taRifx)
source(file = "./rcode/analyses/analysis_scen.R")
sum_analysis <-
readRDS(file = "./results/summaries/summary.Rds")
sum_init_analysis <-
readRDS(file = "./results/summaries/summary_init_analysis.Rds")
##############################
# 1 - Helper functions
##############################
# Function that creates a string of effect_est% (ci_lower%-ci_upper%)
effect_est_and_ci <- function(row_of_summary){
effect_est <- round(as.numeric(row_of_summary[["effect_est"]]), 0)
ci_lower <- round(as.numeric(row_of_summary[["ci_lower"]]), 0)
ci_upper <- round(as.numeric(row_of_summary[["ci_upper"]]), 0)
paste0(effect_est, "%", " (", ci_lower, "%-", ci_upper, "%)")
}
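# Illustrative example (commented out; values are arbitrary):
# effect_est_and_ci(c(effect_est = "12.4", ci_lower = "8.1", ci_upper = "16.9"))
# # "12% (8%-17%)"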
##############################
# 2 - Create table 1
##############################
# Select values needed from summary object
table1 <-
sum_init_analysis[, c("method",
"effect_est",
"ci_lower",
"ci_upper")]
# Format estimates from analysis
table1 <- cbind(table1[, c("method")],
c("", ""),
apply(table1,
1,
FUN = effect_est_and_ci),
c("", ""))
table1 <- as.data.frame(table1, stringsAsFactors = F)
# Change columnames
colnames(table1) <- c("Method","", "Effect size (CI)", "")
table1$Method <- c("Reference",
"Naive")
##############################
# 3 - Create table 2
##############################
# Select values needed from summary object
table2 <-
sum_analysis[, c("method",
"sampling_strat",
"effect_est",
"ci_lower",
"ci_upper")]
# Format estimates from analysis
table2 <- cbind(table2[, c("method", "sampling_strat")],
apply(table2,
1,
FUN = effect_est_and_ci))
# Format table2 to long format
table2 <- reshape(table2,
idvar = "method",
timevar = "sampling_strat",
direction = "wide")
table2 <- unfactor.data.frame(table2)
# Change column names
colnames(table2) <- c("Method","", "Effect size (CI)", "")
table2 <- rbind.data.frame(c("", "Random", "Uniform", "Extremes"),
table2)
# Fill in the method names
table2$Method <- c("",
"Complete case",
"Regression calibration",
"Efficient regression calibration",
"Inadmissible regression calibration")
# Create TeX table combining table1 and table2
caption <- "Estimated association between visceral adipose tissue (VAT) and insulin resistance using different methods to deal with the induced measurement error if VAT measures are replaced by WC measures"
table <- rbind(table1, table2)
table_xtable <- print(xtable(table,
caption = caption),
include.rownames = FALSE)
file_con <- file("./results/tables/table1.txt")
writeLines(table_xtable, file_con)
close(file_con)
|
9dfb57738cda9da78240b3709946d4d4f5114b42 | c58a1595115fea554db8cd6578279f574eabfa0e | /man/chk_multiple_of_n.Rd | 5ec96213c132544469e0c0fd2f01cacbc2daff2e | [
"MIT"
] | permissive | bayesiandemography/demcheck | 129aca86fecda02be83bea73e639fb45d366c651 | c52c3e4201e54ead631e587ebf94f97f9c7a05a0 | refs/heads/master | 2021-12-28T15:40:54.771894 | 2021-12-17T03:10:50 | 2021-12-17T03:10:50 | 200,993,678 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 717 | rd | chk_multiple_of_n.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/chk-composite.R, R/err-composite.R
\name{chk_multiple_of_n}
\alias{chk_multiple_of_n}
\alias{err_multiple_of_n}
\title{Check that 'x1' is a multiple of 'n'}
\usage{
chk_multiple_of_n(x, name, n)
err_multiple_of_n(x, name, n)
}
\arguments{
\item{x}{A scalar.}
\item{name}{The name for \code{x} that
will be used in error messages.}
\item{n}{A scalar.}
}
\description{
\code{chk_multiple_of_n} differs from
\code{\link{chk_multiple_of}} only in the
error message. \code{chk_multiple_of_n} refers
to the value of \code{n}, rather than to the
variable \code{"n"}.
}
\examples{
x <- 10L
n <- 2L
chk_multiple_of_n(x = x, n = n, name = "x")
}
|
2ab5ef7969436aa9516bd7084845461c5a9c07f9 | 77f3698afbe1b46e0689669d2d25ac569698184e | /start_script.R | 56fa22175eac99e6e01044451ef97421429295fe | [] | no_license | kid-codei/website | af56b1dc8bede643be7418100cf12f8e96fc1606 | b1b2d869212a586d94a0d904c86c725c5389befb | refs/heads/master | 2023-04-27T06:12:08.588894 | 2023-04-18T04:55:26 | 2023-04-18T04:55:26 | 227,481,524 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 122 | r | start_script.R | install.packages("blogdown")
library(blogdown)
blogdown::install_hugo()
blogdown::new_site(theme="nurlansu/hugo-sustain")
|
d2f4f1a2f70530b2a946d8da4fb7b91a852dfc1e | 806d34708c7e3a6fd99d5a626e4e082e4c9d95b7 | /RBD.R | 0670381ce9c1cdb830d7eb7764cd5426efe6ad7e | [] | no_license | andrebuenogama/STA6166material | 0a6423f9e67dc036f165f9362f69405621e063e5 | 4262922abc771c848d03adc531049bb5a30d5033 | refs/heads/master | 2022-10-08T05:15:54.882314 | 2020-06-12T16:59:17 | 2020-06-12T16:59:17 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,387 | r | RBD.R | ex0603dat=read.table("http://www.stat.ufl.edu/~winner/data/biostat/ex0603.dat",header=FALSE,col.names=c("subj", "intagnt", "thcl"))
head(ex0603dat)
# Create qualitative factor variable for intagnt, and assign names to levels
fintagnt=factor(ex0603dat$intagnt, levels=1:3,
labels=c("Placebo", "Famotidine", "Cimetidine"))
# We have to assign subj (Subject id) as a factor level or the linear model will treat
# it as a numeric (continuous) variable and fit a regression
ex0603=data.frame(thcl=ex0603dat$thcl, fintagnt, subj=factor(ex0603dat$subj))
attach(ex0603)
head(ex0603)
# create easy to view table
table=xtabs(thcl~subj+fintagnt);table
round(addmargins(table,c(1,2),FUN=mean),2)
# Fit the ANOVA for the RBD with subject and interacting agent as independent variables
ex0603.rbd=aov(thcl~fintagnt+subj,data=ex0603)
anova(ex0603.rbd)
ex0603.Tukey=TukeyHSD(ex0603.rbd,"fintagnt",level=0.95)
print(ex0603.Tukey)
#windows(width=5,height=5,pointsize=10)
plot(ex0603.Tukey, sub="Theophylline Data", adj=0)
mtext("Tukey Honest Significant Differences",side=3,line=0.5)
# Do Bonferoni using our own made function
source("http://www.stat.ufl.edu/~athienit/Bonf.R")
Bonf(thcl~fintagnt+subj,data=ex0603,level=0.95)
# Let's calculate the Relative Efficiency of the RBD relative to a CRD:
# RE = ((b-1)*MSB + b*(t-1)*MSE) / ((b*t-1)*MSE)
df=anova(ex0603.rbd)[,"Df"]
MS=anova(ex0603.rbd)[,"Mean Sq"]
(df[2]*MS[2]+(df[2]+1)*df[1]*MS[3])/(sum(df)*MS[3])
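# The one-liner above is the textbook relative efficiency of the RBD versus a
# completely randomized design; here is the same value with named pieces (the
# names n.blocks, n.trt, MSB, MSE are ours, not from the original script):
n.blocks <- df[2] + 1; n.trt <- df[1] + 1
MSB <- MS[2]; MSE <- MS[3]
((n.blocks - 1) * MSB + n.blocks * (n.trt - 1) * MSE) / ((n.blocks * n.trt - 1) * MSE)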
|
92318af67ecfc76917f32d76e8a81586fce92485 | bbf82f06ad6eb63970964a723bc642d2bcc69f50 | /R/expressionPlot.R | f9a9403af08b952a98f57fe53e864be65d1c7fd7 | [] | no_license | cran/BiSEp | 60af18b5fefd060e9b77df54ae4d5b169d1f8987 | fc7bae903d881648e84d7b31d20c96b99b529f7d | refs/heads/master | 2021-01-09T22:39:07.127740 | 2017-01-26T11:03:06 | 2017-01-26T11:03:06 | 17,678,095 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,038 | r | expressionPlot.R | expressionPlot <- function(bisepData=data, gene1, gene2)
{
if(missing(bisepData)) stop("Need to input expression data matrix")
if(missing(gene1)) stop("Need to specify gene 1")
if(missing(gene2)) stop("Need to specify gene 2")
# Extract objects from input + stop if object incorrect
if("BISEP" %in% names(bisepData) && "BI" %in% names(bisepData) && "DATA" %in% names(bisepData))
{
biIndex <- bisepData$BI
big.model <- bisepData$BISEP
data2 <- bisepData$DATA
}
else
{
stop("Input object isn't from BISEP function")
}
# Do some gene formatting
gene1 <- toupper(gene1)
gene2 <- toupper(gene2)
if(length(which(rownames(data2) %in% gene1)) == 0) stop("Gene 1 not recognised")
if(length(which(rownames(data2) %in% gene2)) == 0) stop("Gene 2 not recognised")
plot(as.numeric(data2[gene1,]), as.numeric(data2[gene2,]), pch=16, main=paste(gene1, "vs.", gene2, "Log2 Gene Expression plot", sep=" "), xlab=gene1, ylab=gene2)
abline(v=big.model[gene1,1], col="red")
abline(h=big.model[gene2,1], col="red")
}
|
c92259d4d362fc4d8f1cf42877f926bd9cad896d | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/nlme/examples/simulate.lme.Rd.R | 92a6ec1ee898de198a6ee94181b2740be61499d2 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 404 | r | simulate.lme.Rd.R | library(nlme)
### Name: simulate.lme
### Title: Simulate Results from 'lme' Models
### Aliases: simulate.lme plot.simulate.lme print.simulate.lme
### Keywords: models
### ** Examples
## No test:
orthSim <-
simulate.lme(list(fixed = distance ~ age, data = Orthodont,
random = ~ 1 | Subject), nsim = 200,
m2 = list(random = ~ age | Subject))
## End(No test)
|
72bfa77de8964fbf1750476961c9fa8e4a738738 | 9566a11ec3b4b437c2cfc9d2a6816e063d5f7f6a | /R/plot_case.R | 236d7dfcd5be283656234d5b2b3e01566ddef903 | [
"MIT"
] | permissive | EdoardoCostantini/plotmipca | 2d1b67c1b767936775bb74c5d4e55b4c1c2d750f | 7c8e51a1d59e6c3c13b531588a11e7fd84845662 | refs/heads/master | 2023-06-16T11:35:26.587626 | 2023-05-24T22:39:08 | 2023-05-24T22:39:08 | 518,990,480 | 0 | 0 | MIT | 2023-05-24T22:39:08 | 2022-07-28T20:48:20 | R | UTF-8 | R | false | false | 1,215 | r | plot_case.R | #' Plot case study results
#'
#' Generate the main plot for the case study.
#'
#' @param results object containing results produced by the simulation study
#' @param y dependent variable in the substantive analysis model
#' @return Returns the ggplot
#' @author Edoardo Costantini, 2023
#' @examples
#' # Define example inputs
#' results <- dataFdd
#' y <- "yp" # column holding the PTSD-RI parent score
#'
#' # Use the function
#' plot_case(
#'   results = results,
#'   y = y
#' )
#'
#' @export
plot_case <- function(results, y) {
# Plot YP
ggplot2::ggplot(
data = results,
ggplot2::aes(
x = Time,
y = .data[[y]],
group = rep
)
) +
ggplot2::facet_grid(
rows = ggplot2::vars(trt),
cols = ggplot2::vars(imp),
scales = "free"
) +
ggplot2::geom_line(linewidth = .1) +
ggplot2::theme(
# Text
text = ggplot2::element_text(size = 10),
# Legend
legend.position = "bottom",
# Background
panel.border = ggplot2::element_rect(fill = NA, color = "gray"),
panel.background = ggplot2::element_rect(fill = NA)
)
} |
3046510f6b32478d3ee3967b56201e956274e9cc | ad7d43e8486a9548cfa2ee8bd796f785f0d3a6e4 | /man/StrikeDuration.Rd | 80f469e90418c53bc04f43080daa465d85e91fe3 | [] | no_license | arubhardwaj/AER | 2dbc76eed0f6afe509414344e88f425af7f60a02 | 1306cd80cf990b8a87add2484fa2a6fba4095382 | refs/heads/master | 2022-04-01T03:46:48.620767 | 2020-02-06T05:20:52 | 2020-02-06T05:20:52 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,004 | rd | StrikeDuration.Rd | \name{StrikeDuration}
\alias{StrikeDuration}
\title{Strike Durations}
\description{
Data on the duration of strikes in US manufacturing industries, 1968--1976.
}
\usage{data("StrikeDuration")}
\format{
A data frame containing 62 observations on 2 variables for the period 1968--1976.
\describe{
\item{duration}{strike duration in days.}
\item{uoutput}{unanticipated output (a measure of unanticipated aggregate
industrial production net of seasonal and trend components).}
}
}
\details{
The original data provided by Kennan (1985) are on a monthly basis, for the period 1968(1) through 1976(12). Greene (2003) only provides the June data for each year. Also, the duration for observation 36 is given as 3 by Greene while Kennan has 2. Here we use Greene's version.
\code{uoutput} is the residual from a regression of the logarithm of industrial production in manufacturing on time, time squared, and monthly dummy variables.
}
\source{
Online complements to Greene (2003).
\url{http://pages.stern.nyu.edu/~wgreene/Text/tables/tablelist5.htm}
}
\references{
Greene, W.H. (2003). \emph{Econometric Analysis}, 5th edition. Upper Saddle River, NJ: Prentice Hall.
Kennan, J. (1985). The Duration of Contract Strikes in US Manufacturing.
\emph{Journal of Econometrics}, \bold{28}, 5--28.
}
\seealso{\code{\link{Greene2003}}}
\examples{
data("StrikeDuration")
library("MASS")
## Greene (2003), Table 22.10
fit_exp <- fitdistr(StrikeDuration$duration, "exponential")
fit_wei <- fitdistr(StrikeDuration$duration, "weibull")
fit_wei$estimate[2]^(-1)
fit_lnorm <- fitdistr(StrikeDuration$duration, "lognormal")
1/fit_lnorm$estimate[2]
exp(-fit_lnorm$estimate[1])
## Weibull and lognormal distribution have
## different parameterizations, see Greene p. 794
## Greene (2003), Example 22.10
library("survival")
fm_wei <- survreg(Surv(duration) ~ uoutput, dist = "weibull", data = StrikeDuration)
summary(fm_wei)
}
\keyword{datasets}
|
b44d1a9bef7a494df6009b74085ac810740e64d6 | 37b51ada441c3679a42b82754d0e2f24c3ce70a2 | /man/AFMImageAnalyser-class-initialize.Rd | 76629ef84d2738324ceca4cc53564348b35df448 | [] | no_license | cran/AFM | a01d77751de195ca8a701cdf44ee3134ebaa00b4 | 98e8b5222e078af4d2840a20a2b58ec2196d684d | refs/heads/master | 2021-05-04T11:23:09.648739 | 2020-10-07T07:00:06 | 2020-10-07T07:00:06 | 48,076,498 | 1 | 1 | null | null | null | null | UTF-8 | R | false | true | 1,438 | rd | AFMImageAnalyser-class-initialize.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/AFMImageAnalyser.R
\name{initialize,AFMImageAnalyser-method}
\alias{initialize,AFMImageAnalyser-method}
\title{Constructor method of AFMImageAnalyser Class.}
\usage{
\S4method{initialize}{AFMImageAnalyser}(
.Object,
AFMImage,
variogramAnalysis,
psdAnalysis,
fdAnalysis,
gaussianMixAnalysis,
networksAnalysis,
threeDimensionAnalysis,
mean,
variance,
TotalRrms,
Ra,
fullfilename
)
}
\arguments{
\item{.Object}{an AFMImageAnalyser object}
\item{AFMImage}{an \code{AFMImage}}
\item{variogramAnalysis}{\code{\link{AFMImageVariogramAnalysis}}}
\item{psdAnalysis}{\code{\link{AFMImagePSDAnalysis}}}
\item{fdAnalysis}{\code{\link{AFMImageFractalDimensionsAnalysis}}}
\item{gaussianMixAnalysis}{\code{\link{AFMImageGaussianMixAnalysis}}}
\item{networksAnalysis}{\code{\link{AFMImageNetworksAnalysis}}}
\item{threeDimensionAnalysis}{\code{\link{AFMImage3DModelAnalysis}}}
\item{mean}{the mean of heights of the \code{\link{AFMImage}}}
\item{variance}{the variance of heights of the \code{\link{AFMImage}}}
\item{TotalRrms}{the total Root Mean Square Roughness of the \code{\link{AFMImage}} calculated from variance}
\item{Ra}{mean roughness or mean of absolute values of heights}
\item{fullfilename}{to be removed?}
}
\description{
Constructor method of AFMImageAnalyser Class.
}
|
352b8e890c179691baaa2700a4ad326fefdd9d7e | ee19e49beaea1d5c696dc16fd72a30d5e2343540 | /diff.R | d7bfe07b7de4f7c923b99f9599a73c2394b4ed35 | [] | no_license | GintsB/class_imbalance_for_LR | b471ced23b6c5f2577dc2692fddbb0e129602c39 | 667c4f3486d7c874955288142e9ec9390eacf064 | refs/heads/main | 2023-02-24T16:32:25.051088 | 2021-01-24T18:25:22 | 2021-01-24T18:26:38 | 332,520,920 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,035 | r | diff.R | # This file takes the resulting RDS fails generated by `Real_data.R` and extracts
# the main results (AUC and QS values as well as differences between methods (oversampling, SMOTE) and control).
# Nothing needs to be set manually.
library(tidyverse) # General R work
source("functions.R")
data_id_vec <- c(271, 326, 332)
type_vec <- c("MLE", "LASSO")
for (data_id in data_id_vec) {
diff_list <- list()
for (i in seq_along(type_vec)) {
res <- readRDS(file = paste0("data_", data_id, "_", type_vec[i], "_res.RDS"))
diff_list[[i]] <- res$results$values %>%
mutate(AUC_diff_up = `up~AUC` - `control~AUC`,
AUC_diff_smote = `smote~AUC` - `control~AUC`,
QS_diff_up = `up~QS` - `control~QS`,
QS_diff_smote = `smote~QS` - `control~QS`) %>%
pivot_longer(cols = c(contains("diff"), contains("~"))) %>%
mutate(type = type_vec[i])
}
diff <- rbind(diff_list[[1]], diff_list[[2]])
saveRDS(diff, file = paste0("data_", data_id, "_diff.RDS"))
} |
f6134d8c65e759de7a54cd013b56cbaefc1fdcac | 7917fc0a7108a994bf39359385fb5728d189c182 | /cran/paws.machine.learning/man/rekognition_get_face_detection.Rd | 8f46f9b8a8917102628a94554184de702ac844c7 | [
"Apache-2.0"
] | permissive | TWarczak/paws | b59300a5c41e374542a80aba223f84e1e2538bec | e70532e3e245286452e97e3286b5decce5c4eb90 | refs/heads/main | 2023-07-06T21:51:31.572720 | 2021-08-06T02:08:53 | 2021-08-06T02:08:53 | 396,131,582 | 1 | 0 | NOASSERTION | 2021-08-14T21:11:04 | 2021-08-14T21:11:04 | null | UTF-8 | R | false | true | 5,243 | rd | rekognition_get_face_detection.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rekognition_operations.R
\name{rekognition_get_face_detection}
\alias{rekognition_get_face_detection}
\title{Gets face detection results for a Amazon Rekognition Video analysis
started by StartFaceDetection}
\usage{
rekognition_get_face_detection(JobId, MaxResults, NextToken)
}
\arguments{
\item{JobId}{[required] Unique identifier for the face detection job. The \code{JobId} is returned
from \code{\link[=rekognition_start_face_detection]{start_face_detection}}.}
\item{MaxResults}{Maximum number of results to return per paginated call. The largest
value you can specify is 1000. If you specify a value greater than 1000,
a maximum of 1000 results is returned. The default value is 1000.}
\item{NextToken}{If the previous response was incomplete (because there are more faces to
retrieve), Amazon Rekognition Video returns a pagination token in the
response. You can use this pagination token to retrieve the next set of
faces.}
}
\value{
A list with the following syntax:\preformatted{list(
JobStatus = "IN_PROGRESS"|"SUCCEEDED"|"FAILED",
StatusMessage = "string",
VideoMetadata = list(
Codec = "string",
DurationMillis = 123,
Format = "string",
FrameRate = 123.0,
FrameHeight = 123,
FrameWidth = 123
),
NextToken = "string",
Faces = list(
list(
Timestamp = 123,
Face = list(
BoundingBox = list(
Width = 123.0,
Height = 123.0,
Left = 123.0,
Top = 123.0
),
AgeRange = list(
Low = 123,
High = 123
),
Smile = list(
Value = TRUE|FALSE,
Confidence = 123.0
),
Eyeglasses = list(
Value = TRUE|FALSE,
Confidence = 123.0
),
Sunglasses = list(
Value = TRUE|FALSE,
Confidence = 123.0
),
Gender = list(
Value = "Male"|"Female",
Confidence = 123.0
),
Beard = list(
Value = TRUE|FALSE,
Confidence = 123.0
),
Mustache = list(
Value = TRUE|FALSE,
Confidence = 123.0
),
EyesOpen = list(
Value = TRUE|FALSE,
Confidence = 123.0
),
MouthOpen = list(
Value = TRUE|FALSE,
Confidence = 123.0
),
Emotions = list(
list(
Type = "HAPPY"|"SAD"|"ANGRY"|"CONFUSED"|"DISGUSTED"|"SURPRISED"|"CALM"|"UNKNOWN"|"FEAR",
Confidence = 123.0
)
),
Landmarks = list(
list(
Type = "eyeLeft"|"eyeRight"|"nose"|"mouthLeft"|"mouthRight"|"leftEyeBrowLeft"|"leftEyeBrowRight"|"leftEyeBrowUp"|"rightEyeBrowLeft"|"rightEyeBrowRight"|"rightEyeBrowUp"|"leftEyeLeft"|"leftEyeRight"|"leftEyeUp"|"leftEyeDown"|"rightEyeLeft"|"rightEyeRight"|"rightEyeUp"|"rightEyeDown"|"noseLeft"|"noseRight"|"mouthUp"|"mouthDown"|"leftPupil"|"rightPupil"|"upperJawlineLeft"|"midJawlineLeft"|"chinBottom"|"midJawlineRight"|"upperJawlineRight",
X = 123.0,
Y = 123.0
)
),
Pose = list(
Roll = 123.0,
Yaw = 123.0,
Pitch = 123.0
),
Quality = list(
Brightness = 123.0,
Sharpness = 123.0
),
Confidence = 123.0
)
)
)
)
}
}
\description{
Gets face detection results for a Amazon Rekognition Video analysis
started by \code{\link[=rekognition_start_face_detection]{start_face_detection}}.
Face detection with Amazon Rekognition Video is an asynchronous
operation. You start face detection by calling
\code{\link[=rekognition_start_face_detection]{start_face_detection}} which returns
a job identifier (\code{JobId}). When the face detection operation finishes,
Amazon Rekognition Video publishes a completion status to the Amazon
Simple Notification Service topic registered in the initial call to
\code{\link[=rekognition_start_face_detection]{start_face_detection}}. To get the
results of the face detection operation, first check that the status
value published to the Amazon SNS topic is \code{SUCCEEDED}. If so, call
\code{\link[=rekognition_get_face_detection]{get_face_detection}} and pass the job
identifier (\code{JobId}) from the initial call to
\code{\link[=rekognition_start_face_detection]{start_face_detection}}.
\code{\link[=rekognition_get_face_detection]{get_face_detection}} returns an array
of detected faces (\code{Faces}) sorted by the time the faces were detected.
Use MaxResults parameter to limit the number of labels returned. If
there are more results than specified in \code{MaxResults}, the value of
\code{NextToken} in the operation response contains a pagination token for
getting the next set of results. To get the next page of results, call
\code{\link[=rekognition_get_face_detection]{get_face_detection}} and populate the
\code{NextToken} request parameter with the token value returned from the
previous call to \code{\link[=rekognition_get_face_detection]{get_face_detection}}.
}
\section{Request syntax}{
\preformatted{svc$get_face_detection(
JobId = "string",
MaxResults = 123,
NextToken = "string"
)
}
}
\keyword{internal}
|
eaa7187072a7080f547ac4ae580b1a890b6b03d4 | f22f6f06e85e6e77b371cc370163013a8049e245 | /Project 2.R | 2c5351a115a60a57ba161b16ae33264177476272 | [] | no_license | stefkaps/F17-eDA-Project2 | fa3c4701605f8459e32dd79993444ae1312255f2 | 11bf1670a64e935b56b25efe60a9c6573d99a444 | refs/heads/master | 2021-07-12T20:13:09.957844 | 2017-10-14T02:41:03 | 2017-10-14T02:41:03 | 106,124,521 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 9,793 | r | Project 2.R | library(tidyverse)
library(modelr)
require(dplyr)
require(data.world)
require(MASS)
require(ISLR)
require(ggplot2)
data.world::set_config(save_config(auth_token = "eyJhbGciOiJIUzUxMiJ9.eyJzdWIiOiJwcm9kLXVzZXItY2xpZW50OnRhcnJhbnRybCIsImlzcyI6ImFnZW50OnRhcnJhbnRybDo6MDE1OTQxYzQtNTUyZC00YjI3LWIxNGEtYzllN2ExMjYxN2FiIiwiaWF0IjoxNTA1MzEzMjAyLCJyb2xlIjpbInVzZXJfYXBpX3dyaXRlIiwidXNlcl9hcGlfcmVhZCJdLCJnZW5lcmFsLXB1cnBvc2UiOnRydWV9.vWrAbNkyU0mhgsdXXL-bxESWzppmpm8wguw9uI7pJ64ZsDtovi8kbWbPYS5pPcX8DDnVMuYxJJWHhqdxv--R_w"))
#vignette("quickstart", package = "data.world")
project <- "https://data.world/marcusgabe-ut/parkinsons-data"
df <- data.world::query(
data.world::qry_sql("SELECT * FROM parkinsons"),
dataset = project
)
summary(df)
attach(df)
# add column with binary version of status
df = df %>% dplyr::mutate(status2 = ifelse(status == "true", 1, 0))
## ##
## Create training and testing data ##
## ##
train = sample(nrow(df), 97)
#View(train)
test = df[-train,]
#View(test)
# KNN requires the testing and training sets to be the same length
# since there are an odd number of rows, choose 97 from the 98 testing set
test_knn = sample(nrow(test), 97)
## ##
## Correlation charts ##
## ##
# predictors about vocal fundamental frequency
fund_freq_df = dplyr::select(df, mdvp_fo_hz, mdvp_flo_hz, mdvp_fhi_hz)
# predictors about variation in vocal fundamental frequency
freq_var_df = dplyr::select(df, mdvp_jitter, mdvp_jitter_abs, mdvp_rap, mdvp_ppq, jitter_ddp)
# predictors about variation in amplitude
amp_var_df = dplyr::select(df, mdvp_shimmer, mdvp_shimmer_db, shimmer_apq3, shimmer_apq5, mdvp_apq, shimmer_dda)
# predictors for other vocal aspects
other_preds_df = dplyr::select(df, nhr, hnr, rpde, d2, dfa, spread1, spread2, ppe)
# fundamental frequency variables are all correlated
pairs(fund_freq_df)
# variation in fundamental frequency variables (jitters) are all correlated
pairs(freq_var_df)
# variation in amplitude variables (shimmers) are all correlated
pairs(amp_var_df)
# other variables mostly not correlated except for spread1&ppe and spread1&spread2
pairs(other_preds_df)
# predictors with one fund freq variable, one jitter, one shimmer, and the uncorrelated other variables
try_preds_df = dplyr::select(df, mdvp_fo_hz, mdvp_jitter, mdvp_shimmer, nhr, rpde, d2, dfa, spread1)
# mdvp_jitter&mdvp_shimmer and jitter&nhr are correlated
pairs (try_preds_df)
# remove nhr, and make two sets of uncorrelated predictors - one with jitter, one with shimmer
uncor_preds1_df = dplyr::select(df, mdvp_fo_hz, mdvp_jitter, rpde, d2, dfa, spread1)
pairs(uncor_preds1_df)
uncor_preds2_df = dplyr::select(df, mdvp_fo_hz, mdvp_shimmer, rpde, d2, dfa, spread1)
pairs(uncor_preds2_df)
## ##
## LR ##
## ##
#
# GLM1 - analyzing average hertz and jitter
#
glm1.fit=glm(status2 ~ mdvp_fo_hz + mdvp_jitter,
data=df, family=binomial,
subset=train)
summary(glm1.fit)
glm1.probs=predict(glm1.fit,newdata=test,type="response")
#glm1.probs[1:5]
glm1.pred=ifelse(glm1.probs>0.5,"1","0")
status2.test = test$status2
table(glm1.pred,status2.test)
mean(glm1.pred==status2.test)
#
# GLM2 - analyzing average hertz and shimmer
#
glm2.fit=glm(status2 ~ mdvp_fo_hz + mdvp_shimmer,
data=df, family=binomial,
subset=train)
summary(glm2.fit)
glm2.probs=predict(glm2.fit,newdata=test,type="response")
#glm2.probs[1:5]
glm2.pred=ifelse(glm2.probs>0.5,"1","0")
status2.test = test$status2
table(glm2.pred,status2.test)
mean(glm2.pred==status2.test)
#
# GLM3 - analyzing five uncorrelated predictors with jitter
#
glm3.fit=glm(status2 ~ mdvp_fo_hz + mdvp_jitter + rpde + d2 + dfa + spread1,
data=df, family=binomial,
subset=train)
summary(glm3.fit)
glm3.probs=predict(glm3.fit,newdata=test,type="response")
#glm.probs[1:5]
glm3.pred=ifelse(glm3.probs>0.5,"1","0")
status2.test = test$status2
table(glm3.pred,status2.test)
mean(glm3.pred==status2.test)
#
# GLM4 - analyzing five uncorrelated predictors with shimmer
#
glm4.fit=glm(status2 ~ mdvp_fo_hz + mdvp_shimmer + rpde + d2 + dfa + spread1,
data=df, family=binomial,
subset=train)
summary(glm4.fit)
glm4.probs=predict(glm4.fit,newdata=test,type="response")
#glm4.probs[1:5]
glm4.pred=ifelse(glm4.probs>0.5,"1","0")
status2.test = test$status2
table(glm4.pred,status2.test)
mean(glm4.pred==status2.test)
# mean of best model
glm_mean = mean(glm1.pred==status2.test)
## ##
## LDA ##
## ##
#
# LDA1 - analyzing average hertz and jitter
#
lda1.fit=lda(status ~ mdvp_fo_hz + mdvp_jitter,
data=df, subset=train)
lda1.fit
plot(lda1.fit)
lda1.pred=predict(lda1.fit, test)
lda1_df = data.frame(lda1.pred)
ggplot(lda1_df) + geom_histogram(mapping=aes(x=LD1))
ggplot(lda1_df) + geom_boxplot(mapping = aes(x=class,y=LD1))
table(lda1.pred$class,test$status)
table(lda1.pred$class==test$status)
table(lda1.pred$class!=test$status)
mean(lda1.pred$class==test$status)
#
# LDA2 - analyzing average hertz and shimmer
#
lda2.fit=lda(status ~ mdvp_fo_hz + mdvp_shimmer,
data=df, subset=train)
lda2.fit
plot(lda2.fit)
lda2.pred=predict(lda2.fit, test)
lda2_df = data.frame(lda2.pred)
ggplot(lda2_df) + geom_histogram(mapping=aes(x=LD1))
ggplot(lda2_df) + geom_boxplot(mapping = aes(x=class,y=LD1))
table(lda2.pred$class,test$status)
table(lda2.pred$class==test$status)
table(lda2.pred$class!=test$status)
mean(lda2.pred$class==test$status)
#
# LDA3 - analyzing five uncorrelated predictors with jitter
#
lda3.fit=lda(status ~ mdvp_fo_hz + mdvp_jitter + rpde + d2 + dfa + spread1,
data=df, subset=train)
lda3.fit
plot(lda3.fit)
lda3.pred=predict(lda3.fit, test)
lda3_df = data.frame(lda3.pred)
ggplot(lda3_df) + geom_histogram(mapping=aes(x=LD1))
ggplot(lda3_df) + geom_boxplot(mapping = aes(x=class,y=LD1))
table(lda3.pred$class,test$status)
table(lda3.pred$class==test$status)
table(lda3.pred$class!=test$status)
mean(lda3.pred$class==test$status)
#
# LDA4 - analyzing five uncorrelated predictors with shimmer
#
lda4.fit=lda(status ~ mdvp_fo_hz + mdvp_shimmer + rpde + d2 + dfa + spread1,
data=df, subset=train)
lda4.fit
plot(lda4.fit)
lda4.pred=predict(lda4.fit, test)
lda4_df = data.frame(lda4.pred)
ggplot(lda4_df) + geom_histogram(mapping=aes(x=LD1))
ggplot(lda4_df) + geom_boxplot(mapping = aes(x=class,y=LD1))
table(lda4.pred$class,test$status)
table(lda4.pred$class==test$status)
table(lda4.pred$class!=test$status)
mean(lda4.pred$class==test$status)
# mean of best model
lda_mean = mean(lda1.pred$class==test$status)
## ##
## QDA ##
## ##
#
# QDA1 - analyzing average hertz and jitter
#
qda1.fit = qda(status ~ mdvp_fo_hz + mdvp_jitter,
data=df, subset=train)
qda1.fit
qda1.pred = predict(qda1.fit, test)
table(qda1.pred$class,test$status)
table(qda1.pred$class==test$status)
table(qda1.pred$class!=test$status)
mean(qda1.pred$class==test$status)
#
# QDA2 - analyzing average hertz and shimmer
#
qda.fit2 = qda(status ~ mdvp_fo_hz + mdvp_shimmer,
data=df, subset=train)
qda.fit2
qda.pred2 = predict(qda.fit2, test)
table(qda.pred2$class,test$status)
table(qda.pred2$class==test$status)
table(qda.pred2$class!=test$status)
mean(qda.pred2$class==test$status)
#
# QDA3 - analyzing 5 uncorrelated predictors with jitter
#
qda.fit3 = qda(status ~ mdvp_fo_hz + mdvp_jitter + rpde + d2 + dfa + spread1,
data=df, subset=train)
qda.fit3
qda.pred3 = predict(qda.fit3, test)
table(qda.pred3$class,test$status)
table(qda.pred3$class==test$status)
table(qda.pred3$class!=test$status)
mean(qda.pred3$class==test$status)
#
# QDA4 - analyzing 5 uncorrelated predictors with shimmer
#
qda.fit4 = qda(status ~ mdvp_fo_hz + mdvp_shimmer + rpde + d2 + dfa + spread1,
data=df,subset=train)
qda.fit4
qda.pred4 = predict(qda.fit4, test)
table(qda.pred4$class,test$status)
table(qda.pred4$class==test$status)
table(qda.pred4$class!=test$status)
mean(qda.pred4$class==test$status)
# mean of best model
qda_mean = mean(qda1.pred$class==test$status)
## ##
## KNN ##
## ##
#
# KNN1 - analyzing average hertz and jitter
#
predictors1=cbind(mdvp_fo_hz, mdvp_jitter)
knn1.pred=class::knn(predictors1[train, ],predictors1[test_knn,],status[train],k=1)
table(knn1.pred,status[test_knn])
mean(knn1.pred==status[test_knn])
#
# KNN2 - analyzing average hertz and shimmer
#
predictors2=cbind(mdvp_fo_hz, mdvp_shimmer)
knn2.pred=class::knn(predictors2[train, ],predictors2[test_knn,],status[train],k=1)
table(knn2.pred,status[test_knn])
mean(knn2.pred==status[test_knn])
#
# KNN3- analyzing five uncorrelated predictors with jitter
#
predictors3=cbind(mdvp_fo_hz, mdvp_jitter, rpde, d2, dfa, spread1)
knn3.pred=class::knn(predictors3[train, ],predictors3[test_knn,],status[train],k=1)
table(knn3.pred,status[test_knn])
mean(knn3.pred==status[test_knn])
#
# KNN4 - analyzing five uncorrelated predictors with shimmer
#
predictors4=cbind(mdvp_fo_hz, mdvp_shimmer, rpde, d2, dfa, spread1)
knn4.pred=class::knn(predictors4[train, ],predictors4[test_knn,],status[train],k=1)
table(knn4.pred,status[test_knn])
mean(knn4.pred==status[test_knn])
# mean of best model
knn_mean = mean(knn1.pred==status[test_knn])
## ##
## Comparison of the mean correct predictions ##
## ##
glm_mean
lda_mean
qda_mean
knn_mean
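# Optional: gather the four best-model accuracies computed above into one
# named vector for a side-by-side comparison (the vector name is ours):
model_means = c(GLM = glm_mean, LDA = lda_mean, QDA = qda_mean, KNN = knn_mean)
sort(model_means, decreasing = TRUE)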
|
f6a1131bdd8b0bf03b2a9f1eac1023d13db75ed6 | efffa9293c84cbf59a2c5096c816dff6576d0409 | /data-preparation.R | 64ac6abaa55ef5b27ab908d3d6a653de81fb2612 | [] | no_license | RMitra90/covid-tracking | c2111cb8858e22d2a7e437a0bf2715679602fd4e | 7b52bc5e633cea0b012b61beda59c4ab5b4605e0 | refs/heads/master | 2023-04-05T10:02:05.866334 | 2021-04-10T14:47:27 | 2021-04-10T14:47:27 | 276,250,381 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,105 | r | data-preparation.R | # The COVID Tracking Project
# Data Source: https://covidtracking.com/data/download
setwd("G:/Initiatives/covid/covid-tracking")
covid <- read.csv('daily.csv', stringsAsFactors = F)
library(dplyr)
library(lubridate)
library(openintro)
# Remove deprecated columns
deprecated_columns <- c('deathIncrease', 'hospitalized', 'hospitalizedIncrease', 'lastModified',
'negativeIncrease', 'posNeg', 'positiveIncrease', 'total',
'totalTestResults', 'totalTestResultsIncrease')
covid <- covid[,!(names(covid) %in% deprecated_columns)]
# Convert date column to date
covid$date <- ymd(covid$date)
# Convert state abbreviations to full names
covid <- rename(covid, state.abbr = state)
covid$state <- abbr2state(covid$state.abbr)
covid$state[covid$state.abbr == 'AS'] <- 'American Samoa'
covid$state[covid$state.abbr == 'FM'] <- 'Federated States of Micronesia'
covid$state[covid$state.abbr == 'GU'] <- 'Guam'
covid$state[covid$state.abbr == 'MH'] <- 'Marshall Islands'
covid$state[covid$state.abbr == 'MP'] <- 'Northern Mariana Islands'
covid$state[covid$state.abbr == 'PR'] <- 'Puerto Rico'
covid$state[covid$state.abbr == 'PW'] <- 'Palau'
covid$state[covid$state.abbr == 'VI'] <- 'U.S. Virgin Islands'
covid$state[covid$state.abbr == 'UM'] <- 'U.S. Minor Outlying Islands'
# Remove columns that are not relevant
irrelevant_columns <- c('commercialScore', 'negativeRegularScore', 'negativeScore',
'positiveScore', 'score', 'grade', 'fips', 'dataQualityGrade',
'totalTestsViral', 'positiveTestsViral', 'negativeTestsViral',
'positiveCasesViral')
covid <- covid[,!(names(covid) %in% irrelevant_columns)]
NROW(covid)
sum(complete.cases(covid$positive))
sum(complete.cases(covid$negative))
sum(complete.cases(covid$recovered))
sum(complete.cases(covid$death))
# Daily increase in positives: rows are sorted newest-first, so lag(positive)
# is the next more recent day's cumulative count
covid.df <- covid %>% 
  group_by(state) %>% 
  arrange(desc(date)) %>%
  mutate(Positive_Increase = lag(positive) - positive)
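# Toy check of the lag direction above (made-up numbers, not real data): with
# rows sorted newest-first, lag(positive) pulls the more recent day's
# cumulative count, so each older row gets a day-over-day increase and the
# newest row is NA.
toy <- data.frame(date = as.Date(c("2020-03-03", "2020-03-02", "2020-03-01")),
                  positive = c(5, 3, 1))
toy %>% mutate(Positive_Increase = lag(positive) - positive)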
write.csv(covid.df, 'covid_processed_data.csv')
|
feee2e9aaf8d4d1e7e59ab498d3ab90577980663 | 7f043b1e94f565d92e96e2a7c5e3b0efc8ef141e | /plot1.R | 726e4a83859e3bc91226af4db1d7d012b029971c | [] | no_license | ellieroster/ExData_Plotting1 | 3cbbfe24a6e92a9c32962d5688759fa1622a66cf | 9ea46d0ad085933e3899769d6cfac3d4a59881b4 | refs/heads/master | 2020-12-01T11:50:34.775896 | 2015-07-12T20:39:30 | 2015-07-12T20:39:30 | 38,928,798 | 0 | 0 | null | 2015-07-11T14:39:28 | 2015-07-11T14:39:28 | null | UTF-8 | R | false | false | 637 | r | plot1.R | plot1 <- function() {
## Reads text file of household energy consumption
## Subsets file to include data from 2/1/2007-2/2/2007
  ## Creates .png of histogram of usage of Global Active Power - color is red
##
## Closes graphic device (.png file)
energy <-read.table("./data/household_power_consumption.txt",sep=";", na.strings = "?", header = TRUE)
nenergy <- subset(energy, Date == "1/2/2007" | Date == "2/2/2007")
png(file="./R-work/ExData_Plotting1/plot1.png",width=480,height=480)
hist(nenergy$Global_active_power, main = "Global Active Power", col="red",xlab="Global Active Power (kilowatts)")
dev.off()
}
|
479eb2acaf89eb05e1025d87a4c4ba89ec987834 fd9bb969de48e83e138735c70deae02535d002ad /R/kmo.R c4146610c50e62278d55820f3105717d4ee98691 [] no_license storopoli/FactorAssumptions defe7532dd91adfc3bc225f751ee6cf8287c44b9 18115c73a42d249e84c57b15d07ae8ad475d1821 refs/heads/master 2022-05-31T14:12:19.821562 2022-03-08T08:57:43 2022-03-08T08:57:43 197,268,919 3 0 null null null null UTF-8 R false false 2,400 r kmo.R #' Calculates the Kaiser-Meyer-Olkin (KMO)
#'
#' \code{kmo()} handles both positive definite and non-positive definite matrices by employing the \emph{Moore-Penrose} inverse (pseudoinverse)
#'
#' @param x a matrix or dataframe
#' @param squared TRUE if matrix is squared (such as adjacency matrices), FALSE otherwise
#' @return A list with \enumerate{
#' \item \code{overall} - Overall KMO value
#' \item \code{individual} - Individual KMO's dataframe
#' \item \code{AIS} - Anti-image Covariance Matrix
#' \item \code{AIR} - Anti-image Correlation Matrix
#'}
#'
#' @importFrom MASS ginv
#' @importFrom stats cor
#' @importFrom stats sd
#'
#' @examples
#' set.seed(123)
#' df <- as.data.frame(matrix(rnorm(100*10, 1, .5), ncol=10))
#' kmo(df, squared = FALSE)
#' @export
kmo = function(x, squared=TRUE){
if (squared == TRUE) {
stopifnot(nrow (x) == ncol(x))
rownames(x) <- colnames(x)
# checking for sd = 0 and removing row and col
x <- x[!sapply(x, function(x) { sd(x) == 0} ), !sapply(x, function(x) { sd(x) == 0} )]
X <- cor(as.matrix(x))
}
else {
X <- cor(as.matrix(x))
}
iX <- ginv(X)
S2 <- diag(diag((iX^-1)))
AIS <- S2%*%iX%*%S2 # anti-image covariance matrix
IS <- X+AIS-2*S2 # image covariance matrix
Dai <- sqrt(diag(diag(AIS)))
IR <- ginv(Dai)%*%IS%*%ginv(Dai) # image correlation matrix
AIR <- ginv(Dai)%*%AIS%*%ginv(Dai) # anti-image correlation matrix
a <- apply((AIR - diag(diag(AIR)))^2, 2, sum)
AA <- sum(a)
b <- apply((X - diag(nrow(X)))^2, 2, sum)
BB <- sum(b)
MSA <- b/(b+a) # indiv. measures of sampling adequacy
AIR <- AIR-diag(nrow(AIR))+diag(MSA) # Examine the anti-image of the correlation matrix. That is the negative of the partial correlations, partialling out all other variables.
kmo_overall <- BB/(AA+BB) # overall KMO statistic
individual = as.data.frame(MSA)
ans <- list( overall = kmo_overall,
individual = individual,
AIS = AIS,
AIR = AIR)
if (any(individual < 0.5)){
message(sprintf("There is still an individual KMO value below 0.5: "),
rownames(individual)[which.min(apply(individual,MARGIN=1,min))]," - ",
min(individual))
} else {
message("Final Solution Achieved!")
}
return(ans)
} # end of kmo()
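## Illustrative note (not part of the package): ginv() is used above instead of
## solve() because correlation matrices of collinear data are singular, so the
## ordinary inverse does not exist while the Moore-Penrose pseudoinverse does.
## For example:
# m <- matrix(1, nrow = 2, ncol = 2)  # rank 1, so solve(m) would fail
# MASS::ginv(m)                       # returns the 2x2 matrix of 0.25s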
|
2cdecfdeb9de9af1f8567a1f4ebf0e0d340e7ca2 | 0a906cf8b1b7da2aea87de958e3662870df49727 | /distr6/inst/testfiles/C_EmpiricalMVPdf/libFuzzer_C_EmpiricalMVPdf/C_EmpiricalMVPdf_valgrind_files/1610035595-test.R | f6d81097a2caaa6ff517564427b7cdd2741e9e0d | [] | no_license | akhikolla/updated-only-Issues | a85c887f0e1aae8a8dc358717d55b21678d04660 | 7d74489dfc7ddfec3955ae7891f15e920cad2e0c | refs/heads/master | 2023-04-13T08:22:15.699449 | 2021-04-21T16:25:35 | 2021-04-21T16:25:35 | 360,232,775 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 420 | r | 1610035595-test.R | testlist <- list(data = structure(c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(9L, 7L)), x = structure(c(8.42090025536172e-227, 1.88278760074632e-183, 8.44197891476879e-227), .Dim = c(1L, 3L )))
result <- do.call(distr6:::C_EmpiricalMVPdf,testlist)
str(result) |
0fd1ee80d9e65109fb5a3f0fe9cc4bf0b7bb56a1 | 807a8fb756efda6130a60901f760429708779f80 | /R/save_objects_as_rdata.R | 502f475c87a4ebe4f421426b68e4fbe12ee02bd5 | [] | no_license | navanhalem/dalextutorial | 683b6f4a18d81d01e6325b05e6621492f076490d | 77afa4b56047e9aaa6e9755a670f68e0142f48be | refs/heads/master | 2023-03-01T18:16:19.139180 | 2021-02-07T14:15:29 | 2021-02-07T14:15:29 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,340 | r | save_objects_as_rdata.R | rm(list = ls())
# Imputed titanic dataset copied from github.com/pbiecek/models/gallery
load(file.path("data", "27e5c637a56f3e5180d7808b2b68d436.rda"))
johnny_d <- titanic[0, c(-5, -9)]
johnny_d[1, ] <- list(gender = "male", age = 8, class = "1st",
embarked = "Southampton", fare = 72,
sibsp = 0, parch = 0)
henry <- titanic[0, c(-5, -9)]
henry[1, ] <- list(gender = "male", age = 47, class = "1st",
embarked = "Cherbourg", fare = 25,
sibsp = 0, parch = 0)
titanic_lrm <- rms::lrm(survived == "yes" ~ class + gender + rms::rcs(age) +
sibsp + parch + fare + embarked,
data = titanic)
set.seed(1313)
titanic_rf <- randomForest::randomForest(survived ~ class + gender + age +
sibsp + parch + fare + embarked,
data = titanic)
set.seed(1313)
titanic_gbm <- gbm::gbm(survived == "yes" ~ class + gender + age + sibsp +
parch + fare + embarked,
data = titanic,
n.trees = 15000,
distribution = "bernoulli")
set.seed(1313)
titanic_svm <- e1071::svm(survived == "yes" ~ class + gender + age + sibsp +
parch + fare + embarked,
data = titanic,
type = "C-classification",
probability = TRUE)
titanic_lrm_exp <- DALEX::explain(titanic_lrm,
data = titanic[, -9],
y = titanic$survived == "yes",
label = "Logistic Regression",
type = "classification",
verbose = FALSE)
titanic_rf_exp <- DALEX::explain(model = titanic_rf,
data = titanic[, -9],
y = titanic$survived == "yes",
label = "Random Forest",
verbose = FALSE)
titanic_gbm_exp <- DALEX::explain(model = titanic_gbm,
data = titanic[, -9],
y = titanic$survived == "yes",
label = "Generalized Boosted Regression",
verbose = FALSE)
titanic_svm_exp <- DALEX::explain(model = titanic_svm,
data = titanic[, -9],
y = titanic$survived == "yes",
label = "Support Vector Machine",
verbose = FALSE)
lime_rf <- DALEXtra::predict_surrogate(explainer = titanic_rf_exp,
new_observation = johnny_d,
size = 1000,
seed = 1,
type = "localModel")
save(titanic,
johnny_d,
henry,
titanic_lrm,
titanic_rf,
titanic_gbm,
titanic_svm,
titanic_lrm_exp,
titanic_rf_exp,
titanic_gbm_exp,
titanic_svm_exp,
lime_rf,
file = file.path("inst", "extdata", "objects.Rdata"))
rm(list = ls())
|
49c13350f0aa15fc066133cfacac370038ca4b46 | f175009c4c613a87e57a3e6632a8c331821fe3cd | /auth.R | 69cd9781d1ad5d2b0a966f2fb9b8c0353cbbcfe6 | [] | no_license | linlilin/SM-Analytics-using-R | c33862de745fbc9948768a1ccd006d4046e170db | cc5ffbaaa8e292a194135366c34abf0061ae2967 | refs/heads/master | 2021-01-17T06:34:04.932539 | 2016-06-15T05:35:35 | 2016-06-15T05:35:35 | 61,172,568 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 854 | r | auth.R | library(twitteR)
library(devtools)
library(RCurl)
library(streamR)
library(stringr)
library(RJSONIO)
library(ROAuth)
requestURL <- "https://api.twitter.com/oauth/request_token"
accessURL <- "https://api.twitter.com/oauth/access_token"
authURL <- "https://api.twitter.com/oauth/authorize"
consumerKey <- "sAPghZUBAsklj1a964sXAVLc6" # From dev.twitter.com
consumerSecret <- "KT2TwnyUa4WglygXJZBMTjhwNX9CZOZkILw8iViNpqKgdJUac4" # From dev.twitter.com
my_oauth <- OAuthFactory$new(consumerKey = consumerKey,
consumerSecret = consumerSecret,
requestURL = requestURL,
accessURL = accessURL,
authURL = authURL)
my_oauth$handshake(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl"))
# PART2
save(my_oauth, file = "my_oauth.Rdata") |
fdc6e2d935a643c8b2177699ff57fc3c378c1fc1 | 56599cad16d944464d64a709bdf98b24a04fceff | /run_analysis.R | 6da9119c5d46289a7ce48545ddd3098f7f8bf927 | [] | no_license | ntdatascience/cleaningdata | f3e3032e4c6ba048df0d4a4e1b2e6ecaf83fa5e8 | eb72ed2529afd82b3e3876d1ffdc977951569366 | refs/heads/master | 2021-01-21T13:07:51.578113 | 2014-09-21T19:43:02 | 2014-09-21T19:43:02 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,772 | r | run_analysis.R | library(dplyr)
library(plyr)
library(formatR)
library(reshape)
library(utils)
## this function cleans up the feature names to add domain meaning and adjustments to improve readability
getFeaturesColumnNames <- function(features.col.names) {
features.col.names <- sapply(features.col.names, gsub, pattern = "([A-Z])", replacement = "_\\1")
features.col.names <- sapply(features.col.names, gsub, pattern = "-", replacement = "_")
features.col.names <- sapply(features.col.names, gsub, pattern = "^f", replacement = "freq")
features.col.names <- sapply(features.col.names, gsub, pattern = "^t", replacement = "time")
features.col.names <- sapply(features.col.names, gsub, pattern = "\\(\\)", replacement = "_")
features.col.names <- sapply(features.col.names, gsub, pattern = "_+$", replacement = "")
features.col.names <- sapply(features.col.names, gsub, pattern = "_+", replacement = "_")
features.col.names <- sapply(features.col.names, tolower)
features.col.names
}
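## Illustrative example (not part of the original script): the substitution
## chain above maps a raw UCI HAR feature name to a readable snake_case name:
# getFeaturesColumnNames("tBodyAcc-mean()-X")  # -> "time_body_acc_mean_x"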
buildHarDataFrame <- function(data.dir, subject.category) {
## the "def" files are our description lookup files that define features and activity names
features.def.file.name <- paste0(data.dir, "/", "features.txt")
activity.def.file.name <- paste0(data.dir, "/", "activity_labels.txt")
subjects.file.name <- paste0(data.dir, "/", subject.category, "/", "subject_", subject.category, ".txt")
har.file.name <- paste0(data.dir, "/", subject.category, "/", "X_", subject.category, ".txt")
activity.file.name <- paste0(data.dir, "/", subject.category, "/", "y_", subject.category, ".txt")
col.definition.df <- read.csv(features.def.file.name, header = FALSE, sep = " ")
colnames(col.definition.df) <- c("position", "description")
features.col.names <- col.definition.df[, "description"]
features.col.names <- getFeaturesColumnNames(features.col.names)
# let's only gather up the mean and standard deviation features that we want
desired.columns <- regexpr("(mean..|std..)(-[XYZ]){0,1}$", col.definition.df[, "description"]) > 0
desired.columns <- col.definition.df[desired.columns, ][, "position"]
activity.definition.df <- read.csv(activity.def.file.name, header = FALSE, sep = " ")
colnames(activity.definition.df) <- c("activity_number", "activity")
    # assign the results back -- otherwise these transformations are silently discarded
    activity.definition.df[, "activity"] <- gsub("_", " ", activity.definition.df[, "activity"])
    activity.definition.df[, "activity"] <- tolower(activity.definition.df[, "activity"])
raw.har.file <- file(har.file.name)
subject.no.df <- read.csv(subjects.file.name, header = FALSE)
training.activity.df <- read.csv(activity.file.name, header = FALSE)
raw.har.data <- readLines(raw.har.file)
raw.har.data <- gsub("^ +| +$", "", raw.har.data)
raw.har.data <- gsub(" +", ",", raw.har.data)
    ## write(raw.har.data, 'working/training_cleaned.txt')
har.data.df <- read.csv(textConnection(raw.har.data), header = FALSE)
colnames(har.data.df) <- features.col.names
har.data.df <- har.data.df[, desired.columns]
## here's the binding of data file with subject number and activity performed
har.data.df <- cbind(har.data.df, subject.no.df)
colnames(har.data.df)[ncol(har.data.df)] = "subject_number"
har.data.df <- cbind(har.data.df, training.activity.df)
colnames(har.data.df)[ncol(har.data.df)] = "activity_number"
    har.data.df <- cbind(har.data.df, rep(subject.category, nrow(har.data.df)))
    ## added concept of study_group so that once files are combined we can see which subject was part of
    ## test or train; not a requirement
colnames(har.data.df)[ncol(har.data.df)] = "study_group"
har.data.df <- merge(har.data.df, activity.definition.df, by.x = "activity_number", by.y = "activity_number")
## removing activity number since we have the more meaningful activity name ("activity")
har.data.df$activity_number <- NULL
har.data.df
}
#
# Main
#
# collect out train and test data
# then combine into one file
study.train.df <- buildHarDataFrame("ucihardata", "train")
study.test.df <- buildHarDataFrame("ucihardata", "test")
study.df <- rbind(study.train.df, study.test.df)
# Get required columns so that we can calculate mean across variables by subject number and activity group
# Flip table so that variables/calculations are easier to work with
# Perform average of variables across subject and activity for each variable
# Then arrange/order by subject, activity, variable/calculation
har.mean.df <-
melt(study.df,c("subject_number","study_group","activity")) %>%
ddply(c("subject_number","activity","variable"),summarise,mean=mean(value)) %>%
arrange(subject_number,activity,variable)
write.table(har.mean.df,"har_mean_results.txt") |
31e771002e9f948baf8a70f4f39a35cc6a7607de | 4cfcdda656da5db3bcaa87a4869773f3184d660f | /OHT.R | 0720f6dd033b49b45c1c847c2d9a47bd9e581ef7 | [] | no_license | p1org/DMBHGraphs | d4b98bd8218e47c5c93d1ba66eabfcb510c07080 | aceff8bee3712af1a63461661c08597af36f1c80 | refs/heads/master | 2022-10-05T09:18:07.011351 | 2019-09-05T23:58:52 | 2019-09-05T23:58:52 | 58,963,763 | 1 | 4 | null | 2022-09-05T17:47:12 | 2016-05-16T20:08:21 | R | UTF-8 | R | false | false | 3,324 | r | OHT.R | ###########################################################################
## Using R version 3.0.0
##  Using igraph version 0.6.5-1 - WARNING: do not use an older version!
## Authors: Despina Stasi, Sonja Petrovic, Elizabeth Gross.
##
## Code for example in Section 4.1. in http://arxiv.org/abs/1401.4896
##
###########################################################################
###########################################################################
## Load library and code:
library("igraph")
source("p1walk.R")
###########################################################################
## THE GRAPH
d.OHT = graph.empty(8)
b.OHT = graph(c(1,2, 2,3, 1,4, 4,5, 5,6, 5,7, 7,8), d=FALSE)
Plot.Mixed.Graph(d.OHT, b.OHT)
###########################################################################
## Calculate the MLE:
mle = Get.MLE(d.OHT, b.OHT, maxiter=30000, tol = 1e-04)
#write.table(mle, file="mle-OHT.txt", col.names=F, row.names=F)
###########################################################################
## RUN a chain with 500K steps for p-value estimate:
oldMLE=mle #use line below instead if you do not wish to re-compute the MLE:
#oldMLE = matrix(scan(file = "mle-OHT.txt"), ncol=4,byrow=TRUE)
runtime500K = system.time( OHT.500Kwalk <- Estimate.p.Value.for.Testing( d.OHT,b.OHT,steps.for.walk=500000, mleMatr=oldMLE) )
# p-value estimates after 50K burn-in steps:
gof.OHT.500K = OHT.500Kwalk[[3]] #these are the chi-square values
pvalues.OHT.500K.50Kburnin = Estimate.p.Value.From.GoFs(gof.OHT.500K , 50000)
pvalues.OHT.500K.50Kburnin[[1]] #gets new p-value estimate
## GRAPHICS: p-values and GoF histograms
hist(c(gof.OHT.500K[1],gof.OHT.500K[50001:450000]), main="OHT graph",sub="450K steps (after 50K burn-in steps)", xlab="Goodness of Fit Statistic", ylab="Frequency", xlim=c(22,36))
abline(v= gof.OHT.500K[1], col="red")
plot(pvalues.OHT.500K.50Kburnin[[2]], main="OHT",sub="450K steps (after 50K burn-in steps)", xlab="Length of Walk", ylab="p-value estimates", ylim=c(0,1))
###########################################################################
## Fiber enumeration tests:
t.15K = system.time(OHT.15K <- Enumerate.Fiber(d.OHT,b.OHT,numsteps=15000))
length(OHT.15K[[1]]) #this is the number of graphs discovered
OHT.15K[[4]] #this is the tv-distance
# Various plots from sampling:
graph.counts.OHT.15K = as.numeric(OHT.15K[[3]])
barplot(graph.counts.OHT.15K , main = "OHT - Graphs", sub="15,000 steps", xlab="Distinct Graphs", ylab="Frequency in walk")
empty.move.counts.OHT.15K = as.numeric(OHT.15K[[5]])
barplot(empty.move.counts.OHT.15K, main = "OHT - Empty Moves", sub="15,000 steps", xlab="Distinct Graphs", ylab="Number of empty moves per graph in walk")
distinct.visits.counts.OHT.15K = as.numeric(OHT.15K[[3]])-as.numeric(OHT.15K[[5]])
barplot(distinct.visits.counts.OHT.15K, main = "OHT - Number of distinct visits per graph", sub="15,000 steps", xlab="Distinct Graphs", ylab="Number of visits")
## Fiber enumeration for 50,000 steps:
t.50K = system.time(OHT.50K <- Enumerate.Fiber(d.OHT,b.OHT,numsteps=50000))
length(OHT.50K[[1]]) #591 graphs in the fiber
OHT.50K[[4]] #tv-distance
|
61f9ba9577b8932b829b35df90e470a5f9e930f1 | 5b4c00154d71d49ceee250e9ac5cddb2261ab716 | /FFL/FFL_Model.R | d164b310eeadfe6675c9067fa2bc038b30a4a5ad | [] | no_license | mfabla/Kaggle | 3f7ea433dd6a052a8315cc1df3b6aec472df654a | 18edefce828581e4d393388d66592a4353b5fbd9 | refs/heads/master | 2021-05-14T08:36:56.635348 | 2018-01-04T20:27:23 | 2018-01-04T20:27:23 | 116,303,477 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 7,729 | r | FFL_Model.R | #### FFL Model #####
# Aug 31, 2017
# set_env -----------------------------------------------------------------
library(rIA)
set_env(clear_env = T, dir = "C:/Users/AblMi001/Desktop/FFL", extra_pkgs = c("caret", "randomForest", "gbm", "xgboost", "xlsx"))
# pull data ---------------------------------------------------------------
team_ref <- read.csv("Team_Ref.csv")
play_11 <- read.csv("2011_PlayerStats.csv")
play_12 <- read.csv("2012_PlayerStats.csv")
play_13 <- read.csv("2013_PlayerStats.csv")
play_14 <- read.csv("2014_PlayerStats.csv")
play_15 <- read.csv("2015_PlayerStats.csv")
play_16 <- read.csv("2016_PlayerStats.csv")
players <- rbind(play_11, play_12, play_13, play_14, play_15, play_16) %>% mutate(FantPt = ifelse(is.na(FantPt), 0, FantPt))
play_misc <- read.csv("2011-2016_MiscStats.csv")
team_11 <- read.csv("2011_TeamOff.csv")
team_12 <- read.csv("2012_TeamOff.csv")
team_13 <- read.csv("2013_TeamOff.csv")
team_14 <- read.csv("2014_TeamOff.csv")
team_15 <- read.csv("2015_TeamOff.csv")
team_16 <- read.csv("2016_TeamOff.csv")
teams <- rbind(team_11, team_12, team_13, team_14, team_15, team_16)
team_records <- read.csv("2011-2016_TeamRecords.csv")
team_stats <- read.csv("2011-2016_TeamStats.csv")
# clean data --------------------------------------------------------------
players1 <- players %>%
dplyr::mutate(YR_plus1 = YR + 1) %>%
left_join(select(.data = players, YR, PlayerId, FantPt), by = c("YR_plus1" = "YR", "PlayerId")) %>%
dplyr::rename(FantPt = FantPt.x, FantPt_proj = FantPt.y) %>%
select(-YR_plus1, -OvRank)
teams1 <- inner_join(teams, team_ref, by = c("Tm_Name"))
play_team <- inner_join(players1, teams1, by = c("YR", "Tm"))
pos_trans <- play_team %>%
dplyr::mutate(ct = 1) %>%
dcast(YR + PlayerId ~FantPos, fill = 0, value.var = "ct")
df <- inner_join(play_team, pos_trans, by = c("YR", "PlayerId")) %>%
select(YR, PlayerId, Name, FantPos, Tm, Tm_Name, FantPt_proj, FantPt, QB, RB, WR, TE, Age, everything()) %>%
dplyr::mutate(RY.A = ifelse(is.na(RY.A), 0, RY.A),
Tgt = ifelse(is.na(Tgt), 0, Tgt),
ReY.R = ifelse(is.na(ReY.R), 0, ReY.R)) %>%
dplyr::filter(!is.na(Age)) %>%
#dplyr::filter(G >= 12) %>%
inner_join(select(.data = play_misc, - Name, -X2PM), by = c("PlayerId", "YR")) %>%
inner_join(team_stats, by = c("YR", "Tm_Name")) %>%
inner_join(team_records, by = c("YR", "Tm_Name"))
df[is.na(df)] <- 0
df_pergame <- as.data.frame(sapply(select(.data = df, FantPt , PCmp:ReTD , -RY.A, -ReY.R, Total_TD, Total_Pts), function(x) x/df$G))
df1 <- select(.data = df, -FantPt, -(PCmp:ReTD), -Total_TD, -Total_Pts) %>% cbind(df_pergame) %>% filter(G >= 6) %>% select(-G, -GS)
df_hist <- df1 %>% dplyr::filter(YR != 2016) %>% dplyr::filter(!is.na(FantPt_proj))
df_2017 <- df1 %>% dplyr::filter(YR == 2016)
#df_hist <- df %>% dplyr::filter(YR != 2016) %>% dplyr::filter(!is.na(FantPt_proj))
#df_2017 <- df %>% dplyr::filter(YR == 2016)
# prep data ---------------------------------------------------------------
#split data
set.seed(83)
training <- createDataPartition(df_hist$FantPt_proj, p = .6, list = F)
df_train <- df_hist[training, -c(1:6)] #remove nominal data: [1] "YR" "PlayerId" "Name" "FantPos" "Tm" "Tm_Name"
df_test <- df_hist[-training, -c(1:6)]
# model -------------------------------------------------------------------
#linear regression
lm.model <- train(FantPt_proj~., df_train, method = "lm")
summary(lm.model)
varImp(lm.model)
lm.model.pred <- predict(lm.model, df_test)
RMSE(lm.model.pred, df_test$FantPt_proj) #47.9
plot(lm.model.pred, df_test$FantPt_proj, col = "red", main = "LM Model", xlim = c(0,500), ylim = c(0,500))
abline(0,1)
#cv linear regression
ctr <- trainControl(method = "cv", number = 10)
cv.model <- train(FantPt_proj~., df_hist[,-c(1:6)], trControl = ctr, method = "lm")
summary(cv.model)
varImp(cv.model)
cv.model.pred <- predict(cv.model, df_test)
RMSE(cv.model.pred, df_test$FantPt_proj) #46.6
plot(cv.model.pred, df_test$FantPt_proj, col = "red", main = "CV Model", xlim = c(0,500), ylim = c(0,500))
abline(0,1)
#rf
set.seed(83)
rf.model <- randomForest(FantPt_proj~., df_train, ntree = 2500, mtry = 17, importance = T)
rf.model.pred <- predict(rf.model, df_test)
RMSE(rf.model.pred, df_test$FantPt_proj) #47.7 2500/17
plot(rf.model.pred, df_test$FantPt_proj, col = "red", main = "RF Model", xlim = c(0,500), ylim = c(0,500))
abline(0,1)
varImpPlot(rf.model)
#gbm
gbmGrid <- expand.grid(
n.trees = c(100, 175, 250),
shrinkage = c(0.1, 0.15, 0.2),
interaction.depth = c(2, 4, 6),
n.minobsinnode = c(10))
gbmCtrl <- trainControl(method = "cv", number = 8)
set.seed(83)
gbm.model <- train(as.matrix(df_train[, -c(1)]), df_train$FantPt_proj, tuneGrid = gbmGrid, trControl = gbmCtrl, method = "gbm")
varImp(gbm.model)
head(gbm.model$results[with(gbm.model$results, order(RMSE)),]) # get the top models (head() returns 6 rows by default)
# shrinkage interaction.depth n.minobsinnode n.trees RMSE Rsquared RMSESD RsquaredSD
#1 0.10 2 10 100 49.93067 0.5542623 1.583764 0.06120063
#19 0.20 2 10 100 50.17523 0.5539047 2.039507 0.06452302
#4 0.10 4 10 100 51.01560 0.5384620 1.297813 0.05904259
#2 0.10 2 10 175 51.04656 0.5357207 1.818982 0.06694802
#10 0.15 2 10 100 51.09855 0.5372372 1.621631 0.05914257
#20 0.20 2 10 175 51.10587 0.5401385 2.141261 0.06595684
gbm.model.pred <- predict(gbm.model, df_test)
RMSE(gbm.model.pred, df_test$FantPt_proj) #47.8
plot(gbm.model.pred, df_test$FantPt_proj, col = "red", main = "GBM Model", xlim = c(0,500), ylim = c(0,500))
abline(0,1)
#xgboost (non-caret)
xgb.model <- xgboost(data = as.matrix(df_train[,-c(1)]),
label = df_train$FantPt_proj,
objective = "reg:linear",
seed = 83,
eval_metric = "rmse",
nrounds = 100,
eta = 0.1175,
max_depth = 2,
nthread = 2)
xgb.model.pred <- predict(xgb.model, as.matrix(df_test[,-1]))
RMSE(xgb.model.pred, df_test$FantPt_proj) #47.6 100/.1175
plot(xgb.model.pred, df_test$FantPt_proj, col = "red", main = "XGBOOST Model", xlim = c(0,500), ylim = c(0,500))
abline(0,1)
xgb.importance(feature_names = names(df_train[,-1]) ,model = xgb.model)
# final predictions -------------------------------------------------------
df_2017_matx <- df_2017[, -c(1:6)] # for gbm/xgboost if used
final_pred <- predict(cv.model, df_2017)
final_pred1 <- predict(xgb.model, data.matrix(df_2017_matx[, -1]))
#final_pred <- predict(gbm.model, df_2017_matx)
final_drfts <- df_2017 %>% dplyr::mutate(FantPt_proj = final_pred)
final_drfts1 <- df_2017 %>% dplyr::mutate(FantPt_proj = final_pred1) #xgboost
final_drfts2 <- df_2017 %>% dplyr::mutate(FantPt_proj_cv = final_pred,
FantPt_proj_xg = final_pred1,
FantPt_proj = (FantPt_proj_cv + FantPt_proj_xg)/2) #ensemble
# output ------------------------------------------------------------------
write.xlsx(final_drfts, "FFL_2017_DraftPcks.xlsx", sheetName = "Best CV LM", row.names = F)
write.xlsx(final_drfts1, "FFL_2017_DraftPcks.xlsx", sheetName = "XGBoost", row.names = F, append = T )
write.xlsx(final_drfts2, "FFL_2017_DraftPcks.xlsx", sheetName = "Ensemble", row.names = F, append = T )
|
fc5fd4d5719c4b148e941a6e63fb397924c96b3e | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/VineCopula/examples/BB7Copula-class.Rd.R | 228d4dda7e6d3da721766588791270ae22285f1a | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,052 | r | BB7Copula-class.Rd.R | library(VineCopula)
### Name: BB7Copula-class
### Title: Classes '"BB7Copula"', '"surBB7Copula"', '"r90BB7Copula"' and
### '"r270BB7Copula"'
### Aliases: BB7Copula-class dduCopula,numeric,BB7Copula-method
### ddvCopula,numeric,BB7Copula-method dduCopula,matrix,BB7Copula-method
### ddvCopula,matrix,BB7Copula-method getKendallDistr,BB7Copula-method
### kendallDistribution,BB7Copula-method surBB7Copula-class
### dduCopula,numeric,surBB7Copula-method
### ddvCopula,numeric,surBB7Copula-method
### dduCopula,matrix,surBB7Copula-method
### ddvCopula,matrix,surBB7Copula-method r90BB7Copula-class
### dduCopula,numeric,r90BB7Copula-method
### ddvCopula,numeric,r90BB7Copula-method
### dduCopula,matrix,r90BB7Copula-method
### ddvCopula,matrix,r90BB7Copula-method r270BB7Copula-class
### dduCopula,numeric,r270BB7Copula-method
### ddvCopula,numeric,r270BB7Copula-method
### dduCopula,matrix,r270BB7Copula-method
### ddvCopula,matrix,r270BB7Copula-method
### Keywords: classes
### ** Examples
showClass("BB7Copula")
|
4b3273424e8f6fc1198d2228037e39c824db0fec | 54d0e0b1cfb9935174e0f9907f176e721d6d3bf3 | /9. CH9 - SupportVectorMachines/10. CH9 - - svms problemset/ProblemSetChapter9Solution.R | ae7f1449fda17ecadbb0f91c09c7e7bf204c74f0 | [] | no_license | clairehu9/R_ML_ISLR | 29f16ddcb02d654ae272f06510d85243ea30c68e | 26bce2a45a1037cfbbc64eef4dca0d93ea56f461 | refs/heads/master | 2020-09-12T06:11:09.600859 | 2019-11-18T01:38:55 | 2019-11-18T01:38:55 | 222,336,639 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,371 | r | ProblemSetChapter9Solution.R | rm(list=ls())
installIfAbsentAndLoad <- function(neededVector) {
if(length(neededVector) > 0) {
for(thispackage in neededVector) {
if(! require(thispackage, character.only = T)) {
install.packages(thispackage)}
require(thispackage, character.only = T)
}
}
}
needed <- c('ISLR', 'e1071')
installIfAbsentAndLoad(needed)
#####################
# Part a ##
#####################
set.seed(5082)
n = dim(OJ)[1]
train_inds = sample(1:n, 800)
test_inds = (1:n)[-train_inds]
#Create test and train data
train_y = OJ$Purchase[train_inds]
train_x = OJ[2:18][train_inds, ]
test_y = OJ$Purchase[test_inds]
test_x = OJ[2:18][test_inds, ]
#Turn this into usable data for the svm function
traindat <- data.frame(x = train_x, y = train_y)
testdat <- data.frame(x = test_x, y = test_y)
#####################
# Part b ##
#####################
# A support vector classifier (linear kernel with a fixed cost = 1)
svmfitTrain <- svm(y ~ ., data = traindat,
kernel="linear",
cost = 1,
scale = TRUE)
#Summary of SVM
summary(svmfitTrain)
# The results of svmfitTrain indicate that there were 446 support vectors - 223 for the
# CH class and 223 for the MM class. This is a high number (considering we're in a training
# set of 800 points), which indicates that the data is probably very close together and often
# misclassified since support vectors are near the margins or misclassified.
#####################
# Part c ##
#####################
trainpred <- predict(svmfitTrain, traindat)
trainError <- table( actual = traindat$y, predicted = trainpred)
print(paste("The training error rate is", (trainError[1, 2] + trainError[2, 1])/sum(trainError)))
testpred <- predict(svmfitTrain, testdat) #use old svmfitTrain and predict on testdata
(testErrorTable <- table(actual = testdat$y, predicted = testpred))
svclassifier.Error.rate <- (testErrorTable[1, 2] + testErrorTable[2, 1])/sum(testErrorTable)
print(paste("The test error rate using a support vector classifier is", svclassifier.Error.rate))
############################
# Parts d and e ##
############################
# Linear kernel with optimized cost
tune.out <- tune(svm, y ~ ., data = traindat, kernel="linear",
ranges=list(cost=c(0.01, 0.05, 0.1, 0.5, 1, 5)))
bestmod <- tune.out$best.model
summary(bestmod)
ypred <- predict(bestmod, traindat)
(trainTuneOptimal <- table(actual = traindat$y, predicted = ypred))
print(paste("The training error rate with a linear kernel after being tuned is", (trainTuneOptimal[1, 2] + trainTuneOptimal[2, 1])/sum(trainTuneOptimal)))
ypred <- predict(bestmod, testdat)
(testTuneOptimal <- table(actual = testdat$y, predicted = ypred))
linear.Error.rate <- (testTuneOptimal[1, 2] + testTuneOptimal[2, 1])/sum(testTuneOptimal)
print(paste("The test error rate with a linear kernel after being tuned is", linear.Error.rate))
#####################
# Part f ##
#####################
# Radial kernel
tune.out <- tune(svm, y ~ ., data = traindat, kernel="radial",
ranges=list(cost=c(0.01, 0.05, 0.1, 0.5, 1, 5),
gamma=c(0.001, 0.01, 1, 3, 5)))
bestmod <- tune.out$best.model
summary(bestmod)
ypred <- predict(bestmod, traindat)
trainTuneOptimal <- table(actual = traindat$y, predicted = ypred)
print(paste("The training error rate with a radial kernel after being tuned is", (trainTuneOptimal[1, 2] + trainTuneOptimal[2, 1])/sum(trainTuneOptimal)))
ypred <- predict(bestmod, testdat)
(testTuneOptimal <- table( actual = testdat$y, predicted = ypred))
radial.Error.rate <- (testTuneOptimal[1, 2] + testTuneOptimal[2, 1])/sum(testTuneOptimal)
print(paste("The test error rate with a radial kernel after being tuned is", radial.Error.rate))
#####################
# Part g ##
#####################
# Polynomial kernel
tune.out <- tune(svm, y ~ ., data = traindat, kernel="polynomial",
ranges=list(cost=c(0.01, 0.05, 0.1, 0.5, 1, 5),
degree=2:5))
bestmod <- tune.out$best.model
summary(bestmod)
ypred <- predict(bestmod, traindat)
trainTuneOptimal <- table( actual = traindat$y, predicted = ypred)
print(paste("The training error rate using a polynomial kernel after being tuned is", (trainTuneOptimal[1, 2] + trainTuneOptimal[2, 1])/sum(trainTuneOptimal)))
ypred <- predict(bestmod, testdat)
(testTuneOptimal <- table( actual = testdat$y, predicted = ypred))
poly.Error.rate <- (testTuneOptimal[1, 2] + testTuneOptimal[2, 1])/sum(testTuneOptimal)
print(paste("The test error rate using a polynomial kernel after being tuned is", poly.Error.rate))
#####################
# Part h ##
#####################
data.frame(svclassifier.Error.rate,
linear.Error.rate,
radial.Error.rate,
poly.Error.rate)
# The best approach appears to be either the linear kernel
# with cost set to 0.05 or the radial kernel with cost set
# to 5 and gamma set to 0.01. Both models had the same test
# error rates (not that unusual...they both made 43
# prediction errors - their type 1/2 error rates just
# differed slightly). Since the linear model is the simpler
# model, I would choose it over the model with the radial
# kernel.
|
ba49ae65f0d058684056a601cd487275903e47f3 fa7d14277895cd5982677deb1070b0d0adb7e53c /Day_3/scripts/05_practice_worsheet.R 89c8590f0703f773278c41cfa6810c0721b1a5b2 ["MIT"] permissive Christian-Ryan/R-A_Hitchhikers_Guide_to_Reproducible_Research 24a29eafcd640e31f00ed9652316500397266322 021a292e76d9f9450fec5d26af8c3a45fcf887be refs/heads/master 2020-09-15T23:00:19.182160 2019-08-07T10:35:21 2019-08-07T10:35:21 223,577,660 0 1 MIT 2019-11-23T11:28:13 2019-11-23T11:28:12 null UTF-8 R false false 2,164 r 05_practice_worsheet.R ###########################################################################
# 3-day R workshop
# Day 3
# Morning
# 05_practice_worksheet.R
###########################################################################
# 1. Look up the help for functions
help('function')
# 2. Let's attempt to create a function
# Write a function that will identify the largest number
# from a series of numbers and then multiply it by itself
# (i) Start by writing a line of code that will find the biggest number
# (ii) Then look at how you might multiply this number by itself
# (iii) Do you need to plan ahead for potential NA's?
# (iv) Test it out on some numeric vectors
# (v) Place the code in a function, give the function a name and run it
# 3. Conditional execution
# Write a function that evaluates the numeric input as positive or negative
# Print out a confirmation message whether the number is positive/negative
num.test <- function(x){
}
# 4. Use num.test to evaluate the following vectors
num.test(4)
num.test(-3)
num.test(0)
# 5. Write a for loop to get the mean of every column of mtcars
# Recall: mtcars is one of the data sets that comes with the tidyverse
mtcars <- mtcars
output <- vector('numeric', 11)
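# One possible answer to exercise 5, shown only as a hint (using seq_along to
# index the columns is one option among several):
for (i in seq_along(mtcars)) {
  output[i] <- mean(mtcars[[i]], na.rm = TRUE)
}
output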
# 6. You might want to modify an existing object using a for loop
# Recall our function rescale.01
# rescale.01 <- function(x){ # Informative name
#
# rng <- range(x, na.rm = TRUE)
# (x - rng[1]) / (rng[2] - rng[1])
#
# }
# Let's regenerate our data frame
df2 <- tibble(
e = sample(1:10, 10, replace = TRUE),
f = sample(1:100, 10, replace = TRUE),
g = sample(1:1000, 10, replace = TRUE),
h = sample(1:10000, 10, replace = TRUE))
# Now we apply a for loop to rescale the data frame
# 7. Want some more practice? ---------------------------------------------
# Create a customised function called area to calculate the area of a circle,
# given the radius r
# Hint: the formula for the area of a circle is pi * r^2
# 8. One more -------------------------------------------------------------
# Write a function called statistics that calculates the mean, median, and
# standard deviation of a vector of numbers
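# Possible answers to exercises 7 and 8, shown only as hints:
# Exercise 7 hint: area of a circle with radius r
area <- function(r) pi * r^2
# Exercise 8 hint: mean, median and standard deviation of a numeric vector
statistics <- function(x) {
  c(mean   = mean(x, na.rm = TRUE),
    median = median(x, na.rm = TRUE),
    sd     = sd(x, na.rm = TRUE))
}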
# ==== File: /improve-elastic_net/prototype.R (repo: walkingsparrow/tests, no license) ====
## Prototype for fast elastic net algorithms
soft.thresh <- function(z, lambda) {
if (z > 0 && z > lambda) return (z - lambda)
if (z < 0 && -z > lambda) return (z + lambda)
0
}
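## A few sanity checks of the soft-thresholding operator S(z, lambda) defined
## above (a quick sketch; stopifnot just asserts the expected shrinkage):
stopifnot(soft.thresh( 2.0, 0.5) ==  1.5)  # positive values shrink toward zero
stopifnot(soft.thresh(-2.0, 0.5) == -1.5)  # negative values shrink toward zero
stopifnot(soft.thresh( 0.3, 0.5) ==  0)    # values inside [-lambda, lambda] map to 0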
## linear elastic net
el.lin <- function (x, y, alpha = 1, lambda = 0.1, thresh = 1e-6,
max.iter = 1000, use.active.set = TRUE, glmnet = FALSE) {
x <- as.matrix(x)
y <- as.vector(y)
## x <- scale(x)
## y <- scale(y, scale = FALSE)
x.ctr <- colMeans(x)
x.scl <- sqrt(rowMeans((t(x)-x.ctr)^2))
y.ctr <- mean(y)
x <- t((t(x) - x.ctr) / x.scl)
y <- y - y.ctr
if (glmnet) {
y.scl <- sqrt(mean(y^2))
y <- y / y.scl
lambda <- lambda / y.scl
} else y.scl <- 1
xy <- t(x) %*% y
xx <- t(x) %*% x
n <- ncol(x)
N <- nrow(x)
a0 <- 0 # intercept
a <- rep(0, n) # coef
prev.a <- a
count <- 0
active.set <- FALSE
repeat {
if (active.set && use.active.set) {
run <- seq_len(n)[a!=0]
} else {
run <- seq_len(n)
}
for (j in run) {
z <- (xy[j] - sum(xx[j,a!=0] * a[a!=0]))/N + a[j]
a[j] <- soft.thresh(z, alpha*lambda) / (1 + lambda * (1-alpha))
}
count <- count + 1
if (count > max.iter) break
if (sqrt(mean((a-prev.a)^2))/mean(abs(prev.a)) < thresh) {
if (active.set && use.active.set)
active.set <- FALSE
else
break
} else {
if (!active.set) active.set <- TRUE
}
prev.a <- a
}
a0 <- y.ctr - sum(x.ctr * a / x.scl) * y.scl
a <- a * y.scl / x.scl
list(a0=a0, a=a, iter=count, y.scl = y.scl)
}
## ----------------------------------------------------------------------
## x <- matrix(rnorm(100*20),100,20)
## y <- rnorm(100, 0.1, 2)
## save(x, y, file = "data.rda")
load("data.rda")
f <- el.lin(x, y, alpha = 0.2, lambda = 0.05, thresh = 1e-10, max.iter = 10000, use.active.set = TRUE, glmnet = T)
f
f$y.scl
## ----------------------------------------------------------------------
library(PivotalR)
db.connect(port = 5433, dbname = "madlib")
## dat <- as.data.frame(cbind(x, y))
## delete("eldata")
## z <- as.db.data.frame(dat, "eldata")
z <- db.data.frame("eldata")
g <- madlib.elnet(y ~ ., data = z, alpha = 0.2, lambda = 0.05, control = list(random.stepsize=FALSE, use.active.set=FALSE, tolerance=1e-6), glmnet = T)
g
p <- predict(g, z)
lk(p, 10)
## ----------------------------------------------------------------------
library(glmnet)
## w <- sqrt(mean((y-mean(y))^2))
s <- glmnet(x, y, family = "gaussian", alpha = 0.2, lambda = 0.05)
as.vector(s$beta)
s$a0
v <- scale(y)
ctr <- attr(v, "scaled:center")
scl <- attr(v, "scaled:scale")
s$a0*scl + ctr
g$intercept*scl + ctr
# ==== File: /code/09-db_construction.R (repo: complexity-inequality/complexity-inequality-app, no license) ====
# R-script db_construction.R
# References --------------------------------------------------------------
# https://grapher.network/blog/
# https://pacha.dev/blog/
# https://pacha.dev/
# https://cran.r-project.org/web/packages/economiccomplexity/economiccomplexity.pdf
# Method of Reflections is the one used by Hidalgo and Hausmann
# Setup -------------------------------------------------------------------
rm(list = ls())
gc()
options(stringsAsFactors = F)
ggplot2::theme_set(ggplot2::theme_minimal())
options(scipen = 666)
mongo_credentials <- config::get(file = "conf/globalresources.yml")
# Packages ----------------------------------------------------------------
if(!require(readr)){install.packages("readr")}
if(!require(plyr)){install.packages("plyr")}
if(!require(dplyr)){install.packages("dplyr")}
if(!require(tidyr)){install.packages("tidyr")}
if(!require(ggplot2)){install.packages("ggplot2")}
if(!require(janitor)){install.packages("janitor")}
if(!require(mongolite)){install.packages("mongolite")}
if(!require(readxl)){install.packages("readxl")}
if(!require(reticulate)){install.packages("reticulate")}
if(!require(vroom)){install.packages("vroom")}
if(!require(economiccomplexity)){install.packages("economiccomplexity")}
# Functions ---------------------------------------------------------------
source(file = "code/functions/data_loc.R")
source(file = "code/functions/fct_insertmongodb.R")
source(file = "code/functions/fct_add_eci.R")
# Code --------------------------------------------------------------------
# Getting BR location info
sg_uf_br <- c("AC", "AL", "AP", "AM", "BA", "CE", "DF", "ES", "GO", "MA", "MT", "MS", "MG", "PA", "PB", "PR", "PE", "PI", "RJ", "RN", "RS", "RO", "RR", "SC", "SP", "SE", "TO")
br_loc <- data_loc(sg_uf_br) %>%
dplyr::distinct()
# Loading exp data
exp <- vroom::vroom(file = "data/EXP_COMPLETA_MUN2/EXP_COMPLETA_MUN.csv") %>%
suppressMessages() %>%
janitor::clean_names() %>%
dplyr::mutate(exp=dplyr::if_else(is.na(vl_fob), 0, vl_fob)) %>%
dplyr::mutate("cd_sh2" = substr(sh4, 1, 2)) %>%
dplyr::rename(
"sg_uf"="sg_uf_mun",
"cd_mun"="co_mun",
"cd_year"="co_ano",
"cd_sh4"="sh4"
) %>%
dplyr::select(cd_mun, sg_uf, cd_sh2, cd_sh4, cd_year, exp)
# Fixing those dumb ass mistakes :::::::::::::::::::::: make it within dplyr pipe
exp[which(exp$sg_uf=="SP"), "cd_mun"] = exp[which(exp$sg_uf=="SP"), "cd_mun"]+100000 # SP
exp[which(exp$sg_uf=="GO"), "cd_mun"] = exp[which(exp$sg_uf=="GO"), "cd_mun"]-100000 # GO
exp[which(exp$sg_uf=="MS"), "cd_mun"] = exp[which(exp$sg_uf=="MS"), "cd_mun"]-200000 # MS
exp[which(exp$sg_uf=="DF"), "cd_mun"] = exp[which(exp$sg_uf=="DF"), "cd_mun"]-100000 # DF
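# The TODO above asks for these corrections inside a dplyr pipe; one possible
# version (a sketch, assuming the same state-specific offsets) is:
# exp <- exp %>%
#   dplyr::mutate(cd_mun = dplyr::case_when(
#     sg_uf == "SP" ~ cd_mun + 100000,
#     sg_uf == "GO" ~ cd_mun - 100000,
#     sg_uf == "MS" ~ cd_mun - 200000,
#     sg_uf == "DF" ~ cd_mun - 100000,
#     TRUE          ~ cd_mun
#   ))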
exp1 <- exp %>%
dplyr::group_by(cd_mun, sg_uf, cd_year, cd_sh2) %>%
dplyr::summarise(exp = sum(exp)) %>%
dplyr::ungroup(); rm(exp)
exp2 <- exp1 %>%
dplyr::mutate(
cd_mun=as.character(cd_mun),
cd_year=as.character(cd_year)
) %>%
dplyr::left_join(., br_loc, by = c("cd_mun", "sg_uf")) %>%
stats::na.omit() %>%
dplyr::select(cd_mun, nm_mun, cd_meso, nm_meso, cd_rgint, nm_rgint, cd_micro, nm_micro, cd_rgime, nm_rgime, cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, cd_sh2, exp)
exp_t <- exp2 %>%
dplyr::group_by(cd_mun, sg_uf, cd_sh2) %>%
dplyr::summarise(exp=sum(exp)) %>%
dplyr::ungroup() %>%
dplyr::mutate(
cd_year="1997-2021"
) %>%
dplyr::left_join(., br_loc, by = c("cd_mun", "sg_uf")) %>%
dplyr::select(cd_mun, nm_mun, cd_meso, nm_meso, cd_rgint, nm_rgint, cd_micro, nm_micro, cd_rgime, nm_rgime, cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, cd_sh2, exp)
exp_t0 <- exp2 %>%
dplyr::group_by(cd_mun, sg_uf) %>%
dplyr::summarise(exp=sum(exp)) %>%
dplyr::ungroup() %>%
dplyr::mutate(
cd_sh2="00",
cd_year="1997-2021"
) %>%
dplyr::left_join(., br_loc, by = c("cd_mun", "sg_uf")) %>%
dplyr::select(cd_mun, nm_mun, cd_meso, nm_meso, cd_rgint, nm_rgint, cd_micro, nm_micro, cd_rgime, nm_rgime, cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, cd_sh2, exp)
colec_mun <- dplyr::bind_rows(exp2, exp_t, exp_t0) %>%
dplyr::mutate(cd_sh2=paste0("sh", cd_sh2)) %>%
dplyr::mutate(
cd_mun=as.character(cd_mun),
cd_meso=as.character(cd_meso),
cd_rgint=as.character(cd_rgint),
cd_micro=as.character(cd_micro),
cd_rgime=as.character(cd_rgime),
cd_uf=as.character(cd_uf),
cd_rg=as.character(cd_rg),
product=cd_sh2,
value=exp
) %>%
dplyr::select(cd_mun, nm_mun, cd_meso, nm_meso, cd_rgint, nm_rgint, cd_micro, nm_micro, cd_rgime, nm_rgime, cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, product, value)
# Insert mun info into MongoDB
fmongo_insert(df = colec_mun, nm_db = "db1", nm_collec = "colec_mun")
# Get mun info from MongoDB
# mongo_set <- mongolite::mongo(db = "db1", collection = "colec_mun", url = mongo_credentials$mongoURL, verbose = TRUE)
# (colec_mun <- mongo_set$find())
# db_uf -------------------------------------------------------------------
# Grouping by uf
df_uf <- colec_mun %>%
dplyr::group_by(cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, product) %>%
dplyr::summarise(value=sum(value)) %>%
dplyr::ungroup()
# Adding ECI to df
ll_uf <- list()
for(i in unique(df_uf$cd_year)){
ll_uf[[i]] <- fct_add_eci(
df = df_uf,
reg = "cd_uf",
ano=i
)
}
colec_uf <- dplyr::bind_rows(df_uf, ll_uf)
# Insert uf info into MongoDB
fmongo_insert(df = colec_uf, nm_db = "db1", nm_collec = "colec_uf")
# Get uf info from MongoDB
# mongo_set <- mongolite::mongo(db = "db1", collection = "colec_uf", url = mongo_credentials$mongoURL, verbose = TRUE)
# (colec_uf <- mongo_set$find())
# db_meso -----------------------------------------------------------------
# Grouping by meso
df_meso <- colec_mun %>%
dplyr::group_by(cd_meso, nm_meso, cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, product) %>%
dplyr::summarise(value=sum(value)) %>%
dplyr::ungroup()
# Adding ECI to df
ll_meso <- list()
for(i in unique(df_meso$cd_year)){
ll_meso[[i]] <- fct_add_eci(
df = df_meso,
reg = "cd_meso",
ano=i
)
}
colec_meso <- dplyr::bind_rows(df_meso, ll_meso)
# Insert meso info into MongoDB
fmongo_insert(df = colec_meso, nm_db = "db1", nm_collec = "colec_meso")
# Get meso info from MongoDB
# mongo_set <- mongolite::mongo(db = "db1", collection = "colec_meso", url = mongo_credentials$mongoURL, verbose = TRUE)
# (colec_meso <- mongo_set$find())
# db_micro ----------------------------------------------------------------
# Grouping by micro
df_micro <- colec_mun %>%
dplyr::group_by(cd_micro, nm_micro, cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, product) %>%
dplyr::summarise(value=sum(value)) %>%
dplyr::ungroup()
# Adding ECI to df
ll_micro <- list()
for(i in unique(df_micro$cd_year)){
ll_micro[[i]] <- fct_add_eci(
df = df_micro,
reg = "cd_micro",
ano=i
)
}
colec_micro <- dplyr::bind_rows(df_micro, ll_micro)
# Insert micro info into MongoDB
fmongo_insert(df = colec_micro, nm_db = "db1", nm_collec = "colec_micro")
# Get micro info from MongoDB
# mongo_set <- mongolite::mongo(db = "db1", collection = "colec_micro", url = mongo_credentials$mongoURL, verbose = TRUE)
# (colec_micro <- mongo_set$find())
# db_rgint ----------------------------------------------------------------
# Grouping by rgint
df_rgint <- colec_mun %>%
dplyr::group_by(cd_rgint, nm_rgint, cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, product) %>%
dplyr::summarise(value=sum(value)) %>%
dplyr::ungroup()
# Adding ECI to df
ll_rgint <- list()
for(i in unique(df_rgint$cd_year)){
ll_rgint[[i]] <- fct_add_eci(
df = df_rgint,
reg = "cd_rgint",
ano=i
)
}
colec_rgint <- dplyr::bind_rows(df_rgint, ll_rgint)
# Insert rgint to MongoDB
fmongo_insert(df = colec_rgint, nm_db = "db1", nm_collec = "colec_rgint")
# Get rgint info from MongoDB
# mongo_set <- mongolite::mongo(db = "db1", collection = "colec_rgint", url = mongo_credentials$mongoURL, verbose = TRUE)
# (colec_rgint <- mongo_set$find())
# db_rgime ----------------------------------------------------------------
# Grouping by rgime
df_rgime <- colec_mun %>%
dplyr::group_by(cd_rgime, nm_rgime, cd_uf, nm_uf, sg_uf, cd_rg, nm_rg, sg_rg, cd_year, product) %>%
dplyr::summarise(value=sum(value)) %>%
dplyr::ungroup()
# Adding ECI to df
ll_rgime <- list()
for(i in unique(df_rgime$cd_year)){
ll_rgime[[i]] <- fct_add_eci(
df = df_rgime,
reg = "cd_rgime",
ano=i
)
}
colec_rgime <- dplyr::bind_rows(df_rgime, ll_rgime)
# Insert rgime to df
fmongo_insert(df = colec_rgime, nm_db = "db1", nm_collec = "colec_rgime")
# Get rgime info from MongoDB
# mongo_set <- mongolite::mongo(db = "db1", collection = "colec_rgime", url = mongo_credentials$mongoURL, verbose = TRUE)
# (colec_rgime <- mongo_set$find())
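# The uf/meso/micro/rgint/rgime blocks above repeat one group-summarise /
# add-ECI / insert pattern; a hedged sketch of a single helper (assuming
# colec_mun, fct_add_eci and fmongo_insert behave exactly as above; the
# function name below is illustrative):
build_region_collection <- function(reg) {
  group_cols <- c(paste0(c("cd_", "nm_"), reg),
                  "cd_uf", "nm_uf", "sg_uf", "cd_rg", "nm_rg", "sg_rg",
                  "cd_year", "product")
  # group by the region-level identifiers and sum the export values
  df_reg <- colec_mun %>%
    dplyr::group_by(dplyr::across(dplyr::all_of(group_cols))) %>%
    dplyr::summarise(value = sum(value), .groups = "drop")
  # add the ECI columns for every year, then stack with the raw values
  ll <- lapply(unique(df_reg$cd_year),
               function(y) fct_add_eci(df = df_reg, reg = paste0("cd_", reg), ano = y))
  colec <- dplyr::bind_rows(df_reg, ll)
  fmongo_insert(df = colec, nm_db = "db1", nm_collec = paste0("colec_", reg))
  colec
}
# for (r in c("meso", "micro", "rgint", "rgime")) build_region_collection(r)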
# More --------------------------------------------------------------------
# options -----------------------------------------------------------------
# Check plot --------------------------------------------------------------
# reac_query
colec = paste0("colec_", "meso")
mongo_set <- mongolite::mongo(db = "db1", collection = colec, url = mongo_credentials$mongoURL, verbose = TRUE)
df <- mongo_set$find()
df <- mongo_set$find(paste0('{"product" : ', paste0('"', "eci_ref_norm", '"'), '}'))
df <- mongo_set$find(paste0('{"product" : ', paste0('"', "eci_ref_norm", '"'), ', "cd_year" : ', paste0('"', "2013", '"'), '}'))
# react_df
shp_df <- sf::st_read("data/shp/shp_uf/")
df_shp <- dplyr::full_join(
df,
shp_df
) %>% sf::st_sf()
# output$plot1
ggplot2::ggplot(df_shp)+
# ggplot2::geom_sf(ggplot2::aes(0), color="black", size=.13)+
ggplot2::geom_sf(ggplot2::aes(fill=value), color="black", size=.2)+
ggplot2::scale_fill_gradient(low="white", high="blue")+
ggplot2::labs(title = "", caption = "", y = "Latitude", x = "Longitude")+
ggplot2::theme_void()
# draft -------------------------------------------------------------------
colec_rgime
# ==== File: /Ttest_T.R (repo: hehuili-XIEG/Projected-Summer-climate-change-in-the-Aral-Sea-region, no license) ====
###########################################################
#
#
# Temperature Annual
#
#
#############################################################
###### CTR and LU2050 #####
# Shell commands used to launch R on the cluster (run in the shell, not in R):
# module purge
# module load R
# R
rm(list=ls())
library(Rfa)
inway = '/scratch-a/heheuili/CMIP5_RData/Year_RData/'
md = LFIdec("/scratch-b/heheuili/DOM/19800601/AROMOUT_.0024.lfi","T2M")
dom = attr(md,"domain")
corr=DomainPoints(md)
out_lat0=corr$lat
out_lon0=corr$lon
itc=seq(1,40000,200)
out_lat=array(NA,dim = c(200,200))
out_lon=array(NA,dim = c(200,200))
for(i in 1:200){
str=itc[i]
ed=itc[i]+199
tmp1=as.numeric(out_lat0[str:ed])
tmp2=as.numeric(out_lon0[str:ed])
for(j in 1:200){
out_lat[j,i]=tmp1[j]
out_lon[j,i]=tmp2[j]
}
}
con=load(paste(inway,'CTR/',"T2M_yearly.RData",sep=""))
#aa=load(paste(inway,'CTR/',"DTR_yearly.RData",sep=""))
con_mean_y = yr_mean
con_max_y = yr_max
con_min_y = yr_min
con_dtr_y=yr_dtr
rm(yr_mean,yr_max,yr_dtr,yr_min,con)
chg = load(paste(inway,'HIS/',"T2M_yearly.RData",sep=""))
aa = load(paste(inway,'HIS/',"DTR_yearly.RData",sep=""))
###### ------First step to calculate the year value
exp_mean_y = yr_mean
exp_min_y = yr_min
exp_max_y = yr_max
exp_dtr_y=yr_dtr
###### Second Step to make the Ttest
tmean=c(0,0)
tmin=c(0,0)
tmax=c(0,0)
tdtr=c(0,0)
pvalue_mean=array(NA,dim=c(200,200))
sig_mean = array(NA,dim=c(200,200))
pvalue_min=array(NA,dim=c(200,200))
sig_min = array(NA,dim=c(200,200))
pvalue_max=array(NA,dim=c(200,200))
sig_max = array(NA,dim=c(200,200))
pvalue_dtr=array(NA,dim=c(200,200))
sig_dtr = array(NA,dim=c(200,200))
pvalue=0.01
for(i in 1:200)
{
for(j in 1:200)
{
#pvalue[i,j]=t.test(newpre[i,j,],oldpre[i,j,],paired=T,conf.level=0.99)$p.value
pvalue_mean[i,j]=t.test(con_mean_y[i,j,],exp_mean_y[i,j,],paired=T)$p.value
pvalue_min[i,j]=t.test(con_min_y[i,j,],exp_min_y[i,j,],paired=T)$p.value
pvalue_max[i,j]=t.test(con_max_y[i,j,],exp_max_y[i,j,],paired=T)$p.value
pvalue_dtr[i,j]=t.test(con_dtr_y[i,j,],exp_dtr_y[i,j,],paired=T)$p.value
if(pvalue_mean[i,j] <= pvalue){
tmp1=out_lon[i,j]
tmp2=out_lat[i,j]
tmp3=c(tmp1,tmp2)
tmean=rbind(tmean,tmp3)
rm(tmp1,tmp2,tmp3)
sig_mean[i,j] = -as.numeric(mean(exp_mean_y[i,j,])) + as.numeric(mean(con_mean_y[i,j,]))
}#if
if(pvalue_max[i,j] <= pvalue){
tmp1=out_lon[i,j]
tmp2=out_lat[i,j]
tmp3=c(tmp1,tmp2)
tmax=rbind(tmax,tmp3)
rm(tmp1,tmp2,tmp3)
sig_max[i,j] = -as.numeric(mean(exp_max_y[i,j,]))+as.numeric(mean(con_max_y[i,j,]))
}#if
if(pvalue_min[i,j] <= pvalue){
tmp1=out_lon[i,j]
tmp2=out_lat[i,j]
tmp3=c(tmp1,tmp2)
tmin=rbind(tmin,tmp3)
rm(tmp1,tmp2,tmp3)
sig_min[i,j] = -as.numeric(mean(exp_min_y[i,j,])) + as.numeric(mean(con_min_y[i,j,]))
}#if
if(pvalue_dtr[i,j] < pvalue){
tmp1=out_lon[i,j]
tmp2=out_lat[i,j]
tmp3=c(tmp1,tmp2)
tdtr=rbind(tdtr,tmp3)
rm(tmp1,tmp2,tmp3)
sig_dtr[i,j] = -as.numeric(mean(exp_dtr_y[i,j,])) + as.numeric(mean(con_dtr_y[i,j,]))
}#if
}#j
}#i
outpth = "/home/heheuili/Fig/FUTURE/"
dir.create(outpth)
#Aral = read.table("/home/heheuili/FILE/Aral.txt",header = T, sep=",")
Aral = read.table("/home/heheuili/FILE/2005Aral.txt",header = T, sep=",")
Araldry_p1 = read.table("/home/heheuili/FILE/2015Aral_p1.txt",header = T, sep=",")
Araldry_p2 = read.table("/home/heheuili/FILE/2015Aral_p2.txt",header = T, sep=",")
Araldry_p3 = read.table("/home/heheuili/FILE/2015Aral_p3.txt",header = T, sep=",")
Araldry_p4 = read.table("/home/heheuili/FILE/2015Aral_p4.txt",header = T, sep=",")
pdf(paste(outpth,"T2max_sig.pdf",sep=""))
iview(subgrid(as.geofield(sig_max,dom),10,190,10,190),legend=T)
#points(project(Aral$Lon1,Aral$Lat1,dom$projection),type="l",lwd=1)
points(project(Aral$Lon1,Aral$Lat1,dom$projection),type="l",lwd=1,col="blue")
points(project(Araldry_p1$Lon1,Araldry_p1$Lat1,dom$projection),type="l",lwd=1,col="red")
points(project(Araldry_p2$Lon1,Araldry_p2$Lat1,dom$projection),type="l",lwd=1,col="red")
points(project(Araldry_p3$Lon1,Araldry_p3$Lat1,dom$projection),type="l",lwd=1,col="red")
points(project(Araldry_p4$Lon1,Araldry_p4$Lat1,dom$projection),type="l",lwd=1,col="red")
dev.off()
###########################
# ==== File: /Import/RedditExtractoRTest.R (repo: dungates/ImagePlotting, MIT license) ====
library(RedditExtractoR)
library(inspectdf)
library(tidyverse)
library(lubridate)
library(rvest)
library(xml2)
# REDDIT SCRAPE FUNCTIONS AND PURRR LOOP THROUGH THINGS
reddit_links <- reddit_urls(
search_terms = "donald",
page_threshold = 1
) %>%
mutate(date = dmy(date))
show_plot(inspect_types(reddit_links))
show_plot(inspect_na(reddit_links)) # Good solid dataframe here
show_plot(inspect_cat(reddit_links)) # Politics dataframe shows up a lot
reddit_thread <- reddit_content(reddit_links$URL[1])
show_plot(inspect_types(reddit_thread))
show_plot(inspect_na(reddit_thread)) # Again no NA's
show_plot(inspect_cat(reddit_thread)) # Looks like a few users are commenting more than once on posts, not sure what structure variable is supposed to be
graph_object <- construct_graph(reddit_content(reddit_thread$URL[1])) # Comment network, basically a decision tree
# target_urls <- reddit_urls(search_terms="cats", subreddit="Art", cn_threshold=50) # isolate some URLs
# target_df <- target_urls %>% filter(num_comments==min(target_urls$num_comments)) %$% URL %>% reddit_content # get the contents of a small thread
# network_list <- target_df %>% user_network(include_author=FALSE, agg=TRUE) # extract the network
getcontent <- reddit_urls(
search_terms = "apple", # Search term
page_threshold = 10000, # Page threshold
cn_threshold = 10#, # Comment threshold for number of comments that need to be in thread for scrape
# subreddit = "pics"
# regex_filter = "i\.redd\.it"
)
reddit_url_df <- getcontent %>% select(URL) %>% as_vector() %>% as.list() # one URL per list element; toL() is not a base/tidyverse function
# img_sites <- c("i.redd.it|imgur|giphy")
reddit_scrape <- function(url, xpath = '//*[@id="media-preview-kw9fc7"]/div/a/img') {
  images <- url %>%
    read_html() %>%
    html_nodes(xpath = xpath) %>%
    html_attr('src')
  images
}
image <- reddit_scrape(url = "http://www.reddit.com/r/memes/comments/kw9fc7/laughs_in_apple/")
content <- getcontent %>%
select(URL) %>%
as_vector() %>%
read_html() %>%
html_nodes(xpath = '') %>%
html_attr('src')
# filter(str_detect(link, ))
# So basically get_reddit to get links then filter?
# ==== File: /tcc_r_files/tcc/app.R (repo: ingoramos/tcc_mort_infantil, no license) ====
#
# This is a Shiny web application. You can run the application by clicking
# the 'Run App' button above.
#
# Find out more about building applications with Shiny here:
#
# http://shiny.rstudio.com/
#
library(shiny)
library(shinydashboard)
library(ggplot2)
library(ggExtra)
library(d3treeR)
library(collapsibleTree)
pred_mort <- function(bancoTreino, bancoPred){
##################################################################################
    # Draws the confusion matrix
draw_confusion_matrix <- function(cm) {
layout(matrix(c(1,1,2)))
par(mar=c(2,2,2,2))
plot(c(100, 345), c(300, 450), type = "n", xlab="", ylab="", xaxt='n', yaxt='n')
title('CONFUSION MATRIX', cex.main=2)
# create the matrix
rect(150, 430, 240, 370, col='red')
text(195, 435, 'Não', cex=1.2)
rect(250, 430, 340, 370, col='green')
text(295, 435, 'Sim', cex=1.2)
text(125, 370, 'Predição', cex=1.3, srt=90, font=2)
text(245, 450, 'Referência', cex=1.3, font=2)
rect(150, 305, 240, 365, col='green')
rect(250, 305, 340, 365, col='red')
text(140, 400, 'Sim', cex=1.2, srt=90)
text(140, 335, 'Não', cex=1.2, srt=90)
# add in the cm results
res <- as.numeric(cm$table)
text(195, 400, res[1], cex=1.6, font=2, col='white')
text(195, 335, res[2], cex=1.6, font=2, col='white')
text(295, 400, res[3], cex=1.6, font=2, col='white')
text(295, 335, res[4], cex=1.6, font=2, col='white')
# add in the specifics
plot(c(100, 0), c(100, 0), type = "n", xlab="", ylab="", main = "DETALHES", xaxt='n', yaxt='n')
text(10, 85, names(cm$byClass[1]), cex=1.2, font=2)
text(10, 70, round(as.numeric(cm$byClass[1]), 3), cex=1.2)
text(30, 85, names(cm$byClass[2]), cex=1.2, font=2)
text(30, 70, round(as.numeric(cm$byClass[2]), 3), cex=1.2)
text(50, 85, names(cm$byClass[5]), cex=1.2, font=2)
text(50, 70, round(as.numeric(cm$byClass[5]), 3), cex=1.2)
text(70, 85, names(cm$byClass[6]), cex=1.2, font=2)
text(70, 70, round(as.numeric(cm$byClass[6]), 3), cex=1.2)
text(90, 85, names(cm$byClass[7]), cex=1.2, font=2)
text(90, 70, round(as.numeric(cm$byClass[7]), 3), cex=1.2)
# add in the accuracy information
text(30, 35, names(cm$overall[1]), cex=1.5, font=2)
text(30, 20, round(as.numeric(cm$overall[1]), 3), cex=1.4)
text(70, 35, names(cm$overall[2]), cex=1.5, font=2)
text(70, 20, round(as.numeric(cm$overall[2]), 3), cex=1.4)
}
##################################################################################
    # training data
    train_data <- bancoTreino
    # test data that keeps the outcome, used to evaluate the predictions
    test_ctrl <- bancoPred
    # test data without the outcome column
    test <- test_ctrl[, -which(names(test_ctrl) %in% c("OBITO"))]
    # list of class-balancing (sampling) methods
    sampling_methods <- c("down", "up", "smote")
    maior_valor <- 0 # tracks the highest sensitivity found so far
    for(sm_index in seq_along(sampling_methods)){ # iterate over every balancing method
        # Create trainControl object: myControl - must be used in every model so the results are comparable
        myControl <- trainControl(
            method = "repeatedcv", # "repeatedcv" performs repeated cross-validation
            number = 5, # number of folds ## use 10 folds
            repeats = 2, # number of repeats for each fold scheme ## use 5 repeats
            summaryFunction = twoClassSummary,
            classProbs = TRUE, # IMPORTANT!
            verboseIter = FALSE,
            savePredictions = TRUE,
            returnResamp = "all",
            sampling = sampling_methods[sm_index], # class balancing of the data
            allowParallel = TRUE
        )
        # list of candidate models (caret method names):
        # "glm" = Generalized Linear Model, "ranger" = Random Forest, "knn" = k-Nearest Neighbors,
        # "nnet" = Neural Network, "dnn" = Stacked AutoEncoder Deep Neural Network,
        # "xgbTree" = eXtreme Gradient Boosting, "gbm" = Stochastic Gradient Boosting,
        # "adaboost" = AdaBoost Classification Trees
        # full set tried: "glm", "ranger", "knn", "gbm", "nnet", "adaboost", "xgbTree", "dnn"
        # alternative subset: "ranger", "gbm", "nnet"
        modelos <- c("glm")
        # author's note: move the "maior_valor" initialization back here if needed
        # espec <- 0 # was used to track the model with the highest specificity
        # list of metrics used by caret to pick the tuning parameters
        metrics <- c("ROC", "Sens")
        for(m_index in seq_along(metrics)){ # iterate over every metric
            i <- 1     # while-loop counter, reset for each metric
            index <- 1 # index into the model list, reset for each metric
            # loop to select the best algorithm
            while(i <= length(modelos)) {
                # Fit model
                model <- train(
                    OBITO ~ . , # outcome variable
                    data = train_data, # training set
                    #preProcess = c("center", "scale"),
                    metric = metrics[m_index], # metric used to compare the models
                    method = modelos[index], # model name taken from the list
                    trControl = myControl # apply the control object
                )
                # confusion matrix for the training set
                banco_model <- model$trainingData
                banco_model$.outcome <- as.factor(banco_model$.outcome)
                cm_t <- confusionMatrix(banco_model$.outcome, sample(banco_model$.outcome))
                # Print model to console
                model
                # Print maximum ROC statistic
                max(model[["results"]][["ROC"]])
                # class predictions used to build the confusion matrix
                predictionsCM <- predict(model, test)
                # class probabilities
                predictions <- predict(model, test, type = "prob")
                # test_ctrl provides the observed classes, yielding the confusion matrix
                cm <- confusionMatrix(predictionsCM, test_ctrl$OBITO)
                # extract the confusion-matrix statistics
                cm_results <- cm$byClass %>% as.list()
                # extract the sensitivity
                sens <- cm_results[1] %>% as.data.frame() # [1] for sensitivity, [2] for specificity
                sens <- sens$Sensitivity # Sensitivity or sens / Specificity or spec
                # keep the model with the highest sensitivity: since maior_valor starts at 0, the
                # first model always wins, and from then on any model whose sensitivity beats the
                # stored value replaces it as the current best
                if(sens > maior_valor){
                    maior_valor <- sens # the sensitivity becomes the new best value
                    resultado <- paste("O melhor modelo foi: ", modelos[index], ", usando o método de balanceamento: ", sampling_methods[sm_index], "com a métrica: ", metrics[m_index], ", com sensibilidade de: ", sens) # message reporting the best model
                    cm_melhor <- cm # cm stores the (test) confusion matrix of the best model
                    cm_t_melhor <- cm_t # cm_t stores the (training) confusion matrix of the best model
                    melhor_modelo <- model
                    # bind the prediction columns next to the observed values
                    resultados <- cbind(test_ctrl, predictions)
                    # create a column with the probability (%) of OBITO
                    resultados["Prob"] <- resultados$sim * 100
                    modelo <- modelos[index]
                    samp <- sampling_methods[sm_index]
                    metrica <- metrics[m_index]
                }
                else{
                    maior_valor <- maior_valor # if the check fails, the stored best value is kept
                }
                # update i (while counter) and index (model list)
                i <- i + 1
                index <- index + 1
            }
        }
    }
    # draw the confusion matrix stored for the best model
    #cm_p <- draw_confusion_matrix(cm_melhor)
    # function return: best model object and its test confusion matrix
    return(list(melhor_modelo, cm_melhor))
}
modeloPred <- pred_mort(final_train, final_test)
conMatrix <- modeloPred[[2]] # "[[" extracts the element itself, not a one-element list
bestModel <- modeloPred[[1]]
# Define UI for application that draws a histogram
ui <- dashboardPage(
dashboardHeader(title = "Mortalidade Infantil"),
dashboardSidebar(
sidebarMenu(
menuItem("Correlação", tabName = "corr", icon = icon("sitemap"),
menuSubItem("HeatMap", tabName = "corr_1"),
menuSubItem("DropOut Loss", tabName = "corr_2"),
menuSubItem("Information Value", tabName = "corr_3"),
menuSubItem("Boruta (Random Forest", tabName = "corr_4")
),
menuItem("Variáveis Categóricas", tabName = "catVars", icon = icon("chart-pie"),
menuSubItem("Gráfico de Barra Circular", tabName = "subCatVars_1"),
menuSubItem("Gráfico de Donut", tabName = "subCatVars_2"),
menuSubItem("Gráfico de Árvore", tabName = "subCatVars_3")
),
menuItem("Variáveis Numéricas", tabName = "numVars", icon = icon("chart-area"),
menuSubItem("Gráfico de Dispersão", tabName = "subNumVars_1"),
menuSubItem("Gráfico de Bolha", tabName = "subNumVars_2"),
menuSubItem("Box Plot com Individuos ", tabName = "subNumVars_3"),
menuSubItem("Histograma", tabName = "subNumVars_4")
),
menuItem("Predição", tabName = "pred", icon = icon("baby-carriage"))
)
),
dashboardBody(
tabItems(
      # correlation and variable selection
tabItem(tabName = "corr_1",
fluidRow(
tags$style(type = "text/css", "#heatmapPlot {height: calc(100vh - 80px) !important;}"),
plotOutput("heatmapPlot", height = "100%")
)),
tabItem(tabName = "corr_2",
fluidRow(
tags$style(type = "text/css", "#dolPlot {height: calc(100vh - 80px) !important;}"),
plotOutput("dolPlot", height = "100%")
)),
tabItem(tabName = "corr_3",
fluidRow(
tags$style(type = "text/css", "#ivTable {height: calc(100vh - 80px) !important;}"),
tableOutput("ivTable")
)),
tabItem(tabName = "corr_4",
fluidRow(
tags$style(type = "text/css", "#borutaPlot {height: calc(100vh - 80px) !important;}"),
plotOutput("borutaPlot", height = "100%")
)),
      # categorical variables
tabItem(tabName = "subCatVars_1",
fluidRow(
tags$style(type = "text/css", "#circularBarPlot {height: calc(100vh - 80px) !important;}"),
plotOutput("circularBarPlot", height = "100%")
)
),
tabItem(tabName = "subCatVars_2",
fluidRow(
selectInput("donutSubset", label = "Selecione a Variável",
choices = list("Escolaridade" = "ESCMAEAGR1_SINASC", "Gravidez" = "GRAVIDEZ_SINASC",
"Raça/Cor" = "RACACORMAE_SINASC", "Gestação" = "GESTACAO_SINASC", "Parto" = "PARTO_SINASC",
"Consultas" = "CONSULTAS_SINASC", "Apgar1" = "APGAR1_SINASC",
"Apgar5" = "APGAR5_SINASC", "Anomalia" = "IDANOMAL_SINASC")),
tags$style(type = "text/css", "#donutPlot {height: calc(100vh - 80px) !important;}"),
plotOutput("donutPlot", height = "100%")
)
),
tabItem(tabName = "subCatVars_3",
fluidRow(
tags$style(type = "text/css", "#treeMapPlot {height: calc(100vh - 80px) !important;}"),
collapsibleTreeOutput("treeMapPlot", height = "100%")
)
),
      # numeric variables
tabItem(tabName = "subNumVars_1",
fluidRow(
box(width = 3, selectInput("varX", label = "Selecione",
choices = list("Nº consultas Pré Natal" = "CONSPRENAT_SINASC", "Semanas de Gestação" = "SEMAGESTAC_SINASC",
"Quantidade Gestações" = "QTDGESTANT_SINASC", "Idade da Mãe" = "IDADEMAE_SINASC",
"Filhos Vivos" = "QTDFILVIVO_SINASC"))),
box(width = 3, selectInput("varY", label = "Selecione",
choices = list("Semanas de Gestação" = "SEMAGESTAC_SINASC", "Nº consultas Pré Natal" = "CONSPRENAT_SINASC",
"Quantidade Gestações" = "QTDGESTANT_SINASC", "Idade da Mãe" = "IDADEMAE_SINASC",
"Filhos Vivos" = "QTDFILVIVO_SINASC"))),
box(width = 3, selectInput("varSelect", label = "Selecione a variável",
choices = list("Escolaridade" = "ESCMAEAGR1_SINASC", "Gravidez" = "GRAVIDEZ_SINASC",
"Raça/Cor" = "RACACORMAE_SINASC", "Gestação" = "GESTACAO_SINASC", "Parto" = "PARTO_SINASC",
"Consultas" = "CONSULTAS_SINASC", "Apgar1" = "APGAR1_SINASC",
"Apgar5" = "APGAR5_SINASC", "Anomalia" = "IDANOMAL_SINASC"))),
box(width = 3, selectInput("graphType", label = "Selecione a variável",
                               choices = list( "Densidade" = "density", "Boxplot" = "boxplot", "Histograma" = "histogram")))
),
fluidRow(
tags$style(type = "text/css", "#scatterPlot {height: calc(100vh - 80px) !important;}"),
plotOutput("scatterPlot", height = "100%")
)
),
tabItem(tabName = "subNumVars_2",
fluidRow(
tags$style(type = "text/css", "#bubblePlot {height: calc(100vh - 80px) !important;}"),
plotlyOutput("bubblePlot", height = "100%")
)
),
tabItem(tabName = "subNumVars_3",
fluidRow(
box(width = 6, selectInput("varYbox", label = "Variável para eixo Y",
choices = list("Semanas de Gestação" = "SEMAGESTAC_SINASC", "Nº consultas Pré Natal" = "CONSPRENAT_SINASC",
"Quantidade Gestações" = "QTDGESTANT_SINASC", "Idade da Mãe" = "IDADEMAE_SINASC",
"Filhos Vivos" = "QTDFILVIVO_SINASC"))),
box(width = 6, selectInput("varXbox", label = "Variável para eixo X",
choices = list("Escolaridade" = "ESCMAEAGR1_SINASC", "Gravidez" = "GRAVIDEZ_SINASC",
"Raça/Cor" = "RACACORMAE_SINASC", "Gestação" = "GESTACAO_SINASC", "Parto" = "PARTO_SINASC",
"Consultas" = "CONSULTAS_SINASC", "Apgar1" = "APGAR1_SINASC",
"Apgar5" = "APGAR5_SINASC", "Anomalia" = "IDANOMAL_SINASC")))
),
fluidRow(
tags$style(type = "text/css", "#boxPlot {height: calc(100vh - 80px) !important;}"),
plotlyOutput("boxPlot", height = "100%")
)
),
tabItem(tabName = "subNumVars_4",
fluidRow(
box(width = 6, selectInput("varXhist", label = "Variável para eixo X",
choices = list("Semanas de Gestação" = "SEMAGESTAC_SINASC", "Nº consultas Pré Natal" = "CONSPRENAT_SINASC",
"Quantidade Gestações" = "QTDGESTANT_SINASC", "Idade da Mãe" = "IDADEMAE_SINASC",
"Filhos Vivos" = "QTDFILVIVO_SINASC"))),
box(width = 6, selectInput("varYhist", label = "Variável para eixo Y",
choices = list("Escolaridade" = "ESCMAEAGR1_SINASC", "Gravidez" = "GRAVIDEZ_SINASC",
"Raça/Cor" = "RACACORMAE_SINASC", "Gestação" = "GESTACAO_SINASC", "Parto" = "PARTO_SINASC",
"Consultas" = "CONSULTAS_SINASC", "Apgar1" = "APGAR1_SINASC",
"Apgar5" = "APGAR5_SINASC", "Anomalia" = "IDANOMAL_SINASC")))
),
fluidRow(
tags$style(type = "text/css", "#histPlot {height: calc(100vh - 80px) !important;}"),
plotlyOutput("histPlot", height = "100%")
)
),
tabItem(tabName = "pred",
fluidRow(
box(width = 3, numericInput("idade", label = h4("Idade da Mãe"), value = NA)), #*****
box(width = 3, selectInput("gestacao", label = h4("Semanas de Gestação"),
choices = list("Menos de 22 semanas" = 1, "22 a 27 semanas" = 2, "28 a 31 semanas" = 3, "32 a 36 semanas" = 4,
"37 a 41 semanas" = 5, "42 semanas e mais" = 6, "Ignorado" = 9))), #*****
box(width = 3, selectInput("gravidez", label = h6("Tipo de gravidez"),
choices = list("Única" = 1, "Dupla" = 2, "Tripla ou mais" = 3, "Ignorado" = 9))), #*****
box(width = 3, selectInput("parto", label = h6("Tipo de parto"),
choices = list("Vaginal" = 1, "Cesário" = 2, "Ignorado" = 9))) #*****
),
fluidRow(
            box(width = 3, selectInput("consultas", label = h6("Número de consultas de pré-natal"),
choices = list("Nenhuma consulta" = 1, "de 1 a 3 consultas" = 2, "de 4 a 6 consultas" = 3,
"7 e mais consultas" = 4, "Ignorado." = 9))), #*****
box(width = 3, numericInput("peso", label = h6("Peso ao nascer em gramas."), value = 1)), #*****
box(width = 3, numericInput("apgar1", label = h6("APGAR 1º minuto"), value = 1, max = 10, min = 1)), #*****
box(width = 3, numericInput("apgar5", label = h6("APGAR 5º minuto"), value = 1, max = 10, min = 1)) #*****
),
fluidRow(
box(width = 3, selectInput("anomalia", label = h6("Anomalia identificada"),
choices = list("Sim" = 1, "Não" = 2, "Ignorado" = 9))), #*****
box(width = 3, selectInput("escolaridade", label = h6("Escolaridade da Mãe"),
choices = list("Sem escolaridade" = 0, "Fundamental I Incompleto" = 1, "Fundamental I Completo" = 2,
"Fundamental II Incompleto" = 3, "Fundamental II Completo" = 4, "Ensino Médio Incompleto" = 5,
"Ensino Médio Completo" = 6, "Superior Incompleto" = 7, "Superior Completo" = 8,
"Fundamental I Incompleto ou Inespecífico" = 10, "Fundamental II Incompleto ou Inespecífico" = 11,
"Ensino Médio Incompleto ou Inespecífico" = 12, "Ignorado" = 9))), #*****
box(width = 3, selectInput("racacor", label = h6("Raça/Cor da Mãe"),
choices = list("Branca" = 1, "Preta" = 2, "Amarela" = 3,
"Parda" = 4, "Indígena" = 5, "Ignorado" = 9))),
box(width = 3, numericInput("qtdgestant", label = h6("Quantidade de gestações"), value = NA))
),
fluidRow(
box(width = 3, numericInput("semanagestac", label = h6("Número de semanas de gestação"), value = NA)),
box(width = 3, numericInput("consprenat", label = h6("Número de consultas pré Natal"), value = NA)),
box(width = 3, numericInput("qtdfilvivo", label = h6("Quantidade filhos vivos"), value = NA)),
div(align="center", actionButton("pred", "Predição"))
),
fluidRow(
div(align="center", h4(textOutput("probs")))
)
)
)
)
)
# Define the server logic
server <- function(input, output) {
  # correlation and variable selection
output$heatmapPlot <- renderPlot({
hm <- ggplot(data = melted_cor_mat_upper, aes(Var2, Var1, fill = value))+
geom_tile(color = 'white')+
scale_fill_gradient2(low = 'blue', high = 'red', mid = 'Yellow',
midpoint = 0, limit = c(-1,1), space = 'Lab',
name='Pearson Correlation')+
theme_minimal()+
theme(axis.text.x = element_text(angle = 45, vjust = 1,
size = 12, hjust = 1))+
coord_fixed()
hm
})
output$dolPlot <- renderPlot({
plot(rf_imp, lg_imp, svm_imp)+
ggtitle("Permutation variable Importance", "")
})
output$ivTable <- renderTable({
df_iv
})
output$borutaPlot <- renderPlot({
plot(boruta_output, cex.axis=.7, las=2, xlab="", main="Variable Importance")
})
  # plots for the categorical variables
#Donut plot
subDonut <- reactive({
input$donutSubset
})
output$donutPlot <- renderPlot({
plotData <- summary(data[, c(subDonut())])
plotData <- as.data.frame(plotData)
plotData <- rownames_to_column(plotData, var="var")
names(plotData) <- c("feature", "value")
plotData$fraction <- plotData$value / sum(plotData$value)
plotData$ymax = cumsum(plotData$fraction)
    # for the plot, the reactive above (donutSubset) is called here;
    # it performs the subset and the data are then plotted
plotData$ymin = c(0, head(plotData$ymax, n=-1))
ggplot(plotData, aes(ymax=ymax, ymin=ymin, xmax=4, xmin=3, fill=feature)) +
geom_rect() +
scale_fill_brewer(palette="Paired")+
theme_minimal() +
coord_polar(theta="y") + # Try to remove that to understand how the chart is built initially
xlim(c(2, 4)) # Try to remove that to see how to make a pie chart
})
output$circularBarPlot <- renderPlot({
cbp <- ggplot(circularBar_data, aes(x=as.factor(id), y=perc, fill=group)) +
geom_bar(stat="identity", alpha=0.5) +
ylim(-100,120) +
scale_fill_brewer(palette = "Set2") +
theme_minimal() +
theme(
axis.text = element_blank(),
axis.title = element_blank(),
panel.grid = element_blank(),
      plot.margin = unit(rep(-2,4), "cm") # This removes unnecessary margin around the plot
) +
coord_polar(start = 0)+
geom_text(data=label_data, aes(x=id, y=perc+10, label=feature, hjust=hjust), color="black",
fontface="bold",alpha=0.6, size=5.5, angle= label_data$angle, inherit.aes = FALSE )
cbp
})
output$treeMapPlot <- renderCollapsibleTree({
sub_data <- data[, c(4, 7, 10)]
    # TODO: try to group the values somehow and count the number of records
#https://adeelk93.github.io/collapsibleTree/
tmp <- collapsibleTree(df=sub_data, c('GESTACAO_SINASC', 'CONSULTAS_SINASC', 'IDANOMAL_SINASC'), fill='lightsteelblue', tooltip = TRUE, collapsed = FALSE)
tmp
})
  # numeric variables
  # scatter plot
output$scatterPlot <- renderPlot({
#hist(final_train$IDADEMAE_SINASC)
sub_box <- data[data$CONSPRENAT_SINASC != 99, ]
p <- ggplot(sub_box, aes_string(x=input$varX, y=input$varY, color=input$varSelect)) + #, size="cyl"
geom_point(size=3) +
scale_fill_brewer(palette = "Set2") +
theme_minimal()
p1 <- ggMarginal(p, type = input$graphType)
p1
})
output$bubblePlot <- renderPlotly({
sub_bubble <- data[data$CONSPRENAT_SINASC != 99, ]
sub_bubble$OBITO <- as.factor(sub_bubble$OBITO)
sub_bubble %>%
arrange(desc(QTDGESTANT_SINASC))
p <- ggplot(data=sub_bubble, aes(x=SEMAGESTAC_SINASC, y=CONSPRENAT_SINASC, size=QTDGESTANT_SINASC, fill=OBITO)) +
geom_point(alpha=0.5, shape=21, color="black") +
scale_size(range = c(.1, 24), name='Quantidade Gestações') +
scale_fill_brewer(palette="Set2")+
theme_minimal() +
theme(legend.position="bottom") +
ylab("Consultas pré Natal") +
xlab("Semanas de Gestação") +
theme(legend.position = "none")
p
})
output$boxPlot <- renderPlotly({
box_ind_data <- data[data$CONSPRENAT_SINASC != 99, ]
# Plot
box_ind_data %>%
ggplot( aes_string(x=input$varXbox, y=input$varYbox, fill=input$varXbox)) +
geom_boxplot() +
scale_fill_brewer(palette="Paired") +
geom_jitter(color="black", size=0.4, alpha=0.9) +
theme_minimal() +
theme(
legend.position="none",
plot.title = element_text(size=11)
) +
theme(axis.text.x = element_text(angle = 45, vjust = 1,
size = 12, hjust = 1))+
ggtitle("BoxPlot com individuos") +
xlab("")
})
output$histPlot <- renderPlotly({
hist_data <- data
p <- hist_data %>%
ggplot( aes_string(x=input$varXhist, fill=input$varYhist)) +
geom_histogram( color="#e9ecef", alpha=0.6, position = 'identity') +
scale_fill_brewer(palette="Paired") +
theme_minimal() +
labs(fill="")
p
})
  # prediction
observeEvent( input$pred, {
gestacao <- as.factor(input$gestacao)
gravidez <- as.factor(input$gravidez)
parto <- as.factor(input$parto)
consultas <- as.factor(input$consultas)
apgar1 <- as.factor(input$apgar1)
apgar5 <- as.factor(input$apgar5)
anomalia <- as.factor(input$anomalia)
escolaridade <- as.factor(input$escolaridade)
racacor <- as.factor(input$racacor)
qtdgestant <- as.numeric(input$qtdgestant)
semanagestac <- as.numeric(input$semanagestac)
idade <- as.numeric(input$idade)
peso <- as.numeric(input$peso)
consprenat <- as.numeric(input$consprenat)
qtdfilvivo <- as.numeric(input$qtdfilvivo)
teste_obito <- cbind.data.frame(gestacao, gravidez, parto, consultas, apgar1, apgar5, anomalia, escolaridade, racacor,
qtdgestant, semanagestac, idade, peso, consprenat, qtdfilvivo)
scaled <- cbind(data.frame(qtdgestant, semanagestac, idade, peso, consprenat, qtdfilvivo))
scaled <- as.matrix(scaled)
scaled <- scale(scaled, center = T, scale = T)
teste_obito <- cbind(teste_obito, scaled)
names(teste_obito) <- c("GESTACAO_SINASC", "GRAVIDEZ_SINASC", "PARTO_SINASC", "CONSULTAS_SINASC", "APGAR1_SINASC", "APGAR5_SINASC",
"IDANOMAL_SINASC", "ESCMAEAGR1_SINASC", "RACACORMAE_SINASC",
"QTDGESTANT_SINASC", "SEMAGESTAC_SINASC", "IDADEMAE_SINASC", "PESO_SINASC", "CONSPRENAT_SINASC", "QTDFILVIVO_SINASC")
    # as.numeric() on a factor returns the level index (always 1 for a
    # single-level factor), so convert via as.character() first
    teste_obito$APGAR1_SINASC <- as.numeric(as.character(teste_obito$APGAR1_SINASC))
    teste_obito$APGAR5_SINASC <- as.numeric(as.character(teste_obito$APGAR5_SINASC))
for(i in 1:nrow(teste_obito)){
if(teste_obito$APGAR1_SINASC[i] <= 2){teste_obito$APGAR1_SINASC[i] <- "asfixia grave"}
else if(teste_obito$APGAR1_SINASC[i] >= 3 & teste_obito$APGAR1_SINASC[i] <= 4){teste_obito$APGAR1_SINASC[i] <- "asfixia moderada"}
else if(teste_obito$APGAR1_SINASC[i] >= 5 & teste_obito$APGAR1_SINASC[i] <= 7){teste_obito$APGAR1_SINASC[i] <- "asfixia leve"}
else {teste_obito$APGAR1_SINASC[i] <- "sem asfixia"}
}
teste_obito$APGAR1_SINASC <- as.factor(teste_obito$APGAR1_SINASC)
for(i in 1:nrow(teste_obito)){
if(teste_obito$APGAR5_SINASC[i] <= 2){teste_obito$APGAR5_SINASC[i] <- "asfixia grave"}
else if(teste_obito$APGAR5_SINASC[i] >= 3 & teste_obito$APGAR5_SINASC[i] <= 4){teste_obito$APGAR5_SINASC[i] <- "asfixia moderada"}
else if(teste_obito$APGAR5_SINASC[i] >= 5 & teste_obito$APGAR5_SINASC[i] <= 7){teste_obito$APGAR5_SINASC[i] <- "asfixia leve"}
else {teste_obito$APGAR5_SINASC[i] <- "sem asfixia"}
}
teste_obito$APGAR5_SINASC <- as.factor(teste_obito$APGAR5_SINASC)
teste_obito$ESCMAEAGR1_SINASC <- as.factor(teste_obito$ESCMAEAGR1_SINASC)
teste_obito$GESTACAO_SINASC <- as.factor(teste_obito$GESTACAO_SINASC)
teste_obito$GRAVIDEZ_SINASC <- as.factor(teste_obito$GRAVIDEZ_SINASC)
teste_obito$PARTO_SINASC <- as.factor(teste_obito$PARTO_SINASC)
teste_obito$CONSULTAS_SINASC <- as.factor(teste_obito$CONSULTAS_SINASC)
teste_obito$IDANOMAL_SINASC <- as.factor(teste_obito$IDANOMAL_SINASC)
teste_obito$RACACORMAE_SINASC <- as.factor(teste_obito$RACACORMAE_SINASC)
    # TODO: FIX THE PROBLEM WITH THE FACTOR LEVELS AND THE SCALING OF THE NUMERIC DATA
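    # A possible direction for the TODO above (sketch only, not the app's
    # actual fix): scale() on a single new observation is not meaningful,
    # because the standard deviation of one value is NA. Assuming the
    # training set's centering/scaling vectors were saved beforehand
    # (train_center and train_scale below are hypothetical names), the new
    # row could be standardized against them instead:
    # scaled <- sweep(scaled, 2, train_center, "-")
    # scaled <- sweep(scaled, 2, train_scale, "/")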
output$probs <- renderText({
pred <- function(model, teste){
model <- model
test <- teste
      # model predictions on the data, as class probabilities
      predictions <- predict(model, test, type = "prob")
      # bind the prediction columns to the test data for comparison with the actual outcome
      resultados <- cbind(test, predictions)
      # create a column with the probability (in %) of death (OBITO)
      resultados["Prob"] <- resultados$sim * 100
prob <- resultados$Prob
resultados$Prob <- as.numeric(resultados$Prob)
resultados["Probs"] <- NA
for(i in 1:nrow(resultados)){
if(resultados$Prob[i] <= 100 & resultados$Prob[i] >= 50){resultados$Probs[i] <- "alta"
}
else if(resultados$Prob[i] < 50 & resultados$Prob[i] >= 30){resultados$Probs[i] <- "média"
}
else{resultados$Probs[i] <- "baixa"
}
}
prob <- resultados$Probs
probNum <- resultados$Prob
#resultado <- probNum
resultado <- paste("Criança com ", prob, " probabilidade de óbito.") #, " Probabilidade num: ", probNum
return(resultado)
}
probs <- pred(bestModel, teste_obito)
})
})
observeEvent( input$corrMax, {
output$corr <- renderPrint({
corr <- conMatrix
})
})
}
# Run the application
shinyApp(ui = ui, server = server)
|
8ca92cc6a899775650f2a9a00f2ee4e80fcf57bf | 2dd16401534e83826ef2e1d352d2d51abd78a2a2 | /run_analysis.R | 75618835f136d70ec91d3234c9b81d5f1929827d | [] | no_license | danielmbaquero/Peer-graded-Assignment-Getting-and-Cleaning-Data-Course-Project | f4d87a9f8d11acbb4a134234bbbd585c10bc560c | 46eeb010de799379114a1f04d02e341e862b354a | refs/heads/master | 2022-05-31T22:35:24.378909 | 2020-04-28T00:44:22 | 2020-04-28T00:44:22 | 259,487,028 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,011 | r | run_analysis.R |
## Load the neccesary libraries
library(dplyr)
## Set the paths for all necesary files
## Read the data sets and load them to data frames
trainPath <- "./UCI HAR Dataset/train/X_train.txt"
trainActivityPath <- "./UCI HAR Dataset/train/y_train.txt"
testPath <- "./UCI HAR Dataset/test/X_test.txt"
testActivityPath <- "./UCI HAR Dataset/test/y_test.txt"
collabelsPath <- "./UCI HAR Dataset/features.txt"
activityDescPath <- "./UCI HAR Dataset/activity_labels.txt"
train <- read.table(trainPath)
trainActivity <- read.table(trainActivityPath)
test <- read.table(testPath)
testActivity <- read.table(testActivityPath)
activityDesc <- read.table(activityDescPath)
collabels <- read.table(collabelsPath)
## mask "meanFreq" so the later grepl("mean", ...) does not pick up those columns
collabels <- gsub("meanFreq", "NO", collabels$V2)
## Merge both the train and test data
## Put labels to the data frame
allData <- rbind(train, test)
allActivity <- rbind(trainActivity, testActivity)
allData <- cbind(allActivity, allData)
names(allData) <- c("activity", collabels)
## Extract only the means and stds for each measurement
col_means <- grepl("mean",names(allData))
col_std <- grepl("std", names(allData))
activity_name <- names(allData) == "activity"
my_data <- allData[,col_means|col_std|activity_name]
## Give descriptive names to the activities
my_data <- mutate(my_data, activity = gsub(activityDesc[1,1], activityDesc[1,2], my_data$activity))
my_data <- mutate(my_data, activity = gsub(activityDesc[2,1], activityDesc[2,2], my_data$activity))
my_data <- mutate(my_data, activity = gsub(activityDesc[3,1], activityDesc[3,2], my_data$activity))
my_data <- mutate(my_data, activity = gsub(activityDesc[4,1], activityDesc[4,2], my_data$activity))
my_data <- mutate(my_data, activity = gsub(activityDesc[5,1], activityDesc[5,2], my_data$activity))
my_data <- mutate(my_data, activity = gsub(activityDesc[6,1], activityDesc[6,2], my_data$activity))
## Use descriptive variable names
{my_data_names <- names(my_data) %>% gsub(
pattern = "^t", replacement = "time ") %>% gsub(
pattern = "^f", replacement = "frequency ") %>% gsub(
pattern = "Acc", replacement = " accelerometer ") %>% gsub(
pattern = "Gyro", replacement = " gyroscope ") %>% tolower()}
names(my_data) <- my_data_names
my_data <- tbl_df(my_data)
### Create and independent data set
## Set necessary paths, read and load the files
subject_trainPath <- "./UCI HAR Dataset/train/subject_train.txt"
subject_testPath <- "./UCI HAR Dataset/test/subject_test.txt"
subject_train <- read.table(subject_trainPath)
subject_test <- read.table(subject_testPath)
## Merge subjects from test and train
subject <- rbind(subject_train, subject_test)
## Create new tidy dataset
new_tidy <- mutate(my_data, subjectID = subject[,1])
new_tidy$activity <- as.factor(new_tidy$activity)
new_tidy$subjectID <- as.factor(new_tidy$subjectID)
new_tidy <- group_by(new_tidy, activity, subjectID)
new_tidy <- summarise_all(new_tidy, mean)
|
c03d033482a811d8d0e4eba64f36b20f44e6e484 | 67de61805dd839979d8226e17d1316c821f9b1b4 | /inst/models/passing/DogChain.R | f1fe5bb9aee1310ed6ed500fcde2815a05233a28 | [
"Apache-2.0"
] | permissive | falkcarl/OpenMx | f22ac3e387f6e024eae77b73341e222d532d0794 | ee2940012403fd94258de3ec8bfc8718d3312c20 | refs/heads/master | 2021-01-14T13:39:31.630260 | 2016-01-17T03:08:46 | 2016-01-17T03:08:46 | 49,652,924 | 1 | 0 | null | 2016-01-14T14:41:06 | 2016-01-14T14:41:05 | null | UTF-8 | R | false | false | 1,088 | r | DogChain.R | library(OpenMx)
m1 <- mxModel("dogChain",
mxMatrix(name="link", nrow=4, ncol=1, free=TRUE, lbound=-1, values=.1),
mxMatrix(name="dog", nrow=1, ncol=1, free=TRUE, values=-1), # violate constraint
mxFitFunctionAlgebra("dog"),
mxConstraint(dog > link[1,1] + link[2,1] + link[3,1] + link[4,1]))
m1 <- mxRun(m1)
omxCheckCloseEnough(m1$dog$values, -4, 1e-4)
omxCheckCloseEnough(m1$link$values[,1], rep(-1, 4), 1e-4)
omxCheckCloseEnough(m1$output$evaluations, 0, 260)
m2 <- mxModel("bentDogChain",
mxMatrix(name="link", nrow=4, ncol=1, free=TRUE, lbound=-1, values=.1),
mxMatrix(name="dog", nrow=1, ncol=1, free=TRUE, values=-1), # violate constraint
mxFitFunctionAlgebra("dog"),
mxConstraint(dog > link[1,1] + link[2,1] + link[3,1] + link[4,1]),
mxConstraint(abs(link[1,1]) == link[2,1] * link[2,1]))
m2 <- mxRun(m2)
omxCheckCloseEnough(m2$dog$values, -4, 1e-4)
omxCheckCloseEnough(m2$link$values[,1], rep(-1, 4), 1e-4)
omxCheckCloseEnough(m2$output$evaluations, 0, 100)
|
6f9b4a8396b916766b0ef05d51d507cfbe17a28d | 18e19488af40a720778965f6445ce21e6da49201 | /cachematrix.R | d867e764f1083a33f785435aab9c0674c47f26c3 | [] | no_license | hschreib/ProgrammingAssignment2 | 26af668070808333a64c12aff4aabe2e5ad7521f | d5329c8b668ea3fc5f04cf0ea985166568836e36 | refs/heads/master | 2021-01-15T15:32:09.768521 | 2015-10-22T09:02:15 | 2015-10-22T09:02:15 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,860 | r | cachematrix.R | ## Task Description (from the website)
# Matrix inversion is usually a costly computation and there may be some benefit to caching the inverse
# of a matrix rather than compute it repeatedly (there are also alternatives to matrix inversion that
# we will not discuss here).
#
# Your assignment is to write a pair of functions that cache the inverse of a matrix.
#
# cacheSolve: This function computes the inverse of the special "matrix" returned by makeCacheMatrix above. If the inverse has already been calculated (and the matrix has not changed), then the cachesolve should retrieve the inverse from the cache.
#
# (By taking the example "Caching the Mean of a Vector" as a blueprint)
# makeCacheMatrix: This function creates a special "matrix" object that can cache its inverse.
# Computing the inverse of a square matrix can be done with the solve function in R. For example,
# if X is a square invertible matrix, then solve(X) returns its inverse.
#
# For this assignment, assume that the matrix supplied is always invertible.
# Create the "Special Matrix Object" with getter and setter functions (methods)
makeCacheMatrix <- function(x = matrix()) {
# Init the variable for the inverse matrix
# x comes into implicitely from the function call
inv <- NULL
set <- function(y) {
    # Replace the matrix x in the enclosing environment with the new matrix y
x <<- y
# Re-initialise the inverse matrix as it has changed
inv <<- NULL
}
# Getter returns the original base matrix
get <- function() x
# Setter for the inverse matrix variable inv.
# It simply assigns the value given as the parameter in the call
# to the local variable inv. The solve (x) function is not part of this "object" but
  # externalized by the cacheSolve function (as it would be in a strict object-oriented design).
setInverse <- function(inv_matrix) inv <<- inv_matrix
# Getter for the m variable
getInverse <- function() inv
# Return values, functions (methods) can be accessed e.g. by object_variable$FUN: e.g. my_matrix$getInverse().
list(set = set, get = get,
setInverse = setInverse,
getInverse = getInverse)
}
# The following function calculates the inverse matrix of the special "base matrix" created with the makeCacheMatrix (x) function.
# However, it first checks to see if the inverse has already been calculated. If so, it gets the inverse matrix from
# the cache and skips the computation. Otherwise, it calculates the inverse of the data matrix by applying solve (x) and sets the value
# of the inverse matrix in the cache via the setInverse (x) function.
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
my_m <- x$getInverse()
# If the inverse is already in the chache, get it
if(!is.null(my_m)) {
message("getting cached data")
return(my_m)
}
# else ... do the calculation and use the setter to store it in the "Special Matrix with Inverse Object"
data <- x$get()
my_m <- solve(data, ...)
x$setInverse(my_m)
my_m
}
# Test results
# > source("cachematrix.R")
# > my_tmo <- makeCacheMatrix(matrix(rnorm(25), 5, 5))
# > my_tmo$get()
# [,1] [,2] [,3] [,4] [,5]
# [1,] -0.51458643 -1.14373513 0.2210488 -1.5512177 -1.2710580
# [2,] 1.01801809 0.38560855 1.5287741 -2.0906002 0.5089594
# [3,] -1.24774631 0.71107795 1.7351338 0.2343723 1.3220737
# [4,] -0.01905422 0.02046402 1.8625746 -2.0110660 -0.9485706
# [5,] 0.07029319 -0.37500412 0.4216691 -1.5419871 -1.1283103
#
# ## my_tmo$getInverse() = NULL test is missing ...
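# ## Sketch of that missing check (a fresh object caches NULL until
# ## cacheSolve is called for the first time, since inv starts as NULL):
# > my_tmo$getInverse()
# NULL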
#
# > cacheSolve(my_tmo)
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0.2262578 0.2266706 -0.8374336 1.0176135 -1.9893888
# [2,] -1.6840413 -0.2420188 0.7110352 -1.0587766 3.5111811
# [3,] 0.5158622 -0.1030412 -0.4593543 1.5464312 -2.4659297
# [4,] 0.2726881 -0.3558908 -0.5436845 1.2552495 -2.1600619
# [5,] 0.3939239 0.5424227 0.2828583 -0.7222476 -0.1467386
# > my_tmo$getInverse()
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0.2262578 0.2266706 -0.8374336 1.0176135 -1.9893888
# [2,] -1.6840413 -0.2420188 0.7110352 -1.0587766 3.5111811
# [3,] 0.5158622 -0.1030412 -0.4593543 1.5464312 -2.4659297
# [4,] 0.2726881 -0.3558908 -0.5436845 1.2552495 -2.1600619
# [5,] 0.3939239 0.5424227 0.2828583 -0.7222476 -0.1467386
#
# > solve(my_tmo$get())
# [,1] [,2] [,3] [,4] [,5]
# [1,] 0.2262578 0.2266706 -0.8374336 1.0176135 -1.9893888
# [2,] -1.6840413 -0.2420188 0.7110352 -1.0587766 3.5111811
# [3,] 0.5158622 -0.1030412 -0.4593543 1.5464312 -2.4659297
# [4,] 0.2726881 -0.3558908 -0.5436845 1.2552495 -2.1600619
# [5,] 0.3939239 0.5424227 0.2828583 -0.7222476 -0.1467386
#
|
78c5aa4be1675d7331a4a8263e1ec741fcc99d66 | a9c942259f52d03c632369a3ceba3f90a876b0d7 | /GEOG306/LabProblems01.R | e6745131eac2f5c1f682070a25b1acbfbfd29998 | [] | no_license | aKeller25/Coursework | a374aa183d2096f3ae104bf2ca169c27d2d1237d | 1766f02c35cf84d48e33f6afd69aaecf251e6223 | refs/heads/main | 2023-01-24T23:43:00.225065 | 2020-12-08T09:59:54 | 2020-12-08T09:59:54 | 318,409,064 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 584 | r | LabProblems01.R | ########################################
# Author: Alex Keller
# Date created: 11:15 AM 8/29/2017
# Date last edited: 12:00 PM 8/29/2017
# Semester: Fall2017
# Class: Geog306
# Assignment name: LabProblems01
########################################
# Questions 1-3
data("faithful")
help(faithful)
View(faithful)
# Question 4
setwd("C:/Users/akeller5/Desktop/GEOG306 Lab Introduction to R/GEOG306 Lab Introduction to R")
dc = read.csv("DCprecip.csv" , header = T)
write.csv(dc, "C:/Users/akeller5/Desktop/GEOG306 Lab Introduction to R/GEOG306 Lab Introduction to R/NewDCprecip.csv")
|
e8bedc83b35ccac0bfc481e6361d6145dba8069d | 9228ec9bac32fd19ce2115a634a447b931d6e072 | /tests/initialize.R | 43357c7495aafd7f4024b89af982c7bb512e9904 | [] | no_license | TheGilt/UITesting | dc080a1bd00ff9b6328987512f706171dd5dd6a4 | f6312a39ed13c88e09a5505d2eff5e71e6136bcc | refs/heads/master | 2021-06-27T21:40:01.338777 | 2017-09-08T18:40:47 | 2017-09-08T18:40:47 | 101,445,640 | 0 | 0 | null | 2017-08-25T22:10:49 | 2017-08-25T22:10:48 | null | UTF-8 | R | false | false | 4,143 | r | initialize.R | library(methods)
library(RSelenium)
library(testthat)
library(XML)
library(digest)
ISR_login <- Sys.getenv("ISR_login")
ISR_pwd <- Sys.getenv("ISR_pwd")
SAUCE_USERNAME <- Sys.getenv("SAUCE_USERNAME")
SAUCE_ACCESS_KEY <- Sys.getenv("SAUCE_ACCESS_KEY")
machine <- ifelse(Sys.getenv("TRAVIS") == "true", "TRAVIS", "LOCAL")
server <- ifelse(Sys.getenv("TRAVIS_BRANCH") == "master", "www", "test")
build <- Sys.getenv("TRAVIS_BUILD_NUMBER")
job <- Sys.getenv("TRAVIS_JOB_NUMBER")
jobURL <- paste0("https://travis-ci.org/RGLab/UITesting/jobs/", Sys.getenv("TRAVIS_JOB_ID"))
name <- ifelse(machine == "TRAVIS",
paste0("UI testing `", server, "` by TRAVIS #", job, " ", jobURL),
paste0("UI testing `", server, "` by ", Sys.info()["nodename"]))
url <- ifelse(machine == "TRAVIS", "localhost", "ondemand.saucelabs.com")
ip <- paste0(SAUCE_USERNAME, ":", SAUCE_ACCESS_KEY, "@", url)
port <- ifelse(machine == "TRAVIS", 4445, 80)
browserName <- ifelse(Sys.getenv("SAUCE_BROWSER") == "",
"chrome",
Sys.getenv("SAUCE_BROWSER"))
extraCapabilities <- list(name = name,
build = build,
username = SAUCE_USERNAME,
accessKey = SAUCE_ACCESS_KEY,
tags = list(machine, server))
remDr <- remoteDriver$new(remoteServerAddr = ip,
port = port,
browserName = browserName,
version = "latest",
platform = "Windows 10",
extraCapabilities = extraCapabilities)
remDr$open(silent = TRUE)
ptm <- proc.time()
cat("\nhttps://saucelabs.com/beta/tests/", remDr@.xData$sessionid, "\n", sep = "")
if (machine == "TRAVIS") write(paste0("export SAUCE_JOB=", remDr@.xData$sessionid), "SAUCE")
remDr$maxWindowSize()
remDr$setImplicitWaitTimeout(milliseconds = 20000)
siteURL <- paste0("https://", server, ".immunespace.org")
# helper functions ----
context_of <- function(file, what, url, level = NULL) {
if (exists("ptm")) {
elapsed <- proc.time() - ptm
timeStamp <- paste0("At ", floor(elapsed[3] / 60), " minutes ",
round(elapsed[3] %% 60), " seconds")
} else {
timeStamp <- ""
}
level <- ifelse(is.null(level), "", paste0(" (", level, " level) "))
msg <- paste0("\n", file, ": testing '", what, "' page", level,
"\n", url,
"\n", timeStamp,
"\n")
context(msg)
}
# test functions ----
test_connection <- function(remDr, pageURL, expectedTitle) {
test_that("can connect to the page", {
remDr$navigate(pageURL)
if (remDr$getTitle()[[1]] == "Sign In") {
id <- remDr$findElement(using = "id", value = "email")
id$sendKeysToElement(list(ISR_login))
pw <- remDr$findElement(using = "id", value = "password")
pw$sendKeysToElement(list(ISR_pwd))
loginButton <- remDr$findElement(using = "class", value = "labkey-button")
loginButton$clickElement()
while(remDr$getTitle()[[1]] == "Sign In") Sys.sleep(1)
}
pageTitle <- remDr$getTitle()[[1]]
expect_equal(pageTitle, expectedTitle)
})
}
test_module <- function(module) {
test_that(paste(module, "module is present"), {
module <- remDr$findElements(using = "css selector", value = "div.ISCore")
expect_equal(length(module), 1)
tab_panel <- remDr$findElements(using = "class", value = "x-tab-panel")
expect_equal(length(tab_panel), 1)
})
}
test_tabs <- function(x) {
test_that("tabs are present", {
tab_header <- remDr$findElements(using = "class", value = "x-tab-panel-header")
expect_equal(length(tab_header), 1)
tabs <- tab_header[[1]]$findChildElements(using = "css selector", value = "li[id^=ext-comp]")
expect_equal(length(tabs), 4)
expect_equal(tabs[[1]]$getElementText()[[1]], x[1])
expect_equal(tabs[[2]]$getElementText()[[1]], x[2])
expect_equal(tabs[[3]]$getElementText()[[1]], x[3])
expect_equal(tabs[[4]]$getElementText()[[1]], x[4])
})
}
|
15d1213559fae093161d2c0940dcfc5857a737d2 | 4834724ced99f854279c2745790f3eba11110346 | /man/MDSMap_from_list.Rd | 31dcf18e658f47f61dedb4e3a110f8b878f4a980 | [] | no_license | mdavy86/polymapR | 1c6ac5016f65447ac40d9001388e1e9b4494cc55 | 6a308769f3ad97fc7cb54fb50e2b898c6921ddf9 | refs/heads/master | 2021-01-25T12:37:09.046608 | 2018-02-13T17:07:35 | 2018-02-13T17:07:35 | 123,487,219 | 0 | 1 | null | null | null | null | UTF-8 | R | false | true | 1,613 | rd | MDSMap_from_list.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/exported_functions.R
\name{MDSMap_from_list}
\alias{MDSMap_from_list}
\title{Wrapper function for MDSMap to generate linkage maps from list of pairwise linkage estimates}
\usage{
MDSMap_from_list(linkage_list, write_to_file = FALSE,
mapdir = "mapping_files_MDSMap", plot_prefix = "", log = NULL, ...)
}
\arguments{
\item{linkage_list}{A named \code{list} with r and LOD of markers within linkage groups.}
\item{write_to_file}{Should output be written to a file? By default \code{FALSE}, if \code{TRUE} then output,
including plots from \code{MDSMap} are saved in the same directory as the one used for input files. These
plots are currently saved as pnf images. If a different plot format is required (e.g. for publications),
then run the \code{MDSMap} function \code{\link[MDSMap]{estimate.map}} (or similar) directly and save the output
with a different plotting function as wrapper around the map function call.}
\item{mapdir}{Directory to which map input files are initially written. Also used for output if \code{write_to_file=TRUE}}
\item{plot_prefix}{prefix for the filenames of output plots.}
\item{log}{Character string specifying the log filename to which standard output should be written.
If NULL log is send to stdout.}
\item{\dots}{Arguments passed to \code{\link[MDSMap]{estimate.map}}.}
}
\description{
Create multidimensional scaling maps from a list of linkages
}
\examples{
\dontrun{
data("all_linkages_list_P1")
maplist_P1 <- MDSMap_from_list(all_linkages_list_P1[1])
}
}
|
09b961d4c74b236eeef0c6408eb090b6ea77b9b8 | 8284b1b45303414dc77a233ea644488b91d9f649 | /R/dbj.R | 9a89c6575422ed825eebe642f117f0513949345d | [] | no_license | hoesler/dbj | b243eaa14bd18e493ba06e154f02034296969c47 | c5a4c81624f5212a3e188bd6102ef61d6057af0b | refs/heads/master | 2020-04-06T06:59:06.471743 | 2016-07-14T10:16:04 | 2016-07-14T10:16:04 | 40,183,942 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 707 | r | dbj.R | #' A DBI implementation using JDBC over rJava.
#'
#' Start with \code{\link{driver}} to create a new database driver.
#'
#' @docType package
#' @import DBI rJava methods assertthat
#' @name dbj
NULL
.onLoad <- function(libname, pkgname) {
.jpackage(pkgname, lib.loc = libname)
  # Workaround for devtools::test():
  # .jpackage will not call the overwritten system.file of the devtools environment,
  # which takes care of the different folder structure.
if (!any(grepl("dbj", .jclassPath(), TRUE))) {
java_folder <- system.file("java", package = pkgname, lib.loc = libname)
jars <- grep(".*\\.jar", list.files(java_folder, full.names = TRUE), TRUE, value = TRUE)
.jaddClassPath(jars)
}
}
# File: L4.2_combining_predictors.R (repo: Priscilla-/practical_machine_learning_course)
# Week 4 Practical Machine Learning
# Lecture 2: Combining predictors
# load packages
library(ISLR)
library(caret)
library(ggplot2)
# get data, subset out the outcome
data(Wage)
Wage <- subset(Wage, select=-c(logwage))
# do the validation, train, test split
inBuild <- createDataPartition(y=Wage$wage, p=0.70, list=FALSE)
buildData <- Wage[inBuild,]
validation <- Wage[-inBuild,]
inTrain <- createDataPartition(y=buildData$wage, p=0.70, list=FALSE)
training <- buildData[inTrain,]
testing <- buildData[-inTrain,]
# output the dimensions of the sets
dim(training); dim(testing); dim(validation)
# build two different models
mod1 <- train(wage ~., method="glm", data=training)
mod2 <- train(wage ~., method="rf", data=training, trControl=trainControl(method="cv", number=3))
# predict on the testing set
pred1 <- predict(mod1, testing)
pred2 <- predict(mod2, testing)
# plot predictions against each other
qplot(pred1, pred2, colour=wage, data=testing)
# fit a model that combines predictors
predDF <- data.frame(pred1, pred2, wage=testing$wage)
combModFit <- train(wage ~., method="gam", data=predDF)
combPred <- predict(combModFit, predDF)
# compare testing errors on the 2 models and the combined model
sqrt(sum((pred1-testing$wage)^2))
sqrt(sum((pred2-testing$wage)^2))
sqrt(sum((combPred-testing$wage)^2))
# predict on validation data set
pred1V <- predict(mod1, validation)
pred2V <- predict(mod2, validation)
combPredV <- predict(combModFit, predVDF)
# evaluate predictions on validation set
sqrt(sum((pred1-validation$wage)^2))
sqrt(sum((pred2-validation$wage)^2))
sqrt(sum((combPred-validation$wage)^2))
|
abe39b12ebf734cd2a95418497249055d6dd5555 | c5f44f6f107d5b83f0de5ac52d0d1972366de327 | /man/formula.xbal.Rd | 54767079257290e387484ee7d039d657907eb345 | [] | no_license | cran/RItools | 0719f0999aa2353cb06270d2b1822d124cc58623 | 97de113c050c20665c4f60290d3f4538dd5c392e | refs/heads/master | 2023-03-15T20:01:04.049134 | 2023-03-10T07:20:07 | 2023-03-10T07:20:07 | 17,692,916 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 459 | rd | formula.xbal.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{formula.xbal}
\alias{formula.xbal}
\alias{formula.balancetest}
\title{Returns \code{formula} attribute of an \code{xbal} object.}
\usage{
\method{formula}{xbal}(x, ...)
}
\arguments{
\item{x}{An \code{xbal} object.}
\item{...}{Ignored.}
}
\value{
The formula corresponding to \code{xbal}.
}
\description{
Returns \code{formula} attribute of an \code{xbal} object.
}
# File: application_code/mfpca_apply_fcns.R (repo: TrevorKMDay/mfpca-analyses)
#--------------------------------------------------------------------
# functions for MFPCA applications to microbiome multi-omics data
#--------------------------------------------------------------------
#---------------------------------------------------------------------
# transform data for MFPCA
# df_var: dataframe variable for different datasets
# shared_id_unique: same unique id identifier in two datasets
# shared_time_var: same time variable in two datasets
# y_var: response variable in each dataset (order matters!)
# covariates: interested covariates for FPC score analysis
# keep_all_obs: whether keep all obs from each dataset
# min_visit: minimum visit required for each subject
# output dataset: time scaled in [0,1]; response standardized
#----------------------------------------------------------------------
# update function to include all observations (within minimum requirement of at least one obs/block for each subject)
# keep_all_obs: if FALSE, meaning only observations (subjects + time) with measurements at all blocks are retained
data_mfpca_transformed <- function(df_var, shared_id_unique, shared_time_var, y_var,
covariates, keep_all_obs = TRUE, min_visit=0, nseed=31){
set.seed(nseed)
dat_list = df_var
P = length(df_var)
for (i in 1:P){
tmp = dat_list[[i]]
dat_list[[i]]$ID_merge = paste(tmp[, shared_id_unique], tmp[, shared_time_var])
}
# merge multiple data frame; keep all obs
dat = Reduce(function(...) merge(...,
by=c('ID_merge', shared_id_unique, shared_time_var, covariates),
all = keep_all_obs), dat_list)
dat$ID = as.numeric(as.factor(dat[, shared_id_unique]))
P_id.old=unique(dat$ID); N=length(P_id.old)
dat$P_id.old=dat$ID
dat$time=dat[, shared_time_var]
dat=dat[order(dat$ID,dat$time),]
# create visit and n_visits variable
dat$visit = 1
dat$n_visits = 1
for(i in 1:N){
dat_i=dat[dat$P_id.old==P_id.old[i],]
dat_i$ID=i
dat_i$visit = 1:dim(dat_i)[1]
dat_i$n_visits = dim(dat_i)[1]
dat[dat$P_id.old==P_id.old[i],]=dat_i
rm(dat_i)
}
# delete subjects with fewer than k visits
k = min_visit
dat = dat[dat$n_visits >= k, ]
P_id.old = unique(dat$ID); N = length(P_id.old)
dat$P_id.old=dat$ID;
dat$ID=0
for(i in 1:N){
dat_i=dat[dat$P_id.old==P_id.old[i],]
dat_i$ID=i
dat[dat$P_id.old==P_id.old[i],]=dat_i
rm(dat_i)
}
# re-scale time to be in [0,1]
dat$time=(dat$time-min(dat$time))/(max(dat$time-min(dat$time)))
N=max(dat$ID)
# standardized response variables
dat$y = scale(dat[, y_var])
# formatting data
Y_sparse=list()
time_sparse=list()
for(i in 1:N){
Y_sparse[[i]]=list()
time_sparse[[i]]=list()
for(p in 1:P){
Y_sparse[[i]][[p]]=dat$y[dat$ID==i & !is.na(dat$y[,p]),p]
time_sparse[[i]][[p]]=dat$time[dat$ID==i & !is.na(dat$y[,p])]
}
}
# remove subjects with no observation across all time for each block
idx_subject_remove = NULL
for (i in 1:length(Y_sparse)){
if (sum(sapply(Y_sparse[[i]], length) == 0) >= 1){
idx_subject_remove = c(idx_subject_remove, i)
}
}
  if (length(idx_subject_remove) > 0){
    Y_sparse = Y_sparse[-idx_subject_remove]
    time_sparse = time_sparse[-idx_subject_remove]
    # drop the same subjects from the merged data frame
    dat = dat[!(dat[, shared_id_unique] %in% unique(dat[, shared_id_unique])[idx_subject_remove]), ]
  }
N = length(Y_sparse)
# record original scale (on filtered data)
time_origin = dat[, shared_time_var]
mu_y_origin = colMeans(dat[, y_var], na.rm=T)
sigma_y_origin = apply(dat[, y_var], 2, sd, na.rm=T)
return(list('Y_sparse' = Y_sparse, 'time_sparse' = time_sparse, 'N' = N, 'P' = P,
'data' = dat, 'time_origin' = time_origin, 'mu_y_origin'= mu_y_origin,
'sigma_y_origin'= sigma_y_origin, 'shared_id_unique' = shared_id_unique,
'covariates'= covariates, 'idx_subject_remove' = idx_subject_remove))
}
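The nested-list layout returned above can be illustrated with a toy example (hypothetical values, not from any real study): `Y_sparse[[i]][[p]]` holds subject i's standardized responses for block p, with matching visit times in `time_sparse`.

```r
# Toy illustration of the Y_sparse / time_sparse structure
Y_sparse <- list(
  list(c(-0.3, 0.1, 0.8), c(0.5, -0.2)),      # subject 1: 3 visits in block 1, 2 in block 2
  list(c(0.0, -1.1),      c(1.2, 0.4, -0.6))  # subject 2
)
time_sparse <- list(
  list(c(0.0, 0.5, 1.0), c(0.25, 0.75)),
  list(c(0.0, 1.0),      c(0.0, 0.5, 1.0))
)
# every block must have at least one observation per subject,
# which is exactly what the idx_subject_remove filter enforces
stopifnot(all(sapply(Y_sparse, function(s) all(sapply(s, length) >= 1))))
# visit counts line up between responses and times
stopifnot(identical(lapply(Y_sparse, lengths), lapply(time_sparse, lengths)))
```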
#-----------------------------------------------------------------------------------------------------------------
# prepare input parameters for stan
# dat_mfpca: transformed data from data_mfpca_transformed() function
#-----------------------------------------------------------------------------------------------------------------
input_mfpca_stan = function(dat_mfpca, num_PCs, nknots, nseed=31){
set.seed(nseed)
N = dat_mfpca$N
P = dat_mfpca$P
K = num_PCs
Q = (nknots + 4)
time_sparse = dat_mfpca$time_sparse
Y_sparse = dat_mfpca$Y_sparse
# below has to be generated in R; due to int/real type issue in Stan
len_Theta = sum(K * Q)
Theta_idx = cumsum(K*Q)
Q_idx_Theta = cumsum(Q)
K_idx_Theta = cumsum(K)
# set up basis
basis_stuff = basis_setup_sparse(time_sparse, nknots)
knots = basis_stuff[[1]]
phi_t = basis_stuff[[3]]
time_cont = basis_stuff[[4]]
phi_t_cont = basis_stuff[[5]]
phi_t_stacked = NULL # basis
V = matrix(0, N, P) # record the total number of visits for each subject for pth block
visits_stop = cov_stop = rep(0, N) # stopping index for each subject in visits and covariance matrix
time = Y = list()
for (n in 1:N){
Phi_D_t_n = t(phi_t[[1]][[n]]) # transposed basis for 1st block
for (p in 2:P){
Phi_D_t_n = as.matrix(Matrix::bdiag(Phi_D_t_n, t(phi_t[[p]][[n]]))) # sparse matrix (v1+v2)*(Q1+Q2)
} # add as.matrix() to fill in zeros so that Stan can read in
phi_t_stacked = rbind(phi_t_stacked, Phi_D_t_n)
# create matrix to record number of visits
time_n = time_sparse[[n]]
time[[n]] = unlist(time_n)
for (p in 1:P){
V[n, p] = length(time_n[[p]])
}
# stopping index
visits_stop[n] = sum(V) # for each subject across blocks
cov_stop[n] = sum(rowSums(V)^2)
visits_stop_each = cumsum(as.vector(t(V))) # for each subject in each block
Y[[n]] = unlist(Y_sparse[[n]])
}
visits_start = c(1, visits_stop[-N] + 1)
cov_start = c(1, cov_stop[-N] + 1)
cov_size = cov_stop[N]
visits_start_each = c(1, visits_stop_each[-length(visits_stop_each)] + 1)
M = sum(V)
  # note: the returned 'knots' element can be used to check knot placement
return(list('num_subjects'=N, 'num_blocks'=P, 'num_PCs'=K, 'num_basis'=Q, 'response_vector'=unlist(Y), 'num_visits_all'=M,
'subject_starts' = visits_start, 'subject_stops' = visits_stop,
'cov_starts' = cov_start, 'cov_stops' = cov_stop, 'cov_size' = cov_size,
'subject_starts_each' = visits_start_each, 'subject_stops_each' = visits_stop_each,
'visits_matrix' = V, 'spline_basis' = phi_t_stacked, 'len_Theta' = len_Theta,
'Theta_idx' = Theta_idx, 'Q_idx_Theta' = Q_idx_Theta, 'K_idx_Theta' = K_idx_Theta,
'nknots' = nknots, 'knots' = knots, 'phi_t_cont' = phi_t_cont, 'phi_t' = phi_t, 'time_cont' = time_cont))
}
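A minimal sketch (toy dimensions, not real basis values) of the block-diagonal stacking used above: for one subject, the per-block basis matrices are combined with `Matrix::bdiag()` so that each block's visits only multiply that block's spline coefficients.

```r
library(Matrix)

phi_block1 <- matrix(1, nrow = 2, ncol = 3)  # Q1 = 2 basis functions, 3 visits
phi_block2 <- matrix(2, nrow = 4, ncol = 2)  # Q2 = 4 basis functions, 2 visits
# bdiag() of the transposed bases gives a (v1+v2) x (Q1+Q2) design matrix
Phi_n <- as.matrix(bdiag(t(phi_block1), t(phi_block2)))
stopifnot(all(dim(Phi_n) == c(3 + 2, 2 + 4)))
# off-diagonal blocks are zero: block-1 visits never touch block-2 coefficients
stopifnot(all(Phi_n[1:3, 3:6] == 0), all(Phi_n[4:5, 1:2] == 0))
```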
#--------------------------------------------
# set up basis
#--------------------------------------------
basis_setup_sparse=function(time_sparse,nknots=rep(1,length(time_sparse[[1]])), orth=TRUE, delta=1/1000){
library(splines)
library(pracma)
N=length(time_sparse)
P=length(time_sparse[[1]])
time_unique=list()
for(p in 1:P){
for(i in 1:N){
time_sparse[[i]][[p]]=round(time_sparse[[i]][[p]]/delta)*delta
}
time_unique[[p]]=time_sparse[[1]][[p]]
for(i in 2:N){
time_unique[[p]]=c(time_unique[[p]],time_sparse[[i]][[p]])
}
time_unique[[p]]=round(sort(unique(time_unique[[p]]))/delta)*delta
}
time_unique_combined=time_unique[[1]]
for(p in 2:P){
time_unique_combined=c(time_unique_combined,time_unique[[p]])
}
time_unique_combined=sort(unique(time_unique_combined))
T_min=min(time_unique_combined)
T_max=max(time_unique_combined)
time_cont=seq(T_min,T_max/delta)*delta
time_cont=round(time_cont/delta)*delta
knots=list()
for(p in 1:P){
K=nknots[p]+4
qs=1/(nknots[p]+1)
knots[[p]]=quantile(time_unique[[p]],qs)
if(nknots[p]>1){
for(q in 2:nknots[p]){
        knots[[p]]=c(knots[[p]],quantile(time_unique[[p]],q*qs)) # knot at the q/(nknots+1) quantile
}
}
knots[[p]]=as.vector(knots[[p]])
}
phi_t_cont=list()
for(p in 1:P){
phi_t_cont[[p]]=bs(time_cont,knots=knots[[p]],degree=3,intercept=TRUE)
temp=phi_t_cont[[p]]
for(k in 1:(nknots[p]+4)){
if(orth==TRUE){
if(k>1){
for(q in 1:(k-1)){
temp[,k]=temp[,k]-(sum(temp[,k]*temp[,k-q])/
sum(temp[,k-q]^2))*temp[,k-q];
}
}
}
temp[,k]=temp[,k]/sqrt(sum(temp[,k]*temp[,k]))
}
phi_t_cont[[p]]=t(sqrt(1/delta)*temp)
}
phi_t=list()
for(p in 1:P){
phi_t[[p]]=list()
for(i in 1:N){
phi_t[[p]][[i]]=array(0,dim=c((nknots[p]+4),length(time_sparse[[i]][[p]])))
for(k in 1:(nknots[p]+4)){
for(t in 1:length(time_sparse[[i]][[p]])){
phi_t[[p]][[i]][k,t]=phi_t_cont[[p]][k,abs(time_cont-time_sparse[[i]][[p]][t])==
min(abs(time_cont-time_sparse[[i]][[p]][t]))][1]
}
}
}
}
results=list()
results[[1]]=knots
results[[2]]=time_sparse
results[[3]]=phi_t
results[[4]]=time_cont
results[[5]]=phi_t_cont
return(results)
}
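A quick numerical check (toy grid, one interior knot) of the Gram–Schmidt step above: after orthogonalization and scaling by `sqrt(1/delta)`, the basis rows satisfy `sum(phi_j * phi_k) * delta` of approximately 1 when j equals k and 0 otherwise.

```r
library(splines)

delta <- 1/1000
time_cont <- seq(0, 1, by = delta)
temp <- bs(time_cont, knots = 0.5, degree = 3, intercept = TRUE)
# classical Gram-Schmidt over the columns, as in basis_setup_sparse()
for (k in 1:ncol(temp)) {
  if (k > 1) {
    for (q in 1:(k - 1)) {
      temp[, k] <- temp[, k] - (sum(temp[, k] * temp[, k - q]) /
                                  sum(temp[, k - q]^2)) * temp[, k - q]
    }
  }
  temp[, k] <- temp[, k] / sqrt(sum(temp[, k]^2))
}
phi <- t(sqrt(1/delta) * temp)
# discrete Gram matrix should be (numerically) the identity
G <- (phi %*% t(phi)) * delta
stopifnot(max(abs(G - diag(nrow(G)))) < 1e-8)
```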
#-------------------------------------------------
# post hoc rotation on MFPCA results
# fit: fitted stan model returned by the MFPCA sampler
# Nchains: number of chains used in stan
# Nsamples: number of iterations per chain (the default warmup of Nsamples/2 is assumed)
#-------------------------------------------------
library(Matrix)
library(rstan)
library(corpcor) # for covariance decomposition
post_hoc_rotation = function(fit, Nchains, Nsamples, N, P, K, Q){
Theta = extract(fit, "Theta", permuted=FALSE)
cov = extract(fit, "cov_alpha", permuted=FALSE)
theta_mu = extract(fit, "theta_mu", permuted=FALSE)
alpha = extract(fit, "alpha", permuted=FALSE)
sigma_eps = extract(fit, "sigma_eps", permuted=FALSE)
Theta_old = Theta_new = array(0, dim=c(sum(Q), sum(K), Nchains*Nsamples/2))
cov_old = cov_new = corr_new = array(0, dim=c(sum(K), sum(K), Nchains*Nsamples/2))
theta_mu_new = array(0, dim=c(sum(Q), Nchains*Nsamples/2))
alpha_old = alpha_new = array(0, dim=c(sum(K), N, Nchains*Nsamples/2))
ind = 0
prop_var = list()
for (i in 1:dim(Theta)[1]){
for (j in 1:dim(Theta)[2]){
ind = ind + 1 # loop through all posterior draws
Theta_old[,,ind] = array(Theta[i,j,],dim=c(sum(Q), sum(K)))
cov_old[,,ind] = array(cov[i,j,], dim=c(sum(K), sum(K)))
theta_mu_new[,ind] = array(theta_mu[i,j,])
alpha_old[,,ind] = array(alpha[i,j,],dim=c(sum(K), N))
indq = 1;
indk = 1;
prop_var[[ind]] = list()
for (p in 1:P){
poolvar = as.matrix(cov_old[,,ind][indk:sum(K[1:p]),indk:sum(K[1:p])])
temp_sigma = Theta_old[,,ind][indq:sum(Q[1:p]),indk:sum(K[1:p])] %*% poolvar %*% t(Theta_old[,,ind][indq:sum(Q[1:p]),indk:sum(K[1:p])])
eigen_temp_sigma=eigen(temp_sigma)
v_temp=Re(eigen_temp_sigma$vectors)
d_temp=Re(eigen_temp_sigma$values)
prop_var[[ind]][[p]] = d_temp / sum(d_temp)
# rotate Theta
Theta_new[,,ind][indq:sum(Q[1:p]),indk:sum(K[1:p])] = v_temp[, 1:K[p]]
for(k in 1:K[p]){
Theta_new[,,ind][,(k+indk-1)] = sign(Theta_new[,,ind][indq, (k+indk-1)]) * Theta_new[,,ind][,(k+indk-1)];
}
indk=indk+K[p]
indq=indq+Q[p]
} # end p loop
# rotate cov_alpha
cov_new[,,ind] = t(Theta_new[,,ind]) %*% Theta_old[,,ind] %*% cov_old[,,ind]%*%
t(Theta_old[,,ind]) %*% Theta_new[,,ind]
# obtain correlation matrix
corr_new[,,ind] = decompose.cov(cov_new[,,ind])$r
# rotate alpha
alpha_new[,, ind] = t(Theta_new[,,ind]) %*% Theta_old[,,ind] %*% alpha_old[,,ind]
}
}
ALPHA_array = alpha_new
MU_array = theta_mu_new
THETA_array = Theta_new
COV_array = cov_new
COR_array = corr_new
# compute averaged explained variance (average across draws)
prop_var_avg = list()
for (p in 1:P){
tmp = sapply(prop_var, '[[', p)
prop_var_avg[[p]] = rowMeans(tmp)[1:K[p]]
}
return(list('ALPHA_array' = ALPHA_array, 'MU_array'= MU_array, 'THETA_array'= THETA_array,
'COV_array' = COV_array, 'COR_array'= COR_array,
'prop_var' = prop_var, 'prop_var_avg'= prop_var_avg))
}
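A sketch (toy matrices, base R only) of the rotation identity the loop above relies on: replacing the loading matrix by the leading eigenvectors of `Theta_old %*% cov_old %*% t(Theta_old)` leaves the implied covariance `Theta %*% cov_alpha %*% t(Theta)` unchanged.

```r
set.seed(1)
Q <- 4; K <- 2
# orthonormal loading columns and a positive-definite score covariance
Theta_old <- qr.Q(qr(matrix(rnorm(Q * K), Q, K)))
A <- matrix(rnorm(K * K), K, K)
cov_old <- crossprod(A)
Sigma <- Theta_old %*% cov_old %*% t(Theta_old)
# rotate: new loadings are the leading eigenvectors of Sigma
eg <- eigen(Sigma)
Theta_new <- eg$vectors[, 1:K]
cov_new <- t(Theta_new) %*% Theta_old %*% cov_old %*% t(Theta_old) %*% Theta_new
# the fitted covariance is preserved under the rotation
stopifnot(max(abs(Theta_new %*% cov_new %*% t(Theta_new) - Sigma)) < 1e-10)
```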
#-------------------------------------------------------------
# alternative plotting: save results + plot each individually
#-------------------------------------------------------------
output_mfpca = function(dat_mfpca, param_stan, post_rotation_results, title){
K = param_stan$num_PCs
Q = param_stan$num_basis
N = param_stan$num_subjects
P = param_stan$num_blocks
nknots = param_stan$nknots
time_origin = dat_mfpca$time_origin
mu_y = dat_mfpca$mu_y_origin
sigma_y = dat_mfpca$sigma_y_origin
ids_origin = dat_mfpca$shared_id_unique
covariates = dat_mfpca$covariates
Y_sparse = dat_mfpca$Y_sparse
time_sparse = dat_mfpca$time_sparse
phi_t_cont = param_stan$phi_t_cont
phi_t = param_stan$phi_t
time_cont = param_stan$time_cont
ALPHA_array = post_rotation_results$ALPHA_array
MU_array = post_rotation_results$MU_array
THETA_array = post_rotation_results$THETA_array
COV_array = post_rotation_results$COV_array
COR_array = post_rotation_results$COR_array
prop_var_avg = post_rotation_results$prop_var_avg
nloop=dim(ALPHA_array)[3]
first=1
last=nloop
MU_mean = MU_array[, first] # mean function across sampling sessions
ALPHA_mean = ALPHA_array[,,first] # mean factor scores
THETA_mean = THETA_array[,,first] # mean factor loading
COV_mean = COV_array[,,first]
COR_mean = COR_array[,,first]
for(iter in 2:nloop){
MU_mean = MU_mean + MU_array[, iter]
ALPHA_mean = ALPHA_mean + ALPHA_array[,,iter]
THETA_mean = THETA_mean + THETA_array[,,iter]
COV_mean = COV_mean + COV_array[,,iter]
COR_mean = COR_mean + COR_array[,,iter]
}
MU_mean = cbind(MU_mean/(last-first+1))
ALPHA_mean = cbind(ALPHA_mean/(last-first+1))
THETA_mean = cbind(THETA_mean/(last-first+1))
COV_mean = cbind(COV_mean/(last-first+1))
COR_mean = cbind(COR_mean/(last-first+1))
tmp = bdiag(cbind(phi_t_cont[[1]]), cbind(phi_t_cont[[2]]))
if(P > 2){
for(p in 3:P){
tmp = bdiag(tmp,cbind(phi_t_cont[[p]]))
}
}
Mu_functions=t(tmp)%*%MU_mean
Mu = list()
for(p in 1:P){
Mu[[p]] = Mu_functions[((p-1)*length(time_cont)+1):(p*length(time_cont))]
}
FPC_mean = list()
ind1 = 1
ind2 = 1
for(p in 1:P){
FPC_mean[[p]] = t(phi_t_cont[[p]])%*%THETA_mean[ind1:(ind1+nknots[p]+3),ind2:(ind2+K[p]-1)]
ind1 = ind1 + nknots[p]+4
ind2 = ind2 + K[p]
}
#setting the x- and y-axis limit for plotting
Y_min = list()
Y_max = list()
for(p in 1:P){
Y_min[[p]]=min(Y_sparse[[1]][[p]])
Y_max[[p]]=max(Y_sparse[[1]][[p]])
for(i in 2:N){
Y_min[[p]]=min(Y_min[[p]],min(Y_sparse[[i]][[p]]))
Y_max[[p]]=max(Y_max[[p]],max(Y_sparse[[i]][[p]]))
}
}
#---------------------------------------
# output FPC scores with original ID
#---------------------------------------
table_scores = as.data.frame(t(ALPHA_mean))
names_scores = NULL
names_covs = NULL
for (p in 1:P){
for (k in 1:K[p]){
names_scores = c(names_scores, paste0(title[p], '_FPC', k))
}
}
colnames(table_scores) = names_scores
df = dat_mfpca$data[, c('ID', ids_origin, covariates)]
df = df %>% distinct(!!as.symbol(ids_origin), .keep_all=TRUE)
table_scores = cbind(df, table_scores)
return(list('P' = P, 'N' = N, 'K' = K,
'time_cont' = time_cont, 'time_origin' = time_origin, 'time_sparse' = time_sparse,
'Mu' = Mu, 'ALPHA_mean' = ALPHA_mean, 'FPC_mean' = FPC_mean, 'prop_var_avg' = prop_var_avg,
'Y_min' = Y_min, 'Y_max' = Y_max, 'Y_sparse' = Y_sparse,
'mu_y' = mu_y, 'sigma_y' = sigma_y, 'FPC_scores' = table_scores, 'title' = title))
}
plots_mfpca = function(output_mfpca, plot_mean, mean_x_label, mean_y_label,
mean_x_range, mean_y_ticks, mean_x_ticks, mean_title,
plot_fpc, fpc_y_lim, fpc_x_label, fpc_y_label, fpc_title, fpc_y_ticks, fpc_x_ticks,
plot_fpc_scores, scores_selected=NULL, scores_group_compare=NULL, scores_y_label=NULL){
P = output_mfpca$P
N = output_mfpca$N
K = output_mfpca$K
time_cont = output_mfpca$time_cont
time_origin = output_mfpca$time_origin
time_sparse = output_mfpca$time_sparse
Mu = output_mfpca$Mu
sigma_y = output_mfpca$sigma_y
mu_y = output_mfpca$mu_y
Y_min = output_mfpca$Y_min
Y_max = output_mfpca$Y_max
Y_sparse = output_mfpca$Y_sparse
FPC_mean = output_mfpca$FPC_mean
table_scores = output_mfpca$FPC_scores
if (plot_mean){
figs_mean_list = list()
for(p in 1:P){
plot(time_cont*max(time_origin),Mu[[p]]*sigma_y[p] + mu_y[p],type="l", yaxt="n", xaxt="n",
ylim=c(Y_min[[p]]*sigma_y[p] + mu_y[p], Y_max[[p]]*sigma_y[p] + mu_y[p]),
xlim=mean_x_range[[p]],lwd=5,col=4, ylab=mean_y_label[p], xlab=mean_x_label, font.lab=2, cex.lab=1.2)
for(i in 1:N){
lines(time_sparse[[i]][[p]]*max(time_origin),Y_sparse[[i]][[p]]*sigma_y[p] + mu_y[p],type="l",lwd=.25)
}
title(main=mean_title[p])
axis(2, at = mean_y_ticks[[p]], las = 2)
axis(1, at = mean_x_ticks[[p]], las = 1)
assign(paste0('figure_mean', p), recordPlot())
figs_mean_list[[p]] = eval(as.name(paste0('figure_mean', p)))
}
}else{figs_mean_list = NULL}
if (plot_fpc){
figs_fpc_list = list()
for(p in 1:P){
tmp = NULL
tmp = cbind(tmp, time_cont*max(time_origin))
for (k in 1:K[[p]]){
tmp = cbind(tmp, FPC_mean[[p]][,k]*sigma_y[p] + mu_y[p])
}
tmp = as.data.frame(tmp)
colnames(tmp) = c('time', paste0('fpc', seq(1, K[[p]], 1)))
df = reshape::melt(tmp, 'time', paste0('fpc', seq(1, K[[p]], 1)))
figs_fpc_list[[p]] <- ggplot() + geom_line(data = df, aes(x = time, y = value,
colour = variable, linetype = variable), lwd = 1) +
guides(linetype = F) + labs(colour = 'curve') +
labs(title = fpc_title[p], x = fpc_x_label, y = fpc_y_label[p]) +
scale_x_continuous(breaks = fpc_x_ticks[[p]]) +
scale_y_continuous(limits = fpc_y_lim[[p]], breaks = fpc_y_ticks[[p]]) +
theme_classic() +
theme(plot.title = element_text(hjust = 0.5, size = 15, face = "bold"),
axis.text.x = element_text(size = 10, face = "bold"),
axis.text.y = element_text(size = 10, face = "bold"),
axis.title.x = element_text(size = 12, face = "bold"),
axis.title.y = element_text(size = 12, face = "bold"),
legend.title = element_blank(), legend.position = 'top')
}
}else{figs_fpc_list = NULL}
if (plot_fpc_scores){
figs_scores_list = list()
for (p in 1:P){
if (length(scores_group_compare) == 2){
tmp <- ggplot(aes(y = eval(as.name(scores_selected[[p]])),
x = eval(as.name(scores_group_compare[[1]])),
fill = eval(as.name(scores_group_compare[[2]]))),
data = table_scores) +
geom_boxplot() +
ylab(scores_y_label[p]) + xlab(scores_group_compare[[1]]) +
theme_classic() + theme(axis.text = element_text(size=10, face="bold", color='black'),
axis.title.x = element_text(size=12, face="bold"),
axis.title.y = element_text(size=12, face="bold"),
legend.position = "top") + scale_fill_discrete(name = scores_group_compare[[2]])
}else{
tmp <- ggplot(aes(y = eval(as.name(scores_selected[[p]])),
x = eval(as.name(scores_group_compare)),
fill = eval(as.name(scores_group_compare))),
data = table_scores) +
geom_boxplot() +
ylab(scores_y_label[p]) + xlab(scores_group_compare) +
theme_classic() + theme(axis.text = element_text(size=10, face="bold", color='black'),
axis.title.x = element_text(size=12, face="bold"),
axis.title.y = element_text(size=12, face="bold"),
legend.position = "none")
# + geom_jitter(aes(colour=eval(as.name(scores_group_compare))), shape=16,
# position=position_jitter(0.2)) # color weird
}
figs_scores_list[[p]] <- ggplotGrob(tmp)
}
}else{figs_scores_list = NULL}
return(list('figs_mean_list' = figs_mean_list, 'figs_fpc_list' = figs_fpc_list, 'figs_scores_list' = figs_scores_list))
}
#------------------------------------------------------------
# plot mfpca results including
# 1. standaridzed overal mean curve
# 2. original scaled mean curve
# 3. original scaled FPC curve (w/wo 1SD FPC scores)
#------------------------------------------------------------
library(dplyr) # for remove duplicated rows in data frame with distinct()
visu_mfpca = function(dat_mfpca, param_stan, post_rotation_results, title,
x_label, y_label, x_range, FPC_sd, fig_dir){
K = param_stan$num_PCs
Q = param_stan$num_basis
N = param_stan$num_subjects
P = param_stan$num_blocks
nknots = param_stan$nknots
time_origin = dat_mfpca$time_origin
mu_y = dat_mfpca$mu_y_origin
sigma_y = dat_mfpca$sigma_y_origin
ids_origin = dat_mfpca$shared_id_unique
covariates = dat_mfpca$covariates
Y_sparse = dat_mfpca$Y_sparse
time_sparse = dat_mfpca$time_sparse
phi_t_cont = param_stan$phi_t_cont
phi_t = param_stan$phi_t
time_cont = param_stan$time_cont
ALPHA_array = post_rotation_results$ALPHA_array
MU_array = post_rotation_results$MU_array
THETA_array = post_rotation_results$THETA_array
COV_array = post_rotation_results$COV_array
COR_array = post_rotation_results$COR_array
prop_var_avg = post_rotation_results$prop_var_avg
nloop=dim(ALPHA_array)[3]
first=1
last=nloop
MU_mean = MU_array[, first] # mean function across sampling sessions
ALPHA_mean = ALPHA_array[,,first] # mean factor scores
THETA_mean = THETA_array[,,first] # mean factor loading
COV_mean = COV_array[,,first]
COR_mean = COR_array[,,first]
for(iter in 2:nloop){
MU_mean = MU_mean + MU_array[, iter]
ALPHA_mean = ALPHA_mean + ALPHA_array[,,iter]
THETA_mean = THETA_mean + THETA_array[,,iter]
COV_mean = COV_mean + COV_array[,,iter]
COR_mean = COR_mean + COR_array[,,iter]
}
MU_mean = cbind(MU_mean/(last-first+1))
ALPHA_mean = cbind(ALPHA_mean/(last-first+1))
THETA_mean = cbind(THETA_mean/(last-first+1))
COV_mean = cbind(COV_mean/(last-first+1))
COR_mean = cbind(COR_mean/(last-first+1))
tmp = bdiag(cbind(phi_t_cont[[1]]), cbind(phi_t_cont[[2]]))
if(P > 2){
for(p in 3:P){
tmp = bdiag(tmp,cbind(phi_t_cont[[p]]))
}
}
Mu_functions=t(tmp)%*%MU_mean
Mu = list()
for(p in 1:P){
Mu[[p]] = Mu_functions[((p-1)*length(time_cont)+1):(p*length(time_cont))]
}
FPC_mean = list()
ind1 = 1
ind2 = 1
for(p in 1:P){
FPC_mean[[p]] = t(phi_t_cont[[p]])%*%THETA_mean[ind1:(ind1+nknots[p]+3),ind2:(ind2+K[p]-1)]
ind1 = ind1 + nknots[p]+4
ind2 = ind2 + K[p]
}
#setting the x- and y-axis limit for plotting
Y_min = list()
Y_max = list()
for(p in 1:P){
Y_min[[p]]=min(Y_sparse[[1]][[p]])
Y_max[[p]]=max(Y_sparse[[1]][[p]])
for(i in 2:N){
Y_min[[p]]=min(Y_min[[p]],min(Y_sparse[[i]][[p]]))
Y_max[[p]]=max(Y_max[[p]],max(Y_sparse[[i]][[p]]))
}
}
#----------------------------------------------------
# fig: spaghetti + mean curve (standardized scale)
#----------------------------------------------------
#capabilities()
print('figure mean at transformed scale')
png(paste0(fig_dir, 'mean_spaghetti_scaled.png'), units="in", width=10, height=7, res=300)
par(mfrow=c(ceiling(P/2),2))
for(p in 1:P){
plot(time_cont*max(time_origin),Mu[[p]], type="l", ylim=c(Y_min[[p]],Y_max[[p]]),
xlim=x_range, lwd=5, col=4, ylab=y_label, xlab=x_label)
for(i in 1:N){
lines(time_sparse[[i]][[p]]*max(time_origin), Y_sparse[[i]][[p]], type="l", lwd=.25)
}
#title(main=names(dat)[ind_y[p]])
title(main=title[p])
}
dev.off()
#----------------------------------------------------
  # fig: spaghetti + mean curve (original scale)
#----------------------------------------------------
print('figure mean at original scale')
png(paste0(fig_dir,'mean_spaghetti_original.png'), units="in", width=10, height=7, res=300)
par(mfrow=c(ceiling(P/2),2))
for(p in 1:P){
plot(time_cont*max(time_origin),Mu[[p]]*sigma_y[p] + mu_y[p],type="l",
ylim=c(Y_min[[p]]*sigma_y[p] + mu_y[p], Y_max[[p]]*sigma_y[p] + mu_y[p]),
xlim=x_range,lwd=5,col=4, ylab=y_label, xlab=x_label, font.lab=2, cex.lab=1.2)
for(i in 1:N){
lines(time_sparse[[i]][[p]]*max(time_origin),Y_sparse[[i]][[p]]*sigma_y[p] + mu_y[p],type="l",lwd=.25)
}
title(main=title[p])
}
dev.off()
#-------------------------------------------------------------------
  # fig: FPC curves +/- 1 SD of the FPC scores (original scale)
#-------------------------------------------------------------------
print('figure fpc curves')
idx_alpha = 0
for(p in 1:P){
png(paste0(fig_dir, 'PCs_original_', title[p], '_1sd_', FPC_sd, '.png'), units="in", width=7, height=5, res=300)
if (p <= 2){
par(mfrow=c(2,2))
}else{
par(mfrow=c(ceiling(p/2),2))
}
for (k in 1:K[[p]]){
idx_alpha = idx_alpha + 1
prop_var_approx = paste0(round(prop_var_avg[[p]][[k]]*100, 2), '%')
alpha_1sd = sd(ALPHA_mean[idx_alpha, ])
plot(time_cont*max(time_origin), Mu[[p]]*sigma_y[p] + mu_y[p], type="l",
ylim=c(Y_min[[p]]*sigma_y[p] + mu_y[p], Y_max[[p]]*sigma_y[p] + mu_y[p]),
lwd=2,col=1, ylab=y_label, xlab=x_label, font.lab=2, cex.lab=1.2)
      if (FPC_sd){ # mean curve +/- 1 SD of the FPC scores times the FPC curve
        lines(time_cont*max(time_origin),
              (Mu[[p]] + alpha_1sd*FPC_mean[[p]][,k])*sigma_y[p] + mu_y[p],type="l",lwd=3,lty=2,col=2) # red
        lines(time_cont*max(time_origin),
              (Mu[[p]] - alpha_1sd*FPC_mean[[p]][,k])*sigma_y[p] + mu_y[p],type="l",lwd=3,lty=2,col=3) # green
}else{
lines(time_cont*max(time_origin),
(Mu[[p]] + FPC_mean[[p]][,k])*sigma_y[p] + mu_y[p],type="l",lwd=3,lty=2,col=2) # red
lines(time_cont*max(time_origin),
(Mu[[p]] - FPC_mean[[p]][,k])*sigma_y[p] + mu_y[p],type="l",lwd=3,lty=2,col=3) # green
}
title(main=paste(paste('PC', k, sep=' '), ' (', prop_var_approx, ' )', sep=''))
#axis(1, font=2) # make x-axis ticks label bold
legend('topright', c('+ pc', '- pc'), lty=c(2,2), lwd=c(3,3), col=c(2, 3), bty='n', cex=0.5)
}
dev.off()
}
#---------------------------------------
# output FPC scores with original ID
#---------------------------------------
table_scores = as.data.frame(t(ALPHA_mean))
names_scores = NULL
names_covs = NULL
for (p in 1:P){
for (k in 1:K[p]){
names_scores = c(names_scores, paste0(title[p], '_FPC', k))
}
}
colnames(table_scores) = names_scores
# df = dat_mfpca$data[, c('ID', 'ID_unique', covariates)]
# df = df %>% distinct(ID_unique, .keep_all=TRUE)
df = dat_mfpca$data[, c('ID', ids_origin, covariates)]
df = df %>% distinct(!!as.symbol(ids_origin), .keep_all=TRUE)
table_scores = cbind(df, table_scores)
return(list('FPC_scores' = table_scores))
}
#----------------------------------------------------------------------------------------------------
# general function to calculate mutual information
# file_summary_output: file directory for output from output_sim function within sim_setting_v2.R
# K: dimension of PC for each block
#----------------------------------------------------------------------------------------------------
H_est = function(sub_corr){
  # Gaussian entropy term (up to additive constants): 1/2 * log det(correlation matrix)
if (is.null(dim(sub_corr))){
H_est = 1/2 * log(sub_corr)
}else{
H_est = 1/2 * log(det(sub_corr))
}
return(H_est)
}
MI_norm = function(MI){
# normalize MI to be [0, 1]
MI_norm = (1 - exp(-MI * 2 )) ^ (1/2)
return(MI_norm)
}
# updated MI calculation
MI_est = function(corr_matrix, K){
  # calculate mutual information between blocks of FPC scores
R = corr_matrix
P = length(K)
  # index for each block (R_out_each: R with block p removed; R_in_each: R restricted to block p)
idx_end = idx_start = rep(0, P)
idx_each = R_out_each = R_in_each = list()
for (p in 1:P){
idx_end[p] = sum(K[1:p])
idx_start[p] = idx_end[p] - K[p] + 1
idx_each[[p]] = seq(idx_start[p], idx_end[p], 1)
R_out_each[[p]] = R[-idx_each[[p]], -idx_each[[p]]]
R_in_each[[p]] = R[idx_each[[p]], idx_each[[p]]]
}
pairs = combn(seq(1,P, 1), 2)
pair_names_MI = pair_names_CMI = pair_names_MI_norm = pair_names_CMI_norm = NULL
for (p in 1:dim(pairs)[2]){
pair_names_MI = c(pair_names_MI, paste0('MI_', pairs[1, p], pairs[2, p]))
pair_names_CMI = c(pair_names_CMI, paste0('CMI_', pairs[1, p], pairs[2, p]))
pair_names_MI_norm = c(pair_names_MI_norm, paste0('norm_MI_', pairs[1, p], pairs[2, p]))
pair_names_CMI_norm = c(pair_names_CMI_norm, paste0('norm_CMI_', pairs[1, p], pairs[2, p]))
# compute mutual information
assign(paste0('MI_', pairs[1, p], pairs[2, p]), - H_est(R[unlist(idx_each[c(pairs[1, p], pairs[2, p])]),
unlist(idx_each[c(pairs[1, p], pairs[2, p])])]))
# compute conditional mutual information
assign(paste0('CMI_', pairs[1, p], pairs[2, p]),
H_est(R_out_each[[pairs[1, p]]]) + H_est(R_out_each[[pairs[2, p]]]) -
H_est(R[-unlist(idx_each[c(pairs[1, p], pairs[2, p])]),
-unlist(idx_each[c(pairs[1, p], pairs[2, p])])]) - H_est(R))
# compute normalized MI
assign(paste0('norm_MI_', pairs[1, p], pairs[2, p]),
MI_norm(eval(as.name(paste0('MI_', pairs[1, p], pairs[2, p])))))
assign(paste0('norm_CMI_', pairs[1, p], pairs[2, p]),
MI_norm(eval(as.name(paste0('CMI_', pairs[1, p], pairs[2, p])))))
}
results_list = list()
for (names in c(pair_names_MI, pair_names_CMI, pair_names_MI_norm, pair_names_CMI_norm)){
results_list[[names]] = eval(as.name(names))
}
return(results_list)
}
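A worked check (hypothetical two-block case with one score per block) of the Gaussian mutual-information formulas above: for jointly Gaussian scores with correlation rho, MI = -1/2 * log(1 - rho^2), and the normalization sqrt(1 - exp(-2 * MI)) recovers |rho|.

```r
rho <- 0.6
R <- matrix(c(1, rho, rho, 1), 2, 2)
# MI between the two blocks: -1/2 * log det of the 2x2 sub-correlation,
# i.e. the negative of the H_est() term used in MI_est()
MI <- -1/2 * log(det(R))
stopifnot(abs(MI - (-1/2) * log(1 - rho^2)) < 1e-12)
# MI_norm() maps MI back to the [0, 1] correlation scale
MI_normed <- sqrt(1 - exp(-2 * MI))
stopifnot(abs(MI_normed - rho) < 1e-12)
```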
MI_est_data = function(post_rotation_results, K){
# R: true correlation matrix from simulated data
R_array = post_rotation_results$COR_array
nloop = dim(R_array)[3]
P = length(K)
MI_results = list()
for (i in 1:nloop){
MI_results[[i]] = MI_est(corr_matrix=R_array[,,i], K=K)
}
list_names = MI_names = names(MI_results[[1]])
summary_names = paste0(MI_names, '_list')
results_list = list()
for (m in 1:length(MI_names)){
tmp = list()
for (i in 1:nloop){
tmp[i] = MI_results[[i]][MI_names[m]]
}
assign(list_names[m], tmp)
results_tmp = as.numeric(eval(as.name(list_names[m])))
assign(summary_names[m], c(mean(results_tmp), quantile(results_tmp, 0.5),
quantile(results_tmp, 0.025), quantile(results_tmp, 0.975)))
results_list[[summary_names[m]]] = eval(as.name(summary_names[m]))
}
return(results_list)
}
#-----------------------------------
# diagnostic plot - LOOIC
#-----------------------------------
diagnostic_looic = function(loo_list, num_subjects, fig_dir=NULL){
N = num_subjects
pkdf <- data.frame(pk=loo_list$diagnostics$pareto_k, id=1:N)
fig_looic <- ggplot(pkdf, aes(x=id, y=pk)) + geom_point(shape=3, color="blue") +
labs(x="Observation left out", y="Pareto shape k") +
geom_hline(yintercept = 0.7, linetype=2, color="red", size=0.2) +
ggtitle("PSIS-LOO diagnostics") + theme_classic() +
theme(plot.title = element_text(hjust = 0.5, size=15, face="bold"),
axis.text.x= element_text(size=10, face="bold"),
axis.text.y= element_text(size=10, face="bold"),
axis.title.x= element_text(size=12, face="bold"),
axis.title.y= element_text(size=12, face="bold"))
if (!is.null(fig_dir)){
#ggsave(paste0(fig_dir, 'diagnostic_looic_fig.pdf'), width=4, height=4, dpi=300)
ggsave(paste0(fig_dir, 'diagnostic_looic_fig.png'), width=4, height=4, dpi=300)
}
return(fig_looic)
}
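# Hypothetical call sketch: loo objects from the loo package expose
# $diagnostics$pareto_k, which is all this function reads. 'stan_fit' and
# 'figs/' are placeholder names.
# loo_fit <- loo::loo(loo::extract_log_lik(stan_fit))
# diagnostic_looic(loo_fit, num_subjects = length(loo_fit$diagnostics$pareto_k),
#                  fig_dir = 'figs/')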
#---------------------------------------------------------
# diagnostic plot - posterior predictive checking
#---------------------------------------------------------
library("bayesplot")
library("ggplot2")
diagnostic_posterior = function(mfpca_fit, Nsamples, Nchains, visits_matrix, response_observed, fig_title, fig_dir=NULL){
Ynew = extract(mfpca_fit,"Ynew",permuted=FALSE)
V = visits_matrix
Ynew_transform = matrix(rep(0, Nsamples/2 * Nchains * sum(V)), ncol=sum(V))
ind = 0
for (i in 1:(Nsamples/2)){
for (j in 1:Nchains){
ind = ind + 1
Ynew_transform[ind, ] = Ynew[i,j,]
}
}
Ynew_mean = colMeans(Ynew_transform)
color_scheme_set("brightblue")
fig_posterior <- ppc_dens_overlay(response_observed, Ynew_transform) +
ggtitle(fig_title) + labs(x="Standardized response", y="Kernel density") +
theme(plot.title = element_text(hjust = 0.5, size=15, face="bold"),
axis.text.x= element_text(size=10, face="bold"),
axis.text.y= element_text(size=10, face="bold"),
axis.title.x= element_text(size=12, face="bold"),
axis.title.y= element_text(size=12, face="bold"),
legend.position = 'top')
if (!is.null(fig_dir)){
#ggsave(paste0(fig_dir, 'diagnostic_posterior_fig.pdf'), width=4, height=4, dpi=300)
ggsave(paste0(fig_dir, 'diagnostic_posterior_fig.png'), width=4, height=4, dpi=300)
}
return(fig_posterior)
}
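# Hypothetical call sketch ('stan_fit', 'V', and 'y_std' are placeholders;
# Nsamples is the total number of iterations per chain, since the function
# keeps Nsamples/2 post-warmup draws, and the Stan model must generate 'Ynew'):
# diagnostic_posterior(mfpca_fit = stan_fit, Nsamples = 2000, Nchains = 4,
#                      visits_matrix = V, response_observed = y_std,
#                      fig_title = 'Posterior predictive check')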
|
b355c079f6f8fa17601836ebc1b844d752b9e261 | 733378b26fa10e84fa1bfe986d6c27ebe48865ba | /server.R | 30741a069c860bf330360a2deb10e5d93f97804d | [] | no_license | johnpateha/MyShinyAppBMI | ada426c4839b7ee48f548c7ec711d18a281da79c | c12247c2d380692aba83589a496e935dec39671c | refs/heads/master | 2016-08-10T16:15:07.348039 | 2015-11-22T18:52:09 | 2015-11-22T18:52:09 | 46,604,461 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,252 | r | server.R | library(shiny)
library(mixsmsn)
library(ggplot2)
data(bmi)
calcBMI <- function(height, weight, units) {
  if (height == 0) 0
  else {
    if (units == 1) {round((weight*10000)/(height*height),1)}
    else {round((weight * 703)/(height*height),1)}
  }
}
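# Quick sanity checks (illustrative values):
# calcBMI(175, 70, 1)  # metric: 175 cm, 70 kg -> 22.9
# calcBMI(69, 154, 2)  # imperial: 69 in, 154 lb -> 22.7
# calcBMI(0, 70, 1)    # guard clause -> 0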
shinyServer(function(input, output) {
pl <- ggplot(aes(x=bmi), data=subset(bmi,bmi<=50)) + theme_bw() +
geom_histogram(binwidth=1, fill="lightgrey", color="black") +
geom_vline(xintercept=c(18.5,25),color="green",size=1) +
annotate("text", x = 21.5, y = 180, label = "norm", color="green") +
geom_vline(xintercept=30,color="red",size=1)+
annotate("text", x = 34, y = 180, label = "obese", color="red")
output$resBMI <- renderPrint({ calcBMI(input$height,input$weight,input$units) })
output$BMIPlot <- renderPlot({
pl + geom_vline(xintercept=calcBMI(input$height,input$weight,input$units), color="blue",linetype="longdash",size=1.5) +
annotate("text", x = 45, y = 100, label = paste("--You (",calcBMI(input$height,input$weight,input$units),")"), color="blue",fontface="bold",size=7)
})
})
|
65fb46a36b970284a473cd339f9a8316320a29ba | 9482413d84d3431f986cee3b26ce7b6bfbbd8799 | /man/make.ruleoutTable.FNtime.Rd | 09c4370c8c373b78d94b4e35a67ab3a273536f3d | [] | no_license | n8thangreen/IDEAdectree | d1867f4b6d8d45705d559d9bb26954957ee40540 | cd2ac09ad1142bf089c618bc7a848afe70b3e7bc | refs/heads/master | 2020-12-25T14:24:11.955849 | 2020-02-07T16:19:37 | 2020-02-07T16:19:37 | 67,139,702 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 470 | rd | make.ruleoutTable.FNtime.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/make_ruleoutTable-bysubgroups.R
\name{make.ruleoutTable.FNtime}
\alias{make.ruleoutTable.FNtime}
\title{Make rule-out table for multiple follow-up times}
\usage{
make.ruleoutTable.FNtime(FNtime.seq = c(7, seq(0, 60, by = 1)), Ctest)
}
\arguments{
\item{Ctest}{}
}
\value{
out
}
\description{
Iterate over a range of follow-up times and row-bind results (potentially a high number of rows).
}
|
305915708bb0d88495a289058db0638fcb187b69 | 3bb5a9d1b56cd683bb1bfc7562c015902b55e3d3 | /tests/testthat/test-addSwitch.R | 4463a146b5623751a7018aba1bab4b6fa70a3909 | [] | no_license | everdark/ArgParser | 216fb8aff0ed55b515e5370d137415b3d86bb43c | 0bf06b093539f21715904e766b7c6be87a19a039 | refs/heads/master | 2021-01-10T12:09:03.337371 | 2015-10-29T06:56:47 | 2015-10-29T06:56:47 | 44,537,262 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,983 | r | test-addSwitch.R |
context("Add switches onto ArgParser instance")
test_that("name definition of length > 1 will be warned, and the first one kept", {
expect_warning(ArgParser() %>% addSwitch(c("--s1", "--s2")))
expect_identical((ArgParser() %>% addSwitch(c("--s1", "--s2")))@switches_logic,
c(`--help`=FALSE, `--s1`=FALSE))
expect_identical((ArgParser() %>% addSwitch(c("--s1", "--s2"), states=0:1L))@switches_any,
list(`--s1`=list(unpushed=0L, pushed=1L)))
})
test_that("help definition of length > 1 will be warned, and the first one kept", {
expect_warning(ArgParser() %>% addSwitch("--s1", help=c("a", "b")))
expect_identical((ArgParser() %>% addSwitch("--s1", help=c("a", "b")))@help["--s1"],
c(`--s1`="a"))
})
test_that("duplicated switch definition will cause error", {
expect_error(ArgParser() %>% addSwitch("--s1") %>% addFlag("--s1"),
regexp="^.*invalid class")
expect_error(ArgParser() %>% addSwitch("--s1") %>% addFlag("--s2") %>% addFlag("--s1"),
regexp="^.*invalid class")
})
test_that("switch name without the -- prefix will casue error", {
expect_error(ArgParser() %>% addSwitch("-s1"),
regexp="^.*invalid class")
expect_error(ArgParser() %>% addSwitch("s1"),
regexp="^.*invalid class")
})
test_that("short alias is properly set, if any", {
expect_identical((ArgParser() %>% addSwitch("--s1", "-s"))@switches_alias,
c(`--help`="-h", `--s1`="-s"))
expect_identical((ArgParser() %>% addSwitch("--s1") %>% addSwitch("--s2", "-s"))@switches_alias,
c(`--help`="-h", `--s1`=NA, `--s2`="-s"))
})
test_that("duplicated alias definition will cause error", {
expect_error(ArgParser() %>% addSwitch("--s1", "-s") %>% addSwitch("--s2", "-s"),
regexp="^.*invalid class")
})
test_that("states default at FALSE", {
expect_identical((ArgParser() %>% addSwitch("--s1"))@switches_logic,
c(`--help`=FALSE, `--s1`=FALSE))
})
test_that("states must be of type vector; otherwise throw error", {
expect_error(ArgParser() %>% addSwitch("--s1", states=matrix(0,1,1)))
expect_error(ArgParser() %>% addSwitch("--s1", states=data.frame()))
})
test_that("logical states with length > 1 will cause warning", {
expect_warning(ArgParser() %>% addSwitch("--s1", states=c(TRUE,FALSE)))
expect_warning(ArgParser() %>% addSwitch("--s1", states=c(TRUE,FALSE,TRUE)))
})
test_that("non-logical states vector with length > 2 will cause warning", {
expect_warning((ArgParser() %>% addSwitch("--s1", states=c(1:3L))))
expect_warning((ArgParser() %>% addSwitch("--s1", states=list(T,F,F))))
})
test_that("non-logical states vector with length < 2 will cause error", {
expect_error((ArgParser() %>% addSwitch("--s1", states=1)))
expect_error((ArgParser() %>% addSwitch("--s1", states=list("unpushsed"))))
})
test_that("help, if any, is property set", {
expect_identical((ArgParser() %>% addSwitch("--s1", help="s1 help"))@help["--s1"],
c(`--s1`="s1 help"))
expect_identical((ArgParser() %>% addSwitch("--s1", help="s1 help")
%>% addSwitch("--s2"))@help,
c(`--help`="show this message and exit", `--s1`="s1 help"))
})
test_that("all arguments work properly together", {
p <- ArgParser() %>% addSwitch("--s1", "-s", list("unpushed", "pushed"), "help")
expect_identical(p@switches_any, list(`--s1`=list(unpushed="unpushed", pushed="pushed")))
expect_identical(p@switches_alias, c(`--help`="-h", `--s1`="-s"))
expect_identical(p@help["--s1"], c(`--s1`="help"))
})
|
30f533740dd2e79074c657f231797888e134cf2b | 5dc7dc7e33122e8c588eb6e13f23bf032c704d2e | /legacy/R/c20200617_tsy5_vs_realised.R | 7e68ab265efafbca3b64dccef57178e986275756 | [
"Apache-2.0"
] | permissive | brianr747/platform | a3319e84858345e357c1fa9a3916f92122775b30 | 84b1bd90fc2e35a51f32156a8d414757664b4b4f | refs/heads/master | 2022-01-23T16:06:26.855556 | 2022-01-12T18:13:22 | 2022-01-12T18:13:22 | 184,085,670 | 3 | 2 | null | null | null | null | UTF-8 | R | false | false | 928 | r | c20200617_tsy5_vs_realised.R |
source('startup.R')
fed <- pfetch('U@FedFunds')
tsy5 <- pfetch('F@DGS5')
fed_m <- convertdm(fed)
tsy5 <- convertdm(tsy5)
recession <- pfetch('F@USREC')
fed_ma <- MA(fed_m,60)
realised <- lag(fed_ma, k=-60)
realised <- realised["1990-01-01/2015-06-01"]
ser <- tsy5
ser2 <- realised
pp <- ShadeBars2(ser, ser2, recession, c('Treasury yield', 'realised short rate'),
ylab="%",main="U.S. 5-Year Treasury Versus Realised Short Rate",
startdate='1990-01-01', legendhead='5-year rate')
pp <- SetXAxis(pp, "1990-01-01", "2015-06-01")
gap <- 100 *(tsy5-realised)
p2 <- ShadeBars1(gap, recession, 'BPs.', '5-Year Treasury Yield Less Realised')
p2 <- SetXAxis(p2, "1990-01-01", "2015-06-01")
p2 <- p2 + geom_hline(yintercept=0,color=BondEconomicsBlue(),size=1)
TwoPanelChart(pp, p2, "c20200617_tsy5_vs_realised.png","Source: Fed H.15 (via FRED), author calculations.")
|
7711eebed3d99e79e4292563f2f94fa517e31c55 | a61a45533408be0193bb5a6c1d4db7f227e2186c | /Models/explores.R | 8af2129c86b357083cf0c6a346eb287de82a06bc | [] | no_license | JusteRaimbault/InteractionGibrat-submission | 3403d9b6944df1132abb72747b25edf3f1eb62bd | ae25ae89e42867271903ecc5b8f6213a14878174 | refs/heads/master | 2020-03-31T12:29:58.733711 | 2017-07-28T07:53:24 | 2017-07-28T07:53:24 | 152,218,198 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 8,242 | r | explores.R |
# analysis of exploration results
library(dplyr)
library(ggplot2)
source(paste0(Sys.getenv('CN_HOME'),'/Models/Utils/R/plots.R'))
#setwd(paste0(Sys.getenv('CN_HOME'),'/Results/NetworkNecessity/InteractionGibrat/calibration/all/fixedgravity/20160920_fixedgravity_local'))
#setwd(paste0(Sys.getenv('CN_HOME'),'/Results/NetworkNecessity/InteractionGibrat/exploration/full/20160912_gridfull/data'))
#setwd(paste0(Sys.getenv('CN_HOME'),'/Results/NetworkNecessity/InteractionGibrat/calibration/period/nofeedback/20170228_test'))
#setwd(paste0(Sys.getenv('CN_HOME'),'/Results/NetworkNecessity/InteractionGibrat/exploration/nofeedback/20170218_1831-1851'))
setwd(paste0(Sys.getenv('CN_HOME'),'/Models/NetworkNecessity/InteractionGibrat/calibration'))
#res <- as.tbl(read.csv('20170224_calibperiod_nsga/1921-1936/population100.csv'))
#res <- as.tbl(read.csv('data/2017_02_18_20_25_12_CALIBGRAVITY_GRID.csv'))
periods = c("1831-1851","1841-1861","1851-1872","1881-1901","1891-1911","1921-1936","1946-1968","1962-1982","1975-1999")
resdir = '20170725_calibperiod_full_nsga'
#resdir = '20170224_calibperiod_nsga'
params = c("growthRate","gravityGamma","gravityDecay","gravityWeight","feedbackGamma","feedbackDecay","feedbackWeight")
#params = c("growthRate","gravityGamma","gravityDecay","gravityWeight")
figdir = paste0(Sys.getenv('CN_HOME'),'/Results/NetworkNecessity/InteractionGibrat/',resdir,'/');dir.create(figdir)
plots=list()
for(param in params){
cperiods = c();cparam=c();mselog=c();logmse=c()
for(period in periods){
latestgen = max(as.integer(sapply(strsplit(sapply(strsplit(list.files(paste0(resdir,'/',period)),"population"),function(s){s[2]}),".csv"),function(s){s[1]})))
res <- as.tbl(read.csv(paste0(resdir,'/',period,'/population',latestgen,'.csv')))
#res=res[which(res$gravityWeight>0.0001&res$gravityDecay<500&res$feedbackDecay<500),]
show(paste0(period,' : dG = ',mean(res$gravityDecay),' +- ',sd(res$gravityDecay)))
mselog=append(mselog,res$mselog);logmse=append(logmse,res$logmse)
cperiods=append(cperiods,rep(period,nrow(res)));cparam=append(cparam,res[[param]])
}
g=ggplot(data.frame(mselog=mselog,logmse=logmse,param=cparam,period=cperiods),aes_string(x="logmse",y="mselog",colour="param"))
plots[[param]]=g+geom_point()+scale_colour_gradient(low="blue",high="red",name=param)+facet_wrap(~period,scales = "free")
}
multiplot(plotlist = plots,cols=4)
##### plot values of a given param
getDate<-function(s){(as.integer(strsplit(s,"-")[[1]][1])+as.integer(strsplit(s,"-")[[1]][2]))/2}
filtered=T # makes no sense to filter in full model.
for(param in params){
decays=c();sdDecay=c();types=c();ctimes=c()
for(period in periods){
latestgen = max(as.integer(sapply(strsplit(sapply(strsplit(list.files(paste0(resdir,'/',period)),"population"),function(s){s[2]}),".csv"),function(s){s[1]})))
res <- as.tbl(read.csv(paste0(resdir,'/',period,'/population',latestgen,'.csv')))
if(filtered){res=res[which(res$gravityWeight>0.0001&res$gravityDecay<200),]}#&res$gravityGamma<5),]}
decays = append(decays,mean(unlist(res[,param])));sdDecay = append(sdDecay,sd(unlist(res[,param])));types = append(types,"pareto")
decays = append(decays,unlist(res[which(res$logmse==min(res$logmse))[1],param]));sdDecay=append(sdDecay,0);types = append(types,"logmse")
decays = append(decays,unlist(res[which(res$mselog==min(res$mselog))[1],param]));sdDecay=append(sdDecay,0);types = append(types,"mselog")
ctimes = append(ctimes,rep(getDate(period),3))
}
g=ggplot(data.frame(decay=decays,sd=sdDecay,type=types,time=ctimes),aes(x=time,y=decay,colour=type,group=type))
g+geom_point()+geom_line()+
geom_errorbar(aes(ymin=decay-sd,ymax=decay+sd))+ylab(param)+stdtheme
ggsave(file=paste0(figdir,param,'_filt',as.numeric(filtered),'.png'),width=20,height=15,units='cm')
}
param='gravityWeight'
decays=c();sdDecay=c();types=c();ctimes=c()
for(period in periods){
latestgen = max(as.integer(sapply(strsplit(sapply(strsplit(list.files(paste0(resdir,'/',period)),"population"),function(s){s[2]}),".csv"),function(s){s[1]})))
res <- as.tbl(read.csv(paste0(resdir,'/',period,'/population',latestgen,'.csv')))
#res=res[which(res$gravityWeight>0.0001&res$gravityDecay<200),]
decays = append(decays,mean(unlist(res[,param]))/mean(unlist(res[,"growthRate"])));sdDecay = append(sdDecay,sd(unlist(res[,param]))/mean(unlist(res[,"growthRate"])));types = append(types,"pareto")
decays = append(decays,unlist(res[which(res$logmse==min(res$logmse))[1],param])/unlist(res[which(res$logmse==min(res$logmse))[1],"growthRate"]));sdDecay=append(sdDecay,0);types = append(types,"logmse")
decays = append(decays,unlist(res[which(res$mselog==min(res$mselog))[1],param])/unlist(res[which(res$mselog==min(res$mselog))[1],"growthRate"]));sdDecay=append(sdDecay,0);types = append(types,"mselog")
ctimes = append(ctimes,rep(getDate(period),3))
}
g=ggplot(data.frame(decay=decays,sd=sdDecay,type=types,time=ctimes),aes(x=time,y=decay,colour=type,group=type))
g+geom_point()+geom_line()+
geom_errorbar(aes(ymin=decay-sd,ymax=decay+sd))+stdtheme+ylab(paste0(param,'/growthRate'))
ggsave(file=paste0(figdir,param,'_relativegrowthRate.png'),width=20,height=15,units='cm')
#
#m = lm(logmse~gravityDecay+gravityGamma+gravityWeight+growthRate,res)
#
# d <- res %>% mutate(gfg=floor(feedbackGamma*6)/6,ggd=floor(gravityDecay/100)*100)
# res%>%group_by(growthRate)%>%summarise(logmse=mean(logmse))
#d=res[which(res$gravityDecay<50),]
d=res[res$logmse<24.5&res$mselog<6.5&res$gravityDecay<50,]
gp = ggplot(d,aes(x=gravityDecay,y=logmse,colour=gravityWeight,group=gravityWeight))
gp+geom_line()+facet_grid(growthRate~gravityGamma,scales="free")#+stat_smooth()
######
params = c("growthRate","gravityWeight","gravityGamma","gravityDecay")#"growthRate","gravityWeight")
#params = c("growthRate","gravityWeight","gravityGamma","gravityDecay","feedbackWeight","feedbackGamma","feedbackDecay")
#params = c("feedbackWeight","feedbackGamma","feedbackDecay")
d=res#[which(res$gravityWeight>0.0001),]#[res$logmse<24.5&res$mselog<6.35,]
plots=list()
for(param in params){
g=ggplot(d,aes_string(x="logmse",y="mselog",colour=param))
plots[[param]]=g+geom_point()+scale_colour_gradient(low="yellow",high="red")
}
multiplot(plotlist = plots,cols=2)
#######
##M1
#data.frame(res[res$logmse<31.24&res$mselog<302.8125,])
##M2
#data.frame(res[res$logmse<31.24&res$mselog<303,])
####
#res$rate=res$gravityWeight/res$growthRate
#p1="rate";p2="gravityDecay";p3="gravityGamma";p4="growthRate";err="mselog"
res$rate=res$feedbackWeight/res$growthRate
p1="rate";p2="feedbackDecay";p3="feedbackGamma";p4="growthRate";err="logmse"
g=ggplot(res)#[abs(res$growthRate-0.071)<0.0001,])#res[res$growthRate==0.07|res$growthRate==0.06,])
g+geom_line(aes_string(x=p1,y=err,colour=p2,group=p2),alpha=0.7)#+facet_grid(paste0(p3,"~",p4),scales="free")#)))+stat_smooth()#+facet_wrap(~gravityGamma,scales = "free")
####
# Determination of range of min growth rates
minlogmse = c();minmselog=c()
for(gravityWeight in unique(res$gravityWeight)){
for(gravityGamma in unique(res$gravityGamma)){
for(gravityDecay in unique(res$gravityDecay)){
d = res[res$gravityWeight==gravityWeight&res$gravityGamma==gravityGamma&res$gravityDecay==gravityDecay,]
minlogmse = append(minlogmse,d$growthRate[d$logmse==min(d$logmse)])
minmselog = append(minmselog,d$growthRate[d$mselog==min(d$mselog)])
}
}
}
#
# -> range(minlogmse) ; range(minmselog)
#
data.frame(res[res$logmse==min(res$logmse),])
data.frame(res[res$mselog==min(res$mselog),])
############
# Looking only at gravity
resgrav = res[res$feedbackWeight==0.0&res$feedbackDecay==2.0&res$feedbackGamma==1.5,]
#rm(res)
sres = resgrav %>% group_by(gravityGamma,gravityWeight,growthRate) %>% summarise(gravminlogmse=gravityDecay[which(logmse==min(logmse))[1]],gravminmselog=gravityDecay[which(mselog==min(mselog))[1]])
g=ggplot(sres)
g+geom_point(aes(x=gravityWeight/growthRate,y=gravminlogmse,colour=gravityGamma))
# -> needs more exploration
sres[sres$gravminlogmse==201,]
ressample=sres
ressample=resgrav[resgrav$gravityGamma==1&abs(resgrav$gravityWeight-6e-04)<1e-10&resgrav$growthRate==0.007,]
plot(ressample$gravityDecay,ressample$logmse,type='l')
|
7af8d20bfd2bbf277d0f7ec7e13fc84fcf84b40c | ed0131ae47615ad8c9d455fc031b33c322f460e5 | /rupload.r | 86cd7f7ddd89ec66bd17b775404f9df48f2919a3 | [] | no_license | rmkelley/rmkelley.github.io | d7191aaac5f523673750a399c3cfd0ece4feaf70 | 243ef9997829d300b91be450086884ceaf3362c2 | refs/heads/master | 2020-07-23T18:09:20.906172 | 2019-12-15T15:40:08 | 2019-12-15T15:40:08 | 207,661,231 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 1,345 | r | rupload.r | ## script to convert an SPSS statistics file into something understandable/uploadable by a database
## We had an SPSS file from a DHS survey. The Import Dataset feature in R imports the dataset under its file name (MWHR61FL below).
install.packages("RPostgres")
install.packages("rccmisc")
library(foreign)
library(DBI)
library(dplyr)
library(rccmisc)
library(RPostgres)
dhshh <- lownames(select(MWHR61FL,HHID,HV001,HV204,HV248,HV246A,HV246D,HV246E,HV246G,HV245,HV271,HV251,HV206,HV226,HV219,HV243A,HV207))
## Create a new dataframe with only the necessary columns, and switching all field names to lower-case
## Change the first parameter of select() to the name of your data frame (from importing SPSS file) and then list any of the columns you want to keep
con <- dbConnect(RPostgres::Postgres(), dbname='database name', host='database host', user='user', password='password')
## Connect to the database. Change the database name, host, user, and password to your own.
dbListTables(con)
## This generates all the tables in the db for a sight-check
dbWriteTable(con,'dhshh',dhshh, overwrite=TRUE)
## import the table to the database and overwrite existing one. con is the database connection, 'dhshh' is the name of the new table, and dhshh is the data frame to be imported
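## Optional sight-check before disconnecting (the row count depends on your survey file):
## dbGetQuery(con, "SELECT COUNT(*) AS n_households FROM dhshh")
## head(dbReadTable(con, "dhshh"))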
dbDisconnect(con) #disconnects from the database
|
4fff710ee94946b833c31dc174885fa9601684c4 | 29585dff702209dd446c0ab52ceea046c58e384e | /mmds/R/integrate.pdf.R | f5920a22a01a8e9ef6a670fa019accdf6705a4df | [] | no_license | ingted/R-Examples | 825440ce468ce608c4d73e2af4c0a0213b81c0fe | d0917dbaf698cb8bc0789db0c3ab07453016eab9 | refs/heads/master | 2020-04-14T12:29:22.336088 | 2016-07-21T14:01:14 | 2016-07-21T14:01:14 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,612 | r | integrate.pdf.R | integrate.pdf<-function(x,width,pars,mix.terms,z=NULL,zdim=0,pt=FALSE){
# integrate the detection function -- ie. calculate overall mu
# grab some pars
got.pars<-getpars(pars,mix.terms,zdim,z)
key.scale<-got.pars$key.scale
key.shape<-got.pars$key.shape
mix.prop<-got.pars$mix.prop
if(pt){
intfcn<-integrate.hn.pt
}else{
intfcn<-integrate.hn
}
#work out prob det for each mixture
#if(mix.terms>1){
if(is.list(z)|all(zdim>0)){
p<-matrix(0,mix.terms,length(x$distance))
for (i in 1:mix.terms) {
if(is.list(z)){
keysc<-key.scale[i,]
}else{
keysc<-key.scale[i]
}
p[i,]<-intfcn(keysc,width)
}
}else{
p<-numeric(mix.terms)
for (i in 1:mix.terms) {
if(is.list(z)){
keysc<-key.scale[i,]
}else{
keysc<-key.scale[i]
}
p[i]<-intfcn(keysc,width)
}
}
#work out proportion of each mixture class in the population
p.pop<-mix.prop/p
p.pop<-p.pop/sum(p.pop)
#}else{
# p.pop<-mix.prop
#}
res<-numeric(length(x$distance))
# covariates...
if(is.list(z)|all(zdim>0)){
# storage
for(j in 1:mix.terms){
res<-res+(p.pop[j,]/intfcn(key.scale[j,],width))*intfcn(key.scale[j,],x$distance)
}
# no covariates
}else{
# storage
for(j in 1:mix.terms){
res<-res+(p.pop[j]/intfcn(key.scale[j],width))*intfcn(key.scale[j],x$distance)
}
}
return(res)
}
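# Hypothetical call sketch (the packing of 'pars' is defined by getpars(),
# which this sketch does not reproduce; 'my_pars' is a placeholder):
# obs <- data.frame(distance = c(0.1, 0.4, 0.8))
# integrate.pdf(x = obs, width = 1, pars = my_pars, mix.terms = 2, pt = FALSE)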
|
b92c52a09aa25d784591d7ba7eed5499ad7315fb | 2166f50dab140b8c395ad4acc266430e0f47c921 | /Opposite day3.R | 48e19a223da2d13458bce51bf876ae2c5972e99b | [] | no_license | rosehartman/frog-trap | e101990fd030849d233015ffe07000c71324b880 | 43cce031300b5d4b5a4cf57b376966aa922ae462 | refs/heads/master | 2021-01-22T05:06:53.781155 | 2015-02-08T21:33:45 | 2015-02-08T21:33:45 | 14,573,077 | 0 | 0 | null | 2014-03-05T17:15:08 | 2013-11-21T00:12:05 | R | UTF-8 | R | false | false | 6,346 | r | Opposite day3.R | # Now let's have predation affect the adults AND have the adults be
# the migratory life stage.
source('R/opposite day3 functions.R')
# apply stochastic growth function over all predation levels
resultsSO3 = ldply(pred, function(pred2){
# calculate stochastic growth rates
r.0 = foreach(i=1:21, .combine=c) %dopar% fooopp3(p=p[i], pred=pred2, states=states)
return(r.0)
})
# apply deterministic growth function over all predation levels
resultsDO3 = ldply(pred, function(pred){
# deterministic growth rate
r2.0 = sapply(p, Patchopp3, s1=c(ok[2],ok[2]), J=c(ok[1],ok[1]), fx=fx, pred=pred, simplify=T)
r2.0 = log(r2.0) # take the log to make it comparable to stochastic rates
return(r2.0)
})
allO3 = data.frame(t(rbind(resultsDO3, resultsSO3, p)))
names(allO3) = c(pred,predSt,"p")
write.csv(allO3, file = "adult disppred.csv")
library(reshape2)
allO = melt(allO3, id.vars="p", variable.name= "predation", value.name="lambda")
allO$stoch = rep(NA, 252)
allO$stoch[c(1:126)]="n"
allO$stoch[c(127:252)] = "y"
allO$rate = rep(NA, 252)
allO$rate[c(1:21, 127:147)] = 6
allO$rate[c(22:42, 148:168)] = 5
allO$rate[c(43:63, 169:189)] = 4
allO$rate[c(64:84, 190:210)] = 3
allO$rate[c(85:105, 211:231)] = 2
allO$rate[c(106:126, 232:252)] = 1
allO$rate = ordered(allO$rate,
labels = rev(c("100%", "80%", "60%", "40%", "20%", "0%")))
qplot(data=allO[which(allO$stoch=="n"),], x=p, y=lambda, color=rate, geom="line")
# plot stochastic growth and deterministic
opp3 = ggplot(data=allO,
aes(x=p, y=lambda, color=rate, linetype=stoch))
opp3 = opp3+geom_line() + labs(y="population growth rate log(lambda)", x="proportion of each year's total adults \n dispersing to the high predation patch")+
scale_linetype_manual(name = "model", values=c(2, 1), labels = c("deterministic", "stochastic")) +
scale_y_continuous(limits=c(-.25, .5))
opp3
# Make a function to calculate average sensitivites for a bunch to time runs
# over a bunch of attractiveness values
elsplotO = function(pred) {
i=1:21
# Apparently this gets too big for R to handle if you run it for 100,000 time steps, but that doesn't make sense...
run = laply(i, function(x){
ru = replicate(100,bigrunopp3(tf=1000, p1=p[x], pred1=pred))
aaply(ru, 1:2, function(thingy) {sum(thingy)/100})
},
.parallel=T)
# Data frame with the elasticity of each non-zero matrix entry
elsdf = data.frame(p=p, f1=run[, 1,2], j11=run[, 2,1], j21=run[, 2,3], a1=run[, 2,2],
f2=run[, 3,4], j12=run[, 4,1], j22=run[, 4,3], a2=run[, 4,4])
library(reshape2)
elsedf2 = melt(elsdf, id.vars="p", variable.name="stage",value.name="elas")
elsedf2$patch = rep(NA, 168)
elsedf2$stage = rep(NA, 168)
elsedf2$patch[1:84] = "1"
elsedf2$patch[85:168] = "2"
elsedf2$stage[1:21]="f"
elsedf2$stage[22:42]="j1"
elsedf2$stage[43:63]="j2"
elsedf2$stage[64:84]="a"
elsedf2$stage[85:105]="f"
elsedf2$stage[106:126]="j1"
elsedf2$stage[127:147]="j2"
elsedf2$stage[148:168]="a"
# plot the change in elasticities with different ammounts of migration
el = qplot(data=elsedf2, x=p, y=elas, geom="line", color=stage, linetype = patch,
xlab="proportion of juveniles moving to \nhigh predation patch (patch1)", ylab="elasticity of log lambdas \n to changes in life stage",
main=paste("predation = ", pred))
el
}
predO = elsplotO(pred=.5)
fO = foreach (i=1:10, .combine=cbind) %dopar% {
fx1 = c(150*f[i], 150*f[i])
r = ldply(p, fooopp3, states1=states, fx=fx1, pred=.5)
return(r)
}
fO = as.data.frame(fO)
fO$p = p
maxf = apply(fO[,1:10], 2, max)
minf = as.numeric(fO[1,1:10])
maxsf = fO[1:10,]
summaryf = data.frame(levels = seq(.2, 2, by=.2), stage = rep("f", 10), maxlamO=maxf, p = maxsf$p[1:10], ldiff=(maxf-minf))
survO = foreach (i=1:12, .combine=cbind) %dopar% {
states1 <- cbind(states[,1]*surv[i,1], states[,2]*surv[i,2])
r = ldply(p, fooopp3, states1=states1, pred=.5)
return(r)
}
survO2 = as.data.frame(survO)
names(survO2) = surv[,1]
survO2$p = p
survOdat = melt(survO2, id.vars="p")
survOdat$levels = surv[,1]
survOplot = ggplot(survOdat, aes(x=p, y=value)) + geom_line()
maxlamO = apply(survO2[,1:12], 2, max)
minlamO = as.numeric(survO2[1,1:12])
maxsO = survO2[1:12,]
for (i in 1:12) maxsO[i,] = survO2[which(survO2[,i]==maxlamO[i]),]
summaryO = data.frame(levels = surv[,1]/(surv[,1]+surv[,2]), maxs=maxlamO, p = maxsO$p, ldiff=(maxlamO-minlamO))
write.csv(summaryO, file = "summary survivalsO tradeoff.csv")
lamlocalO = qplot(levels, p, data= summaryO, geom="line",xlab= "investment in juves", ylab="migration proportion at \n peak of migration/lambda curve", main = "Proportion of juves \n migrating that maximizes growth")
lamlocalO
# Graph changes in height of the peak of the lambda curve
lampeakO = qplot(levels, maxlamO, data= summaryO, geom="line", xlab= "investment in juves", ylab="lambda at \n peak of migration/lambda curve", main = "Maximum growth rate for each life \n stage at each survival level")
lampeakO
# graph changes in difference between max and min or lambda curve
lamdiffO = qplot(levels, ldiff, data= summaryO, geom="line",xlab= "investment in juves", ylab="δlogλ_sMAX", main = "Predation on adults, adults disperse")
lamdiffO + scale_y_continuous(limits=c(0, .26))
svg(filename="lamdiff adult predmig.svg", width=6, height=4)
lamdiffO+ scale_y_continuous(limits=c(0, .26))
dev.off()
# Add the data from the different scenarios together and plot them
summary$scenario = rep("1", nrow(summary))
summaryads$scenario = rep("2",nrow(summaryads))
summaryopp$scenario = rep("3", nrow(summaryopp))
summaryO$scenario = rep("4", nrow(summaryO))
summarytotal = rbind(summary, summaryads, summaryopp, summaryO)
summarytotal$scenario = as.factor(summarytotal$scenario)
lamdifftot = qplot(levels, ldiff, data = summarytotal, geom="line", color=scenario, xlab="proportional investment in juveniles", ylab= "δlogλ_sMAX")
lamdifftot + scale_color_manual(values=c("red","blue","green","black"), labels = c("juvenile dispersal, \n predation on juveniles", "juvenile dispersal, \n predation on adults", "adult dispersal, \n predation on juveniles", "adult dispersal, \n predation on adults"))
svg(filename="lamdiff_total.svg", width=8, height=4)
lamdifftot + scale_y_continuous(limits=c(0, .35))
dev.off() |
ec9290154c3c63b0b605ea8bb9edccf8e5c0e591 | 0500ba15e741ce1c84bfd397f0f3b43af8cb5ffb | /cran/paws.management/man/cloudtrail_put_event_selectors.Rd | 89bbb8fcaf6d1799166575c9ad0ebee4b150bcdf | [
"Apache-2.0"
] | permissive | paws-r/paws | 196d42a2b9aca0e551a51ea5e6f34daca739591b | a689da2aee079391e100060524f6b973130f4e40 | refs/heads/main | 2023-08-18T00:33:48.538539 | 2023-08-09T09:31:24 | 2023-08-09T09:31:24 | 154,419,943 | 293 | 45 | NOASSERTION | 2023-09-14T15:31:32 | 2018-10-24T01:28:47 | R | UTF-8 | R | false | true | 3,004 | rd | cloudtrail_put_event_selectors.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cloudtrail_operations.R
\name{cloudtrail_put_event_selectors}
\alias{cloudtrail_put_event_selectors}
\title{Configures an event selector or advanced event selectors for your trail}
\usage{
cloudtrail_put_event_selectors(
TrailName,
EventSelectors = NULL,
AdvancedEventSelectors = NULL
)
}
\arguments{
\item{TrailName}{[required] Specifies the name of the trail or trail ARN. If you specify a trail
name, the string must meet the following requirements:
\itemize{
\item Contain only ASCII letters (a-z, A-Z), numbers (0-9), periods (.),
underscores (_), or dashes (-)
\item Start with a letter or number, and end with a letter or number
\item Be between 3 and 128 characters
\item Have no adjacent periods, underscores or dashes. Names like
\verb{my-_namespace} and \code{my--namespace} are not valid.
\item Not be in IP address format (for example, 192.168.5.4)
}
If you specify a trail ARN, it must be in the following format.
\code{arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail}}
\item{EventSelectors}{Specifies the settings for your event selectors. You can configure up to
five event selectors for a trail. You can use either \code{EventSelectors} or
\code{AdvancedEventSelectors} in a
\code{\link[=cloudtrail_put_event_selectors]{put_event_selectors}} request, but not
both. If you apply \code{EventSelectors} to a trail, any existing
\code{AdvancedEventSelectors} are overwritten.}
\item{AdvancedEventSelectors}{Specifies the settings for advanced event selectors. You can add
advanced event selectors, and conditions for your advanced event
selectors, up to a maximum of 500 values for all conditions and
selectors on a trail. You can use either \code{AdvancedEventSelectors} or
\code{EventSelectors}, but not both. If you apply \code{AdvancedEventSelectors} to
a trail, any existing \code{EventSelectors} are overwritten. For more
information about advanced event selectors, see \href{https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html}{Logging data events}
in the \emph{CloudTrail User Guide}.}
}
\description{
Configures an event selector or advanced event selectors for your trail. Use event selectors or advanced event selectors to specify management and data event settings for your trail. If you want your trail to log Insights events, be sure the event selector enables logging of the Insights event types you want configured for your trail. For more information about logging Insights events, see \href{https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-cloudtrail.html}{Logging Insights events for trails} in the \emph{CloudTrail User Guide}. By default, trails created without specific event selectors are configured to log all read and write management events, and no data events.
See \url{https://www.paws-r-sdk.com/docs/cloudtrail_put_event_selectors/} for full documentation.
}
\keyword{internal}
|
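A hedged sketch of what a `put_event_selectors` request payload could look like from R. Only the request list is built here (the actual call needs AWS credentials, so it is left commented out); the trail ARN, bucket name, and selector name are hypothetical placeholders, and `paws::cloudtrail()` is assumed to be the paws client constructor.

```r
# Hypothetical trail ARN in the documented format
trail_arn <- "arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail"

# One advanced event selector: log S3 data events for a single (made-up) bucket
advanced_selectors <- list(
  list(
    Name = "Log S3 data events for one bucket",
    FieldSelectors = list(
      list(Field = "eventCategory", Equals = list("Data")),
      list(Field = "resources.type", Equals = list("AWS::S3::Object")),
      list(Field = "resources.ARN",
           StartsWith = list("arn:aws:s3:::my-bucket/"))
    )
  )
)

# With credentials configured, the request would look roughly like (not run):
# svc <- paws::cloudtrail()
# svc$put_event_selectors(
#   TrailName = trail_arn,
#   AdvancedEventSelectors = advanced_selectors
# )
```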
5dfa2fe33b846506708693fab5ab00b09e023aad | 1515e2e409fc02383f6025ed3be4eef138c943c6 | /code/data-prep/build-flight-volume-features.R | 3617d19269c1fe560dfd85c57897f54fca1faf4c | [] | no_license | dgarant/flight-delays | 84f4ffeee438e80ca492ecbf1ae9f5b10af7e745 | 37fc0a11c7487d565beeb0e1fcc540b1e6f65486 | refs/heads/master | 2016-08-11T11:43:03.047151 | 2015-12-18T18:58:06 | 2015-12-18T18:58:06 | 44,618,018 | 0 | 0 | null | 2015-12-15T02:03:40 | 2015-10-20T16:03:04 | R | UTF-8 | R | false | false | 1,428 | r | build-flight-volume-features.R | # This script creates the number of flights arriving and departing from an airport within hourly blocks
library(plyr)
library(RSQLite)
sqlite <- dbDriver("SQLite")
ontimeDb <- dbConnect(sqlite, "../data/ontime.sqlite3")
dat <- dbGetQuery(ontimeDb, "select * from ontime")
dat$departure_dt <- with(dat, ISOdatetime(Year, Month, DayofMonth, floor(CRSDepTime / 100), CRSDepTime %% 100, 0))
# ISOdatetime can be added to a number of seconds - estelapsedtime is in minutes
dat$arrival_dt <- dat$CRSElapsedTime * 60 + dat$departure_dt
dat$dep_hour_group <- strftime(dat$departure_dt, format="%Y-%m-%d %H")
dat$arr_hour_group <- strftime(dat$arrival_dt, format="%Y-%m-%d %H")
hourly_departures <- ddply(dat, .(Origin, dep_hour_group), summarize, num_departures=length(Origin))
hourly_departures <- rename(hourly_departures, c("Origin"="airport", "dep_hour_group"="hour"))
hourly_arrivals <- ddply(dat, .(Dest, arr_hour_group), summarize, num_arrivals=length(Dest))
hourly_arrivals <- rename(hourly_arrivals, c("Dest"="airport", "arr_hour_group"="hour"))
airport_volume <- merge(hourly_departures, hourly_arrivals, all=TRUE)
airport_volume$num_departures <- ifelse(is.na(airport_volume$num_departures), 0, airport_volume$num_departures)
airport_volume$num_arrivals <- ifelse(is.na(airport_volume$num_arrivals), 0, airport_volume$num_arrivals)
write.csv(airport_volume, "../data/airport-volume.csv", quote=FALSE, row.names=FALSE)
|
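A toy illustration of the hour-bucketing used in the script above, with one made-up flight: `ISOdatetime` returns a POSIXct value, so adding the elapsed time in seconds yields the scheduled arrival, and `strftime` with `"%Y-%m-%d %H"` collapses both timestamps into hourly groups.

```r
# Hypothetical flight: scheduled departure 2008-03-01 09:45 (CRSDepTime = 945),
# scheduled elapsed time 95 minutes.
dep <- ISOdatetime(2008, 3, 1, floor(945 / 100), 945 %% 100, 0, tz = "UTC")
arr <- 95 * 60 + dep  # POSIXct plus seconds

dep_hour_group <- strftime(dep, format = "%Y-%m-%d %H", tz = "UTC")
arr_hour_group <- strftime(arr, format = "%Y-%m-%d %H", tz = "UTC")

dep_hour_group  # "2008-03-01 09"
arr_hour_group  # "2008-03-01 11"  (09:45 + 95 min = 11:20)
```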
bc292be2525326e1d2f22c90b15ae538de256e50 | 00c728d52337f009c7d4b2780d41ab173e2a527e | /Admin/script/internal/bind_and_hash.R | 7b7f9419822f9870c272c3e46ee6cf7198ef13a0 | [
"LicenseRef-scancode-generic-cla",
"MIT"
] | permissive | microsoft/vivainsights_zoom_int | add2c3ed2198a327f88fe76d5a42e1a15371038c | d0a9f57519888b06ea64d786931545670c6d524f | refs/heads/main | 2023-08-23T11:33:56.054405 | 2023-08-16T14:35:08 | 2023-08-16T14:35:08 | 368,118,562 | 1 | 2 | MIT | 2023-08-15T16:34:26 | 2021-05-17T08:53:54 | R | UTF-8 | R | false | false | 5,384 | r | bind_and_hash.R | #' @title
#' Read in csv files with the same column structure, row-bind, and hash
#'
#' @description
#' csv files with a matched pattern in the file name in the given file
#' path are read in, and a row-bind operation is run to combine them. Pattern
#' matching is optional, and the user can choose whether to return a combined
#' data frame or save the output with the same name in the directory.
#'
#' The combined data is merged with a hash file.
#'
#'
#' @param path String containing the file path to the csv files.
#' @param pattern String containing pattern to match in the file name. Optional
#' and defaults to `NULL` (all csv files in path will be read in).
#' @param hash_path String containing the path to the .csv file containing the
#' hash IDs. The file should contain only two columns with the following
#' headers:
#' - `PersonID`: column containing email addresses that map to the Zoom
#' `User_Name` columns.
#' - `HashID`: column containing HashIDs which will be uploaded as
#' organizational attributes to Workplace Analytics.
#' @param match_only logical. Determines whether to include only rows where
#' there is a corresponding match on the hash file. Defaults to `FALSE`.
#'
#' @return
#' A data.table containing the combined, hashed data. When `match_only` is set
#' to `TRUE`, only rows with a corresponding match in the hash file are
#' returned.
#'
#' @section How to Run:
#' ```
#' # Read in files matching a date pattern and return a hashed data frame
#' bind_and_hash(
#'   path = "data",
#'   pattern = "2021-04-05",
#'   hash_path = "hash_file.csv"
#' )
#'
#' # Read in all csv files in the path, keeping matched rows only
#' bind_and_hash(
#'   path = "data",
#'   hash_path = "hash_file.csv",
#'   match_only = TRUE
#' )
#' ```
#'
#' @export
bind_and_hash <- function(path,
pattern = NULL,
hash_path,
match_only = FALSE){
start_t <- Sys.time()
# Read in hash file upfront -----------------------------------------------
hash_dt <- data.table::fread(hash_path, encoding = "UTF-8")
# List all files in path --------------------------------------------------
file_str <- list.files(path = path)
# List csv file in path ---------------------------------------------------
csv_str <- file_str[grepl(pattern = "\\.csv$",
x = file_str,
ignore.case = TRUE)]
# Match all files with pattern --------------------------------------------
if(is.null(pattern)){
csv_match_str <- csv_str # No pattern matching
} else {
csv_match_str <- csv_str[grepl(pattern = pattern,
x = csv_str,
ignore.case = FALSE)]
}
# Append file path --------------------------------------------------------
csv_match_str <- paste0(path, "/", csv_match_str)
# Read all csv files in ---------------------------------------------------
# Run `clean_zoom()` on each file
readin_list <-
seq(1, length(csv_match_str)) %>%
purrr::map(
function(i){
message(
paste0("Reading in ", i,
" out of ", length(csv_match_str),
" data files..."))
clean_zoom(
suppressMessages(
suppressWarnings(
readr::read_csv(file = csv_match_str[[i]])
)
)
)
})
# Column name cleaning ----------------------------------------------------
zoom_dt <- data.table::as.data.table(dplyr::bind_rows(readin_list))
# Drop unnecessary columns ------------------------------------------------
zoom_dt[, User_Name := NULL]
# zoom_dt[, X13 := NULL]
# zoom_dt[, Display_name := NULL]
# zoom_dt[, Phone_numberName_Original_Name := NULL]
# Standardize cases prior to replacement ----------------------------------
hash_dt[, PersonID := str_trim(tolower(PersonID))]
zoom_dt[, User_Email_1 := str_trim(tolower(User_Email_1))]
zoom_dt[, User_Email_2 := str_trim(tolower(User_Email_2))]
# Replace `User_Email_1` --------------------------------------------------
setkey(zoom_dt, "User_Email_1")
setkey(hash_dt, "PersonID")
zoom_dt <- hash_dt[zoom_dt]
zoom_dt[, User_Email_1 := HashID]
zoom_dt[, HashID := NULL] # Drop `HashID`
# Replace `User_Email_2` --------------------------------------------------
setkey(zoom_dt, "User_Email_2")
zoom_dt <- hash_dt[zoom_dt]
zoom_dt[, User_Email_2 := HashID]
zoom_dt[, HashID := NULL] # Drop `HashID`
# Drop unnecessary columns ------------------------------------------------
zoom_dt[, PersonID := NULL]
zoom_dt[, i.PersonID := NULL]
# Print timestamp ---------------------------------------------------------
message(
paste("Total runtime for `bind_and_hash()`: ",
round(difftime(Sys.time(), start_t, units = "mins"), 1),
"minutes.")
)
message(
paste("Total number of rows in output data: ",
nrow(zoom_dt))
)
message(
paste("Total number of columns in output data: ",
ncol(zoom_dt))
)
message(
paste("Total number of users in output data: ",
zoom_dt %>%
dplyr::pull(User_Email_1) %>%
dplyr::n_distinct())
)
# Remove non-matches ------------------------------------------------------
if(match_only == TRUE){
zoom_dt %>%
.[!is.na(User_Email_1)] %>%
.[!is.na(User_Email_2)]
} else {
zoom_dt[]
}
}
|
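The email-to-hash replacement in `bind_and_hash()` is done with keyed data.table joins; the same lookup idea in base R (toy, made-up emails and hashes) looks like this:

```r
# Hypothetical hash file: PersonID -> HashID
hash_dt <- data.frame(PersonID = c("a@x.com", "b@x.com"),
                      HashID   = c("h1", "h2"),
                      stringsAsFactors = FALSE)

# Hypothetical Zoom data with inconsistent case/whitespace
zoom_dt <- data.frame(User_Email_1 = c("A@x.com ", "c@x.com"),
                      stringsAsFactors = FALSE)

# Standardize case and whitespace before matching, as bind_and_hash() does
zoom_dt$User_Email_1 <- trimws(tolower(zoom_dt$User_Email_1))

# Left-join-style lookup: emails without a hash become NA
idx <- match(zoom_dt$User_Email_1, hash_dt$PersonID)
zoom_dt$User_Email_1 <- hash_dt$HashID[idx]

zoom_dt$User_Email_1  # "h1" NA
```

Rows left as `NA` here correspond to the non-matches that `match_only = TRUE` drops in the function above.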
c9fff1527ab1cb8414facf75a902372f864010dc | 9940a0e6f44db27fedce196b657ce30f53fa247e | /figures/figure4_rna.R | 99957094ec23378c0cf7aa2dc74259ff3382fce5 | [
"MIT"
] | permissive | p4rkerw/Muto_Wilson_NComm_2020 | cec71644b51baa3fc45e3c937dfadb20294695e9 | 874e7daa1368f9426cf7d05bdf878f1ca416139d | refs/heads/master | 2023-04-09T13:02:11.347401 | 2023-03-13T13:17:44 | 2023-03-13T13:17:44 | 239,532,534 | 21 | 7 | null | 2021-02-22T15:03:06 | 2020-02-10T14:33:17 | R | UTF-8 | R | false | false | 3,078 | r | figure4_rna.R | library(Seurat)
library(monocle3)
library(ggplot2)
library(dplyr)
library(Matrix)
library(here)
library(BuenColors)
set.seed(1234)
#snRNA-seq data
rnaAggr <- readRDS("cellranger_rna_prep/rnaAggr_control.rds")
new.fig4a <- FeaturePlot(rnaAggr,"VCAM1",order=T) #700x600
new.figS10a_1 <- FeaturePlot(rnaAggr,"TPM1",order=T) #700x600
new.figS10a_2 <- FeaturePlot(rnaAggr,"SLC5A12",order=T) #700x600
new.figS10a_3 <- FeaturePlot(rnaAggr,"SLC4A4",order=T) #700x600
#Idents(rnaAggr) <- "celltype"
count_data <- GetAssayData(rnaAggr, assay = "RNA",slot = "counts")
gene_metadata <- as.data.frame(rownames(count_data))
colnames(gene_metadata) <- "gene_short_name"
rownames(gene_metadata) <- gene_metadata$gene_short_name
cds <- new_cell_data_set(as(count_data, "sparseMatrix"),
cell_metadata = rnaAggr@meta.data,
gene_metadata = gene_metadata)
cds <- preprocess_cds(cds, num_dim = 100)
cds = align_cds(cds, num_dim = 100, alignment_group = "orig.ident")
cds = reduce_dimension(cds,preprocess_method = "Aligned")
#fig4a_1 <- plot_cells(cds, color_cells_by="celltype",
#                      group_label_size = 0) #png: 570x540
#subclustering
cds_subset <- choose_cells(cds) # subsetting PT for later analysis: before adding pseudotime
cds_subset <- cds_subset[,colData(cds_subset)$celltype %in% c("PT","PT_VCAM1")]
cds_subset <- cluster_cells(cds_subset)
cds_subset <- learn_graph(cds_subset,close_loop = FALSE)
cds_subset <- order_cells(cds_subset)
fig4a_2 <- plot_cells(cds_subset,
color_cells_by = "pseudotime",
label_groups_by_cluster=FALSE,
label_leaves=FALSE,
label_branch_points=FALSE,
show_trajectory_graph=T) #png 560x430
pseudotime <- as.data.frame(cds_subset@principal_graph_aux@listData[["UMAP"]][["pseudotime"]])
pt <- AddMetaData(pt,pseudotime)
FeaturePlot(pt,"pseudotime",cols = jdb_palette("brewers_yes"))
genes <- c("VCAM1","TPM1","SLC5A12","SLC4A4")
lineage_cds <- cds_subset[rowData(cds_subset)$gene_short_name %in% genes,]
genes <- c("VCAM1","TPM1")
lineage_cds <- cds_subset[rowData(cds_subset)$gene_short_name %in% genes,]
fig4b_1 <- plot_genes_in_pseudotime(lineage_cds,
color_cells_by="celltype",
min_expr=1,
cell_size=0.1,
trend_formula = "~ splines::ns(pseudotime, df=10)",
panel_order = c("VCAM1","TPM1")
) #png 500x670
genes <- c("SLC5A12","SLC4A4")
lineage_cds <- cds_subset[rowData(cds_subset)$gene_short_name %in% genes,]
fig4b_2 <- plot_genes_in_pseudotime(lineage_cds,
color_cells_by="celltype",
min_expr=1,
cell_size=0.1,
trend_formula = "~ splines::ns(pseudotime, df=10)",
panel_order = c("SLC5A12","SLC4A4")
) #png 500x670
figS10 <- FeaturePlot(rnaAggr,features = c("VCAM1","TPM1","SLC5A12","SLC4A4"),order=T) |
319c509652355b08441e722a2fe80c1a1253fb95 | ad77cd8d25d727733a70e6783057cb4a9a16ca20 | /data_analysis.R | 20257b5ce94164f78b653ea0ff5d2904ac06e39f | [] | no_license | muuankarski/oslo2013 | 1ae115b9463b1a33fe0a731b7320050191179636 | 7c42d5e6a05b45577c77b5b5935a17c3df9932f4 | refs/heads/master | 2016-08-03T11:44:31.089764 | 2013-10-11T13:32:17 | 2013-10-11T13:32:17 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,312 | r | data_analysis.R | # Retrevieng data from 2002 census
# http://std.gmcrosstata.ru/webapi/opendatabase?id=vpn2002_pert
# ТЕРСОН Alue + "Город подчинения субъекта РФ"
#
setwd("~/workspace/courses/oslo2013/workspace_oslo/oslo_essay")
## Karelia
raw <- read.csv("data/housematerial_karelia.csv", sep=";", skip=6)
names(raw) <- c("region","measure","material","value")
df <- raw
n.row <- nrow(df)
df <- df[-n.row:-(n.row-2),]
head(df)
tail(df)
# translate material
df$material <- as.character(df$material)
df$material[df$material == "Не указано"] <- "NotSpecified"
df$material[df$material == "Кирпич, камень"] <- "BrickStone"
df$material[df$material == "Панель"] <- "Panel"
df$material[df$material == "Блок"] <- "Block"
df$material[df$material == "Дерево"] <- "Timber"
df$material[df$material == "Смешанный материал"] <- "MixedMaterial"
df$material[df$material == "Другой материал"] <- "OtherMaterial"
df$material <- as.factor(df$material)
summary(df)
# make number relative
library(reshape2)
df.wide <- dcast(df, region + measure ~ material, value.var = "value")
head(df.wide)
df.wide$sum <- rowSums(df.wide[,3:9])
df.wide$ShareBlock <- round(df.wide[,3]/df.wide[,10]*100,2)
df.wide$ShareBrickStone <- round(df.wide[,4]/df.wide[,10]*100,2)
df.wide$ShareMixedMaterial <- round(df.wide[,5]/df.wide[,10]*100,2)
df.wide$ShareNotSpecified <- round(df.wide[,6]/df.wide[,10]*100,2)
df.wide$ShareOtherMaterial <- round(df.wide[,7]/df.wide[,10]*100,2)
df.wide$SharePanel <- round(df.wide[,8]/df.wide[,10]*100,2)
df.wide$ShareTimber <- round(df.wide[,9]/df.wide[,10]*100,2)
# lets order the regions by size for plotting
df.wide <- df.wide[order(df.wide$sum), ]
df.wide$region <- factor(df.wide$region,
levels = as.character(df.wide[order(df.wide$sum), 1]))
df.wide.karelia <- df.wide
# back to long for plotting
df.long <- melt(df.wide, id.vars = "region",
measure.vars=c("ShareBlock",
"ShareBrickStone",
"ShareMixedMaterial",
"ShareNotSpecified",
"ShareOtherMaterial",
"SharePanel",
"ShareTimber"))
head(df.long)
df.long <- df.long[!is.na(df.long$value), ]
df.long.karelia <- df.long
# plotting
library(ggplot2)
ggplot(df.long, aes(x=region,y=value,fill=variable)) +
geom_bar(stat="identity",position="dodge") +
coord_flip() +
labs(title="sorted by the size of settlement - Karelia")
ggsave(file="hist_karelia.png", width=14, height=14)
# Nizhni
raw <- read.csv("data/housematerial_nizhni.csv", sep=";", skip=6)
names(raw) <- c("region","measure","material","value")
df <- raw
n.row <- nrow(df)
df <- df[-n.row:-(n.row-2),]
head(df)
tail(df)
# translate material
df$material <- as.character(df$material)
df$material[df$material == "Не указано"] <- "NotSpecified"
df$material[df$material == "Кирпич, камень"] <- "BrickStone"
df$material[df$material == "Панель"] <- "Panel"
df$material[df$material == "Блок"] <- "Block"
df$material[df$material == "Дерево"] <- "Timber"
df$material[df$material == "Смешанный материал"] <- "MixedMaterial"
df$material[df$material == "Другой материал"] <- "OtherMaterial"
df$material <- as.factor(df$material)
summary(df)
# make number relative
library(reshape2)
df.wide <- dcast(df, region + measure ~ material, value.var = "value")
head(df.wide)
df.wide$sum <- rowSums(df.wide[,3:9])
df.wide$ShareBlock <- round(df.wide[,3]/df.wide[,10]*100,2)
df.wide$ShareBrickStone <- round(df.wide[,4]/df.wide[,10]*100,2)
df.wide$ShareMixedMaterial <- round(df.wide[,5]/df.wide[,10]*100,2)
df.wide$ShareNotSpecified <- round(df.wide[,6]/df.wide[,10]*100,2)
df.wide$ShareOtherMaterial <- round(df.wide[,7]/df.wide[,10]*100,2)
df.wide$SharePanel <- round(df.wide[,8]/df.wide[,10]*100,2)
df.wide$ShareTimber <- round(df.wide[,9]/df.wide[,10]*100,2)
# lets order the regions by size for plotting
df.wide <- df.wide[order(df.wide$sum), ]
df.wide$region <- factor(df.wide$region,
levels = as.character(df.wide[order(df.wide$sum), 1]))
df.wide.nizhni <- df.wide
# back to long for plotting
df.long <- melt(df.wide, id.vars = "region",
measure.vars=c("ShareBlock",
"ShareBrickStone",
"ShareMixedMaterial",
"ShareNotSpecified",
"ShareOtherMaterial",
"SharePanel",
"ShareTimber"))
head(df.long)
df.long <- df.long[!is.na(df.long$value), ]
df.long.nizhni <- df.long
# plotting
library(ggplot2)
ggplot(df.long, aes(x=region,y=value,fill=variable)) +
geom_bar(stat="identity",position="dodge") +
coord_flip() +
  labs(title="sorted by the size of settlement - Nizhni")
ggsave(file="hist_nizhni.png", width=14, height=14)
# plot share of timber on map
library(rustfare)
shapefile <- GetRusGADM("rayon")
shape_karelia <- shapefile[shapefile$NAME_1 == "Karelia", ]
shape_nizhni <- shapefile[shapefile$NAME_1 == "Nizhegorod", ]
plot(shape_karelia)
plot(shape_nizhni)
## Karelia
region_key_karelia <- read.csv("~/workspace/general/region_coding/karelia_key_rayon.csv")
df.map.karelia <- merge(df.long.karelia,
region_key_karelia,
by="region")
library(ggplot2)
library(rgeos)
shape_karelia$id <- rownames(shape_karelia@data)
map.points <- fortify(shape_karelia, region = "id")
map.df <- merge(map.points, shape_karelia, by = "id")
choro <- merge(map.df,df.map.karelia,by="ID_2")
choro <- choro[order(choro$order), ]
ggplot(choro, aes(long,lat,group=group)) +
geom_polygon(aes(fill = value)) +
geom_polygon(data = map.df, aes(long,lat),
fill=NA,
color = "white",
size=0.1) + # white borders
coord_map(project="orthographic") +
facet_wrap(~variable)
ggsave(file="map_karelia.png", width=14, height=14)
# Nizhni
# create the Nizhni key
write.csv(shape_nizhni@data[,6:7], file="temp/shape_name.csv")
write.csv(factor(df.wide.nizhni$region), file="temp/census_name.csv")
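The per-material share columns above (`ShareBlock`, `ShareBrickStone`, …) can also be computed in one step, since a data frame divided by its row sums recycles the vector down each column. A toy sketch with made-up housing counts:

```r
# Hypothetical counts of dwellings by wall material, two settlements
counts <- data.frame(Timber = c(60, 10),
                     Panel  = c(20, 70),
                     Brick  = c(20, 20))

# Row-wise percentage shares in a single expression
shares <- round(counts / rowSums(counts) * 100, 2)

shares$Timber    # 60 10
rowSums(shares)  # each row sums to 100
```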
|
afa8b51925218eadf7bc8108b0fd5e7d756cf3d2 | 461580fbd2677d90a6a580b2d00ed5109a71db54 | /phase3.R | 46c6830cae9508b46839edad81f09f53ef1806be | [] | no_license | ahmedsyed584/Survival-prediction | 2490fa7026e1cc6c7ef7fd9905b05c257d6cdd1f | 3b8c1d315ca4b413784bfc330c52e09a36e378e5 | refs/heads/master | 2020-08-27T07:40:04.123436 | 2019-10-31T15:21:58 | 2019-10-31T15:21:58 | 217,287,417 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 467 | r | phase3.R | #### Logistic regression ###
glm.fit = glm(Survived~ Pclass + Age + SibSp + Parch+SibSp*Parch, data = titanic)
glm.fit
summary(glm.fit)
test = read.csv(file = "test.csv")
prediction = predict.glm(glm.fit, newdata = test, type = "response")
predictions = prediction
for(i in 1:length(prediction))
{
if(is.na(prediction[i]) || prediction[i]> 0.5)
{
predictions[i] = 1
}
else
{
predictions[i] = 0
}
}
head(test)
tail(test)
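The NA-handling loop above (treating missing or above-0.5 probabilities as survived) can be expressed in one vectorized line; a quick toy check of the same rule:

```r
# Hypothetical predicted probabilities, including a missing value
p <- c(0.9, 0.2, NA, 0.5)

# Same rule as the loop: NA or p > 0.5 maps to 1, everything else to 0
pred <- ifelse(is.na(p) | p > 0.5, 1, 0)

pred  # 1 0 1 0
```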
|
00e93513b58713c69223c88d9f98fd302313a3c2 | f75251c8a1209205ea2dc45241adc2f8f1e24b4c | /cachematrix.R | 1252946c43f7e69ee6b516b586269baa99df1c12 | [] | no_license | isaidi90/ProgrammingAssignment2 | d2dd2fcd7a26bdaebfebd74e0156f423bf8d8725 | 0bf713d2649d9d314a804c148044ac3e871b00fd | refs/heads/master | 2020-12-09T20:02:50.839307 | 2020-01-12T15:05:11 | 2020-01-12T15:05:11 | 233,405,781 | 0 | 0 | null | 2020-01-12T14:32:16 | 2020-01-12T14:32:15 | null | UTF-8 | R | false | false | 1,544 | r | cachematrix.R | #First of all, thank you for the time you're taking for evaluating my work
#As described in the instructions, two functions should be written
#The first one "makeCacheMatrix" creates a special "matrix"
#The second one computes the inverse of the special "matrix" returned by makeCacheMatrix.
#If the inverse has already been calculated (and the matrix has not changed),
#then cacheSolve should retrieve the inverse from the cache
## makeCacheMatrix creates a special matrix wich is a list containing a function to :
#1. set the value of the matrix
#2. get the value of the matrix
#3. set the value of the inverse
#4. get the value of the inverse
makeCacheMatrix <- function(x = matrix()) {
inv <- NULL
set <- function(y) {
x <<- y
inv <<- NULL
}
get <- function() {x}
setinverse <- function(inverse) inv <<- inverse
getinverse <- function() inv
list(set = set, get = get,
setinverse = setinverse,
getinverse = getinverse)
}
## The following function calculates the inverse of the special "matrix" created with the makeCacheMatrix function.
#It first checks whether the inverse has already been calculated. If so, it gets the inverse from the cache and skips the computation.
#Otherwise, it calculates the inverse of the data and sets the value of the inverse
#in the cache via the setinverse function.
cacheSolve <- function(x, ...) {
inv <- x$getinverse()
if(!is.null(inv)) {
message("getting cached data")
return(inv)
}
data <- x$get()
inv <- solve(data , ...)
x$setinverse(inv)
inv
}
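The `<<-` caching pattern above is not specific to matrices: any closure can cache a result in its enclosing environment and return it on later calls. A toy demonstration (the doubling computation is a stand-in for an expensive one like `solve`):

```r
make_cached_double <- function() {
  cached <- NULL
  function(x) {
    if (!is.null(cached)) return(cached)  # cache hit: skip recomputation
    cached <<- x * 2                      # <<- stores in the enclosing environment
    cached
  }
}

f <- make_cached_double()
f(21)  # 42, computed on the first call
f(99)  # still 42: served from the cache, argument ignored
```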
|
90b1cb663f80bd3638de8ecf2fcdb881aecf9a14 | bce55fe84abe924b59ae71925ac4d413b178eb0a | /R/append_dest_origin.R | db3618ee0020aa22c88969004fac96fd669f7687 | [] | no_license | james-e-thomas/origindest | 7c8556abdeae70509f851509041c3f602d19c188 | 475eb081a972e21e14fe8a0145f0ca8854c926cf | refs/heads/master | 2022-11-30T02:46:01.872946 | 2020-08-15T14:51:44 | 2020-08-15T14:51:44 | 273,926,944 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 343 | r | append_dest_origin.R | append_dest_origin <- function(x) {
x2 <- dplyr::select(x, starts_with("dest_"), starts_with("origin_"))
colnames(x2) <- gsub("dest_", "origindest_", colnames(x2))
colnames(x2) <- gsub("origin_", "dest_", colnames(x2))
colnames(x2) <- gsub("origindest_", "origin_", colnames(x2))
x <- rbind.data.frame(x, x2)
rm(x2)
x
}
|