content large_stringlengths 0 6.46M | path large_stringlengths 3 331 | license_type large_stringclasses 2 values | repo_name large_stringlengths 5 125 | language large_stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 4 6.46M | extension large_stringclasses 75 values | text stringlengths 0 6.46M |
|---|---|---|---|---|---|---|---|---|---|
#'@title Gillespie Stochastic Simulation Algorithm
#'
#'@description
#'\code{ssa} Runs an ensemble of SSA \code{realizations} and writes output
#'data to \code{outputDir}
#'
#'@param modelFile Character string with path to StochKit2 .xml model file
#'@param time Simulation time of each realization
#'@param realizations Number of realizations
#'@param intervals Number of output intervals. Default 0 outputs at end time only. 1=keep data at start and end time, 2=keep data at start, middle, and end times, etc. Note data is stored at (intervals+1) equally spaced time points.
#'@param noStats Do not keep means and variances data. Default \code{FALSE} creates stats directory in output directory
#'@param keepTrajectories Keep trajectory data. Creates trajectories directory in output directory
#'@param keepHistograms Keep histogram data. Creates histograms directory in output directory
#'@param bins Number of histogram bins
#'@param outputDir Character string with path to output directory. By default (=NULL) and will not write file. If output directory does not exist, it will be created. If output directory already exists, use \code{force=TRUE} to overwrite
#'@param force Force overwriting of existing data
#'@param seed Seed the random number generator. By default the seed is determined by the R random number generator, so the seed can also be set by calling \code{set.seed} in R immediately before calling \code{ssa}
#'@param p Override default and specify the number of processes (threads) to use. By default (=0), the number of processes will be determined automatically (recommended). Ignored on systems without OpenMP support.
#'@return List of data frames with means, variances, and a list of trajectory vectors
#'@examples
#'\dontrun{
#'#'#example using included dimer_decay.xml file
#'#output written to directory ex_out (created in current working directory)
#'#run 100 simulations for 10 time units, keeping output at 20 time intervals
#'#store model file name in a variable first
#'model <- system.file("dimer_decay.xml",package="StochKit2R")
#'out <- ssa(model,10,100,20,F,T,T,outputDir="ex_out",force=T)
#'#more typical example where model file is stored elsewhere
#'#(must be valid path to existing .xml StochKit2 model file)
#'#store output in dimer_decay_output, overwrite existing data
#'#and keep trajectory data.
#'out <- ssa("Desktop/dimer_decay.xml",
#' "Desktop/dimer_decay_output",10,100,20,keepTrajectories=T,force=T)
#'}
ssa <- function(modelFile,time,realizations,intervals=0,noStats=FALSE,keepTrajectories=FALSE,keepHistograms=FALSE,bins=32,outputDir=NULL,force=FALSE,seed=NULL,p=0) {
# can set seed in R with set.seed()
#checks on modelFile
if (!file.exists(modelFile)) {
stop("ERROR: Model file does not exist")
}
#simple checks
if (time<=0) {
stop("ERROR: simulation time must be positive")
}
if (realizations<=0) {
stop("ERROR: realizations must be positive")
}
if (intervals<0) {
stop("ERROR: intervals must not be negative")
}
if (bins<=0) {
stop("ERROR: number of bins must be positive")
}
if (p<0) {
stop("ERROR: number of processes must not be negative (when p=0, processes will be determined automatically)")
}
#must keep some output
if (noStats & !keepTrajectories & !keepHistograms) {
stop("ERROR: must output at least one of stats, trajectories, or histograms.")
}
if(!is.null(outputDir)){
#expand path
outputDir <- tryCatch(suppressWarnings(normalizePath(outputDir)), error = function(e) {stop("Invalid or missing outputDir output directory path, terminating StochKit2R")}, finally = NULL)
#remove tailing slashes or backslashes
#because file.exists returns false if directory ends in slash or backslash
outputDir <- gsub("//*$","",outputDir)
outputDir <- gsub("\\\\*$","",outputDir)
createOutputDirs(outputDir,noStats,keepTrajectories,keepHistograms,force)
}
if(is.null(outputDir)){
outputDir=""
}
if (is.null(seed)) {
seed=floor(runif(1,-.Machine$integer.max,.Machine$integer.max))
}
# process the xml model file, put in R Lists
StochKit2Rmodel <- buildStochKit2Rmodel(modelFile)
# everything is set up, ready to run the simulation
ssaStochKit2RInterface(StochKit2Rmodel,time,realizations,intervals,!noStats,keepTrajectories,keepHistograms,bins,outputDir,seed,p)
} | /R/ssa.R | no_license | HarryCaveMan/StochKit2R | R | false | false | 4,370 | r | #'@title Gillespie Stochastic Simulation Algorithm
#'
#'@description
#'\code{ssa} Runs an ensemble of SSA \code{realizations} and writes output
#'data to \code{outputDir}
#'
#'@param modelFile Character string with path to StochKit2 .xml model file
#'@param time Simulation time of each realization
#'@param realizations Number of realizations
#'@param intervals Number of output intervals. Default 0 outputs at end time only. 1=keep data at start and end time, 2=keep data at start, middle, and end times, etc. Note data is stored at (intervals+1) equally spaced time points.
#'@param noStats Do not keep means and variances data. Default \code{FALSE} creates stats directory in output directory
#'@param keepTrajectories Keep trajectory data. Creates trajectories directory in output directory
#'@param keepHistograms Keep histogram data. Creates histograms directory in output directory
#'@param bins Number of histogram bins
#'@param outputDir Character string with path to output directory. By default (=NULL) and will not write file. If output directory does not exist, it will be created. If output directory already exists, use \code{force=TRUE} to overwrite
#'@param force Force overwriting of existing data
#'@param seed Seed the random number generator. By default the seed is determined by the R random number generator, so the seed can also be set by calling \code{set.seed} in R immediately before calling \code{ssa}
#'@param p Override default and specify the number of processes (threads) to use. By default (=0), the number of processes will be determined automatically (recommended). Ignored on systems without OpenMP support.
#'@return List of data frames with means, variances, and a list of trajectory vectors
#'@examples
#'\dontrun{
#'#'#example using included dimer_decay.xml file
#'#output written to directory ex_out (created in current working directory)
#'#run 100 simulations for 10 time units, keeping output at 20 time intervals
#'#store model file name in a variable first
#'model <- system.file("dimer_decay.xml",package="StochKit2R")
#'out <- ssa(model,10,100,20,F,T,T,outputDir="ex_out",force=T)
#'#more typical example where model file is stored elsewhere
#'#(must be valid path to existing .xml StochKit2 model file)
#'#store output in dimer_decay_output, overwrite existing data
#'#and keep trajectory data.
#'out <- ssa("Desktop/dimer_decay.xml",
#' "Desktop/dimer_decay_output",10,100,20,keepTrajectories=T,force=T)
#'}
ssa <- function(modelFile,time,realizations,intervals=0,noStats=FALSE,keepTrajectories=FALSE,keepHistograms=FALSE,bins=32,outputDir=NULL,force=FALSE,seed=NULL,p=0) {
# can set seed in R with set.seed()
#checks on modelFile
if (!file.exists(modelFile)) {
stop("ERROR: Model file does not exist")
}
#simple checks
if (time<=0) {
stop("ERROR: simulation time must be positive")
}
if (realizations<=0) {
stop("ERROR: realizations must be positive")
}
if (intervals<0) {
stop("ERROR: intervals must not be negative")
}
if (bins<=0) {
stop("ERROR: number of bins must be positive")
}
if (p<0) {
stop("ERROR: number of processes must not be negative (when p=0, processes will be determined automatically)")
}
#must keep some output
if (noStats & !keepTrajectories & !keepHistograms) {
stop("ERROR: must output at least one of stats, trajectories, or histograms.")
}
if(!is.null(outputDir)){
#expand path
outputDir <- tryCatch(suppressWarnings(normalizePath(outputDir)), error = function(e) {stop("Invalid or missing outputDir output directory path, terminating StochKit2R")}, finally = NULL)
#remove tailing slashes or backslashes
#because file.exists returns false if directory ends in slash or backslash
outputDir <- gsub("//*$","",outputDir)
outputDir <- gsub("\\\\*$","",outputDir)
createOutputDirs(outputDir,noStats,keepTrajectories,keepHistograms,force)
}
if(is.null(outputDir)){
outputDir=""
}
if (is.null(seed)) {
seed=floor(runif(1,-.Machine$integer.max,.Machine$integer.max))
}
# process the xml model file, put in R Lists
StochKit2Rmodel <- buildStochKit2Rmodel(modelFile)
# everything is set up, ready to run the simulation
ssaStochKit2RInterface(StochKit2Rmodel,time,realizations,intervals,!noStats,keepTrajectories,keepHistograms,bins,outputDir,seed,p)
} |
# GEOEssential
# SDG Workflow for WP5 Biodiversity and Ecosystem Services
# 15.3.1 Proportion of land that is degraded over total land area
# http://maps.elie.ucl.ac.be/CCI/viewer/index.php
# install.packages("raster","R.utils")
library(raster)
# Load raster files
#url(description, open = "", blocking = TRUE,
# encoding = getOption("encoding"), method)
esa1992<-raster(paste0(getwd(),"/DATA/esa_lc_1992_agg10.tif"))
esa2015<-raster(paste0(getwd(),"/DATA/esa_lc_2015_agg10.tif"))
esa<-stack(esa1992,esa2015)
print('data_loaded')
# # Reclass to Anthropogenic Classes (i.e. Cultivated and Urban Areas)
# # See http://maps.elie.ucl.ac.be/CCI/viewer/download/CCI-LC_Maps_Legend.pdf
# m <- c(10,1,11,1,12,1,20,1,30,1,40,1,190,1,
# 50,NA,60,NA,61,NA,62,NA,70,NA,71,NA,72,NA,
# 80,NA,81,NA,82,NA,90,NA,100,NA,110,NA,
# 120,NA,121,NA,122,NA,130,NA,140,NA,
# 150,NA,151,NA,152,NA,153,NA,160,NA,
# 170,NA,180,NA,190,NA,200,NA,201,NA,
# 202,NA,210,NA,220,NA)
# print('m_1')
# rclmat <- matrix(m, ncol=2, byrow=TRUE)
# print('m_2')
# esa <- reclass(esa, rclmat,na.rm=T)
# print('data_classified')
## Calculate number of Antropogenic cells withing 100 x 100 grides
f<-10
#esa<-aggregate(esa,fact=f,fun=sum,na.rm=T)
#print('data_aggregated')
# Calculate the change between 1992 and 2015
esa<-esa[[2]]-esa[[1]]
esa<-esa/(f*f)
# Plot
col=colorRampPalette(c("blue","white", "red"))(20)
png(filename="esa.png", width=1000, height= 1000)
plot(esa,col=col)
dev.off()
print('data_plotted')
| /wp_4_aidin01.R | no_license | gomezgimenez/New_model_WP4 | R | false | false | 1,539 | r | # GEOEssential
# SDG Workflow for WP5 Biodiversity and Ecosystem Services
# 15.3.1 Proportion of land that is degraded over total land area
# http://maps.elie.ucl.ac.be/CCI/viewer/index.php
# install.packages("raster","R.utils")
library(raster)
# Load raster files
#url(description, open = "", blocking = TRUE,
# encoding = getOption("encoding"), method)
esa1992<-raster(paste0(getwd(),"/DATA/esa_lc_1992_agg10.tif"))
esa2015<-raster(paste0(getwd(),"/DATA/esa_lc_2015_agg10.tif"))
esa<-stack(esa1992,esa2015)
print('data_loaded')
# # Reclass to Anthropogenic Classes (i.e. Cultivated and Urban Areas)
# # See http://maps.elie.ucl.ac.be/CCI/viewer/download/CCI-LC_Maps_Legend.pdf
# m <- c(10,1,11,1,12,1,20,1,30,1,40,1,190,1,
# 50,NA,60,NA,61,NA,62,NA,70,NA,71,NA,72,NA,
# 80,NA,81,NA,82,NA,90,NA,100,NA,110,NA,
# 120,NA,121,NA,122,NA,130,NA,140,NA,
# 150,NA,151,NA,152,NA,153,NA,160,NA,
# 170,NA,180,NA,190,NA,200,NA,201,NA,
# 202,NA,210,NA,220,NA)
# print('m_1')
# rclmat <- matrix(m, ncol=2, byrow=TRUE)
# print('m_2')
# esa <- reclass(esa, rclmat,na.rm=T)
# print('data_classified')
## Calculate number of Antropogenic cells withing 100 x 100 grides
f<-10
#esa<-aggregate(esa,fact=f,fun=sum,na.rm=T)
#print('data_aggregated')
# Calculate the change between 1992 and 2015
esa<-esa[[2]]-esa[[1]]
esa<-esa/(f*f)
# Plot
col=colorRampPalette(c("blue","white", "red"))(20)
png(filename="esa.png", width=1000, height= 1000)
plot(esa,col=col)
dev.off()
print('data_plotted')
|
% Auto-generated: do not edit by hand
\name{htmlSummary}
\alias{htmlSummary}
\title{Summary component}
\description{
}
\usage{
htmlSummary(htmlSummary(children=NULL, id=NULL, n_clicks=NULL, n_clicks_timestamp=NULL,
key=NULL, role=NULL, accessKey=NULL, className=NULL, contentEditable=NULL,
contextMenu=NULL, dir=NULL, draggable=NULL, hidden=NULL, lang=NULL,
spellCheck=NULL, style=NULL, tabIndex=NULL, title=NULL, loading_state=NULL, ...))
}
\arguments{
\item{children}{A list of or a singular dash component, string or number. The children of this component}
\item{id}{Character. The ID of this component, used to identify dash components
in callbacks. The ID needs to be unique across all of the
components in an app.}
\item{n_clicks}{Numeric. An integer that represents the number of times
that this element has been clicked on.}
\item{n_clicks_timestamp}{Numeric. An integer that represents the time (in ms since 1970)
at which n_clicks changed. This can be used to tell
which button was changed most recently.}
\item{key}{Character. A unique identifier for the component, used to improve
performance by React.js while rendering components
See https://reactjs.org/docs/lists-and-keys.html for more info}
\item{role}{Character. The ARIA role attribute}
\item{accessKey}{Character. Defines a keyboard shortcut to activate or add focus to the element.}
\item{className}{Character. Often used with CSS to style elements with common properties.}
\item{contentEditable}{Character. Indicates whether the element's content is editable.}
\item{contextMenu}{Character. Defines the ID of a <menu> element which will serve as the element's context menu.}
\item{dir}{Character. Defines the text direction. Allowed values are ltr (Left-To-Right) or rtl (Right-To-Left)}
\item{draggable}{Character. Defines whether the element can be dragged.}
\item{hidden}{A value equal to: 'hidden', 'hidden' | logical. Prevents rendering of given element, while keeping child elements, e.g. script elements, active.}
\item{lang}{Character. Defines the language used in the element.}
\item{spellCheck}{Character. Indicates whether spell checking is allowed for the element.}
\item{style}{Named list. Defines CSS styles which will override styles previously set.}
\item{tabIndex}{Character. Overrides the browser's default tab order and follows the one specified instead.}
\item{title}{Character. Text to be displayed in a tooltip when hovering over the element.}
\item{loading_state}{Lists containing elements 'is_loading', 'prop_name', 'component_name'.
those elements have the following types:
- is_loading (logical; optional): determines if the component is loading or not
- prop_name (character; optional): holds which property is loading
- component_name (character; optional): holds the name of the component that is loading. Object that holds the loading state object coming from dash-renderer}
\item{...}{wildcards: `data-*` or `aria-*`}
}
| /man/htmlSummary.Rd | permissive | TannerSorensen/dash-html-components | R | false | false | 2,954 | rd | % Auto-generated: do not edit by hand
\name{htmlSummary}
\alias{htmlSummary}
\title{Summary component}
\description{
}
\usage{
htmlSummary(htmlSummary(children=NULL, id=NULL, n_clicks=NULL, n_clicks_timestamp=NULL,
key=NULL, role=NULL, accessKey=NULL, className=NULL, contentEditable=NULL,
contextMenu=NULL, dir=NULL, draggable=NULL, hidden=NULL, lang=NULL,
spellCheck=NULL, style=NULL, tabIndex=NULL, title=NULL, loading_state=NULL, ...))
}
\arguments{
\item{children}{A list of or a singular dash component, string or number. The children of this component}
\item{id}{Character. The ID of this component, used to identify dash components
in callbacks. The ID needs to be unique across all of the
components in an app.}
\item{n_clicks}{Numeric. An integer that represents the number of times
that this element has been clicked on.}
\item{n_clicks_timestamp}{Numeric. An integer that represents the time (in ms since 1970)
at which n_clicks changed. This can be used to tell
which button was changed most recently.}
\item{key}{Character. A unique identifier for the component, used to improve
performance by React.js while rendering components
See https://reactjs.org/docs/lists-and-keys.html for more info}
\item{role}{Character. The ARIA role attribute}
\item{accessKey}{Character. Defines a keyboard shortcut to activate or add focus to the element.}
\item{className}{Character. Often used with CSS to style elements with common properties.}
\item{contentEditable}{Character. Indicates whether the element's content is editable.}
\item{contextMenu}{Character. Defines the ID of a <menu> element which will serve as the element's context menu.}
\item{dir}{Character. Defines the text direction. Allowed values are ltr (Left-To-Right) or rtl (Right-To-Left)}
\item{draggable}{Character. Defines whether the element can be dragged.}
\item{hidden}{A value equal to: 'hidden', 'hidden' | logical. Prevents rendering of given element, while keeping child elements, e.g. script elements, active.}
\item{lang}{Character. Defines the language used in the element.}
\item{spellCheck}{Character. Indicates whether spell checking is allowed for the element.}
\item{style}{Named list. Defines CSS styles which will override styles previously set.}
\item{tabIndex}{Character. Overrides the browser's default tab order and follows the one specified instead.}
\item{title}{Character. Text to be displayed in a tooltip when hovering over the element.}
\item{loading_state}{Lists containing elements 'is_loading', 'prop_name', 'component_name'.
those elements have the following types:
- is_loading (logical; optional): determines if the component is loading or not
- prop_name (character; optional): holds which property is loading
- component_name (character; optional): holds the name of the component that is loading. Object that holds the loading state object coming from dash-renderer}
\item{...}{wildcards: `data-*` or `aria-*`}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/speed_time.R
\name{speed_time}
\alias{speed_time}
\title{Speed over time}
\usage{
speed_time(player_profile, split_time)
}
\arguments{
\item{player_profile}{player profile from player_profile function}
\item{split_time}{Time}
}
\value{
Speed the player reaches at a given time from zero velocity
}
\description{
Speed over time
}
| /man/speed_time.Rd | no_license | aadler/midsprint | R | false | true | 409 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/speed_time.R
\name{speed_time}
\alias{speed_time}
\title{Speed over time}
\usage{
speed_time(player_profile, split_time)
}
\arguments{
\item{player_profile}{player profile from player_profile function}
\item{split_time}{Time}
}
\value{
Speed the player reaches at a given time from zero velocity
}
\description{
Speed over time
}
|
% Generated by roxygen2 (4.0.0): do not edit by hand
\name{desc}
\alias{desc}
\title{Descending order.}
\usage{
desc(x)
}
\arguments{
\item{x}{vector to transform}
}
\description{
Transform a vector into a format that will be sorted in descending order.
}
\examples{
desc(1:10)
desc(factor(letters))
first_day <- seq(as.Date("1910/1/1"), as.Date("1920/1/1"), "years")
desc(first_day)
}
| /man/desc.Rd | no_license | skranz/dplyr | R | false | false | 387 | rd | % Generated by roxygen2 (4.0.0): do not edit by hand
\name{desc}
\alias{desc}
\title{Descending order.}
\usage{
desc(x)
}
\arguments{
\item{x}{vector to transform}
}
\description{
Transform a vector into a format that will be sorted in descending order.
}
\examples{
desc(1:10)
desc(factor(letters))
first_day <- seq(as.Date("1910/1/1"), as.Date("1920/1/1"), "years")
desc(first_day)
}
|
library(rhdf5)
library(ggplot2)
library(abind)
files = args
getdata = function(track){
do.call(abind, lapply(files, function(x){
h5read(x,track)
}))
}
values = getdata("ism") #(1, 4, 12288, nseqs)
coding = getdata("coding") #(12288, nseqs)
seqs = getdata("seqs") #(12288, nseqs)
preds = getdata("ref") #(12288, nseqs)
NBINS = 100
binnedval = function(x){
a = data.frame(vals=x, bin = 1:length(x))
a$bin = as.numeric(cut_number(a$bin, NBINS))
a = aggregate(a$vals,list(a$bin),function(x) mean(x, na.rm=T))
b=rep(NA,NBINS)
b[a[,1]]=a[,2]
b
}
allutr5=c()
allorf=c()
allutr3=c()
allpreds=c()
for (i in 1:dim(values)[4]){ # iterate through all sequences
if(i%%1000 == 0) say(i)
X=abs(values[1,,,i])*(seqs[,,i]=="TRUE")
myvals = apply(X,2,sum)
txstart = which(myvals!=0)[1]
txend = 12288
orf = which(coding[,i] == "TRUE")
orfstart = orf[1]
orfend = min(orf[length(orf)]+3, txend)
utr5 = myvals[txstart:(orfstart-1)]
orf = myvals[orfstart:orfend]
utr3 = myvals[orfend:txend]
if(length(utr3) < 100 || length(orf) < 100 || length(utr5) < 100) next
if(i==1){
allutr5=binnedval(utr5)
allorf=binnedval(orf)
allutr3=binnedval(utr3)
}
else{
allutr5=rbind(allutr5,binnedval(utr5))
allorf=rbind(allorf,binnedval(orf))
allutr3=rbind(allutr3,binnedval(utr3))
}
allpreds = c(allpreds, preds[1,i])
}
master_table = as.data.frame(cbind(allutr5, allorf, allutr3))
master_table$bin <- cut(allpreds, breaks=quantile(allpreds), include.lowest = TRUE) #4 prediction bins
save(master_table, file="master_table.Rdata")
load("master_table.Rdata")
table(master_table$bin)
dim(master_table)
xin = 1:(NBINS*3)
meanvals = lapply(levels(master_table$bin), function(x) apply(master_table[master_table$bin==x,xin],2,function(x) mean(x, na.rm=T)))
pdf("png/Fig5E.pdf")
plot(xin, predict(loess(meanvals[[4]]~xin, span=0.05)), col='cyan', type='l') #, ylim=c(-0.004,0.004), axes = FALSE
lines(xin, predict(loess(meanvals[[3]]~xin, span=0.05)), col='blue', type='l')
lines(xin, predict(loess(meanvals[[2]]~xin, span=0.05)), col='red', type='l')
lines(xin, predict(loess(meanvals[[1]]~xin, span=0.05)), col='black', type='l')
abline(v=NBINS)
abline(v=NBINS*2)
plot(xin, predict(loess(meanvals[[4]]~xin, span=0.02)), col='cyan', type='l') #, ylim=c(-0.004,0.004), axes = FALSE
lines(xin, predict(loess(meanvals[[3]]~xin, span=0.02)), col='blue', type='l')
lines(xin, predict(loess(meanvals[[2]]~xin, span=0.02)), col='red', type='l')
lines(xin, predict(loess(meanvals[[1]]~xin, span=0.02)), col='black', type='l')
abline(v=NBINS)
abline(v=NBINS*2)
plot(xin, predict(loess(meanvals[[4]]~xin, span=0.01)), col='cyan', type='l') #, ylim=c(-0.004,0.004), axes = FALSE
lines(xin, predict(loess(meanvals[[3]]~xin, span=0.01)), col='blue', type='l')
lines(xin, predict(loess(meanvals[[2]]~xin, span=0.01)), col='red', type='l')
lines(xin, predict(loess(meanvals[[1]]~xin, span=0.01)), col='black', type='l')
abline(v=NBINS)
abline(v=NBINS*2)
plot(xin, meanvals[[4]], col='cyan', type='l')
lines(xin, meanvals[[3]], col='blue', type='l')
lines(xin, meanvals[[2]], col='red', type='l')
lines(xin, meanvals[[1]], col='black', type='l')
abline(v=NBINS)
abline(v=NBINS*2)
dev.off()
| /Fig5_S6/Fig5d.R | permissive | vagarwal87/saluki_paper | R | false | false | 3,227 | r | library(rhdf5)
library(ggplot2)
library(abind)
files = args
getdata = function(track){
do.call(abind, lapply(files, function(x){
h5read(x,track)
}))
}
values = getdata("ism") #(1, 4, 12288, nseqs)
coding = getdata("coding") #(12288, nseqs)
seqs = getdata("seqs") #(12288, nseqs)
preds = getdata("ref") #(12288, nseqs)
NBINS = 100
binnedval = function(x){
a = data.frame(vals=x, bin = 1:length(x))
a$bin = as.numeric(cut_number(a$bin, NBINS))
a = aggregate(a$vals,list(a$bin),function(x) mean(x, na.rm=T))
b=rep(NA,NBINS)
b[a[,1]]=a[,2]
b
}
allutr5=c()
allorf=c()
allutr3=c()
allpreds=c()
for (i in 1:dim(values)[4]){ # iterate through all sequences
if(i%%1000 == 0) say(i)
X=abs(values[1,,,i])*(seqs[,,i]=="TRUE")
myvals = apply(X,2,sum)
txstart = which(myvals!=0)[1]
txend = 12288
orf = which(coding[,i] == "TRUE")
orfstart = orf[1]
orfend = min(orf[length(orf)]+3, txend)
utr5 = myvals[txstart:(orfstart-1)]
orf = myvals[orfstart:orfend]
utr3 = myvals[orfend:txend]
if(length(utr3) < 100 || length(orf) < 100 || length(utr5) < 100) next
if(i==1){
allutr5=binnedval(utr5)
allorf=binnedval(orf)
allutr3=binnedval(utr3)
}
else{
allutr5=rbind(allutr5,binnedval(utr5))
allorf=rbind(allorf,binnedval(orf))
allutr3=rbind(allutr3,binnedval(utr3))
}
allpreds = c(allpreds, preds[1,i])
}
master_table = as.data.frame(cbind(allutr5, allorf, allutr3))
master_table$bin <- cut(allpreds, breaks=quantile(allpreds), include.lowest = TRUE) #4 prediction bins
save(master_table, file="master_table.Rdata")
load("master_table.Rdata")
table(master_table$bin)
dim(master_table)
xin = 1:(NBINS*3)
meanvals = lapply(levels(master_table$bin), function(x) apply(master_table[master_table$bin==x,xin],2,function(x) mean(x, na.rm=T)))
pdf("png/Fig5E.pdf")
plot(xin, predict(loess(meanvals[[4]]~xin, span=0.05)), col='cyan', type='l') #, ylim=c(-0.004,0.004), axes = FALSE
lines(xin, predict(loess(meanvals[[3]]~xin, span=0.05)), col='blue', type='l')
lines(xin, predict(loess(meanvals[[2]]~xin, span=0.05)), col='red', type='l')
lines(xin, predict(loess(meanvals[[1]]~xin, span=0.05)), col='black', type='l')
abline(v=NBINS)
abline(v=NBINS*2)
plot(xin, predict(loess(meanvals[[4]]~xin, span=0.02)), col='cyan', type='l') #, ylim=c(-0.004,0.004), axes = FALSE
lines(xin, predict(loess(meanvals[[3]]~xin, span=0.02)), col='blue', type='l')
lines(xin, predict(loess(meanvals[[2]]~xin, span=0.02)), col='red', type='l')
lines(xin, predict(loess(meanvals[[1]]~xin, span=0.02)), col='black', type='l')
abline(v=NBINS)
abline(v=NBINS*2)
plot(xin, predict(loess(meanvals[[4]]~xin, span=0.01)), col='cyan', type='l') #, ylim=c(-0.004,0.004), axes = FALSE
lines(xin, predict(loess(meanvals[[3]]~xin, span=0.01)), col='blue', type='l')
lines(xin, predict(loess(meanvals[[2]]~xin, span=0.01)), col='red', type='l')
lines(xin, predict(loess(meanvals[[1]]~xin, span=0.01)), col='black', type='l')
abline(v=NBINS)
abline(v=NBINS*2)
plot(xin, meanvals[[4]], col='cyan', type='l')
lines(xin, meanvals[[3]], col='blue', type='l')
lines(xin, meanvals[[2]], col='red', type='l')
lines(xin, meanvals[[1]], col='black', type='l')
abline(v=NBINS)
abline(v=NBINS*2)
dev.off()
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{motifbreakR_motif}
\alias{motifbreakR_motif}
\title{MotifDb object containing motif information from the motif databases of HOCOMOCO, Homer,
FactorBook and ENCODE}
\format{\code{\link[MotifDb]{MotifDb}} object of length 2816; to access metadata
use mcols(motifbreakR_motif)}
\source{
Kulakovskiy,I.V., Medvedeva,Y.A., Schaefer,U., Kasianov,A.S.,
Vorontsov,I.E., Bajic,V.B. and Makeev,V.J. (2013) HOCOMOCO: a comprehensive
collection of human transcription factor binding sites models. Nucleic
Acids Research, \bold{41}, D195--D202.
Heinz S, Benner C, Spann N, Bertolino E et al. (2010 May 28) Simple Combinations
of Lineage-Determining Transcription Factors Prime cis-Regulatory
Elements Required for Macrophage and B Cell Identities. Mol Cell, \bold{38(4):576-589}.
PMID: \href{20513432}{http://www.ncbi.nlm.nih.gov/sites/entrez?Db=Pubmed&term=20513432[UID]}
J Wang, J Zhuang, S Iyer, XY Lin, et al. (2012) Sequence features and
chromatin structure around the genomic regions bound by 119 human transcription
factors. Genome Research, \bold{22 (9)}, 1798-1812, doi:10.1101/gr.139105.112
Pouya Kheradpour and Manolis Kellis (2013 December 13) Systematic
discovery and characterization of regulatory motifs in ENCODE TF binding
experiments. Nucleic Acids Research, doi:10.1093/nar/gkt1249
}
\usage{
motifbreakR_motif
}
\value{
\code{\link[MotifDb]{MotifList-class}} object
}
\description{
This object contains all the \code{\link[MotifDb]{MotifList-class}} objects that were generated
for this package. See the individual help sections for \code{\link{hocomoco}}, \code{\link{homer}},
\code{\link{factorbook}}, and \code{\link{encodemotif}}, for how the data is formatted.
}
\details{
Load with \code{data(motifbreakR_motif)}
}
\examples{
data(motifbreakR_motif)
motifbreakR_motif
}
\seealso{
\code{\link{hocomoco}}, \code{\link{homer}},
\code{\link{factorbook}}, and \code{\link{encodemotif}}
}
\keyword{datasets}
| /man/motifbreakR_motif.Rd | no_license | RamsinghLab/motifBreakR | R | false | false | 2,054 | rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{motifbreakR_motif}
\alias{motifbreakR_motif}
\title{MotifDb object containing motif information from the motif databases of HOCOMOCO, Homer,
FactorBook and ENCODE}
\format{\code{\link[MotifDb]{MotifDb}} object of length 2816; to access metadata
use mcols(motifbreakR_motif)}
\source{
Kulakovskiy,I.V., Medvedeva,Y.A., Schaefer,U., Kasianov,A.S.,
Vorontsov,I.E., Bajic,V.B. and Makeev,V.J. (2013) HOCOMOCO: a comprehensive
collection of human transcription factor binding sites models. Nucleic
Acids Research, \bold{41}, D195--D202.
Heinz S, Benner C, Spann N, Bertolino E et al. (2010 May 28) Simple Combinations
of Lineage-Determining Transcription Factors Prime cis-Regulatory
Elements Required for Macrophage and B Cell Identities. Mol Cell, \bold{38(4):576-589}.
PMID: \href{20513432}{http://www.ncbi.nlm.nih.gov/sites/entrez?Db=Pubmed&term=20513432[UID]}
J Wang, J Zhuang, S Iyer, XY Lin, et al. (2012) Sequence features and
chromatin structure around the genomic regions bound by 119 human transcription
factors. Genome Research, \bold{22 (9)}, 1798-1812, doi:10.1101/gr.139105.112
Pouya Kheradpour and Manolis Kellis (2013 December 13) Systematic
discovery and characterization of regulatory motifs in ENCODE TF binding
experiments. Nucleic Acids Research, doi:10.1093/nar/gkt1249
}
\usage{
motifbreakR_motif
}
\value{
\code{\link[MotifDb]{MotifList-class}} object
}
\description{
This object contains all the \code{\link[MotifDb]{MotifList-class}} objects that were generated
for this package. See the individual help sections for \code{\link{hocomoco}}, \code{\link{homer}},
\code{\link{factorbook}}, and \code{\link{encodemotif}}, for how the data is formatted.
}
\details{
Load with \code{data(motifbreakR_motif)}
}
\examples{
data(motifbreakR_motif)
motifbreakR_motif
}
\seealso{
\code{\link{hocomoco}}, \code{\link{homer}},
\code{\link{factorbook}}, and \code{\link{encodemotif}}
}
\keyword{datasets}
|
#' Calculate C(t) for a 3-compartment linear model at steady state, with zero-order absorption
#'
#' @param tad Time after dose (h)
#' @param CL Clearance (L/h)
#' @param V1 Central volume of distribution (L)
#' @param V2 First peripheral volume of distribution (L)
#' @param V3 Second peripheral volume of distribution (L)
#' @param Q2 Intercompartmental clearance between V1 and V2 (L/h)
#' @param Q3 Intercompartmental clearance between V2 and V3 (L/h)
#' @param dur Duration of zero-order absorption (h)
#' @param dose Dose
#' @param tau Dosing interval (h)
#'
#' @return Concentration of drug at requested time after dose (\code{tad}) at steady state, given provided set of parameters and variables.
#'
#' @author Justin Wilkins, \email{justin.wilkins@@occams.com}
#' @references Bertrand J & Mentre F (2008). Mathematical Expressions of the Pharmacokinetic and Pharmacodynamic Models
#' implemented in the Monolix software. \url{http://lixoft.com/wp-content/uploads/2016/03/PKPDlibrary.pdf}
#' @references Rowland M, Tozer TN. Clinical Pharmacokinetics and Pharmacodynamics: Concepts and Applications (4th). Lippincott Williams & Wilkins, Philadelphia, 2010.
#'
#' @examples
#' Ctrough <- calc_ss_3cmt_linear_oral_0(tad = 11.75, CL = 3.5, V1 = 20, V2 = 500,
#' V3 = 200, Q2 = 0.5, Q3 = 0.05, dur = 1, dose = 100, tau = 24)
#'
#' @export
calc_ss_3cmt_linear_oral_0 <- function(tad, CL, V1, V2, V3, Q2, Q3, dur, dose, tau) {
### microconstants - 1.3 p. 37
k <- CL/V1
k12 <- Q2/V1
k21 <- Q2/V2
k13 <- Q3/V1
k31 <- Q3/V3
### a0, a1, a2
a0 <- k * k21 * k31
a1 <- (k * k31) + (k21 * k31) + (k21 * k13) + (k * k21) + (k31 * k12)
a2 <- k + k12 + k13 + k21 + k31
p <- a1 - (a2^2)/3
q <- ((2 * (a2^3))/27) - ((a1 * a2)/3) + a0
r1 <- sqrt(-((p^3)/27))
r2 <- 2 * (r1^(1/3))
### phi
phi <- acos(-(q/(2 * r1)))/3
### alpha
alpha <- -(cos(phi)*r2 - (a2/3))
### beta
beta <- -(cos(phi + ((2 * pi)/3)) * r2 - (a2/3))
### gamma
gamma <- -(cos(phi + ((4 * pi)/3)) * r2 - (a2/3))
### macroconstants - 1.3.4 p. 47
A <- (1/V1) * ((k21 - alpha)/(alpha - beta)) * ((k31 - alpha)/(alpha - gamma))
B <- (1/V1) * ((k21 - beta)/(beta - alpha)) * ((k31 - beta)/(beta - gamma))
C <- (1/V1) * ((k21 - gamma)/(gamma - beta)) * ((k31 - gamma)/(gamma - alpha))
### C(t) after single dose - eq 1.83 p. 49
Ct <- (dose / dur) * ((A/alpha) * ((1 - exp(-alpha * dur)) * exp(-alpha * (tad - dur)))/(1 - exp(-alpha * tau)) +
(B/beta) * ((1 - exp(-beta * dur)) * exp(-beta * (tad - dur)))/(1 - exp(-beta * tau)) +
(C/gamma) * ((1 - exp(-gamma * dur)) * exp(-gamma * (tad - dur)))/(1 - exp(-gamma * tau)))
tm1a <- 1 - exp(-alpha * tad[tad <= dur])
tm2a <- exp(-alpha * tau)
tm3a <- 1 - exp(-alpha * dur)
tm4a <- exp(-alpha * (tad[tad <= dur] - dur))
tm5a <- 1 - exp(-alpha * tau)
tm1b <- 1 - exp(-beta * tad[tad <= dur])
tm2b <- exp(-beta * tau)
tm3b <- 1 - exp(-beta * dur)
tm4b <- exp(-beta * (tad[tad <= dur] - dur))
tm5b <- 1 - exp(-beta * tau)
tm1c <- 1 - exp(-gamma * tad[tad <= dur])
tm2c <- exp(-gamma * tau)
tm3c <- 1 - exp(-gamma * dur)
tm4c <- exp(-gamma * (tad[tad <= dur] - dur))
tm5c <- 1 - exp(-gamma * tau)
Ct[tad <= dur] <- (dose / dur) * ((A/alpha) * (tm1a + (tm2a * ((tm3a * tm4a)/tm5a))) +
(B/beta) * (tm1b + (tm2b * ((tm3b * tm4b)/tm5b))) +
(C/gamma) * (tm1c + (tm2c * ((tm3c * tm4c)/tm5c))))
Ct
}
| /R/calc_ss_3cmt_linear_oral_0.R | no_license | kylebaron/pmxTools | R | false | false | 3,525 | r | #' Calculate C(t) for a 3-compartment linear model at steady state, with zero-order absorption
#'
#' @param tad Time after dose (h)
#' @param CL Clearance (L/h)
#' @param V1 Central volume of distribution (L)
#' @param V2 First peripheral volume of distribution (L)
#' @param V3 Second peripheral volume of distribution (L)
#' @param Q2 Intercompartmental clearance between V1 and V2 (L/h)
#' @param Q3 Intercompartmental clearance between V2 and V3 (L/h)
#' @param dur Duration of zero-order absorption (h)
#' @param dose Dose
#' @param tau Dosing interval (h)
#'
#' @return Concentration of drug at requested time after dose (\code{tad}) at steady state, given provided set of parameters and variables.
#'
#' @author Justin Wilkins, \email{justin.wilkins@@occams.com}
#' @references Bertrand J & Mentre F (2008). Mathematical Expressions of the Pharmacokinetic and Pharmacodynamic Models
#' implemented in the Monolix software. \url{http://lixoft.com/wp-content/uploads/2016/03/PKPDlibrary.pdf}
#' @references Rowland M, Tozer TN. Clinical Pharmacokinetics and Pharmacodynamics: Concepts and Applications (4th). Lippincott Williams & Wilkins, Philadelphia, 2010.
#'
#' @examples
#' Ctrough <- calc_ss_3cmt_linear_oral_0(tad = 11.75, CL = 3.5, V1 = 20, V2 = 500,
#' V3 = 200, Q2 = 0.5, Q3 = 0.05, dur = 1, dose = 100, tau = 24)
#'
#' @export
calc_ss_3cmt_linear_oral_0 <- function(tad, CL, V1, V2, V3, Q2, Q3, dur, dose, tau) {
### microconstants - 1.3 p. 37
k <- CL/V1
k12 <- Q2/V1
k21 <- Q2/V2
k13 <- Q3/V1
k31 <- Q3/V3
### a0, a1, a2
a0 <- k * k21 * k31
a1 <- (k * k31) + (k21 * k31) + (k21 * k13) + (k * k21) + (k31 * k12)
a2 <- k + k12 + k13 + k21 + k31
p <- a1 - (a2^2)/3
q <- ((2 * (a2^3))/27) - ((a1 * a2)/3) + a0
r1 <- sqrt(-((p^3)/27))
r2 <- 2 * (r1^(1/3))
### phi
phi <- acos(-(q/(2 * r1)))/3
### alpha
alpha <- -(cos(phi)*r2 - (a2/3))
### beta
beta <- -(cos(phi + ((2 * pi)/3)) * r2 - (a2/3))
### gamma
gamma <- -(cos(phi + ((4 * pi)/3)) * r2 - (a2/3))
### macroconstants - 1.3.4 p. 47
A <- (1/V1) * ((k21 - alpha)/(alpha - beta)) * ((k31 - alpha)/(alpha - gamma))
B <- (1/V1) * ((k21 - beta)/(beta - alpha)) * ((k31 - beta)/(beta - gamma))
C <- (1/V1) * ((k21 - gamma)/(gamma - beta)) * ((k31 - gamma)/(gamma - alpha))
### C(t) after single dose - eq 1.83 p. 49
Ct <- (dose / dur) * ((A/alpha) * ((1 - exp(-alpha * dur)) * exp(-alpha * (tad - dur)))/(1 - exp(-alpha * tau)) +
(B/beta) * ((1 - exp(-beta * dur)) * exp(-beta * (tad - dur)))/(1 - exp(-beta * tau)) +
(C/gamma) * ((1 - exp(-gamma * dur)) * exp(-gamma * (tad - dur)))/(1 - exp(-gamma * tau)))
tm1a <- 1 - exp(-alpha * tad[tad <= dur])
tm2a <- exp(-alpha * tau)
tm3a <- 1 - exp(-alpha * dur)
tm4a <- exp(-alpha * (tad[tad <= dur] - dur))
tm5a <- 1 - exp(-alpha * tau)
tm1b <- 1 - exp(-beta * tad[tad <= dur])
tm2b <- exp(-beta * tau)
tm3b <- 1 - exp(-beta * dur)
tm4b <- exp(-beta * (tad[tad <= dur] - dur))
tm5b <- 1 - exp(-beta * tau)
tm1c <- 1 - exp(-gamma * tad[tad <= dur])
tm2c <- exp(-gamma * tau)
tm3c <- 1 - exp(-gamma * dur)
tm4c <- exp(-gamma * (tad[tad <= dur] - dur))
tm5c <- 1 - exp(-gamma * tau)
Ct[tad <= dur] <- (dose / dur) * ((A/alpha) * (tm1a + (tm2a * ((tm3a * tm4a)/tm5a))) +
(B/beta) * (tm1b + (tm2b * ((tm3b * tm4b)/tm5b))) +
(C/gamma) * (tm1c + (tm2c * ((tm3c * tm4c)/tm5c))))
Ct
}
|
#'@export
CPF_RB <- function(nparticles, model, theta, observations, ref_trajectory = NULL, with_as = FALSE,
test_function = function(trajectory){ return(trajectory) }){
datalength <- nrow(observations)
# create tree representation of the trajectories
Tree <- new(TreeClass, nparticles, 10*nparticles*model$dimension, model$dimension)
# initialization
model_precomputed <- model$precompute(theta)
xparticles <- model$rinit(nparticles, theta, model$rinit_rand(nparticles, theta), model_precomputed)
if (!is.null(ref_trajectory)){
xparticles[,nparticles] <- ref_trajectory[,1]
}
Tree$init(xparticles)
#
normweights <- rep(1/nparticles, nparticles)
# if ancestor sampling, needs to keep the last generation of particles at hand
if (with_as){
x_last <- xparticles
}
# step t > 1
for (time in 1:datalength){
ancestors <- multinomial_resampling_n(normweights, nparticles)
# if no observation or first time, no resampling
if (time == 1 || (time > 1 && is.na(observations[time-1,1]))){
ancestors <- 1:nparticles
}
xparticles <- xparticles[,ancestors]
if (is.null(dim(xparticles))) xparticles <- matrix(xparticles, nrow = dimension)
xparticles <- model$rtransition(xparticles, theta, time, model$rtransition_rand(nparticles, theta), model_precomputed)
if (!is.null(ref_trajectory)){
xparticles[,nparticles] <- ref_trajectory[,time+1]
if (with_as){
# Ancestor sampling
logm <- model$dtransition(ref_trajectory[,time+1], x_last, theta, time, model_precomputed)
logm <- log(normweights) + logm
w_as <- exp(logm - max(logm))
w_as <- w_as / sum(w_as)
ancestors[nparticles] <- systematic_resampling_n(w_as, 1, runif(1))
x_last <- xparticles
} else {
ancestors[nparticles] <- nparticles
}
}
#
logw <- model$dmeasurement(xparticles, theta, observations[time,], model_precomputed)
maxlw <- max(logw)
w <- exp(logw - maxlw)
normweights <- w / sum(w)
#
Tree$update(xparticles, ancestors - 1)
}
trajectories <- array(dim = c(model$dimension, datalength + 1, nparticles))
estimate <- 0
for (k in 0:(nparticles-1)){
trajectories[,,k+1] <- Tree$get_path(k)
estimate <- estimate + normweights[k+1] * test_function(trajectories[,,k+1])
}
new_trajectory <- matrix(trajectories[,,systematic_resampling_n(normweights, 1, runif(1))], nrow = model$dimension)
return(list(new_trajectory = new_trajectory, estimate = estimate))
}
#'@export
CPF_RB_coupled <- function(nparticles, model, theta, observations, ref_trajectory1, ref_trajectory2,
coupled_resampling, with_as = FALSE,
test_function = function(trajectory){ return(trajectory) }){
#
datalength <- nrow(observations)
# create tree representation of the trajectories
Tree1 <- new(TreeClass, nparticles, 10*nparticles*model$dimension, model$dimension)
Tree2 <- new(TreeClass, nparticles, 10*nparticles*model$dimension, model$dimension)
# initialization
model_precomputed <- model$precompute(theta)
init_rand <- model$rinit_rand(nparticles, theta)
xparticles1 <- model$rinit(nparticles, theta, init_rand, model_precomputed)
xparticles1[,nparticles] <- ref_trajectory1[,1]
Tree1$init(xparticles1)
normweights1 <- rep(1/nparticles, nparticles)
#
xparticles2 <- model$rinit(nparticles, theta, init_rand, model_precomputed)
xparticles2[,nparticles] <- ref_trajectory2[,1]
Tree2$init(xparticles2)
normweights2 <- rep(1/nparticles, nparticles)
#
# if ancestor sampling, needs to keep the last generation of particles at hand
if (with_as){
x_last1 <- xparticles1
x_last2 <- xparticles2
}
# step t > 1
for (time in 1:datalength){
ancestors <- coupled_resampling(xparticles1, xparticles2, normweights1, normweights2)
ancestors1 <- ancestors[,1]
ancestors2 <- ancestors[,2]
# if no observation or first time, no resampling
if (time == 1 || (time > 1 && is.na(observations[time-1,1]))){
ancestors1 <- 1:nparticles
ancestors2 <- 1:nparticles
}
#
xparticles1 <- xparticles1[,ancestors1]
xparticles2 <- xparticles2[,ancestors2]
if (is.null(dim(xparticles1))) xparticles1 <- matrix(xparticles1, nrow = dimension)
if (is.null(dim(xparticles2))) xparticles2 <- matrix(xparticles2, nrow = dimension)
#
transition_rand <- model$rtransition_rand(nparticles, theta)
xparticles1 <- model$rtransition(xparticles1, theta, time, transition_rand, model_precomputed)
xparticles2 <- model$rtransition(xparticles2, theta, time, transition_rand, model_precomputed)
if (is.null(dim(xparticles1))) xparticles1 <- matrix(xparticles1, nrow = dimension)
if (is.null(dim(xparticles2))) xparticles2 <- matrix(xparticles2, nrow = dimension)
#
xparticles1[,nparticles] <- ref_trajectory1[,time+1]
xparticles2[,nparticles] <- ref_trajectory2[,time+1]
if (with_as){
# % Ancestor sampling
logm1 <- model$dtransition(ref_trajectory1[,time+1], x_last1, theta, time, model_precomputed)
logm1 <- log(normweights1) + logm1
w_as1 <- exp(logm1 - max(logm1))
w_as1 <- w_as1 / sum(w_as1)
unif_resampling_as <- runif(1)
ancestors1[nparticles] = systematic_resampling_n(w_as1, 1, unif_resampling_as)
x_last1 <- xparticles1
#
logm2 <- model$dtransition(ref_trajectory2[,time+1], x_last2, theta, time, model_precomputed)
logm2 <- log(normweights2) + logm2
w_as2 <- exp(logm2 - max(logm2))
w_as2 <- w_as2 / sum(w_as2)
ancestors2[nparticles] = systematic_resampling_n(w_as2, 1, unif_resampling_as)
x_last2 <- xparticles2
} else {
ancestors1[nparticles] <- nparticles
ancestors2[nparticles] <- nparticles
}
#
logw1 <- model$dmeasurement(xparticles1, theta, observations[time,], model_precomputed)
logw2 <- model$dmeasurement(xparticles2, theta, observations[time,], model_precomputed)
#
maxlw1 <- max(logw1)
w1 <- exp(logw1 - maxlw1)
normweights1 <- w1 / sum(w1)
#
maxlw2 <- max(logw2)
w2 <- exp(logw2 - maxlw2)
normweights2 <- w2 / sum(w2)
#
Tree1$update(xparticles1, ancestors1 - 1)
Tree2$update(xparticles2, ancestors2 - 1)
}
u <- runif(1)
k_path1 <- systematic_resampling_n(normweights1, 1, u)
k_path2 <- systematic_resampling_n(normweights2, 1, u)
##
trajectories1 <- array(dim = c(model$dimension, datalength + 1, nparticles))
trajectories2 <- array(dim = c(model$dimension, datalength + 1, nparticles))
estimate1 <- 0
estimate2 <- 0
for (k in 0:(nparticles-1)){
trajectories1[,,k+1] <- Tree1$get_path(k)
trajectories2[,,k+1] <- Tree2$get_path(k)
estimate1 <- estimate1 + normweights1[k+1] * test_function(trajectories1[,,k+1])
estimate2 <- estimate2 + normweights2[k+1] * test_function(trajectories2[,,k+1])
}
new_trajectory1 <- matrix(trajectories1[,,k_path1], nrow = model$dimension)
new_trajectory2 <- matrix(trajectories2[,,k_path2], nrow = model$dimension)
return(list(new_trajectory1 = new_trajectory1, new_trajectory2 = new_trajectory2,
estimate1 = estimate1, estimate2 = estimate2))
}
#'@export
rheeglynn_estimator_RB <- function(observations, model, theta, algoparameters,
test_function = function(trajectory){ return(trajectory) }){
# number of particles
nparticles <- algoparameters$nparticles
# ancestor sampling
with_as <- algoparameters$with_as
# coupled resampling scheme
coupled_resampling <- algoparameters$coupled_resampling
#
CPF_RB_res <- CPF_RB(nparticles, model, theta, observations, test_function = test_function)
xref <- CPF_RB_res$new_trajectory
CPF_RB_res_tilde <- CPF_RB(nparticles, model, theta, observations, test_function = test_function)
xref_tilde <- CPF_RB_res_tilde$new_trajectory
iteration <- 0
estimate <- CPF_RB_res$estimate #test_function(xref)
iteration <- iteration + 1
CPF_RB_res <- CPF_RB(nparticles, model, theta, observations, xref, with_as = with_as)
xref <- CPF_RB_res$new_trajectory
# estimate <- estimate + (test_function(xref) - test_function(xref_tilde))
estimate <- estimate + CPF_RB_res$estimate - CPF_RB_res_tilde$estimate
meeting <- FALSE
while (!meeting){
iteration <- iteration + 1
res <- CPF_RB_coupled(nparticles, model, theta, observations, xref, xref_tilde, coupled_resampling, with_as = with_as,
test_function = test_function)
xref <- res$new_trajectory1
xref_tilde <- res$new_trajectory2
estimate <- estimate + (res$estimate1 - res$estimate2)
# estimate <- estimate + (test_function(xref) - test_function(xref_tilde))
if (isTRUE(all.equal(xref, xref_tilde))){
meeting <- TRUE
break
}
}
return(list(estimate = estimate, iteration = iteration))
}
#'@export
rheeglynn_estimator_RB2 <- function(observations, model, theta, algoparameters,
test_function = function(trajectory){ return(trajectory) }){
# number of particles
nparticles <- algoparameters$nparticles
# ancestor sampling
with_as <- algoparameters$with_as
# coupled resampling scheme
coupled_resampling <- algoparameters$coupled_resampling
# step at which to start computing the estimate
m <- algoparameters$m
if (is.null(algoparameters$m)){
m <- 0
}
#
CPF_RB_res <- CPF_RB(nparticles, model, theta, observations, test_function = test_function)
xref <- CPF_RB_res$new_trajectory
CPF_RB_res_tilde <- CPF_RB(nparticles, model, theta, observations, test_function = test_function)
xref_tilde <- CPF_RB_res_tilde$new_trajectory
iteration <- 0
estimate <- 0
if (m == 0){
estimate <- CPF_RB_res$estimate
}
iteration <- 1
CPF_RB_res <- CPF_RB(nparticles, model, theta, observations, xref, with_as = with_as, test_function)
xref <- CPF_RB_res$new_trajectory
if (m == 1){
estimate <- CPF_RB_res$estimate
} else {
estimate <- estimate + CPF_RB_res$estimate - CPF_RB_res_tilde$estimate
}
meeting <- FALSE
meetingtime <- Inf
finished <- FALSE
while (!finished){
# while (!meeting && iteration < m){
iteration <- iteration + 1
res <- CPF_RB_coupled(nparticles, model, theta, observations, xref, xref_tilde, coupled_resampling, with_as, test_function)
if (!meeting && isTRUE(all.equal(xref, xref_tilde))){
meeting <- TRUE
meetingtime <- iteration
}
xref <- res$new_trajectory1
xref_tilde <- res$new_trajectory2
if (m == iteration){
estimate <- res$estimate1
} else {
estimate <- estimate + (res$estimate1 - res$estimate2)
}
if (iteration >= meetingtime && iteration >= m){
finished <- TRUE
if (m+1 >= meetingtime){
estimate <- res$estimate1
}
}
}
return(list(estimate = estimate, iteration = iteration, meetingtime = meetingtime))
}
| /R/rheeglynn_estimator_RB.R | no_license | particlemontecarlo/unbiased_pimh | R | false | false | 10,977 | r | #'@export
CPF_RB <- function(nparticles, model, theta, observations, ref_trajectory = NULL, with_as = FALSE,
test_function = function(trajectory){ return(trajectory) }){
datalength <- nrow(observations)
# create tree representation of the trajectories
Tree <- new(TreeClass, nparticles, 10*nparticles*model$dimension, model$dimension)
# initialization
model_precomputed <- model$precompute(theta)
xparticles <- model$rinit(nparticles, theta, model$rinit_rand(nparticles, theta), model_precomputed)
if (!is.null(ref_trajectory)){
xparticles[,nparticles] <- ref_trajectory[,1]
}
Tree$init(xparticles)
#
normweights <- rep(1/nparticles, nparticles)
# if ancestor sampling, needs to keep the last generation of particles at hand
if (with_as){
x_last <- xparticles
}
# step t > 1
for (time in 1:datalength){
ancestors <- multinomial_resampling_n(normweights, nparticles)
# if no observation or first time, no resampling
if (time == 1 || (time > 1 && is.na(observations[time-1,1]))){
ancestors <- 1:nparticles
}
xparticles <- xparticles[,ancestors]
if (is.null(dim(xparticles))) xparticles <- matrix(xparticles, nrow = dimension)
xparticles <- model$rtransition(xparticles, theta, time, model$rtransition_rand(nparticles, theta), model_precomputed)
if (!is.null(ref_trajectory)){
xparticles[,nparticles] <- ref_trajectory[,time+1]
if (with_as){
# Ancestor sampling
logm <- model$dtransition(ref_trajectory[,time+1], x_last, theta, time, model_precomputed)
logm <- log(normweights) + logm
w_as <- exp(logm - max(logm))
w_as <- w_as / sum(w_as)
ancestors[nparticles] <- systematic_resampling_n(w_as, 1, runif(1))
x_last <- xparticles
} else {
ancestors[nparticles] <- nparticles
}
}
#
logw <- model$dmeasurement(xparticles, theta, observations[time,], model_precomputed)
maxlw <- max(logw)
w <- exp(logw - maxlw)
normweights <- w / sum(w)
#
Tree$update(xparticles, ancestors - 1)
}
trajectories <- array(dim = c(model$dimension, datalength + 1, nparticles))
estimate <- 0
for (k in 0:(nparticles-1)){
trajectories[,,k+1] <- Tree$get_path(k)
estimate <- estimate + normweights[k+1] * test_function(trajectories[,,k+1])
}
new_trajectory <- matrix(trajectories[,,systematic_resampling_n(normweights, 1, runif(1))], nrow = model$dimension)
return(list(new_trajectory = new_trajectory, estimate = estimate))
}
#'@export
CPF_RB_coupled <- function(nparticles, model, theta, observations, ref_trajectory1, ref_trajectory2,
coupled_resampling, with_as = FALSE,
test_function = function(trajectory){ return(trajectory) }){
#
datalength <- nrow(observations)
# create tree representation of the trajectories
Tree1 <- new(TreeClass, nparticles, 10*nparticles*model$dimension, model$dimension)
Tree2 <- new(TreeClass, nparticles, 10*nparticles*model$dimension, model$dimension)
# initialization
model_precomputed <- model$precompute(theta)
init_rand <- model$rinit_rand(nparticles, theta)
xparticles1 <- model$rinit(nparticles, theta, init_rand, model_precomputed)
xparticles1[,nparticles] <- ref_trajectory1[,1]
Tree1$init(xparticles1)
normweights1 <- rep(1/nparticles, nparticles)
#
xparticles2 <- model$rinit(nparticles, theta, init_rand, model_precomputed)
xparticles2[,nparticles] <- ref_trajectory2[,1]
Tree2$init(xparticles2)
normweights2 <- rep(1/nparticles, nparticles)
#
# if ancestor sampling, needs to keep the last generation of particles at hand
if (with_as){
x_last1 <- xparticles1
x_last2 <- xparticles2
}
# step t > 1
for (time in 1:datalength){
ancestors <- coupled_resampling(xparticles1, xparticles2, normweights1, normweights2)
ancestors1 <- ancestors[,1]
ancestors2 <- ancestors[,2]
# if no observation or first time, no resampling
if (time == 1 || (time > 1 && is.na(observations[time-1,1]))){
ancestors1 <- 1:nparticles
ancestors2 <- 1:nparticles
}
#
xparticles1 <- xparticles1[,ancestors1]
xparticles2 <- xparticles2[,ancestors2]
if (is.null(dim(xparticles1))) xparticles1 <- matrix(xparticles1, nrow = dimension)
if (is.null(dim(xparticles2))) xparticles2 <- matrix(xparticles2, nrow = dimension)
#
transition_rand <- model$rtransition_rand(nparticles, theta)
xparticles1 <- model$rtransition(xparticles1, theta, time, transition_rand, model_precomputed)
xparticles2 <- model$rtransition(xparticles2, theta, time, transition_rand, model_precomputed)
if (is.null(dim(xparticles1))) xparticles1 <- matrix(xparticles1, nrow = dimension)
if (is.null(dim(xparticles2))) xparticles2 <- matrix(xparticles2, nrow = dimension)
#
xparticles1[,nparticles] <- ref_trajectory1[,time+1]
xparticles2[,nparticles] <- ref_trajectory2[,time+1]
if (with_as){
# % Ancestor sampling
logm1 <- model$dtransition(ref_trajectory1[,time+1], x_last1, theta, time, model_precomputed)
logm1 <- log(normweights1) + logm1
w_as1 <- exp(logm1 - max(logm1))
w_as1 <- w_as1 / sum(w_as1)
unif_resampling_as <- runif(1)
ancestors1[nparticles] = systematic_resampling_n(w_as1, 1, unif_resampling_as)
x_last1 <- xparticles1
#
logm2 <- model$dtransition(ref_trajectory2[,time+1], x_last2, theta, time, model_precomputed)
logm2 <- log(normweights2) + logm2
w_as2 <- exp(logm2 - max(logm2))
w_as2 <- w_as2 / sum(w_as2)
ancestors2[nparticles] = systematic_resampling_n(w_as2, 1, unif_resampling_as)
x_last2 <- xparticles2
} else {
ancestors1[nparticles] <- nparticles
ancestors2[nparticles] <- nparticles
}
#
logw1 <- model$dmeasurement(xparticles1, theta, observations[time,], model_precomputed)
logw2 <- model$dmeasurement(xparticles2, theta, observations[time,], model_precomputed)
#
maxlw1 <- max(logw1)
w1 <- exp(logw1 - maxlw1)
normweights1 <- w1 / sum(w1)
#
maxlw2 <- max(logw2)
w2 <- exp(logw2 - maxlw2)
normweights2 <- w2 / sum(w2)
#
Tree1$update(xparticles1, ancestors1 - 1)
Tree2$update(xparticles2, ancestors2 - 1)
}
u <- runif(1)
k_path1 <- systematic_resampling_n(normweights1, 1, u)
k_path2 <- systematic_resampling_n(normweights2, 1, u)
##
trajectories1 <- array(dim = c(model$dimension, datalength + 1, nparticles))
trajectories2 <- array(dim = c(model$dimension, datalength + 1, nparticles))
estimate1 <- 0
estimate2 <- 0
for (k in 0:(nparticles-1)){
trajectories1[,,k+1] <- Tree1$get_path(k)
trajectories2[,,k+1] <- Tree2$get_path(k)
estimate1 <- estimate1 + normweights1[k+1] * test_function(trajectories1[,,k+1])
estimate2 <- estimate2 + normweights2[k+1] * test_function(trajectories2[,,k+1])
}
new_trajectory1 <- matrix(trajectories1[,,k_path1], nrow = model$dimension)
new_trajectory2 <- matrix(trajectories2[,,k_path2], nrow = model$dimension)
return(list(new_trajectory1 = new_trajectory1, new_trajectory2 = new_trajectory2,
estimate1 = estimate1, estimate2 = estimate2))
}
#'@export
rheeglynn_estimator_RB <- function(observations, model, theta, algoparameters,
test_function = function(trajectory){ return(trajectory) }){
# number of particles
nparticles <- algoparameters$nparticles
# ancestor sampling
with_as <- algoparameters$with_as
# coupled resampling scheme
coupled_resampling <- algoparameters$coupled_resampling
#
CPF_RB_res <- CPF_RB(nparticles, model, theta, observations, test_function = test_function)
xref <- CPF_RB_res$new_trajectory
CPF_RB_res_tilde <- CPF_RB(nparticles, model, theta, observations, test_function = test_function)
xref_tilde <- CPF_RB_res_tilde$new_trajectory
iteration <- 0
estimate <- CPF_RB_res$estimate #test_function(xref)
iteration <- iteration + 1
CPF_RB_res <- CPF_RB(nparticles, model, theta, observations, xref, with_as = with_as)
xref <- CPF_RB_res$new_trajectory
# estimate <- estimate + (test_function(xref) - test_function(xref_tilde))
estimate <- estimate + CPF_RB_res$estimate - CPF_RB_res_tilde$estimate
meeting <- FALSE
while (!meeting){
iteration <- iteration + 1
res <- CPF_RB_coupled(nparticles, model, theta, observations, xref, xref_tilde, coupled_resampling, with_as = with_as,
test_function = test_function)
xref <- res$new_trajectory1
xref_tilde <- res$new_trajectory2
estimate <- estimate + (res$estimate1 - res$estimate2)
# estimate <- estimate + (test_function(xref) - test_function(xref_tilde))
if (isTRUE(all.equal(xref, xref_tilde))){
meeting <- TRUE
break
}
}
return(list(estimate = estimate, iteration = iteration))
}
#'@export
rheeglynn_estimator_RB2 <- function(observations, model, theta, algoparameters,
test_function = function(trajectory){ return(trajectory) }){
# number of particles
nparticles <- algoparameters$nparticles
# ancestor sampling
with_as <- algoparameters$with_as
# coupled resampling scheme
coupled_resampling <- algoparameters$coupled_resampling
# step at which to start computing the estimate
m <- algoparameters$m
if (is.null(algoparameters$m)){
m <- 0
}
#
CPF_RB_res <- CPF_RB(nparticles, model, theta, observations, test_function = test_function)
xref <- CPF_RB_res$new_trajectory
CPF_RB_res_tilde <- CPF_RB(nparticles, model, theta, observations, test_function = test_function)
xref_tilde <- CPF_RB_res_tilde$new_trajectory
iteration <- 0
estimate <- 0
if (m == 0){
estimate <- CPF_RB_res$estimate
}
iteration <- 1
CPF_RB_res <- CPF_RB(nparticles, model, theta, observations, xref, with_as = with_as, test_function)
xref <- CPF_RB_res$new_trajectory
if (m == 1){
estimate <- CPF_RB_res$estimate
} else {
estimate <- estimate + CPF_RB_res$estimate - CPF_RB_res_tilde$estimate
}
meeting <- FALSE
meetingtime <- Inf
finished <- FALSE
while (!finished){
# while (!meeting && iteration < m){
iteration <- iteration + 1
res <- CPF_RB_coupled(nparticles, model, theta, observations, xref, xref_tilde, coupled_resampling, with_as, test_function)
if (!meeting && isTRUE(all.equal(xref, xref_tilde))){
meeting <- TRUE
meetingtime <- iteration
}
xref <- res$new_trajectory1
xref_tilde <- res$new_trajectory2
if (m == iteration){
estimate <- res$estimate1
} else {
estimate <- estimate + (res$estimate1 - res$estimate2)
}
if (iteration >= meetingtime && iteration >= m){
finished <- TRUE
if (m+1 >= meetingtime){
estimate <- res$estimate1
}
}
}
return(list(estimate = estimate, iteration = iteration, meetingtime = meetingtime))
}
|
#' @include optim.R
NULL
optim_SGD <- R6::R6Class(
"optim_sgd",
lock_objects = FALSE,
inherit = Optimizer,
public = list(
initialize = function(params, lr=optim_required(), momentum=0, dampening=0,
weight_decay=0, nesterov=FALSE) {
if (!is_optim_required(lr) && lr < 0)
value_error("Invalid learning rate: {lr}")
if (momentum < 0)
value_error("Invalid momentum value: {momentum}")
if (weight_decay < 0)
value_error("Invalid weight_decay value: {weight_decay}")
if (nesterov && ( momentum <= 0 || dampening != 0))
value_error("Nesterov momentum requires a momentum and zero dampening")
defaults <- list(lr=lr, momentum=momentum, dampening=dampening,
weight_decay=weight_decay, nesterov=nesterov)
super$initialize(params, defaults)
},
step = function(closure = NULL) {
with_no_grad({
loss <- NULL
if (!is.null(closure)) {
with_enable_grad({
loss <- closure()
})
}
for (g in seq_along(self$param_groups)) {
group <- self$param_groups[[g]]
weight_decay <- group$weight_decay
momentum <- group$momentum
dampening <- group$dampening
nesterov <- group$nesterov
for (p in seq_along(group$params)) {
param <- group$params[[p]]
if (is.null(param$grad))
next
d_p <- param$grad
if (weight_decay != 0) {
d_p <- d_p$add(p, alpha = weight_decay)
}
if (momentum != 0) {
param_state <- attr(param, "state")
if (is.null(param_state) || !"momentum_buffer" %in% names(param_state)) {
buf <- torch_clone(d_p)$detach()
attr(self$param_groups[[g]]$params[[p]], "state") <- list(momentum_buffer = buf)
} else {
buf <- param_state$momentum_buffer
buf$mul_(momentum)$add_(d_p, alpha=1 - dampening)
}
if (nesterov) {
d_p <- d_p$add(buf, alpha = momentum)
}
else {
d_p <- buf
}
}
param$add_(d_p, alpha = -group$lr)
}
}
})
loss
}
)
)
#' SGD optimizer
#'
#' Implements stochastic gradient descent (optionally with momentum).
#' Nesterov momentum is based on the formula from
#' On the importance of initialization and momentum in deep learning.
#'
#' @param params (iterable): iterable of parameters to optimize or dicts defining
#' parameter groups
#' @param lr (float): learning rate
#' @param momentum (float, optional): momentum factor (default: 0)
#' @param weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
#' @param dampening (float, optional): dampening for momentum (default: 0)
#' @param nesterov (bool, optional): enables Nesterov momentum (default: FALSE)
#'
#' @section Note:
#'
#' The implementation of SGD with Momentum-Nesterov subtly differs from
#' Sutskever et. al. and implementations in some other frameworks.
#'
#' Considering the specific case of Momentum, the update can be written as
#' \deqn{
#' \begin{aligned}
#' v_{t+1} & = \mu * v_{t} + g_{t+1}, \\
#' p_{t+1} & = p_{t} - \text{lr} * v_{t+1},
#' \end{aligned}
#' }
#'
#' where \eqn{p}, \eqn{g}, \eqn{v} and \eqn{\mu} denote the
#' parameters, gradient, velocity, and momentum respectively.
#'
#' This is in contrast to Sutskever et. al. and
#' other frameworks which employ an update of the form
#'
#' \deqn{
#' \begin{aligned}
#' v_{t+1} & = \mu * v_{t} + \text{lr} * g_{t+1}, \\
#' p_{t+1} & = p_{t} - v_{t+1}.
#' \end{aligned}
#' }
#' The Nesterov version is analogously modified.
#'
#' @examples
#' \dontrun{
#' optimizer <- optim_sgd(model.parameters(), lr=0.1, momentum=0.9)
#' optimizer$zero_grad()
#' loss_fn(model(input), target)$backward()
#' optimizer$step()
#' }
#'
#' @export
optim_sgd <- function(params, lr=optim_required(), momentum=0, dampening=0,
weight_decay=0, nesterov=FALSE) {
optim_SGD$new(params, lr, momentum, dampening,
weight_decay, nesterov)
} | /R/optim-sgd.R | permissive | qykong/torch | R | false | false | 4,407 | r | #' @include optim.R
NULL
optim_SGD <- R6::R6Class(
"optim_sgd",
lock_objects = FALSE,
inherit = Optimizer,
public = list(
initialize = function(params, lr=optim_required(), momentum=0, dampening=0,
weight_decay=0, nesterov=FALSE) {
if (!is_optim_required(lr) && lr < 0)
value_error("Invalid learning rate: {lr}")
if (momentum < 0)
value_error("Invalid momentum value: {momentum}")
if (weight_decay < 0)
value_error("Invalid weight_decay value: {weight_decay}")
if (nesterov && ( momentum <= 0 || dampening != 0))
value_error("Nesterov momentum requires a momentum and zero dampening")
defaults <- list(lr=lr, momentum=momentum, dampening=dampening,
weight_decay=weight_decay, nesterov=nesterov)
super$initialize(params, defaults)
},
step = function(closure = NULL) {
with_no_grad({
loss <- NULL
if (!is.null(closure)) {
with_enable_grad({
loss <- closure()
})
}
for (g in seq_along(self$param_groups)) {
group <- self$param_groups[[g]]
weight_decay <- group$weight_decay
momentum <- group$momentum
dampening <- group$dampening
nesterov <- group$nesterov
for (p in seq_along(group$params)) {
param <- group$params[[p]]
if (is.null(param$grad))
next
d_p <- param$grad
if (weight_decay != 0) {
d_p <- d_p$add(p, alpha = weight_decay)
}
if (momentum != 0) {
param_state <- attr(param, "state")
if (is.null(param_state) || !"momentum_buffer" %in% names(param_state)) {
buf <- torch_clone(d_p)$detach()
attr(self$param_groups[[g]]$params[[p]], "state") <- list(momentum_buffer = buf)
} else {
buf <- param_state$momentum_buffer
buf$mul_(momentum)$add_(d_p, alpha=1 - dampening)
}
if (nesterov) {
d_p <- d_p$add(buf, alpha = momentum)
}
else {
d_p <- buf
}
}
param$add_(d_p, alpha = -group$lr)
}
}
})
loss
}
)
)
#' SGD optimizer
#'
#' Implements stochastic gradient descent (optionally with momentum).
#' Nesterov momentum is based on the formula from
#' On the importance of initialization and momentum in deep learning.
#'
#' @param params (iterable): iterable of parameters to optimize or dicts defining
#' parameter groups
#' @param lr (float): learning rate
#' @param momentum (float, optional): momentum factor (default: 0)
#' @param weight_decay (float, optional): weight decay (L2 penalty) (default: 0)
#' @param dampening (float, optional): dampening for momentum (default: 0)
#' @param nesterov (bool, optional): enables Nesterov momentum (default: FALSE)
#'
#' @section Note:
#'
#' The implementation of SGD with Momentum-Nesterov subtly differs from
#' Sutskever et. al. and implementations in some other frameworks.
#'
#' Considering the specific case of Momentum, the update can be written as
#' \deqn{
#' \begin{aligned}
#' v_{t+1} & = \mu * v_{t} + g_{t+1}, \\
#' p_{t+1} & = p_{t} - \text{lr} * v_{t+1},
#' \end{aligned}
#' }
#'
#' where \eqn{p}, \eqn{g}, \eqn{v} and \eqn{\mu} denote the
#' parameters, gradient, velocity, and momentum respectively.
#'
#' This is in contrast to Sutskever et. al. and
#' other frameworks which employ an update of the form
#'
#' \deqn{
#' \begin{aligned}
#' v_{t+1} & = \mu * v_{t} + \text{lr} * g_{t+1}, \\
#' p_{t+1} & = p_{t} - v_{t+1}.
#' \end{aligned}
#' }
#' The Nesterov version is analogously modified.
#'
#' @examples
#' \dontrun{
#' optimizer <- optim_sgd(model.parameters(), lr=0.1, momentum=0.9)
#' optimizer$zero_grad()
#' loss_fn(model(input), target)$backward()
#' optimizer$step()
#' }
#'
#' @export
optim_sgd <- function(params, lr=optim_required(), momentum=0, dampening=0,
weight_decay=0, nesterov=FALSE) {
optim_SGD$new(params, lr, momentum, dampening,
weight_decay, nesterov)
} |
library(ggplot2)
library(dplyr)
treatment_age <- smoking_data %>%
filter(qsmk == 1) %>%
summarise(mean(age))
control_age <- smoking_data %>%
filter(qsmk == 0) %>%
summarise(mean(age))
treatment_sex <- smoking_data %>%
filter(qsmk == 1) %>%
summarise(mean(sex))
control_sex <- smoking_data %>%
filter(qsmk == 0) %>%
summarise(mean(sex))
covariate_inbalance <- data.frame("Group" = c("Treatment", "Treatment", "Control", "Control"),
"Variable" = c("Average Age", "Proportion Men", "Average Age", "Proportion Men"),
"Value" = c(as.numeric(treatment_age), as.numeric(treatment_sex) * 100,
as.numeric(control_age), as.numeric(control_sex) *100))
ggplot(covariate_inbalance, aes(x = Group, y = Value)) + geom_bar(stat = "identity", aes(fill = Group)) +scale_fill_brewer(palette="Set1")+facet_grid(. ~Variable ) + coord_cartesian(ylim=c(40,55))
aasmd_prop <- data.frame("Type" = c( "After Matching", "Before Matching"), "Value" = c( prop_log_total_aasmd, before_aasmd))
library(ggplot2)
aasmd_prop_plot <- ggplot(aasmd_prop, aes(x = reorder(Type, -Value), Value, fill = Type)) + geom_bar(stat = "identity") + xlab("Before vs After")+ ylab("Average Absolute Standardized Mean Difference")
aasmd_prop_plot
| /plots.r | no_license | eheimlich/econometrics_research | R | false | false | 1,346 | r | library(ggplot2)
library(dplyr)
treatment_age <- smoking_data %>%
filter(qsmk == 1) %>%
summarise(mean(age))
control_age <- smoking_data %>%
filter(qsmk == 0) %>%
summarise(mean(age))
treatment_sex <- smoking_data %>%
filter(qsmk == 1) %>%
summarise(mean(sex))
control_sex <- smoking_data %>%
filter(qsmk == 0) %>%
summarise(mean(sex))
covariate_inbalance <- data.frame("Group" = c("Treatment", "Treatment", "Control", "Control"),
"Variable" = c("Average Age", "Proportion Men", "Average Age", "Proportion Men"),
"Value" = c(as.numeric(treatment_age), as.numeric(treatment_sex) * 100,
as.numeric(control_age), as.numeric(control_sex) *100))
ggplot(covariate_inbalance, aes(x = Group, y = Value)) + geom_bar(stat = "identity", aes(fill = Group)) +scale_fill_brewer(palette="Set1")+facet_grid(. ~Variable ) + coord_cartesian(ylim=c(40,55))
aasmd_prop <- data.frame("Type" = c( "After Matching", "Before Matching"), "Value" = c( prop_log_total_aasmd, before_aasmd))
library(ggplot2)
aasmd_prop_plot <- ggplot(aasmd_prop, aes(x = reorder(Type, -Value), Value, fill = Type)) + geom_bar(stat = "identity") + xlab("Before vs After")+ ylab("Average Absolute Standardized Mean Difference")
aasmd_prop_plot
|
\name{create_container}
\alias{create_container}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
creates a container for training, classifying, and analyzing documents.
}
\description{
Given a \code{DocumentTermMatrix} from the \pkg{tm} package and corresponding document labels, creates a container of class \code{\link{matrix_container-class}} that can be used for training and classification (i.e. \code{\link{train_model}}, \code{\link{train_models}}, \code{\link{classify_model}}, \code{\link{classify_models}})
}
\usage{
create_container(matrix, labels, trainSize=NULL, testSize=NULL, virgin)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{matrix}{
A document-term matrix of class \code{DocumentTermMatrix} or \code{TermDocumentMatrix} from the \pkg{tm} package, or generated by \code{\link{create_matrix}}.
}
\item{labels}{
A \code{factor} or \code{vector} of labels corresponding to each document in the matrix.
}
\item{trainSize}{
A range (e.g. \code{1:1000}) specifying the number of documents to use for training the models. Can be left blank for classifying corpora using saved models that don't need to be trained.
}
\item{testSize}{
A range (e.g. \code{1:1000}) specifying the number of documents to use for classification. Can be left blank for training on all data in the matrix.
}
\item{virgin}{
A logical (\code{TRUE} or \code{FALSE}) specifying whether to treat the classification data as virgin data or not.
}
}
\value{
A container of class \code{\link{matrix_container-class}} that can be passed into other functions such as \code{\link{train_model}}, \code{\link{train_models}}, \code{\link{classify_model}}, \code{\link{classify_models}}, and \code{\link{create_analytics}}.
}
\author{
Timothy P. Jurka <tpjurka@ucdavis.edu>, Loren Collingwood <lorenc2@uw.edu>
}
\examples{
library(RTextTools)
data(NYTimes)
data <- NYTimes[sample(1:3100,size=100,replace=FALSE),]
matrix <- create_matrix(cbind(data["Title"],data["Subject"]), language="english",
removeNumbers=TRUE, stemWords=FALSE, weighting=tm::weightTfIdf)
container <- create_container(matrix,data$Topic.Code,trainSize=1:75, testSize=76:100,
virgin=FALSE)
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{method}
| /man/create_container.Rd | no_license | SidShenoy/RTextTools | R | false | false | 2,310 | rd | \name{create_container}
\alias{create_container}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
creates a container for training, classifying, and analyzing documents.
}
\description{
Given a \code{DocumentTermMatrix} from the \pkg{tm} package and corresponding document labels, creates a container of class \code{\link{matrix_container-class}} that can be used for training and classification (i.e. \code{\link{train_model}}, \code{\link{train_models}}, \code{\link{classify_model}}, \code{\link{classify_models}})
}
\usage{
create_container(matrix, labels, trainSize=NULL, testSize=NULL, virgin)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{matrix}{
A document-term matrix of class \code{DocumentTermMatrix} or \code{TermDocumentMatrix} from the \pkg{tm} package, or generated by \code{\link{create_matrix}}.
}
\item{labels}{
A \code{factor} or \code{vector} of labels corresponding to each document in the matrix.
}
\item{trainSize}{
A range (e.g. \code{1:1000}) specifying the number of documents to use for training the models. Can be left blank for classifying corpora using saved models that don't need to be trained.
}
\item{testSize}{
A range (e.g. \code{1:1000}) specifying the number of documents to use for classification. Can be left blank for training on all data in the matrix.
}
\item{virgin}{
A logical (\code{TRUE} or \code{FALSE}) specifying whether to treat the classification data as virgin data or not.
}
}
\value{
A container of class \code{\link{matrix_container-class}} that can be passed into other functions such as \code{\link{train_model}}, \code{\link{train_models}}, \code{\link{classify_model}}, \code{\link{classify_models}}, and \code{\link{create_analytics}}.
}
\author{
Timothy P. Jurka <tpjurka@ucdavis.edu>, Loren Collingwood <lorenc2@uw.edu>
}
\examples{
library(RTextTools)
data(NYTimes)
data <- NYTimes[sample(1:3100,size=100,replace=FALSE),]
matrix <- create_matrix(cbind(data["Title"],data["Subject"]), language="english",
removeNumbers=TRUE, stemWords=FALSE, weighting=tm::weightTfIdf)
container <- create_container(matrix,data$Topic.Code,trainSize=1:75, testSize=76:100,
virgin=FALSE)
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{method}
|
library(shiny)
library(ggplot2)
shinyServer(function(input, output) {
resultados <- reactiveValues()
output$Migrafico1 <- renderPlot({
ggplot(mpg, aes(x = cyl)) + geom_histogram(fill = input$color)
})
observeEvent(input$clicar, {
df1 <- as.data.frame(rnorm(input$elementos))
df2 <- as.data.frame(rnorm(input$elementos))
dataframe <- cbind(df1, df2)
names(dataframe) <- c('ejex', 'ejey')
resultados$grafica <- ggplot(dataframe) + geom_point(aes(ejex, ejey))
})
output$Migrafico2 <- renderPlot({
df1 <- as.data.frame(rnorm(input$elementos))
df2 <- as.data.frame(rnorm(input$elementos))
dataframe <- cbind(df1, df2)
names(dataframe) <- c('ejex', 'ejey')
resultados$grafica <- ggplot(dataframe) + geom_point(aes(ejex, ejey))
resultados$grafica
})
})
#Pasos que he ido haciendo:
##Me creo la variable resultados en donde guardaremos todos los valores reactivos que vayamos creando para ambas gráficas.
##Para nuestro gráfico 1 hacemos un ggplot, en el ejeX estará la cyl del dataset mpg y lo rellenaremos el histograma por el color por defecto, es decir, el azul
## Con el observeevent cada vez que se produzca se actualizará la gráfica en base al reactive value
##Para mi gráfico2, me creo al igual que el anterior una rnorm en base a los elemenos contenidos en el sliderinput, creandonos un dataframe para poder representarlo gráficamente | /Examen_visualizacion/Examen_Shiny/Examen/server.R | no_license | cRistiancec/TecnicasVisualizacion | R | false | false | 1,448 | r | library(shiny)
library(ggplot2)
shinyServer(function(input, output) {
resultados <- reactiveValues()
output$Migrafico1 <- renderPlot({
ggplot(mpg, aes(x = cyl)) + geom_histogram(fill = input$color)
})
observeEvent(input$clicar, {
df1 <- as.data.frame(rnorm(input$elementos))
df2 <- as.data.frame(rnorm(input$elementos))
dataframe <- cbind(df1, df2)
names(dataframe) <- c('ejex', 'ejey')
resultados$grafica <- ggplot(dataframe) + geom_point(aes(ejex, ejey))
})
output$Migrafico2 <- renderPlot({
df1 <- as.data.frame(rnorm(input$elementos))
df2 <- as.data.frame(rnorm(input$elementos))
dataframe <- cbind(df1, df2)
names(dataframe) <- c('ejex', 'ejey')
resultados$grafica <- ggplot(dataframe) + geom_point(aes(ejex, ejey))
resultados$grafica
})
})
#Pasos que he ido haciendo:
##Me creo la variable resultados en donde guardaremos todos los valores reactivos que vayamos creando para ambas gráficas.
##Para nuestro gráfico 1 hacemos un ggplot, en el ejeX estará la cyl del dataset mpg y lo rellenaremos el histograma por el color por defecto, es decir, el azul
## Con el observeevent cada vez que se produzca se actualizará la gráfica en base al reactive value
##Para mi gráfico2, me creo al igual que el anterior una rnorm en base a los elemenos contenidos en el sliderinput, creandonos un dataframe para poder representarlo gráficamente |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{tap_data}
\alias{tap_data}
\title{Sample tapping data from the tapping activity on a smartphone}
\format{A data frame with 181 rows (observations) and 4 variables:
\describe{
\item{t}{time, in seconds}
\item{x}{the x location of the tap on the phone screen}
\item{y}{the y location of the tap on the phone screen}
\item{buttonid}{a string from 'TappedButonLeft', 'TappedButtonRight' or
'TappedButtonNone' indicating that at that time the left, right or No
button was tapped respectively}
}}
\usage{
tap_data
}
\description{
A dataframe containing sample JSON output format of the tap data,
containing t(time), x and y (the location of the tap on the screen),
and buttonid (which of left/right/No button was tapped).
}
\details{
Participants were shown a screen with two buttons on it,
indicating left and right. They were asked to tap on the buttons
alternatingly as fast as they can for 30s.
}
\keyword{datasets}
| /man/tap_data.Rd | permissive | JulioV/mhealthtools | R | false | true | 1,032 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{tap_data}
\alias{tap_data}
\title{Sample tapping data from the tapping activity on a smartphone}
\format{A data frame with 181 rows (observations) and 4 variables:
\describe{
\item{t}{time, in seconds}
\item{x}{the x location of the tap on the phone screen}
\item{y}{the y location of the tap on the phone screen}
\item{buttonid}{a string from 'TappedButonLeft', 'TappedButtonRight' or
'TappedButtonNone' indicating that at that time the left, right or No
button was tapped respectively}
}}
\usage{
tap_data
}
\description{
A dataframe containing sample JSON output format of the tap data,
containing t(time), x and y (the location of the tap on the screen),
and buttonid (which of left/right/No button was tapped).
}
\details{
Participants were shown a screen with two buttons on it,
indicating left and right. They were asked to tap on the buttons
alternatingly as fast as they can for 30s.
}
\keyword{datasets}
|
# #########################################################################################
# plot haplotype network for Plutella australiana and P. xylostella
# K. Perry, 8/5/2017
#
# Notes:
# This analysis plots haplotype networks for 27 Plutella australiana individuals
# ... from study `KPRAD2`, which was sequenced at the COI gene by S. Baxter, and
# ... 57 individuals from landry and herberts study downloaded as .tsv file.
# Simon downloaded trace files, checked chromatograms, dropped low qual samples,
# ... aligned together with kp samples in geneious v10.0, trimmed to common length,
# ... and exported a new fasta file.
#
# This script does:
# ... reads in the fasta file with trimmed/aligned COI sequecnes for 89 Paus individuals,
# ... ands new sample names, writes new fasta files and plots hapotype networks in R pegas.
#
# #########################################################################################
library(tidyverse)
library(magrittr)
library(readxl)
library(stringr)
library(reshape2)
library(pegas)
library(RColorBrewer)
library(xtable)
# ###############################
# Write new formatted fasta files
# ###############################
# read in the cleaned Landry and Perry COI sequences for each species
# ... (done separately by species, because the species cannot easily be determined from the combined fasta)
paDNA <- read.dna("C:/UserData/Kym/PhD/RAD2/haplotypes/Landry_Perry_Pa_Extract.fasta",
format = "fasta")
pxDNA <- read.dna("C:/UserData/Kym/PhD/RAD2/haplotypes/Landry_Perry_Px_Extract (modified) (modified).fasta",
format = "fasta")
# make a dataframe containing the sequences and names
seqDf <- data.frame(oldNames = labels(paDNA),
seq = sapply(paDNA, paste, collapse = "")) %>%
# filter out the landry Pa consensus sequence
filter(oldNames != "Landry_Pa_Consensus_extraction") %>%
mutate(species = "P.aus") %>%
# combine with the px sequences
bind_rows(data.frame(oldNames = labels(pxDNA),
seq = sapply(pxDNA, paste, collapse = "")) %>%
mutate(species = "P.x")) %>%
magrittr::set_rownames(NULL)
# rename the samples
# ... first, write the old names to file
write.csv(seqDf[, c("oldNames", "species")], "seqNames.csv", row.names = FALSE)
# ... manually add new names to this file in a separate column,
# ... then change filename (to avoid overwrite), read back in
# ... note: to find the sample locations, I cross-referenced the old names with the sample names in `COI summary sheet``
seqNames <- read.csv("C:/UserData/Kym/PhD/RAD2/haplotypes/seqNamesNew.csv")
seqDf %<>% merge(seqNames, by = c("oldNames", "species")) %>%
# add a column identifying landry's and perry's samples
mutate(study = NA)
seqDf$study[grep("-\\d{2}$", seqDf$newNames)] <- "landry"
seqDf$study[grep("-\\d{2}$", seqDf$newNames, invert = TRUE)] <- "perry"
# create additional seqName variables to facilitate plotting haplotype by `study` and `species` factor in R pegas
# ... allows us to determine how many haplotypes discovered by landry, perry
seqDf %<>% mutate(ID = paste(newNames, species, study, sep = "_")) %>%
arrange(species, study, newNames) %>%
dplyr::select(ID, seq, species, study)
# write new fasta files for all samples, P.x samples, and P.aus samples
pxpaFA <- c(rbind(paste0("> ", as.character(seqDf$ID)), as.character(seqDf$seq)))
write.table(pxpaFA, "pxpaCOI.fasta",
col.names = FALSE, row.names = FALSE, quote = FALSE)
pxDf <- filter(seqDf, species == "P.x")
pxFA <- c(rbind(paste0("> ", as.character(pxDf$ID)), as.character(pxDf$seq)))
write.table(pxFA, "pxCOI.fasta",
col.names = FALSE, row.names = FALSE, quote = FALSE)
paDf <- filter(seqDf, species == "P.aus")
paFA <- c(rbind(paste0("> ", as.character(paDf$ID)), as.character(paDf$seq)))
write.table(paFA, "paCOI.fasta",
col.names = FALSE, row.names = FALSE, quote = FALSE)
# ##################################
# Plot haplotype networks in R pegas
#
# ... assign labels
# ... plot network with sample sizes
# ... plot network with the haplotype labels
# ... plot outer circles with smaller sizes (singleton)
# ... import to inkscape
# ##################################
# *********************
# P.x haplotype network
# *********************
pxCOI <- read.dna("C:/UserData/Kym/PhD/RAD2/haplotypes/pxCOI.fasta",
format = "fasta")
xh <- haplotype(pxCOI, labels = NULL)
plot(xh)
xnet <- haploNet(xh)
# set the new labels for xh and xnet
# ... create vector of hap names in the order to assign names to haps
# ... below, print the names/freq attributes here for transparency
pxHapNames <- c("PxCOI04", "PxCOI01", "PxCOI05", "PxCOI03", "PxCOI02")
attr(xh, "dimnames")[[1]]
# [1] "I" "II" "III" "IV" "V"
attr(xh, "dimnames")[[1]] <- pxHapNames
attr(xh, "dimnames")[[1]]
# [1] "PxCOI4" "PxCOI1" "PxCOI5" "PxCOI3" "PxCOI2"
attr(xnet, "freq")
# [1] 23 76 1 1 1
attr(xnet, "labels")
# [1] "I" "II" "III" "IV" "V"
attr(xnet, "labels") <- pxHapNames
attr(xnet, "labels")
# [1] "PxCOI4" "PxCOI1" "PxCOI5" "PxCOI3" "PxCOI3"
# plot a network with the hap labels, and sample sizes
plot(xnet, threshold = 0)
# plot a network labelled with the sample size
xnet2 <- xnet
attr(xnet2, "labels") <- attr(xnet, "freq")
plot(xnet2, threshold = 0)
# export some pdfs
pdf("PxHapNet.pdf")
plot(xnet, threshold = 0)
dev.off()
pdf("PxHapNetSize.pdf")
plot(xnet2, threshold = 0)
dev.off()
# export some svgs
svg("PxHapNet.svg")
plot(xnet, threshold = 0)
dev.off()
svg("PxHapNetSize.svg")
plot(xnet2, threshold = 0)
dev.off()
# ********************************
# P. australiana haplotype network
# ********************************
paCOI <- read.dna("C:/UserData/Kym/PhD/RAD2/haplotypes/paCOI.fasta",
format = "fasta")
ah <- haplotype(paCOI, labels = NULL)
plot(ah)
anet <- haploNet(ah)
# set the new haplotype names for ah and anet
# . label the Pa haplotypes according to the nomenclature in draft HapNetWorkTable.xlsx
# ... view the draft haplotype network table in this file:
# . hapnetTab <- "C:/UserData/Kym/PhD/RAD2/haplotypes/HapNetworkTable.xlsx"
# ... find the index of individuals with each `haplotype`
# ... (to id the samples, use the names in the dna.bin object, from which haplotype object was derived)
# ... then, match the samples with genbaknk accession numbers, and \
# ... \ cross reference these by vewing the genbank accession numbers for each sample in this file:
# . gbPath <- "C:/UserData/Kym/PhD/RAD2/haplotypes/genbankAccessionsPerryPlutellaCOI.txt"
# index the paCOI object to find the accession numbers for haplotypes [[1]] to [[9]]
# ... note that we're indexing the default order of haplotypes
labels(paCOI)[attr(ah, "index")[[1]]] # n = 74 samples, Pa COI 01
labels(paCOI)[attr(ah, "index")[[2]]] # [1] "LNSWA731-05_P.aus_landry", Pa COI 09
labels(paCOI)[attr(ah, "index")[[3]]] # [1] "LSM1299-11_P.aus_landry" , Pa COI 08
labels(paCOI)[attr(ah, "index")[[4]]] # [1] "MCCAA2949-12_P.aus_landry", pa COI 04
labels(paCOI)[attr(ah, "index")[[5]]] # n = 6 samples, Pa COI 02
labels(paCOI)[attr(ah, "index")[[6]]] # [1] "PHLCA920-11_P.aus_landry", pa COI 05
labels(paCOI)[attr(ah, "index")[[7]]] # [1] "dypWAU1011Ca_P.aus_perry", matches accession MF151883, Pa COI 07
labels(paCOI)[attr(ah, "index")[[8]]] # [1] "dypWAU1012Aa_P.aus_perry", matches accession MF151885, Pa COI 03
labels(paCOI)[attr(ah, "index")[[9]]] # [1] "wagNSW0201Ga_P.aus_perry", matches accession MF151836, pa COI 06
# ... create vector of hap names in the order above to assign names to haps
# ... below, also print the names/freq attributes here for transparency
paHapNames <- c("PaCOI01", "PaCOI09", "PaCOI08", "PaCOI04", "PaCOI02",
"PaCOI05", "PaCOI07", "PaCOI03", "PaCOI06")
attr(ah, "dimnames")[[1]]
# [1] "I" "II" "III" "IV" "V" "VI" "VII" "VIII" "IX"
attr(ah, "dimnames")[[1]] <- paHapNames
attr(ah, "dimnames")[[1]]
# [1] "PaCOI01" "PaCOI09" "PaCOI08" "PaCOI04" "PaCOI02" "PaCOI05" "PaCOI07" "PaCOI03" "PaCOI06"
attr(anet, "freq")
# [1] 74 1 1 1 6 1 1 1 1
attr(anet, "labels")
# [1] "I" "II" "III" "IV" "V" "VI" "VII" "VIII" "IX"
attr(anet, "labels") <- paHapNames
attr(anet, "labels")
# [1] "PaCOI01" "PaCOI09" "PaCOI08" "PaCOI04" "PaCOI02" "PaCOI05" "PaCOI07" "PaCOI03" "PaCOI06"
# plot a network with the hap labels, and sample sizes
plot(anet, threshold = 0)
# plot a network labelled with the sample size
anet2 <- anet
attr(anet2, "labels") <- attr(anet, "freq")
plot(anet2, threshold = 0)
# export some pdfs
pdf("PaHapNet.pdf")
plot(anet, threshold = 0)
dev.off()
pdf("PaHapNetSize.pdf")
plot(anet2, threshold = 0)
dev.off()
# export some svgs
svg("PaHapNet.svg")
plot(anet, threshold = 0)
dev.off()
svg("PaHapNetSize.svg")
plot(anet2, threshold = 0)
dev.off()
# calculate sample sizes, frequency by study (landry vs perry)
(PxFreqStudy <- haploFreq(pxCOI, split = "_", what = 3))
(PaFreqStudy <- haploFreq(paCOI, split = "_", what = 3))
data.frame(landryPx = PxFreqStudy[, "landry"],
perryPx = PxFreqStudy[, "perry"])
# landryPx perryPx
# 1 14 9
# 2 41 35
# 3 1 0
# 4 1 0
# 5 1 0
data.frame(N_landryPx = sum(PxFreqStudy[, "landry"]),
N_perryPx = sum(PxFreqStudy[, "perry"]))
# N_landryPx N_perryPx
# 1 58 44
data.frame(landryPaus = PaFreqStudy[, "landry"],
perryPaus = PaFreqStudy[, "perry"])
# landryPaus perryPaus
# 1 42 32
# 2 1 0
# 3 1 0
# 4 1 0
# 5 4 2
# 6 1 0
# 7 0 1
# 8 0 1
# 9 0 1
data.frame(N_landryPaus = sum(PaFreqStudy[, "landry"]),
N_perryPaus = sum(PaFreqStudy[, "perry"]))
# N_landryPaus N_perryPaus
# 1 50 37
# #####################################################
# extract the meta data for perry P.x and P.aus samples
# ... for submission to genbank
# #####################################################
# first read in the metadata for each sample and population
# ... variable `pos` contains the unique plate and well coordinates for each individual
genotypesPath <- "C:/UserData/Kym/PhD/RAD2/genotypesMasterRAD2.xlsx"
s <- "skip"
t <- "text"
colT <- c(s,t,t,s,s,s,s,s,"numeric",t,t,s,s,t,t,t,t,s,s,s,s,s,s,s,s,s)
metaData <- read_excel(genotypesPath, sheet = "genotypes-master",
col_types = colT) %>%
mutate(plate = gsub("JF_2", "JF02", plate) %>%
gsub("JF_1", "JF01", .) %>%
str_pad(side = "left", width = 2, pad = 0),
position = str_pad(position, side = "left", width = 3, pad = 0),
pos = paste0(plate, position)) %>%
dplyr::select(pos, popCode, location, state, host, hostType, gender) %>%
merge(read_excel(genotypesPath, sheet = "pops-master") %>%
filter(popCode != 9.1) %>%
mutate(popCode = as.numeric(popCode),
collectionDate = format(as.Date(collectionDate), "%d-%b-%Y")) %>%
dplyr::select(popCode, latitude, longitude,
stageCollected, collectionDate, collector), by = c("popCode"))
# get the perry seqNames, extract well and plate positions, merge with metaData
renameHost <- function(x) {
gsub("c", "Canola", x) %>%
gsub("wt", "Wild turnip", .) %>%
gsub("v", "Brassica vegetables", .) %>%
gsub("lw", "Diplotaxis sp.", .) %>%
gsub("f", "Forage brassica", .) %>%
gsub("wr", "Wild radish", .)
}
seqPerry <- seqDf %>%
filter(study == "perry") %>%
mutate(ID = gsub("_(.)+", "", ID),
pos = str_extract(ID,"(JF)?[0-9]{4}[A-H]{1}"),
plate = str_extract(pos, "(JF)?\\d{2}"),
well = str_extract(pos, "\\d{2}[A-H]$")) %>%
dplyr::select(pos, ID, species, seq) %>%
merge(metaData, by = "pos") %>%
mutate(Country = "Australia",
Lat_Lon = paste(round(latitude, 2), "N",
round(longitude, 2), "E"),
host = renameHost(host),
Isolation_source = paste(location, state),
species = gsub("P.x", "Plutella xylostella", species) %>%
gsub("P.aus", "Plutella australiana", .),
gender = gsub("f$", "Female", gender) %>%
gsub("m$", "Male", .) %>%
gsub("u$", "Unknown", .)) %>%
dplyr::select(Sequence_ID = ID, Collection_date = collectionDate,
Collected_by = collector, Country, Lat_Lon,
Host = host, Isolate = pos, Isolation_source, Species = species, Sex = gender, Sequence = seq)
write.csv(seqPerry, "perry.81PxPa.612bpCOI.metadata.csv")
# write a fasta file with just the perry samples (for submission to genbank)
perryFA <- c(rbind(paste0("> ", as.character(seqPerry$Sequence_ID)), as.character(seqPerry$Sequence)))
write.table(perryFA, "perry.81PxPa.612bpCOI.fasta",
col.names = FALSE, row.names = FALSE, quote = FALSE)
# #######################################################################
# Make a Latex table for the GenBank accession numbers for the Haplotypes
# ... supplementary tables
# #######################################################################
# ====================
# Plutella xylostella
# ====================
# Read in the summary table (from Simon Baxter) containing an example accession
# for each haplotype
hapAccFile <- file.path("C:", "UserData", "Kym", "PhD", "RAD2", "haplotypes", "HapNetworkTable.xlsx")
PxAccessions <- read_excel(hapAccFile, sheet = "Sheet1", skip = 2)
PxAccessions <- PxAccessions[1:5, 1:8]
# format latex table
PxAccLatex <- PxAccessions %>%
dplyr::select(-Study) %>%
rename(`{No. individuals}` = n,
`Sequence reference` = sequenceRefNo)
colWidth <- "0.75cm"
PausAccAlign <- c("l",
"l",
paste0(">{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
"S[round-mode=places,round-precision=0,table-number-alignment=center, table-figures-decimal=0,table-figures-integer=2]",
"l")
# Jess Saw Haplotypes:
# PxMt01 = DQ394347
# PxMt02 = DQ394348
# PxMt06 = DQ394352
# make a multi-row header using the add to rows argument
# make a multirow header using the add.to.row() argument
PxHeader <- list()
PxHeader$pos <- list(0, 0)
PxHeader$command = c(
"& \\multicolumn{4}{c}{Nucleotide position} & \\\\\n \\cmidrule(lr){2-5}\n",
paste(paste(names(PxAccLatex), collapse = " & "), "\\\\\n", collapse = "")
)
PxAccCap <- "The four variable nucleotide sites among the five \\textit{P. xylostella}
613 bp COI haplotypes identified in 102 individuals from Australia.
Shown are sequences from this study and re-analysed sequences from Landry and Hebert (2013) downloaded
from dx.doi.org//10.5883//DS-PLUT1.
Three haplotypes correspond to those reported by Saw et al. (2006):
PxCOI01/PxMt01, GenBank accession: DQ394347; PxCOI02/PxMt06, GenBank accession: DQ394352;
PxCOI04/PxMt02, GenBank accession: DQ394348.
Nucleotide positions were determined from sequence MF151841.
Only positions that differ from haplotype PxCOI01 are shown."
xtable(PxAccLatex,
caption = PxAccCap,
align = PxAccAlign,
lab = "tab:PxAccessions") %>%
print.xtable(include.rownames = FALSE,
include.colnames = FALSE,
add.to.row = PxHeader,
caption.placement = "top",
table.placement = "h",
booktabs = TRUE,
sanitize.text.function = function(x){x})
# ######################
# Plutella australiana #
# ######################
# format latex table
PausAccLatex <- read_excel(hapAccFile, sheet = "Sheet1", skip = 11) %>%
filter(Haplotype != "Total") %>%
dplyr::select(-Study) %>%
rename(`{No. individuals}` = n,
`Sequence reference` = sequenceRefNo)
names(PausAccLatex)
# make a multi-row header using the add to rows argument
# make a multirow header using the add.to.row() argument
PaHeader <- list()
PaHeader$pos <- list(0, 0)
PaHeader$command = c(
"& \\multicolumn{8}{c}{Nucleotide position} & \\\\\n \\cmidrule(lr){2-9}\n",
paste(paste(names(PausAccLatex), collapse = " & "), "\\\\\n", collapse = "")
)
colWidth <- "0.7cm"
# I add the @{} space, otherwise cols too squashed to centre exactly.
PausAccAlign <- c("l",
"l",
paste0(">{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
"S[round-mode=places,round-precision=0,table-number-alignment=center, table-figures-decimal=0,table-figures-integer=2]",
"l")
PausAccCap <- "The eight variable nucleotide sites among the nine \\textit{P. australiana}
613 bp COI haplotypes identified in 87 individuals from Australia.
Haplotypes PaCOI01 and PaCOI02 were identified among sequences from this study and Landry and Hebert (2013),
and PaCOI04, PaCOI05, PaCOI08 and PaCOI09 were identified from Landry and Hebert (2013).
Nucleotide positions were determined from sequence MF151865.
Only the positions that differ from haplotype PaCOI01 are shown."
xtable(PausAccLatex,
caption = PausAccCap,
align = PausAccAlign,
lab = "tab:PausAccessions") %>%
print.xtable(include.rownames = FALSE,
include.colnames = FALSE,
add.to.row = PaHeader,
caption.placement = "top",
table.placement = "h",
booktabs = TRUE,
sanitize.text.function = function(x){x})
# # End script
# ##################################################################
| /haplotypes.R | no_license | kymperry01/PlutellaCanola | R | false | false | 18,381 | r | # #########################################################################################
# plot haplotype network for Plutella australiana and P. xylostella
# K. Perry, 8/5/2017
#
# Notes:
# This analysis plots haplotype networks for 27 Plutella australiana individuals
# ... from study `KPRAD2`, which was sequenced at the COI gene by S. Baxter, and
# ... 57 individuals from landry and herberts study downloaded as .tsv file.
# Simon downloaded trace files, checked chromatograms, dropped low qual samples,
# ... aligned together with kp samples in geneious v10.0, trimmed to common length,
# ... and exported a new fasta file.
#
# This script does:
# ... reads in the fasta file with trimmed/aligned COI sequecnes for 89 Paus individuals,
# ... ands new sample names, writes new fasta files and plots hapotype networks in R pegas.
#
# #########################################################################################
library(tidyverse)
library(magrittr)
library(readxl)
library(stringr)
library(reshape2)
library(pegas)
library(RColorBrewer)
library(xtable)
# ###############################
# Write new formatted fasta files
# ###############################
# read in the cleaned Landry and Perry COI sequences for each species
# ... (done separately by species, because the species cannot easily be determined from the combined fasta)
paDNA <- read.dna("C:/UserData/Kym/PhD/RAD2/haplotypes/Landry_Perry_Pa_Extract.fasta",
format = "fasta")
pxDNA <- read.dna("C:/UserData/Kym/PhD/RAD2/haplotypes/Landry_Perry_Px_Extract (modified) (modified).fasta",
format = "fasta")
# make a dataframe containing the sequences and names
seqDf <- data.frame(oldNames = labels(paDNA),
seq = sapply(paDNA, paste, collapse = "")) %>%
# filter out the landry Pa consensus sequence
filter(oldNames != "Landry_Pa_Consensus_extraction") %>%
mutate(species = "P.aus") %>%
# combine with the px sequences
bind_rows(data.frame(oldNames = labels(pxDNA),
seq = sapply(pxDNA, paste, collapse = "")) %>%
mutate(species = "P.x")) %>%
magrittr::set_rownames(NULL)
# rename the samples
# ... first, write the old names to file
write.csv(seqDf[, c("oldNames", "species")], "seqNames.csv", row.names = FALSE)
# ... manually add new names to this file in a separate column,
# ... then change filename (to avoid overwrite), read back in
# ... note: to find the sample locations, I cross-referenced the old names with the sample names in `COI summary sheet``
seqNames <- read.csv("C:/UserData/Kym/PhD/RAD2/haplotypes/seqNamesNew.csv")
seqDf %<>% merge(seqNames, by = c("oldNames", "species")) %>%
# add a column identifying landry's and perry's samples
mutate(study = NA)
seqDf$study[grep("-\\d{2}$", seqDf$newNames)] <- "landry"
seqDf$study[grep("-\\d{2}$", seqDf$newNames, invert = TRUE)] <- "perry"
# create additional seqName variables to facilitate plotting haplotype by `study` and `species` factor in R pegas
# ... allows us to determine how many haplotypes discovered by landry, perry
seqDf %<>% mutate(ID = paste(newNames, species, study, sep = "_")) %>%
arrange(species, study, newNames) %>%
dplyr::select(ID, seq, species, study)
# write new fasta files for all samples, P.x samples, and P.aus samples
pxpaFA <- c(rbind(paste0("> ", as.character(seqDf$ID)), as.character(seqDf$seq)))
write.table(pxpaFA, "pxpaCOI.fasta",
col.names = FALSE, row.names = FALSE, quote = FALSE)
pxDf <- filter(seqDf, species == "P.x")
pxFA <- c(rbind(paste0("> ", as.character(pxDf$ID)), as.character(pxDf$seq)))
write.table(pxFA, "pxCOI.fasta",
col.names = FALSE, row.names = FALSE, quote = FALSE)
paDf <- filter(seqDf, species == "P.aus")
paFA <- c(rbind(paste0("> ", as.character(paDf$ID)), as.character(paDf$seq)))
write.table(paFA, "paCOI.fasta",
col.names = FALSE, row.names = FALSE, quote = FALSE)
# ##################################
# Plot haplotype networks in R pegas
#
# ... assign labels
# ... plot network with sample sizes
# ... plot network with the haplotype labels
# ... plot outer circles with smaller sizes (singleton)
# ... import to inkscape
# ##################################
# *********************
# P.x haplotype network
# *********************
pxCOI <- read.dna("C:/UserData/Kym/PhD/RAD2/haplotypes/pxCOI.fasta",
format = "fasta")
xh <- haplotype(pxCOI, labels = NULL)
plot(xh)
xnet <- haploNet(xh)
# set the new labels for xh and xnet
# ... create vector of hap names in the order to assign names to haps
# ... below, print the names/freq attributes here for transparency
pxHapNames <- c("PxCOI04", "PxCOI01", "PxCOI05", "PxCOI03", "PxCOI02")
attr(xh, "dimnames")[[1]]
# [1] "I" "II" "III" "IV" "V"
attr(xh, "dimnames")[[1]] <- pxHapNames
attr(xh, "dimnames")[[1]]
# [1] "PxCOI4" "PxCOI1" "PxCOI5" "PxCOI3" "PxCOI2"
attr(xnet, "freq")
# [1] 23 76 1 1 1
attr(xnet, "labels")
# [1] "I" "II" "III" "IV" "V"
attr(xnet, "labels") <- pxHapNames
attr(xnet, "labels")
# [1] "PxCOI4" "PxCOI1" "PxCOI5" "PxCOI3" "PxCOI3"
# plot a network with the hap labels, and sample sizes
plot(xnet, threshold = 0)
# plot a network labelled with the sample size
xnet2 <- xnet
attr(xnet2, "labels") <- attr(xnet, "freq")
plot(xnet2, threshold = 0)
# export some pdfs
pdf("PxHapNet.pdf")
plot(xnet, threshold = 0)
dev.off()
pdf("PxHapNetSize.pdf")
plot(xnet2, threshold = 0)
dev.off()
# export some svgs
svg("PxHapNet.svg")
plot(xnet, threshold = 0)
dev.off()
svg("PxHapNetSize.svg")
plot(xnet2, threshold = 0)
dev.off()
# ********************************
# P. australiana haplotype network
# ********************************
paCOI <- read.dna("C:/UserData/Kym/PhD/RAD2/haplotypes/paCOI.fasta",
format = "fasta")
ah <- haplotype(paCOI, labels = NULL)
plot(ah)
anet <- haploNet(ah)
# set the new haplotype names for ah and anet
# . label the Pa haplotypes according to the nomenclature in draft HapNetWorkTable.xlsx
# ... view the draft haplotype network table in this file:
# . hapnetTab <- "C:/UserData/Kym/PhD/RAD2/haplotypes/HapNetworkTable.xlsx"
# ... find the index of individuals with each `haplotype`
# ... (to id the samples, use the names in the dna.bin object, from which haplotype object was derived)
# ... then, match the samples with genbaknk accession numbers, and \
# ... \ cross reference these by vewing the genbank accession numbers for each sample in this file:
# . gbPath <- "C:/UserData/Kym/PhD/RAD2/haplotypes/genbankAccessionsPerryPlutellaCOI.txt"
# index the paCOI object to find the accession numbers for haplotypes [[1]] to [[9]]
# ... note that we're indexing the default order of haplotypes
labels(paCOI)[attr(ah, "index")[[1]]] # n = 74 samples, Pa COI 01
labels(paCOI)[attr(ah, "index")[[2]]] # [1] "LNSWA731-05_P.aus_landry", Pa COI 09
labels(paCOI)[attr(ah, "index")[[3]]] # [1] "LSM1299-11_P.aus_landry" , Pa COI 08
labels(paCOI)[attr(ah, "index")[[4]]] # [1] "MCCAA2949-12_P.aus_landry", pa COI 04
labels(paCOI)[attr(ah, "index")[[5]]] # n = 6 samples, Pa COI 02
labels(paCOI)[attr(ah, "index")[[6]]] # [1] "PHLCA920-11_P.aus_landry", pa COI 05
labels(paCOI)[attr(ah, "index")[[7]]] # [1] "dypWAU1011Ca_P.aus_perry", matches accession MF151883, Pa COI 07
labels(paCOI)[attr(ah, "index")[[8]]] # [1] "dypWAU1012Aa_P.aus_perry", matches accession MF151885, Pa COI 03
labels(paCOI)[attr(ah, "index")[[9]]] # [1] "wagNSW0201Ga_P.aus_perry", matches accession MF151836, pa COI 06
# ... create vector of hap names in the order above to assign names to haps
# ... below, also print the names/freq attributes here for transparency
paHapNames <- c("PaCOI01", "PaCOI09", "PaCOI08", "PaCOI04", "PaCOI02",
"PaCOI05", "PaCOI07", "PaCOI03", "PaCOI06")
attr(ah, "dimnames")[[1]]
# [1] "I" "II" "III" "IV" "V" "VI" "VII" "VIII" "IX"
attr(ah, "dimnames")[[1]] <- paHapNames
attr(ah, "dimnames")[[1]]
# [1] "PaCOI01" "PaCOI09" "PaCOI08" "PaCOI04" "PaCOI02" "PaCOI05" "PaCOI07" "PaCOI03" "PaCOI06"
attr(anet, "freq")
# [1] 74 1 1 1 6 1 1 1 1
attr(anet, "labels")
# [1] "I" "II" "III" "IV" "V" "VI" "VII" "VIII" "IX"
attr(anet, "labels") <- paHapNames
attr(anet, "labels")
# [1] "PaCOI01" "PaCOI09" "PaCOI08" "PaCOI04" "PaCOI02" "PaCOI05" "PaCOI07" "PaCOI03" "PaCOI06"
# plot a network with the hap labels, and sample sizes
plot(anet, threshold = 0)
# plot a network labelled with the sample size
anet2 <- anet
attr(anet2, "labels") <- attr(anet, "freq")
plot(anet2, threshold = 0)
# export some pdfs
pdf("PaHapNet.pdf")
plot(anet, threshold = 0)
dev.off()
pdf("PaHapNetSize.pdf")
plot(anet2, threshold = 0)
dev.off()
# export some svgs
svg("PaHapNet.svg")
plot(anet, threshold = 0)
dev.off()
svg("PaHapNetSize.svg")
plot(anet2, threshold = 0)
dev.off()
# calculate sample sizes, frequency by study (landry vs perry)
(PxFreqStudy <- haploFreq(pxCOI, split = "_", what = 3))
(PaFreqStudy <- haploFreq(paCOI, split = "_", what = 3))
data.frame(landryPx = PxFreqStudy[, "landry"],
perryPx = PxFreqStudy[, "perry"])
# landryPx perryPx
# 1 14 9
# 2 41 35
# 3 1 0
# 4 1 0
# 5 1 0
data.frame(N_landryPx = sum(PxFreqStudy[, "landry"]),
N_perryPx = sum(PxFreqStudy[, "perry"]))
# N_landryPx N_perryPx
# 1 58 44
data.frame(landryPaus = PaFreqStudy[, "landry"],
perryPaus = PaFreqStudy[, "perry"])
# landryPaus perryPaus
# 1 42 32
# 2 1 0
# 3 1 0
# 4 1 0
# 5 4 2
# 6 1 0
# 7 0 1
# 8 0 1
# 9 0 1
data.frame(N_landryPaus = sum(PaFreqStudy[, "landry"]),
N_perryPaus = sum(PaFreqStudy[, "perry"]))
# N_landryPaus N_perryPaus
# 1 50 37
# #####################################################
# extract the meta data for perry P.x and P.aus samples
# ... for submission to genbank
# #####################################################
# first read in the metadata for each sample and population
# ... variable `pos` contains the unique plate and well coordinates for each individual
genotypesPath <- "C:/UserData/Kym/PhD/RAD2/genotypesMasterRAD2.xlsx"
s <- "skip"
t <- "text"
colT <- c(s,t,t,s,s,s,s,s,"numeric",t,t,s,s,t,t,t,t,s,s,s,s,s,s,s,s,s)
metaData <- read_excel(genotypesPath, sheet = "genotypes-master",
col_types = colT) %>%
mutate(plate = gsub("JF_2", "JF02", plate) %>%
gsub("JF_1", "JF01", .) %>%
str_pad(side = "left", width = 2, pad = 0),
position = str_pad(position, side = "left", width = 3, pad = 0),
pos = paste0(plate, position)) %>%
dplyr::select(pos, popCode, location, state, host, hostType, gender) %>%
merge(read_excel(genotypesPath, sheet = "pops-master") %>%
filter(popCode != 9.1) %>%
mutate(popCode = as.numeric(popCode),
collectionDate = format(as.Date(collectionDate), "%d-%b-%Y")) %>%
dplyr::select(popCode, latitude, longitude,
stageCollected, collectionDate, collector), by = c("popCode"))
# get the perry seqNames, extract well and plate positions, merge with metaData
renameHost <- function(x) {
gsub("c", "Canola", x) %>%
gsub("wt", "Wild turnip", .) %>%
gsub("v", "Brassica vegetables", .) %>%
gsub("lw", "Diplotaxis sp.", .) %>%
gsub("f", "Forage brassica", .) %>%
gsub("wr", "Wild radish", .)
}
seqPerry <- seqDf %>%
filter(study == "perry") %>%
mutate(ID = gsub("_(.)+", "", ID),
pos = str_extract(ID,"(JF)?[0-9]{4}[A-H]{1}"),
plate = str_extract(pos, "(JF)?\\d{2}"),
well = str_extract(pos, "\\d{2}[A-H]$")) %>%
dplyr::select(pos, ID, species, seq) %>%
merge(metaData, by = "pos") %>%
mutate(Country = "Australia",
Lat_Lon = paste(round(latitude, 2), "N",
round(longitude, 2), "E"),
host = renameHost(host),
Isolation_source = paste(location, state),
species = gsub("P.x", "Plutella xylostella", species) %>%
gsub("P.aus", "Plutella australiana", .),
gender = gsub("f$", "Female", gender) %>%
gsub("m$", "Male", .) %>%
gsub("u$", "Unknown", .)) %>%
dplyr::select(Sequence_ID = ID, Collection_date = collectionDate,
Collected_by = collector, Country, Lat_Lon,
Host = host, Isolate = pos, Isolation_source, Species = species, Sex = gender, Sequence = seq)
write.csv(seqPerry, "perry.81PxPa.612bpCOI.metadata.csv")
# write a fasta file with just the perry samples (for submission to genbank)
perryFA <- c(rbind(paste0("> ", as.character(seqPerry$Sequence_ID)), as.character(seqPerry$Sequence)))
write.table(perryFA, "perry.81PxPa.612bpCOI.fasta",
col.names = FALSE, row.names = FALSE, quote = FALSE)
# #######################################################################
# Make a Latex table for the GenBank accession numbers for the Haplotypes
# ... supplementary tables
# #######################################################################
# ====================
# Plutella xylostella
# ====================
# Read in the summary table (from Simon Baxter) containing an example accession
# for each haplotype
hapAccFile <- file.path("C:", "UserData", "Kym", "PhD", "RAD2", "haplotypes", "HapNetworkTable.xlsx")
PxAccessions <- read_excel(hapAccFile, sheet = "Sheet1", skip = 2)
PxAccessions <- PxAccessions[1:5, 1:8]
# format latex table
PxAccLatex <- PxAccessions %>%
dplyr::select(-Study) %>%
rename(`{No. individuals}` = n,
`Sequence reference` = sequenceRefNo)
colWidth <- "0.75cm"
PausAccAlign <- c("l",
"l",
paste0(">{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
"S[round-mode=places,round-precision=0,table-number-alignment=center, table-figures-decimal=0,table-figures-integer=2]",
"l")
# Jess Saw Haplotypes:
# PxMt01 = DQ394347
# PxMt02 = DQ394348
# PxMt06 = DQ394352
# make a multi-row header using the add to rows argument
# make a multirow header using the add.to.row() argument
PxHeader <- list()
PxHeader$pos <- list(0, 0)
PxHeader$command = c(
"& \\multicolumn{4}{c}{Nucleotide position} & \\\\\n \\cmidrule(lr){2-5}\n",
paste(paste(names(PxAccLatex), collapse = " & "), "\\\\\n", collapse = "")
)
PxAccCap <- "The four variable nucleotide sites among the five \\textit{P. xylostella}
613 bp COI haplotypes identified in 102 individuals from Australia.
Shown are sequences from this study and re-analysed sequences from Landry and Hebert (2013) downloaded
from dx.doi.org//10.5883//DS-PLUT1.
Three haplotypes correspond to those reported by Saw et al. (2006):
PxCOI01/PxMt01, GenBank accession: DQ394347; PxCOI02/PxMt06, GenBank accession: DQ394352;
PxCOI04/PxMt02, GenBank accession: DQ394348.
Nucleotide positions were determined from sequence MF151841.
Only positions that differ from haplotype PxCOI01 are shown."
xtable(PxAccLatex,
caption = PxAccCap,
align = PxAccAlign,
lab = "tab:PxAccessions") %>%
print.xtable(include.rownames = FALSE,
include.colnames = FALSE,
add.to.row = PxHeader,
caption.placement = "top",
table.placement = "h",
booktabs = TRUE,
sanitize.text.function = function(x){x})
# ######################
# Plutella australiana #
# ######################
# format latex table
PausAccLatex <- read_excel(hapAccFile, sheet = "Sheet1", skip = 11) %>%
filter(Haplotype != "Total") %>%
dplyr::select(-Study) %>%
rename(`{No. individuals}` = n,
`Sequence reference` = sequenceRefNo)
names(PausAccLatex)
# make a multi-row header using the add to rows argument
# make a multirow header using the add.to.row() argument
PaHeader <- list()
PaHeader$pos <- list(0, 0)
PaHeader$command = c(
"& \\multicolumn{8}{c}{Nucleotide position} & \\\\\n \\cmidrule(lr){2-9}\n",
paste(paste(names(PausAccLatex), collapse = " & "), "\\\\\n", collapse = "")
)
colWidth <- "0.7cm"
# I add the @{} space, otherwise cols too squashed to centre exactly.
PausAccAlign <- c("l",
"l",
paste0(">{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
paste0("@{}>{\\centering\\arraybackslash}p{", colWidth, "}"),
"S[round-mode=places,round-precision=0,table-number-alignment=center, table-figures-decimal=0,table-figures-integer=2]",
"l")
PausAccCap <- "The eight variable nucleotide sites among the nine \\textit{P. australiana}
613 bp COI haplotypes identified in 87 individuals from Australia.
Haplotypes PaCOI01 and PaCOI02 were identified among sequences from this study and Landry and Hebert (2013),
and PaCOI04, PaCOI05, PaCOI08 and PaCOI09 were identified from Landry and Hebert (2013).
Nucleotide positions were determined from sequence MF151865.
Only the positions that differ from haplotype PaCOI01 are shown."
xtable(PausAccLatex,
caption = PausAccCap,
align = PausAccAlign,
lab = "tab:PausAccessions") %>%
print.xtable(include.rownames = FALSE,
include.colnames = FALSE,
add.to.row = PaHeader,
caption.placement = "top",
table.placement = "h",
booktabs = TRUE,
sanitize.text.function = function(x){x})
# # End script
# ##################################################################
|
\docType{package}
\name{ggphylo}
\alias{ggphylo}
\alias{ggphylo-package}
\alias{package-ggphylo}
\title{Plots a tree or a list of trees using \code{\link{ggplot}}.}
\usage{
ggphylo(x, extra.data = NA, layout = "default",
do.plot = T, facet.trees = T, x.lab = "Branch Length",
x.lim = NA, x.expand = c(0.05, 0.3), y.lim = NA, ...)
}
\arguments{
\item{x}{input phylo object, list of phylo objects, or a
Newick- or NHX-format tree string. When a list is given,
all trees will be arranged using \code{\link{facet_wrap}}
into a grid of plots. Trees of different shapes and sizes
can be included; the plotting area will be expanded to
fit the largest input tree.}
\item{extra.data}{an optional data.frame or string
pointing to a CSV file, containing data to be linked to
the input phylo object(s). The input data must contain a
column named 'label', which will be used to match each
row in the data to a node in the input tree(s). Defaults
to NA.}
\item{layout}{a string indicating how the tree will be
laid out. Defaults to 'defualt'. Available layouts
include: \enumerate{ \item{default} The 'default' layout,
which arranges the tree in the standard 'phylogram'
arrangement with branch length along the x-axis and tree
nodes evenly spaced along the y-axis. \item{unrooted}
The 'unrooted' layout, which arranges the tree from
root-to-tip giving equal viewing angle to each clade
according to its size. \item{radial} The 'radial'
layout, which is equivalent to the 'default' layout but
plotted in polar coordinates. }}
\item{do.plot}{a boolean indicating whether the plot
should be printed to the current graphics device. When
FALSE, the plot is not printed; the graphics grob is
silently returned. Defaults to TRUE.}
\item{plot.internals}{boolean, whether the labels of
internal nodes should be shown. Defaults to FALSE.}
\item{x.lab}{string, the label given to the x-axis (or,
when layout='unrooted', given to both axes). Defaults to
'Branch Length'.}
\item{x.lim}{vector of length 2, the lower and upper
limits to apply to the x-axis. Defaults to NA, in which
case the plot is expanded to include the entire tree.}
\item{y.lim}{vector of length 2, the lower and upper
limits to apply to the y-axis. Defaults to NA, in which
case the plot is expanded to include the entire tree.}
\item{x.expand}{vector of length 2, the fraction by which
to expand the x-axis limits to the left and to the right.
This is useful to allow extra space for trees with long
labels or large node sizes. Defaults to c(0.05, 0.3)
which extends the x-axis limits 5% to the left and 30% to
the right of the default size.}
\item{[line|node|label].[size|color|alpha].by}{string,
indicating the name of a tag value by which to modulate
the given visual property. A value of 0 will be given to
nodes which do not contain the given tag, and unless a
'x.y.scale' parameter is given (see below), the default
\code{\link{ggplot}} scaling and size/color/alpha rules
will be applied. The combination of 3 visual elements
and 3 visual properties creates 9 possible parameters to
choose from: \enumerate{ \item{line.size.by}
\item{line.color.by} \item{line.alpha.by}
\item{node.size.by} \item{node.color.by}
\item{node.alpha.by} \item{label.size.by}
\item{label.color.by} \item{label.alpha.by} }}
\item{[line|node|label].[size|color|alpha].scale}{function,
used in conjunction with a corresponding 'x.y.by'
parameter (e.g. node.scale.by) to specify the type,
limits and range of the visual scaling. The value is
usually the result of one of the \code{\link{ggplot}}
scale convenience functions, such as: \enumerate{
\item{node.size.scale=scale_size_continuous(limits=c(0,100),
range=c(1, 5))} } In this example, 'limits' controls the
range of tag values to be shown, and 'range' controls the
range of the resulting visual scale (i.e. the node size
will range from 1 to 5). See
\code{\link{scale_size_continuous}} or the other ggplot
scale_x_y functions for more info.}
\item{[line|node|label].[size|color|alpha]}{string or
numeric, used to specify a constant value for a visual
parameter if no data-based scaling is specified. Default
values are: \enumerate{ \item{line.size} 1
\item{line.color} '#888888' \item{line.alpha} 1
\item{node.size} 1.5 \item{node.color} '#333333'
\item{node.alpha} 1 \item{label.size} 3
\item{label.color} 'black' \item{label.alpha} 1 }}
}
\value{
the \code{\link{ggplot}} grob list.
}
\description{
Plots a tree or a list of trees using
\code{\link{ggplot}}.
\code{ggphylo} provides convenient functions and tools
for visualizing phylogenetic data in R. Many methods are
provided to simplify the process of working with
\code{phylo} objects (e.g., \code{\link{tree.scale.to}}
and \code{\link{tree.translate}} , while two plotting
methods allow for a wide range of expressive plots:
\code{\link{tree.plot}} and \code{\link{aln.plot}}
}
| /man/ggphylo.Rd | no_license | gjuggler/ggphylo | R | false | false | 5,028 | rd | \docType{package}
\name{ggphylo}
\alias{ggphylo}
\alias{ggphylo-package}
\alias{package-ggphylo}
\title{Plots a tree or a list of trees using \code{\link{ggplot}}.}
\usage{
ggphylo(x, extra.data = NA, layout = "default",
do.plot = T, facet.trees = T, x.lab = "Branch Length",
x.lim = NA, x.expand = c(0.05, 0.3), y.lim = NA, ...)
}
\arguments{
\item{x}{input phylo object, list of phylo objects, or a
Newick- or NHX-format tree string. When a list is given,
all trees will be arranged using \code{\link{facet_wrap}}
into a grid of plots. Trees of different shapes and sizes
can be included; the plotting area will be expanded to
fit the largest input tree.}
\item{extra.data}{an optional data.frame or string
pointing to a CSV file, containing data to be linked to
the input phylo object(s). The input data must contain a
column named 'label', which will be used to match each
row in the data to a node in the input tree(s). Defaults
to NA.}
\item{layout}{a string indicating how the tree will be
laid out. Defaults to 'defualt'. Available layouts
include: \enumerate{ \item{default} The 'default' layout,
which arranges the tree in the standard 'phylogram'
arrangement with branch length along the x-axis and tree
nodes evenly spaced along the y-axis. \item{unrooted}
The 'unrooted' layout, which arranges the tree from
root-to-tip giving equal viewing angle to each clade
according to its size. \item{radial} The 'radial'
layout, which is equivalent to the 'default' layout but
plotted in polar coordinates. }}
\item{do.plot}{a boolean indicating whether the plot
should be printed to the current graphics device. When
FALSE, the plot is not printed; the graphics grob is
silently returned. Defaults to TRUE.}
\item{plot.internals}{boolean, whether the labels of
internal nodes should be shown. Defaults to FALSE.}
\item{x.lab}{string, the label given to the x-axis (or,
when layout='unrooted', given to both axes). Defaults to
'Branch Length'.}
\item{x.lim}{vector of length 2, the lower and upper
limits to apply to the x-axis. Defaults to NA, in which
case the plot is expanded to include the entire tree.}
\item{y.lim}{vector of length 2, the lower and upper
limits to apply to the y-axis. Defaults to NA, in which
case the plot is expanded to include the entire tree.}
\item{x.expand}{vector of length 2, the fraction by which
to expand the x-axis limits to the left and to the right.
This is useful to allow extra space for trees with long
labels or large node sizes. Defaults to c(0.05, 0.3)
which extends the x-axis limits 5% to the left and 30% to
the right of the default size.}
\item{[line|node|label].[size|color|alpha].by}{string,
indicating the name of a tag value by which to modulate
the given visual property. A value of 0 will be given to
nodes which do not contain the given tag, and unless a
'x.y.scale' parameter is given (see below), the default
\code{\link{ggplot}} scaling and size/color/alpha rules
will be applied. The combination of 3 visual elements
and 3 visual properties creates 9 possible parameters to
choose from: \enumerate{ \item{line.size.by}
\item{line.color.by} \item{line.alpha.by}
\item{node.size.by} \item{node.color.by}
\item{node.alpha.by} \item{label.size.by}
\item{label.color.by} \item{label.alpha.by} }}
\item{[line|node|label].[size|color|alpha].scale}{function,
used in conjunction with a corresponding 'x.y.by'
parameter (e.g. node.scale.by) to specify the type,
limits and range of the visual scaling. The value is
usually the result of one of the \code{\link{ggplot}}
scale convenience functions, such as: \enumerate{
\item{node.size.scale=scale_size_continuous(limits=c(0,100),
range=c(1, 5))} } In this example, 'limits' controls the
range of tag values to be shown, and 'range' controls the
range of the resulting visual scale (i.e. the node size
will range from 1 to 5). See
\code{\link{scale_size_continuous}} or the other ggplot
scale_x_y functions for more info.}
\item{[line|node|label].[size|color|alpha]}{string or
numeric, used to specify a constant value for a visual
parameter if no data-based scaling is specified. Default
values are: \enumerate{ \item{line.size} 1
\item{line.color} '#888888' \item{line.alpha} 1
\item{node.size} 1.5 \item{node.color} '#333333'
\item{node.alpha} 1 \item{label.size} 3
\item{label.color} 'black' \item{label.alpha} 1 }}
}
\value{
the \code{\link{ggplot}} grob list.
}
\description{
Plots a tree or a list of trees using
\code{\link{ggplot}}.
\code{ggphylo} provides convenient functions and tools
for visualizing phylogenetic data in R. Many methods are
provided to simplify the process of working with
\code{phylo} objects (e.g., \code{\link{tree.scale.to}}
and \code{\link{tree.translate}} , while two plotting
methods allow for a wide range of expressive plots:
\code{\link{tree.plot}} and \code{\link{aln.plot}}
}
|
dataFile <- "./data/household_power_consumption.txt"
data <- read.table(dataFile, header=TRUE, sep=";", stringsAsFactors=FALSE, dec=".")
subSetData <- data[data$Date %in% c("1/2/2007","2/2/2007") ,]
#str(subSetData)
datetime <- strptime(paste(subSetData$Date, subSetData$Time, sep=" "), "%d/%m/%Y %H:%M:%S")
globalActivePower <- as.numeric(subSetData$Global_active_power)
globalReactivePower <- as.numeric(subSetData$Global_reactive_power)
voltage <- as.numeric(subSetData$Voltage)
subMetering1 <- as.numeric(subSetData$Sub_metering_1)
subMetering2 <- as.numeric(subSetData$Sub_metering_2)
subMetering3 <- as.numeric(subSetData$Sub_metering_3)
png("plot4.png", width=480, height=480)
par(mfrow = c(2, 2))
plot(datetime, globalActivePower, type="l", xlab="", ylab="Global Active Power", cex=0.2)
plot(datetime, voltage, type="l", xlab="datetime", ylab="Voltage")
plot(datetime, subMetering1, type="l", ylab="Energy Submetering", xlab="")
lines(datetime, subMetering2, type="l", col="red")
lines(datetime, subMetering3, type="l", col="blue")
legend("topright", c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), lty=, lwd=2.5, col=c("black", "red", "blue"), bty="o")
plot(datetime, globalReactivePower, type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off()
| /plot4.R | no_license | elixir75/ExData_Plotting1 | R | false | false | 1,311 | r | dataFile <- "./data/household_power_consumption.txt"
data <- read.table(dataFile, header=TRUE, sep=";", stringsAsFactors=FALSE, dec=".")
subSetData <- data[data$Date %in% c("1/2/2007","2/2/2007") ,]
#str(subSetData)
datetime <- strptime(paste(subSetData$Date, subSetData$Time, sep=" "), "%d/%m/%Y %H:%M:%S")
globalActivePower <- as.numeric(subSetData$Global_active_power)
globalReactivePower <- as.numeric(subSetData$Global_reactive_power)
voltage <- as.numeric(subSetData$Voltage)
subMetering1 <- as.numeric(subSetData$Sub_metering_1)
subMetering2 <- as.numeric(subSetData$Sub_metering_2)
subMetering3 <- as.numeric(subSetData$Sub_metering_3)
png("plot4.png", width=480, height=480)
par(mfrow = c(2, 2))
plot(datetime, globalActivePower, type="l", xlab="", ylab="Global Active Power", cex=0.2)
plot(datetime, voltage, type="l", xlab="datetime", ylab="Voltage")
plot(datetime, subMetering1, type="l", ylab="Energy Submetering", xlab="")
lines(datetime, subMetering2, type="l", col="red")
lines(datetime, subMetering3, type="l", col="blue")
legend("topright", c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"), lty=, lwd=2.5, col=c("black", "red", "blue"), bty="o")
plot(datetime, globalReactivePower, type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off()
|
# R Diary of 9/14/15
# Sample of size 6 from the set {1,2,...,49} (by default without replacement)
sample(49,6)
[1] 37 30 19 4 16 45
# Sample with replacement. Repeated entries in the sample are now possible.
sample(49,6, replace =T)
[1] 17 44 16 16 5 31
# Read in the baby names text file (available in Blackboard) and change the names.
# Then take a look at the file.
yob1990 <- read.csv("E:/ANLY511/sept14/babynames/yob1990.txt", header=FALSE, stringsAsFactors=FALSE)
names(yob1990) <- c("name","gender","count")
head(yob1990)
name gender count
1 Jessica F 46466
2 Ashley F 45549
3 Brittany F 36535
4 Amanda F 34406
5 Samantha F 25864
6 Sarah F 25808
# How many male names and how many female names?
table(yob1990$gender)
F M
15231 9482
# Make a sample of size 30 of names from babies born in 1990.
elclass <- sample(yob1990$name, 30, prob = yob1990$count)
elclass
[1] "Christian" "Ashley" "Rachel" "Laina"
[5] "Steven" "Alex" "Olivia" "Rebecca"
[9] "Tera" "Lauren" "Carlos" "Chanelle"
[13] "Sarah" "Casey" "Allison" "Michael"
[17] "Dylan" "Mikhail" "Lachanda" "Jared"
[21] "Megan" "Jordyn" "Justin" "Brittany"
[25] "Katherine" "Mitchell" "Cynthia" "Jessica"
[29] "Meghan" "Johnny"
# Repeat, with replacement, since there might be several children with the same first name in a class.
elclass <- sample(yob1990$name, 30, prob = yob1990$count, replace = T)
sort(elclass)
[1] "Alvin" "Angela" "Boyd" "Caroline"
[5] "Chaim" "Christian" "Christopher" "Connor"
[9] "Courtney" "Daniel" "Erin" "Geoffrey"
[13] "Ian" "Kala" "Kevin" "Kyle"
[17] "Martez" "Mattie" "Michael" "Natalie"
[21] "Nathaniel" "Randall" "Rebecca" "Ryan"
[25] "Samuel" "Sean" "Shawn" "Vashti"
[29] "Victoria" "Yolanda"
# Compute the actual probability of each name from the count vector
# and write it in a new column in the data frame.
yob1990$prob <- yob1990$count/sum(yob1990$count)
# Note that some names are very rare. For example,
# There are only 25 children with first name "Kelcee" who were born in the US in that year.
yob1990[yob1990$name == "Kelcee",]
name gender count prob
4060 Kelcee F 25 6.32871e-06
# Some names appear both as male and female first names.
yob1990[yob1990$name == "Dana",]
name gender count prob
114 Dana F 2932 0.0007422311
15762 Dana M 347 0.0000878425
# How common was your first name?
yob1990[yob1990$name == "Arif",]
name gender count prob
19021 Arif M 15 3.797226e-06
yob1990[yob1990$name == "Jordan",]
name gender count prob
59 Jordan F 5954 0.001507246
15260 Jordan M 16128 0.004082778
# Work with binomial distribution
# 1 random number, n = 20, p = 2/7
rbinom(1,20,2/7)
[1] 3
rbinom(1,20,2/7)
[1] 8
# Make 10 random numbers
rbinom(10,20,2/7)
[1] 8 3 6 7 2 7 5 6 8 3
# Probability mass function for x = 7
dbinom(7,20,2/7)
[1] 0.1518004
# Cumulative distribution function for x = 7
pbinom(7,20,2/7)
[1] 0.8138364
# Quantile for x = .2
# This means that P(X < 4 ) <= .2 and P(X <= 4) = .2.
qbinom(.2,20,2/7)
[1] 4
pbinom(4,20,2/7)
[1] 0.2825349
pbinom(3,20,2/7)
[1] 0.1342923
# Make a histogram of a large random sample.
x <- rbinom(1000,20,2/7)
hist(x)
# A better view: plot a table.
table(x)
x
0 1 2 3 4 5 6 7 8 9 10 11 12 13
1 9 32 83 162 172 192 174 94 51 19 8 2 1
plot(table(x))
# Now compute the probability mass function for all values from 0 to 20
# and plot it in the same way.
x1 <- 0:20
plot(x1,dbinom(x1,20,2/7), type = 'h')
# The plots are similar, but of course not the same.
# Make a plot of the cumulative distribution function as a staircase.
plot(x1,pbinom(x1,20,2/7), type = 's')
# We can also plot the "empirical cumulative distribution function"
# of the random sample x that was created earlier.
plot.ecdf(x)
# By itself, the function ecdf() creates a function from a given sample
# that can be used to compute the empirical cumulative distribution function
# for any argument.
z <- ecdf(x)
# 45.9% of the sample is <= 5.
z(5)
[1] 0.459
# The theoretical value is 47.23%.
pbinom(5,20,2/7)
[1] 0.4722854
# We can plot the empirical cumulative distribution function
# against the cumulative distribution function.
# The result is nearly a straight line.
plot(z(x), pbinom(x,20,2/7))
# Compare a binomial distribution with large n and small p
# with a Poisson distribution with lambda = np
# using a simulation.
xbinom <- rbinom(1000,100,.05) # Make a binomial random sample of size 1000
xpois <- rpois(1000,5) # Make a possible random sample of the same sides
# Histograms of the two distributions. The distributions are very similar.
plot(table(xbinom))
plot(table(xpois))
# Now let's compare the theoretical probabilities.
x <- 0:20 # These are the values at which distributions will be evaluated
plot(x,ppois(x,5), type = 's') # Staircase plot of the cumulative distribution function of the Poisson distribution
points(x,pbinom(x,100,.05), col = 2, lwd = 3) # Plot the values of the cdf of the binomial distribution in the same plot
points(x,ppois(x,5), col = 4, lwd = 2) # Plot the values of the cdf of the Poisson distribution in the same plot again
# The plots show that the two distributions are very similar.
# Work with exponential distribution. Random sample of size 1000 and plot of the empirical cdf:
x1 <- rexp(1000,1)
plot.ecdf(x1)
# Make two more samples from the same distribution
x2 <- rexp(1000,1)
x3 <- rexp(1000,1)
# We want to make a new random variable which consists of the minimum of x1, x2, x3.
# Here's how this can be done with a for loop:
xmin <- rep(0,1000)
for (j in 1:1000) xmin[j] <- min(c(x1[j], x2[j], x3[j]))
# Alternative, using matrix operations and sapply (not done in class)
A <- matrix(c(x1,x2,x3),ncol = 3)
mymin = function(j) min(A[j,])
xmin1 <- sapply(1:1000,mymin)
# The empirical cdf and the histogram suggest that this has also an exponential distribution.
plot.ecdf(xmin)
hist(xmin,breaks = 20)
# Do the same thing for maxima, first with a for loop.
xmax <- rep(0,1000)
for (j in 1:1000) xmax[j] <- max(c(x1[j], x2[j], x3[j]))
# Alternatively, using matrix operations and sapply
mymax = function(j) max(A[j,])
xmax1 <- sapply(1:1000,mymax)
# Plot the empirical cdf and make a histogram.
# This is not an exponentially distributed random variable.
plot.ecdf(xmax)
hist(xmax,breaks = 20)
# Finally, compute the sums of x1, x2, x3. First with a for loop:
xsum <- rep(0,1000)
for (j in 1:1000) xsum[j] <- sum(c(x1[j], x2[j], x3[j]))
# Or with a built-in function that computes row sums:
xsum1 <- rowSums(A)
# The plots show that this is again not an exponentially distributed random variable.
hist(xsum,breaks = 20)
plot.ecdf(xsum)
########### End of diary of 9/14/15 | /Lecture/diary0914.r | no_license | arifyali/Probabilistic-and-Statistical-Modeling-Fall-2015 | R | false | false | 7,085 | r | # R Diary of 9/14/15
# Sample of size 6 from the set {1,2,...,49} (by default without replacement)
sample(49,6)
[1] 37 30 19 4 16 45
# Sample with replacement. Repeated entries in the sample are now possible.
sample(49,6, replace =T)
[1] 17 44 16 16 5 31
# Read in the baby names text file (available in Blackboard) and change the names.
# Then take a look at the file.
yob1990 <- read.csv("E:/ANLY511/sept14/babynames/yob1990.txt", header=FALSE, stringsAsFactors=FALSE)
names(yob1990) <- c("name","gender","count")
head(yob1990)
name gender count
1 Jessica F 46466
2 Ashley F 45549
3 Brittany F 36535
4 Amanda F 34406
5 Samantha F 25864
6 Sarah F 25808
# How many male names and how many female names?
table(yob1990$gender)
F M
15231 9482
# Make a sample of size 30 of names from babies born in 1990.
elclass <- sample(yob1990$name, 30, prob = yob1990$count)
elclass
[1] "Christian" "Ashley" "Rachel" "Laina"
[5] "Steven" "Alex" "Olivia" "Rebecca"
[9] "Tera" "Lauren" "Carlos" "Chanelle"
[13] "Sarah" "Casey" "Allison" "Michael"
[17] "Dylan" "Mikhail" "Lachanda" "Jared"
[21] "Megan" "Jordyn" "Justin" "Brittany"
[25] "Katherine" "Mitchell" "Cynthia" "Jessica"
[29] "Meghan" "Johnny"
# Repeat, with replacement, since there might be several children with the same first name in a class.
elclass <- sample(yob1990$name, 30, prob = yob1990$count, replace = T)
sort(elclass)
[1] "Alvin" "Angela" "Boyd" "Caroline"
[5] "Chaim" "Christian" "Christopher" "Connor"
[9] "Courtney" "Daniel" "Erin" "Geoffrey"
[13] "Ian" "Kala" "Kevin" "Kyle"
[17] "Martez" "Mattie" "Michael" "Natalie"
[21] "Nathaniel" "Randall" "Rebecca" "Ryan"
[25] "Samuel" "Sean" "Shawn" "Vashti"
[29] "Victoria" "Yolanda"
# Compute the actual probability of each name from the count vector
# and write it in a new column in the data frame.
yob1990$prob <- yob1990$count/sum(yob1990$count)
# Note that some names are very rare. For example,
# There are only 25 children with first name "Kelcee" who were born in the US in that year.
yob1990[yob1990$name == "Kelcee",]
name gender count prob
4060 Kelcee F 25 6.32871e-06
# Some names appear both as male and female first names.
yob1990[yob1990$name == "Dana",]
name gender count prob
114 Dana F 2932 0.0007422311
15762 Dana M 347 0.0000878425
# How common was your first name?
yob1990[yob1990$name == "Arif",]
name gender count prob
19021 Arif M 15 3.797226e-06
yob1990[yob1990$name == "Jordan",]
name gender count prob
59 Jordan F 5954 0.001507246
15260 Jordan M 16128 0.004082778
# Work with binomial distribution
# 1 random number, n = 20, p = 2/7
rbinom(1,20,2/7)
[1] 3
rbinom(1,20,2/7)
[1] 8
# Make 10 random numbers
rbinom(10,20,2/7)
[1] 8 3 6 7 2 7 5 6 8 3
# Probability mass function for x = 7
dbinom(7,20,2/7)
[1] 0.1518004
# Cumulative distribution function for x = 7
pbinom(7,20,2/7)
[1] 0.8138364
# Quantile for x = .2
# This means that P(X < 4 ) <= .2 and P(X <= 4) = .2.
qbinom(.2,20,2/7)
[1] 4
pbinom(4,20,2/7)
[1] 0.2825349
pbinom(3,20,2/7)
[1] 0.1342923
# Make a histogram of a large random sample.
x <- rbinom(1000,20,2/7)
hist(x)
# A better view: plot a table.
table(x)
x
0 1 2 3 4 5 6 7 8 9 10 11 12 13
1 9 32 83 162 172 192 174 94 51 19 8 2 1
plot(table(x))
# Now compute the probability mass function for all values from 0 to 20
# and plot it in the same way.
x1 <- 0:20
plot(x1,dbinom(x1,20,2/7), type = 'h')
# The plots are similar, but of course not the same.
# Make a plot of the cumulative distribution function as a staircase.
plot(x1,pbinom(x1,20,2/7), type = 's')
# We can also plot the "empirical cumulative distribution function"
# of the random sample x that was created earlier.
plot.ecdf(x)
# By itself, the function ecdf() creates a function from a given sample
# that can be used to compute the empirical cumulative distribution function
# for any argument.
z <- ecdf(x)
# 45.9% of the sample is <= 5.
z(5)
[1] 0.459
# The theoretical value is 47.23%.
pbinom(5,20,2/7)
[1] 0.4722854
# We can plot the empirical cumulative distribution function
# against the cumulative distribution function.
# The result is nearly a straight line.
plot(z(x), pbinom(x,20,2/7))
# Compare a binomial distribution with large n and small p
# with a Poisson distribution with lambda = np
# using a simulation.
xbinom <- rbinom(1000,100,.05) # Make a binomial random sample of size 1000
xpois <- rpois(1000,5) # Make a possible random sample of the same sides
# Histograms of the two distributions. The distributions are very similar.
plot(table(xbinom))
plot(table(xpois))
# Now let's compare the theoretical probabilities.
x <- 0:20 # These are the values at which distributions will be evaluated
plot(x,ppois(x,5), type = 's') # Staircase plot of the cumulative distribution function of the Poisson distribution
points(x,pbinom(x,100,.05), col = 2, lwd = 3) # Plot the values of the cdf of the binomial distribution in the same plot
points(x,ppois(x,5), col = 4, lwd = 2) # Plot the values of the cdf of the Poisson distribution in the same plot again
# The plots show that the two distributions are very similar.
# Work with exponential distribution. Random sample of size 1000 and plot of the empirical cdf:
x1 <- rexp(1000,1)
plot.ecdf(x1)
# Make two more samples from the same distribution
x2 <- rexp(1000,1)
x3 <- rexp(1000,1)
# We want to make a new random variable which consists of the minimum of x1, x2, x3.
# Here's how this can be done with a for loop:
xmin <- rep(0,1000)
for (j in 1:1000) xmin[j] <- min(c(x1[j], x2[j], x3[j]))
# Alternative, using matrix operations and sapply (not done in class)
A <- matrix(c(x1,x2,x3),ncol = 3)
mymin = function(j) min(A[j,])
xmin1 <- sapply(1:1000,mymin)
# The empirical cdf and the histogram suggest that this has also an exponential distribution.
plot.ecdf(xmin)
hist(xmin,breaks = 20)
# Do the same thing for maxima, first with a for loop.
xmax <- rep(0,1000)
for (j in 1:1000) xmax[j] <- max(c(x1[j], x2[j], x3[j]))
# Alternatively, using matrix operations and sapply
mymax = function(j) max(A[j,])
xmax1 <- sapply(1:1000,mymax)
# Plot the empirical cdf and make a histogram.
# This is not an exponentially distributed random variable.
plot.ecdf(xmax)
hist(xmax,breaks = 20)
# Finally, compute the sums of x1, x2, x3. First with a for loop:
xsum <- rep(0,1000)
for (j in 1:1000) xsum[j] <- sum(c(x1[j], x2[j], x3[j]))
# Or with a built-in function that computes row sums:
xsum1 <- rowSums(A)
# The plots show that this is again not an exponentially distributed random variable.
hist(xsum,breaks = 20)
plot.ecdf(xsum)
########### End of diary of 9/14/15 |
library('lubridate')
library('data.table')
setwd("/Users/onRu/Documents/Methodology/Data Science courses/Exploratory Data Analysis/FirstProject")
downloadURL <- "https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip"
zipFile <- "./Data/household_power_consumption.zip"
dataFile <- "./Data/household_power_consumption.txt"
##
if (!file.exists(dataFile)) {
download.file(downloadURL, zipFile, method = "curl")
unzip(zipFile, overwrite = T, exdir = "./Data")
}
##
plotData <- read.table(dataFile, header=T, sep=";", na.strings="?")
## set time variable##
dataset1 <- plotData[plotData$Date %in% c("1/2/2007","2/2/2007"),] ##this leaves the two dates that are asked for in the question##
SetTime <-strptime(paste(dataset1$Date, dataset1$Time, sep=" "),"%d/%m/%Y %H:%M:%S") ##converting date and time variables##
dataset1 <- cbind(SetTime, dataset1) ##binding converted variables to the rest of the dataset1##
dataset1$DateTime<-dmy(dataset1$Date)+hms(dataset1$Time)
## plotting the day-by-day grapth for Global Active Power ##
png(filename='/Users/onRu/Documents/Methodology/Data Science courses/Exploratory Data Analysis/FirstProject/plot2.png',width=480,height=480,units='px')
plot(dataset1$DateTime,dataset1$Global_active_power,ylab='Global Active Power (kilowatts)', xlab='', type='l')
dev.off() | /plot2.R | no_license | obakiner/ExData_Plotting1 | R | false | false | 1,334 | r | library('lubridate')
library('data.table')
setwd("/Users/onRu/Documents/Methodology/Data Science courses/Exploratory Data Analysis/FirstProject")
downloadURL <- "https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip"
zipFile <- "./Data/household_power_consumption.zip"
dataFile <- "./Data/household_power_consumption.txt"
##
if (!file.exists(dataFile)) {
download.file(downloadURL, zipFile, method = "curl")
unzip(zipFile, overwrite = T, exdir = "./Data")
}
##
plotData <- read.table(dataFile, header=T, sep=";", na.strings="?")
## set time variable##
dataset1 <- plotData[plotData$Date %in% c("1/2/2007","2/2/2007"),] ##this leaves the two dates that are asked for in the question##
SetTime <-strptime(paste(dataset1$Date, dataset1$Time, sep=" "),"%d/%m/%Y %H:%M:%S") ##converting date and time variables##
dataset1 <- cbind(SetTime, dataset1) ##binding converted variables to the rest of the dataset1##
dataset1$DateTime<-dmy(dataset1$Date)+hms(dataset1$Time)
## plotting the day-by-day grapth for Global Active Power ##
png(filename='/Users/onRu/Documents/Methodology/Data Science courses/Exploratory Data Analysis/FirstProject/plot2.png',width=480,height=480,units='px')
plot(dataset1$DateTime,dataset1$Global_active_power,ylab='Global Active Power (kilowatts)', xlab='', type='l')
dev.off() |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utilities_math.R
\name{to_dB}
\alias{to_dB}
\title{Convert to dB}
\usage{
to_dB(x)
}
\arguments{
\item{x}{a vector of floats between 0 and 1 (exclusive, i.e. these are ratios)}
}
\description{
Internal soundgen function.
}
| /man/to_dB.Rd | no_license | lightbringer/soundgen | R | false | true | 301 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utilities_math.R
\name{to_dB}
\alias{to_dB}
\title{Convert to dB}
\usage{
to_dB(x)
}
\arguments{
\item{x}{a vector of floats between 0 and 1 (exclusive, i.e. these are ratios)}
}
\description{
Internal soundgen function.
}
|
examples.showreg = function() {
library(regtools)
getOption("showreg.package")
# iv and ols with robust standard errors
library(AER)
data("CigarettesSW", package = "AER")
CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
iv <- ivreg(log(packs) ~ log(rprice) + log(rincome) | log(rincome) + tdiff + I(tax/cpi),data = CigarettesSW, subset = year == "1995")
ols <- lm(log(packs) ~ log(rprice) + log(rincome),data = CigarettesSW, subset = year == "1995")
showreg(list(iv,ols), package="texreg")
showreg(list(iv,ols), package="stargazer")
txt = showreg(list(iv=iv,iv.rob=iv, ols=ols, ols.rob=ols),
robust=c(FALSE,TRUE,FALSE,TRUE), robust.type="HC4", output="text", package="stargazer")
txt
showreg(list(iv=iv,iv.rob=iv, ols=ols, ols.rob=ols), title = "My models",
robust=c(FALSE,TRUE,FALSE,TRUE), robust.type="NeweyWest")
txt = showreg(list(iv=iv,iv.rob=iv, ols=ols, ols.rob=ols),
robust=c(FALSE,TRUE,FALSE,TRUE), robust.type="HC4", output="html", caption="My table", caption.above=!TRUE)
# Marginal effect for probit regression
set.seed(12345)
n = 1000
x = rnorm(n)
# binary outcome
y = ifelse(pnorm(1 + 4*x + rnorm(n))>0.5, 1, 0)
data = data.frame(y,x)
reg = glm(y~x, data=data, family=binomial(link=probit))
showreg(list("probit"=reg,"marginal effects"=reg), coef.transform=c("no", "mfx"), omit.coef = "(Intercept)")
# Clustered standard errors
# (not tested for correctness at all)
library(Ecdat)
data(Fatality)
LSDV <- lm(mrall ~ beertax + factor(year) + factor(state), data=Fatality)
LSDV$custom.data = Fatality
showreg(
list("LSDV"=LSDV,
"LSDV (state cluster)"=LSDV,
"LSDV (state-year cluster)"=LSDV
),
robust=c(FALSE,TRUE,TRUE),
robust.type=c("const","cluster1","cluster2"),
cluster1 = "state", cluster2="year"
)
}
#' Show and compare regression results
#'
#' The function extends and wraps either stargazer or the screenreg, texreg and htmlreg functions in the texreg package. It allows for robust standard errors (also clustered robust standard errors) and can show marginal effects in glm models.
#'
#' @param l list of models as in screenreg
#' @param custom.model.names custom titles for each model. By default the names of the model list.
#' @param robust shall robust standard errors be used? Logical single number or a vector specifying for each model.
#' @param robust.type the type of robust standard errors. Can be "HAC", "cluster", "HC1" to "HC4" or "NeweyWest". Can be a vector specifying a type for each model.
#' @param cluster1 and cluster2 if clustered robust standard errors are used, the name of the variables that shall be clustered by
#' @param vcov.list optional a list of covariance matrices of the coefficients for every model
#' @param coef.transform either NULL or a vector containing "no" or "mfx", if an entry is "mfx" we show the marginal effects of the corresponding model.
#' @param coef.mat.li for highest flexibility, you can also provide a list that contains for each model a matrix or data.frame as returned by coeftest with the columns: coefficent, se, t-value, p-value.
#' @param output either "text", "html" or "latex"
#' @param output.fun allows a manual output function, e.g. if one has overloaded the design of screenreg, texreg or htmlreg.
#' @param title a string shown as title above the table
#' @param package the underlying package for creating the tables, either "texreg" or "stargazer". The current default is texreg but that may change.
#' @param ... additional parameters for stargazer or screenreg, texreg or htmlreg
#'
#' @export
showreg = function(l,custom.model.names=names(l), omit.stat=c("F","ser"),robust = FALSE, robust.type = "HC3", cluster1=NULL, cluster2=NULL,vcov.li=NULL,coef.transform = NULL, coef.mat.li = NULL, digits = 2, output=c("text","html","latex")[1], output.fun = NULL, doctype = FALSE,title=NULL, intercept.bottom=FALSE, package=getOption("showreg.package")[1]
, ...){
dots = list(...)
restore.point("showreg")
if (!is.null(dots$type)) output=dots$type
type = output
if (package=="stargazer") {
library(stargazer)
call.stargazer = function(args) {
dupl = duplicated(names(args)) & names(args)!=""
args = args[!dupl]
out = capture.output(res<-do.call("stargazer",args))
class(res) = c("classShowReg","character")
res
}
}
if (is.null(output.fun)) {
if (package=="texreg") {
library(texreg)
if (output=="text"){
output.fun=screenreg
} else if (output=="html") {
output.fun = htmlreg
} else if (output=="latex") {
output.fun = texreg
}
}
}
if (!any(robust) & is.null(vcov.li) & is.null(coef.mat.li) & is.null(coef.transform)) {
if (package=="stargazer" & is.null(output.fun)) {
args = c(l, dots,list(type=output, digits=digits,
omit.stat=omit.stat,intercept.bottom=intercept.bottom))
res = call.stargazer(args)
} else {
res =output.fun(l, ..., custom.model.names=custom.model.names,digits = digits, doctype=doctype)
}
res = add.showreg.title(res,title,output)
return(res)
}
if (length(robust)==1)
robust = rep(robust,length(l))
if (length(robust.type)==1)
robust.type = rep(robust.type,length(l))
robust.type[!robust] = "const"
if (is.null(vcov.li)) {
vcov.li = lapply(seq_along(l), function(i){
vcovRobust(l[[i]], type = robust.type[[i]], cluster1=cluster1, cluster2=cluster2)
})
}
cml = lapply(seq_along(l), function(i){
get.coef.mat(l[[i]],vcov=vcov.li[[i]], robust = robust[[i]], robust.type = robust.type[[i]], coef.transform = coef.transform[[i]], cluster1 = cluster1[[i]], cluster2 = cluster2[[i]], coef.mat = coef.mat.li[[i]])
})
coef.li = lapply(cml, function(r) convert.na(r[,1],Inf))
se.li = lapply(cml, function(r) convert.na(r[,2],Inf))
pval.li = lapply(cml, function(r) convert.na(r[,4],1))
if (package=="stargazer" & is.null(output.fun)) {
#restore.point("showreg.stargazer")
names(l)=NULL
args = c(l, dots,list(type=type, digits=digits,
omit.stat=omit.stat,intercept.bottom=intercept.bottom,
coef = coef.li, se=se.li, p=pval.li))
res = call.stargazer(args)
} else {
res = output.fun(l,..., custom.model.names = custom.model.names,
override.coef = coef.li, override.se = se.li,
override.pval = pval.li,
digits = digits,doctype=doctype)
}
add.showreg.title(res,title,output)
}
print.classShowReg = function(x) {
cat("\n", paste0(x,collapse="\n"))
}
add.showreg.title = function(out, title=NULL, output="text") {
if (is.null(title))
return(out)
out.class = class(out)
if (output=="text" | output=="latex") {
res = paste0(title,"\n",out)
} else if (output=="html") {
res = paste0("<H4>",title,"</H4>\n\n",out)
}
class(res) = class(out)
return(res)
}
#' get the matrix with coefficients, se, t-value, p-value of a regression model
#'
#' internally used by showreg
get.coef.mat = function(mod, vcov=NULL, robust = FALSE, robust.type = "HC3", coef.transform = NULL, cluster1=NULL, cluster2=NULL, coef.mat = NULL) {
restore.point("get.coef.mat")
# coef.mat is given, so just return it
if (!is.null(coef.mat))
return(coef.mat)
# For marginal effects use the functions from mfx
if (isTRUE(coef.transform=="mfx")) {
mfx = glm.marginal.effects(mod, robust=robust, clustervar1=cluster1, clustervar2 = cluster2)
df = as.data.frame(mfx$mfxest)
# Add a missing intercept
if (NROW(df)<length(coef(mod))) {
idf = data.frame(NaN,NaN,NaN,NaN)
colnames(idf) = colnames(df)
rownames(idf) = names(coef(mod))[1]
df = rbind(idf,df)
}
return(df)
}
# Compute robust vcov
if (is.null(vcov) & robust) {
vcov = vcovRobust(l[[i]], type = robust.type, cluster1=cluster1, cluster2=cluster2)
}
# return the coefficient matrix
coeftest(mod, vcov.=vcov)
}
convert.na = function(x,new.val) {
x[is.na(x)] = new.val
x
}
| /R/showreg.r | no_license | skranz/regtools | R | false | false | 8,240 | r |
examples.showreg = function() {
library(regtools)
getOption("showreg.package")
# iv and ols with robust standard errors
library(AER)
data("CigarettesSW", package = "AER")
CigarettesSW$rprice <- with(CigarettesSW, price/cpi)
CigarettesSW$rincome <- with(CigarettesSW, income/population/cpi)
CigarettesSW$tdiff <- with(CigarettesSW, (taxs - tax)/cpi)
iv <- ivreg(log(packs) ~ log(rprice) + log(rincome) | log(rincome) + tdiff + I(tax/cpi),data = CigarettesSW, subset = year == "1995")
ols <- lm(log(packs) ~ log(rprice) + log(rincome),data = CigarettesSW, subset = year == "1995")
showreg(list(iv,ols), package="texreg")
showreg(list(iv,ols), package="stargazer")
txt = showreg(list(iv=iv,iv.rob=iv, ols=ols, ols.rob=ols),
robust=c(FALSE,TRUE,FALSE,TRUE), robust.type="HC4", output="text", package="stargazer")
txt
showreg(list(iv=iv,iv.rob=iv, ols=ols, ols.rob=ols), title = "My models",
robust=c(FALSE,TRUE,FALSE,TRUE), robust.type="NeweyWest")
txt = showreg(list(iv=iv,iv.rob=iv, ols=ols, ols.rob=ols),
robust=c(FALSE,TRUE,FALSE,TRUE), robust.type="HC4", output="html", caption="My table", caption.above=!TRUE)
# Marginal effect for probit regression
set.seed(12345)
n = 1000
x = rnorm(n)
# binary outcome
y = ifelse(pnorm(1 + 4*x + rnorm(n))>0.5, 1, 0)
data = data.frame(y,x)
reg = glm(y~x, data=data, family=binomial(link=probit))
showreg(list("probit"=reg,"marginal effects"=reg), coef.transform=c("no", "mfx"), omit.coef = "(Intercept)")
# Clustered standard errors
# (not tested for correctness at all)
library(Ecdat)
data(Fatality)
LSDV <- lm(mrall ~ beertax + factor(year) + factor(state), data=Fatality)
LSDV$custom.data = Fatality
showreg(
list("LSDV"=LSDV,
"LSDV (state cluster)"=LSDV,
"LSDV (state-year cluster)"=LSDV
),
robust=c(FALSE,TRUE,TRUE),
robust.type=c("const","cluster1","cluster2"),
cluster1 = "state", cluster2="year"
)
}
#' Show and compare regression results
#'
#' The function extends and wraps either stargazer or the screenreg, texreg and htmlreg functions in the texreg package. It allows for robust standard errors (also clustered robust standard errors) and can show marginal effects in glm models.
#'
#' @param l list of models as in screenreg
#' @param custom.model.names custom titles for each model. By default the names of the model list.
#' @param robust shall robust standard errors be used? Logical single number or a vector specifying for each model.
#' @param robust.type the type of robust standard errors. Can be "HAC", "cluster", "HC1" to "HC4" or "NeweyWest". Can be a vector specifying a type for each model.
#' @param cluster1 and cluster2 if clustered robust standard errors are used, the name of the variables that shall be clustered by
#' @param vcov.list optional a list of covariance matrices of the coefficients for every model
#' @param coef.transform either NULL or a vector containing "no" or "mfx", if an entry is "mfx" we show the marginal effects of the corresponding model.
#' @param coef.mat.li for highest flexibility, you can also provide a list that contains for each model a matrix or data.frame as returned by coeftest with the columns: coefficent, se, t-value, p-value.
#' @param output either "text", "html" or "latex"
#' @param output.fun allows a manual output function, e.g. if one has overloaded the design of screenreg, texreg or htmlreg.
#' @param title a string shown as title above the table
#' @param package the underlying package for creating the tables, either "texreg" or "stargazer". The current default is texreg but that may change.
#' @param ... additional parameters for stargazer or screenreg, texreg or htmlreg
#'
#' @export
showreg = function(l,custom.model.names=names(l), omit.stat=c("F","ser"),robust = FALSE, robust.type = "HC3", cluster1=NULL, cluster2=NULL,vcov.li=NULL,coef.transform = NULL, coef.mat.li = NULL, digits = 2, output=c("text","html","latex")[1], output.fun = NULL, doctype = FALSE,title=NULL, intercept.bottom=FALSE, package=getOption("showreg.package")[1]
, ...){
dots = list(...)
restore.point("showreg")
if (!is.null(dots$type)) output=dots$type
type = output
if (package=="stargazer") {
library(stargazer)
call.stargazer = function(args) {
dupl = duplicated(names(args)) & names(args)!=""
args = args[!dupl]
out = capture.output(res<-do.call("stargazer",args))
class(res) = c("classShowReg","character")
res
}
}
if (is.null(output.fun)) {
if (package=="texreg") {
library(texreg)
if (output=="text"){
output.fun=screenreg
} else if (output=="html") {
output.fun = htmlreg
} else if (output=="latex") {
output.fun = texreg
}
}
}
if (!any(robust) & is.null(vcov.li) & is.null(coef.mat.li) & is.null(coef.transform)) {
if (package=="stargazer" & is.null(output.fun)) {
args = c(l, dots,list(type=output, digits=digits,
omit.stat=omit.stat,intercept.bottom=intercept.bottom))
res = call.stargazer(args)
} else {
res =output.fun(l, ..., custom.model.names=custom.model.names,digits = digits, doctype=doctype)
}
res = add.showreg.title(res,title,output)
return(res)
}
if (length(robust)==1)
robust = rep(robust,length(l))
if (length(robust.type)==1)
robust.type = rep(robust.type,length(l))
robust.type[!robust] = "const"
if (is.null(vcov.li)) {
vcov.li = lapply(seq_along(l), function(i){
vcovRobust(l[[i]], type = robust.type[[i]], cluster1=cluster1, cluster2=cluster2)
})
}
cml = lapply(seq_along(l), function(i){
get.coef.mat(l[[i]],vcov=vcov.li[[i]], robust = robust[[i]], robust.type = robust.type[[i]], coef.transform = coef.transform[[i]], cluster1 = cluster1[[i]], cluster2 = cluster2[[i]], coef.mat = coef.mat.li[[i]])
})
coef.li = lapply(cml, function(r) convert.na(r[,1],Inf))
se.li = lapply(cml, function(r) convert.na(r[,2],Inf))
pval.li = lapply(cml, function(r) convert.na(r[,4],1))
if (package=="stargazer" & is.null(output.fun)) {
#restore.point("showreg.stargazer")
names(l)=NULL
args = c(l, dots,list(type=type, digits=digits,
omit.stat=omit.stat,intercept.bottom=intercept.bottom,
coef = coef.li, se=se.li, p=pval.li))
res = call.stargazer(args)
} else {
res = output.fun(l,..., custom.model.names = custom.model.names,
override.coef = coef.li, override.se = se.li,
override.pval = pval.li,
digits = digits,doctype=doctype)
}
add.showreg.title(res,title,output)
}
print.classShowReg = function(x) {
cat("\n", paste0(x,collapse="\n"))
}
add.showreg.title = function(out, title=NULL, output="text") {
if (is.null(title))
return(out)
out.class = class(out)
if (output=="text" | output=="latex") {
res = paste0(title,"\n",out)
} else if (output=="html") {
res = paste0("<H4>",title,"</H4>\n\n",out)
}
class(res) = class(out)
return(res)
}
#' get the matrix with coefficients, se, t-value, p-value of a regression model
#'
#' internally used by showreg
get.coef.mat = function(mod, vcov=NULL, robust = FALSE, robust.type = "HC3", coef.transform = NULL, cluster1=NULL, cluster2=NULL, coef.mat = NULL) {
restore.point("get.coef.mat")
# coef.mat is given, so just return it
if (!is.null(coef.mat))
return(coef.mat)
# For marginal effects use the functions from mfx
if (isTRUE(coef.transform=="mfx")) {
mfx = glm.marginal.effects(mod, robust=robust, clustervar1=cluster1, clustervar2 = cluster2)
df = as.data.frame(mfx$mfxest)
# Add a missing intercept
if (NROW(df)<length(coef(mod))) {
idf = data.frame(NaN,NaN,NaN,NaN)
colnames(idf) = colnames(df)
rownames(idf) = names(coef(mod))[1]
df = rbind(idf,df)
}
return(df)
}
# Compute robust vcov
if (is.null(vcov) & robust) {
vcov = vcovRobust(l[[i]], type = robust.type, cluster1=cluster1, cluster2=cluster2)
}
# return the coefficient matrix
coeftest(mod, vcov.=vcov)
}
convert.na = function(x,new.val) {
x[is.na(x)] = new.val
x
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/functions.R
\name{dimw}
\alias{dimw}
\title{Difference in Means and Difference in Weighted Means}
\usage{
dimw(X, w, target)
}
\arguments{
\item{X}{matrix of data where rows are observations and columns are covariates.}
\item{w}{numeric vector of weights for each observation.}
\item{target}{numeric vector of length equal to the total number of units where population/treated units take a value of 1 and sample/control units take a value of 0.}
}
\value{
\item{dim}{the simple, unweighted difference in means.}
\item{dimw}{the weighted difference in means.}
}
\description{
Calculates the simple difference in means or weighted difference in means between the control or sample population and the treated or target population.
}
\examples{
\donttest{
#let's say we want to get the unweighted DIM and the weighted DIM using weights from the kbal
#function with the lalonde data:
#load and clean data a bit
data(lalonde)
xvars=c("age","black","educ","hisp","married","re74","re75","nodegr","u74","u75")
#get the kbal weights
kbalout= kbal(allx=lalonde[,xvars],
sampledinpop=FALSE,
treatment=lalonde$nsw)
#now use dimw to get the DIMs
dimw(X = lalonde[,xvars], w = kbalout$w, target = lalonde$nsw)}
}
| /man/dimw.Rd | no_license | chadhazlett/KBAL | R | false | true | 1,312 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/functions.R
\name{dimw}
\alias{dimw}
\title{Difference in Means and Difference in Weighted Means}
\usage{
dimw(X, w, target)
}
\arguments{
\item{X}{matrix of data where rows are observations and columns are covariates.}
\item{w}{numeric vector of weights for each observation.}
\item{target}{numeric vector of length equal to the total number of units where population/treated units take a value of 1 and sample/control units take a value of 0.}
}
\value{
\item{dim}{the simple, unweighted difference in means.}
\item{dimw}{the weighted difference in means.}
}
\description{
Calculates the simple difference in means or weighted difference in means between the control or sample population and the treated or target population.
}
\examples{
\donttest{
#let's say we want to get the unweighted DIM and the weighted DIM using weights from the kbal
#function with the lalonde data:
#load and clean data a bit
data(lalonde)
xvars=c("age","black","educ","hisp","married","re74","re75","nodegr","u74","u75")
#get the kbal weights
kbalout= kbal(allx=lalonde[,xvars],
sampledinpop=FALSE,
treatment=lalonde$nsw)
#now use dimw to get the DIMs
dimw(X = lalonde[,xvars], w = kbalout$w, target = lalonde$nsw)}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/classification.R
\name{roc}
\alias{roc}
\title{Receiver Operating Characteristic curve (ROC) and area under this curve (AUC)}
\usage{
roc(rf_model)
}
\arguments{
\item{rf_model}{random forest model}
}
\description{
Receiver Operating Characteristic curve (ROC) and area under this curve (AUC)
}
| /gToolbox/man/roc.Rd | no_license | aqibrahimbt/BioMarkerAnalysis | R | false | true | 373 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/classification.R
\name{roc}
\alias{roc}
\title{Receiver Operating Characteristic curve (ROC) and area under this curve (AUC)}
\usage{
roc(rf_model)
}
\arguments{
\item{rf_model}{random forest model}
}
\description{
Receiver Operating Characteristic curve (ROC) and area under this curve (AUC)
}
|
#' get_mz_by_monoisotopicmass
#'
#' Generate list of expected m/z for a specific monoisotopic mass
#'
#'
#' @param monoisotopicmass Monoisotopic mass. e.g.: 149.051
#' @param dbid Database or user-defined ID. e.g.: "M001"
#' @param name Metabolite name. e.g.: "Methionine"
#' @param formula Chemical formula. e.g.: "C5H11NO2S"
#' @param queryadductlist List of adducts to be used for searching. eg:
#' c("M+H","M+Na","M+K"), c("positive") for positive adducts, c("negative") for
#' negative adducts c("all") for all adducts
#' @param syssleep Wait time between queries to prevent overloading the KEGG
#' REST interface. e.g.: 0.1
#' @return Returns an R object with a list of expected m/z for the input
#' monoisotopic mass.
#' @author Karan Uppal
get_mz_by_monoisotopicmass <- function(monoisotopicmass,
dbid = NA, name = NA, formula = NA, queryadductlist = c("M+H"),
syssleep = 0.01, adduct_table = NA) {
cnames <- c("mz", "ID", "Name", "Formula", "MonoisotopicMass",
"Adduct", "AdductMass")
if (is.na(adduct_table) == TRUE) {
try(rm(adduct_table), silent = TRUE)
}
# if(is.na(adduct_table)==TRUE)
{
data(adduct_table)
adduct_table <- as.data.frame(adduct_table)
}
# adduct_table<-read.table('/Users/karanuppal/Documents/Emory/JonesLab/Projects/xMSannotator/adduct_table.txt',sep='\t',header=TRUE)
# adduct_table<-adduct_table[c(which(adduct_table[,6]=='S'),which(adduct_table[,6]=='Acetonitrile')),]
adduct_names <- as.character(adduct_table[, 1])
adductlist <- adduct_table[, 4]
mult_charge <- adduct_table[, 3]
num_mol <- adduct_table[, 2]
names(adductlist) <- as.character(adduct_names)
names(mult_charge) <- as.character(adduct_names)
names(num_mol) <- as.character(adduct_names)
alladducts <- adduct_names
if (queryadductlist[1] == "positive") {
queryadductlist <- adduct_names[which(adduct_table[,
5] == "positive")]
} else {
if (queryadductlist[1] == "negative") {
queryadductlist <- adduct_names[which(adduct_table[,
5] == "negative")]
} else {
if (queryadductlist[1] == "all") {
queryadductlist <- alladducts
} else {
if (length(which(queryadductlist %in% alladducts ==
FALSE)) > 0) {
errormsg <- paste("Adduct should be one of:",
sep = "")
for (i in alladducts) {
errormsg <- paste(errormsg, i, sep = " ; ")
}
stop(errormsg, "\n\nUsage: feat.batch.annotation.KEGG(dataA,max.mz.diff=10, queryadductlist=c(\"M+H\", \"M+Na\"), xMSannotator.outloc, numnodes=1)",
"\n\n OR feat.batch.annotation.KEGG(dataA,max.mz.diff=10, queryadductlist=c(\"positive\"), xMSannotator.outloc, numnodes=1)",
"\n\n OR feat.batch.annotation.KEGG(dataA,max.mz.diff=10, queryadductlist=c(\"negative\"), xMSannotator.outloc, numnodes=1)",
"\n\n OR feat.batch.annotation.KEGG(dataA,max.mz.diff=10, queryadductlist=c(\"all\"), xMSannotator.outloc, numnodes=1)")
}
}
}
}
map_res <- {
}
if (is.na(dbid) == TRUE) {
dbid = "-"
}
if (is.na(name) == TRUE) {
name = "-"
}
if (is.na(formula) == TRUE) {
formula = "-"
}
exact_mass <- monoisotopicmass
for (adnum in 1:length(queryadductlist)) {
adductname = queryadductlist[adnum]
adductmass = adductlist[as.character(adductname)]
adductcharge = mult_charge[as.character(adductname)]
adductnmol = num_mol[as.character(adductname)]
# mz=((adductnmol*exact_mass)+(adductmass))/(adductcharge))
mz = ((exact_mass * adductnmol) + (adductmass))/adductcharge
# delta_ppm=(max.mz.diff)*(mz/1000000)
# min_mz=round((mz-delta_ppm),5)
# max_mz=round((mz+delta_ppm),5)
res = {
}
mzorig = round(exact_mass, 5)
# delta_ppm=round(delta_ppm,5)
syssleep1 <- (syssleep/5)
Sys.sleep(syssleep1)
cur_map_res <- cbind(mz, dbid, name, formula, adductname,
adductmass, mzorig)
cur_map_res <- as.data.frame(cbind(mz, as.character(dbid),
as.character(name), as.character(formula), mzorig,
adductname, adductmass))
# print(cur_map_res)
cur_map_res <- as.data.frame(cur_map_res)
map_res <- rbind(map_res, cur_map_res)
}
colnames(map_res) <- cnames
map_res <- unique(map_res)
map_res <- as.data.frame(map_res)
return(map_res)
}
| /R/get_mz_by_monoisotopicmass.R | no_license | yanqi219/xMSannotator | R | false | false | 4,975 | r | #' get_mz_by_monoisotopicmass
#'
#' Generate list of expected m/z for a specific monoisotopic mass
#'
#'
#' @param monoisotopicmass Monoisotopic mass. e.g.: 149.051
#' @param dbid Database or user-defined ID. e.g.: "M001"
#' @param name Metabolite name. e.g.: "Methionine"
#' @param formula Chemical formula. e.g.: "C5H11NO2S"
#' @param queryadductlist List of adducts to be used for searching. eg:
#' c("M+H","M+Na","M+K"), c("positive") for positive adducts, c("negative") for
#' negative adducts c("all") for all adducts
#' @param syssleep Wait time between queries to prevent overloading the KEGG
#' REST interface. e.g.: 0.1
#' @return Returns an R object with a list of expected m/z for the input
#' monoisotopic mass.
#' @author Karan Uppal
get_mz_by_monoisotopicmass <- function(monoisotopicmass,
dbid = NA, name = NA, formula = NA, queryadductlist = c("M+H"),
syssleep = 0.01, adduct_table = NA) {
cnames <- c("mz", "ID", "Name", "Formula", "MonoisotopicMass",
"Adduct", "AdductMass")
if (is.na(adduct_table) == TRUE) {
try(rm(adduct_table), silent = TRUE)
}
# if(is.na(adduct_table)==TRUE)
{
data(adduct_table)
adduct_table <- as.data.frame(adduct_table)
}
# adduct_table<-read.table('/Users/karanuppal/Documents/Emory/JonesLab/Projects/xMSannotator/adduct_table.txt',sep='\t',header=TRUE)
# adduct_table<-adduct_table[c(which(adduct_table[,6]=='S'),which(adduct_table[,6]=='Acetonitrile')),]
adduct_names <- as.character(adduct_table[, 1])
adductlist <- adduct_table[, 4]
mult_charge <- adduct_table[, 3]
num_mol <- adduct_table[, 2]
names(adductlist) <- as.character(adduct_names)
names(mult_charge) <- as.character(adduct_names)
names(num_mol) <- as.character(adduct_names)
alladducts <- adduct_names
if (queryadductlist[1] == "positive") {
queryadductlist <- adduct_names[which(adduct_table[,
5] == "positive")]
} else {
if (queryadductlist[1] == "negative") {
queryadductlist <- adduct_names[which(adduct_table[,
5] == "negative")]
} else {
if (queryadductlist[1] == "all") {
queryadductlist <- alladducts
} else {
if (length(which(queryadductlist %in% alladducts ==
FALSE)) > 0) {
errormsg <- paste("Adduct should be one of:",
sep = "")
for (i in alladducts) {
errormsg <- paste(errormsg, i, sep = " ; ")
}
stop(errormsg, "\n\nUsage: feat.batch.annotation.KEGG(dataA,max.mz.diff=10, queryadductlist=c(\"M+H\", \"M+Na\"), xMSannotator.outloc, numnodes=1)",
"\n\n OR feat.batch.annotation.KEGG(dataA,max.mz.diff=10, queryadductlist=c(\"positive\"), xMSannotator.outloc, numnodes=1)",
"\n\n OR feat.batch.annotation.KEGG(dataA,max.mz.diff=10, queryadductlist=c(\"negative\"), xMSannotator.outloc, numnodes=1)",
"\n\n OR feat.batch.annotation.KEGG(dataA,max.mz.diff=10, queryadductlist=c(\"all\"), xMSannotator.outloc, numnodes=1)")
}
}
}
}
map_res <- {
}
if (is.na(dbid) == TRUE) {
dbid = "-"
}
if (is.na(name) == TRUE) {
name = "-"
}
if (is.na(formula) == TRUE) {
formula = "-"
}
exact_mass <- monoisotopicmass
for (adnum in 1:length(queryadductlist)) {
adductname = queryadductlist[adnum]
adductmass = adductlist[as.character(adductname)]
adductcharge = mult_charge[as.character(adductname)]
adductnmol = num_mol[as.character(adductname)]
# mz=((adductnmol*exact_mass)+(adductmass))/(adductcharge))
mz = ((exact_mass * adductnmol) + (adductmass))/adductcharge
# delta_ppm=(max.mz.diff)*(mz/1000000)
# min_mz=round((mz-delta_ppm),5)
# max_mz=round((mz+delta_ppm),5)
res = {
}
mzorig = round(exact_mass, 5)
# delta_ppm=round(delta_ppm,5)
syssleep1 <- (syssleep/5)
Sys.sleep(syssleep1)
cur_map_res <- cbind(mz, dbid, name, formula, adductname,
adductmass, mzorig)
cur_map_res <- as.data.frame(cbind(mz, as.character(dbid),
as.character(name), as.character(formula), mzorig,
adductname, adductmass))
# print(cur_map_res)
cur_map_res <- as.data.frame(cur_map_res)
map_res <- rbind(map_res, cur_map_res)
}
colnames(map_res) <- cnames
map_res <- unique(map_res)
map_res <- as.data.frame(map_res)
return(map_res)
}
|
library(fOptions)
### Name: HestonNandiGarchFit
### Title: Heston-Nandi Garch(1,1) Modelling
### Aliases: HestonNandiGarchFit hngarchSim hngarchFit hngarchStats
### print.hngarch summary.hngarch
### Keywords: models
### ** Examples
## hngarchSim -
# Simulate a Heston Nandi Garch(1,1) Process:
# Symmetric Model - Parameters:
model = list(lambda = 4, omega = 8e-5, alpha = 6e-5,
beta = 0.7, gamma = 0, rf = 0)
ts = hngarchSim(model = model, n = 500, n.start = 100)
par(mfrow = c(2, 1), cex = 0.75)
ts.plot(ts, col = "steelblue", main = "HN Garch Symmetric Model")
grid()
## hngarchFit -
# HN-GARCH log likelihood Parameter Estimation:
# To speed up, we start with the simulated model ...
mle = hngarchFit(model = model, x = ts, symmetric = TRUE)
mle
## summary.hngarch -
# HN-GARCH Diagnostic Analysis:
par(mfrow = c(3, 1), cex = 0.75)
summary(mle)
## hngarchStats -
# HN-GARCH Moments:
hngarchStats(mle$model)
| /data/genthat_extracted_code/fOptions/examples/HestonNandiGarchFit.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 995 | r | library(fOptions)
### Name: HestonNandiGarchFit
### Title: Heston-Nandi Garch(1,1) Modelling
### Aliases: HestonNandiGarchFit hngarchSim hngarchFit hngarchStats
### print.hngarch summary.hngarch
### Keywords: models
### ** Examples
## hngarchSim -
# Simulate a Heston Nandi Garch(1,1) Process:
# Symmetric Model - Parameters:
model = list(lambda = 4, omega = 8e-5, alpha = 6e-5,
beta = 0.7, gamma = 0, rf = 0)
ts = hngarchSim(model = model, n = 500, n.start = 100)
par(mfrow = c(2, 1), cex = 0.75)
ts.plot(ts, col = "steelblue", main = "HN Garch Symmetric Model")
grid()
## hngarchFit -
# HN-GARCH log likelihood Parameter Estimation:
# To speed up, we start with the simulated model ...
mle = hngarchFit(model = model, x = ts, symmetric = TRUE)
mle
## summary.hngarch -
# HN-GARCH Diagnostic Analysis:
par(mfrow = c(3, 1), cex = 0.75)
summary(mle)
## hngarchStats -
# HN-GARCH Moments:
hngarchStats(mle$model)
|
# read in Data
tuesdata <- tidytuesdayR::tt_load('2020-11-10')
mobile <- tuesdata$mobile
landline <- tuesdata$landline
# install packages and libraries
install.packages("png")
library(png)
library(ggplot2)
library(extrafont)
# save images of mobile phone and landline
img1 <- readPNG("telephone.png") #image of landline
image1 <- rasterGrob(img1, interpolate=FALSE)
img2 <- readPNG("mobile-phone.png") # image of iphone
image2 <- rasterGrob(img2, interpolate=FALSE)
# filter US Data
us_mobile <- mobile %>% filter(code == "USA")
us_landline <- landline %>% filter(code == "USA")
# merge US data in each set
us_data <- inner_join(us_mobile, select(us_landline, year, landline_subs))
# plot line graph
plot <- ggplot(us_data, aes(x = year)) +
# add images and place them
annotation_custom(image2, xmin=2016, xmax=2018, ymin=116, ymax=128) +
annotation_custom(image1, xmin=2016, xmax=2018, ymin=32, ymax=44) +
# draw lines for each kind of phone subscription
geom_line(aes(y = landline_subs, color = "blue", size = 0.10), show.legend = FALSE) +
geom_line(aes(y = mobile_subs, color = "red", size = 0.10), show.legend = FALSE) +
theme_bw() +
# add breaks
scale_x_continuous(breaks = seq(1990,2017, 4)) +
labs(title = "Mobile Takeover in the US",
subtitle = "Phone subscriptions in US per 100 people",
caption = "Created by @eliane_mitchell | #TidyTuesday Week 46") +
theme(
text = element_text(size = 12, family = "Arial"),
axis.title.x = element_blank(),
axis.title.y = element_blank(),
plot.title = element_text(size = 12,face ="bold", hjust = 0.5),
plot.subtitle = element_text(size = 10, hjust = 0.5),
plot.caption = element_text(size = 7, hjust = 0.5))
ggsave("Tidy Tuesday Phone Plot.png")
| /Week 46 2020 Phone Data/phonedata_week46_2020.R | no_license | elianemitchell/TidyTuesday | R | false | false | 1,773 | r | # read in Data
tuesdata <- tidytuesdayR::tt_load('2020-11-10')
mobile <- tuesdata$mobile
landline <- tuesdata$landline
# install packages and libraries
install.packages("png")
library(png)
library(ggplot2)
library(extrafont)
# save images of mobile phone and landline
img1 <- readPNG("telephone.png") #image of landline
image1 <- rasterGrob(img1, interpolate=FALSE)
img2 <- readPNG("mobile-phone.png") # image of iphone
image2 <- rasterGrob(img2, interpolate=FALSE)
# filter US Data
us_mobile <- mobile %>% filter(code == "USA")
us_landline <- landline %>% filter(code == "USA")
# merge US data in each set
us_data <- inner_join(us_mobile, select(us_landline, year, landline_subs))
# plot line graph
plot <- ggplot(us_data, aes(x = year)) +
# add images and place them
annotation_custom(image2, xmin=2016, xmax=2018, ymin=116, ymax=128) +
annotation_custom(image1, xmin=2016, xmax=2018, ymin=32, ymax=44) +
# draw lines for each kind of phone subscription
geom_line(aes(y = landline_subs, color = "blue", size = 0.10), show.legend = FALSE) +
geom_line(aes(y = mobile_subs, color = "red", size = 0.10), show.legend = FALSE) +
theme_bw() +
# add breaks
scale_x_continuous(breaks = seq(1990,2017, 4)) +
labs(title = "Mobile Takeover in the US",
subtitle = "Phone subscriptions in US per 100 people",
caption = "Created by @eliane_mitchell | #TidyTuesday Week 46") +
theme(
text = element_text(size = 12, family = "Arial"),
axis.title.x = element_blank(),
axis.title.y = element_blank(),
plot.title = element_text(size = 12,face ="bold", hjust = 0.5),
plot.subtitle = element_text(size = 10, hjust = 0.5),
plot.caption = element_text(size = 7, hjust = 0.5))
ggsave("Tidy Tuesday Phone Plot.png")
|
#################
# WORDCLOUD #
#################
### TRUMP ###
# Eliminate character
text_trump <- gsub("&", "", text_trump)
text_trump <- gsub("http.*", "", text_trump)
text_trump <- gsub("@.*", "", text_trump)
# create wordcloud
corpus=Corpus(VectorSource(text_trump))
# Convert to lower-case
corpus=tm_map(corpus,tolower)
# Remove stopwords
corpus=tm_map(corpus,function(x) removeWords(x,stopwords()))
# convert corpus to a Plain Text Document
corpus=tm_map(corpus,PlainTextDocument)
# display.brewer.all() to see possible colors!
col=brewer.pal(8,"Paired")
wordcloud(corpus, min.freq=5, scale=c(4,.7),rot.per = 0.25,
random.color=T, max.word=250, random.order=F,colors=col)
### CLINTON ###
# Eliminate character
text_clinton <- gsub("&", "", text_clinton)
text_clinton <- gsub("http.*", "", text_clinton)
text_clinton <- gsub("@.*", "", text_clinton)
# create wordcloud
corpus=Corpus(VectorSource(text_clinton))
# Convert to lower-case
corpus=tm_map(corpus,tolower)
# Remove stopwords
corpus=tm_map(corpus,function(x) removeWords(x,stopwords()))
# convert corpus to a Plain Text Document
corpus=tm_map(corpus,PlainTextDocument)
# display.brewer.all() to see possible colors!
col=brewer.pal(8,"Paired")
wordcloud(corpus, min.freq=5, scale=c(5,1),rot.per = 0.25,
random.color=T, max.word=100, random.order=F,colors=col)
| /Sentiment Analysis/[001]Twitter-Sentiment-Analysis/03-wordcloud.R | no_license | romele-stefano/R-scripts | R | false | false | 1,373 | r | #################
# WORDCLOUD #
#################
### TRUMP ###
# Eliminate character
text_trump <- gsub("&", "", text_trump)
text_trump <- gsub("http.*", "", text_trump)
text_trump <- gsub("@.*", "", text_trump)
# create wordcloud
corpus=Corpus(VectorSource(text_trump))
# Convert to lower-case
corpus=tm_map(corpus,tolower)
# Remove stopwords
corpus=tm_map(corpus,function(x) removeWords(x,stopwords()))
# convert corpus to a Plain Text Document
corpus=tm_map(corpus,PlainTextDocument)
# display.brewer.all() to see possible colors!
col=brewer.pal(8,"Paired")
wordcloud(corpus, min.freq=5, scale=c(4,.7),rot.per = 0.25,
random.color=T, max.word=250, random.order=F,colors=col)
### CLINTON ###
# Eliminate character
text_clinton <- gsub("&", "", text_clinton)
text_clinton <- gsub("http.*", "", text_clinton)
text_clinton <- gsub("@.*", "", text_clinton)
# create wordcloud
corpus=Corpus(VectorSource(text_clinton))
# Convert to lower-case
corpus=tm_map(corpus,tolower)
# Remove stopwords
corpus=tm_map(corpus,function(x) removeWords(x,stopwords()))
# convert corpus to a Plain Text Document
corpus=tm_map(corpus,PlainTextDocument)
# display.brewer.all() to see possible colors!
col=brewer.pal(8,"Paired")
wordcloud(corpus, min.freq=5, scale=c(5,1),rot.per = 0.25,
random.color=T, max.word=100, random.order=F,colors=col)
|
# status, no terminado, WIP
library(readr)
dataset <- read_csv("resultados_muestras.csv")
dataset$Origen <- as.factor(dataset$Origen)
library(bnlearn)
res = hc(dataset)
# plot the network structure.
plot(res)
res2 = iamb(dataset)
# plot the new network structure.
plot(res2)
| /bayesian.R | permissive | anadiedrichs/jugos-uva-canizo | R | false | false | 279 | r | # status, no terminado, WIP
library(readr)
dataset <- read_csv("resultados_muestras.csv")
dataset$Origen <- as.factor(dataset$Origen)
library(bnlearn)
res = hc(dataset)
# plot the network structure.
plot(res)
res2 = iamb(dataset)
# plot the new network structure.
plot(res2)
|
c(1,2)
a <- c(1,2)
a
v <- c(2,3,5,8,13)
c("Country")
Country = c("Brazil", "China","India","Switzerland","USA")
LifeExpectancy = c(74,76,65,83,79)
Country[1]
LifeExpectancy[1]
as.data.frame(Country,LifeExpectancy)
seq(0,100,2)
CountryData = data.frame(Country,LifeExpectancy)
CountryData
CountryData$Population = c(199000,1390000,1240000,7997,318000)
data = read_csv("WHO.csv")
| /src/data_analytics/get_started.R | no_license | zhmz90/Kuafu | R | false | false | 393 | r | c(1,2)
a <- c(1,2)
a
v <- c(2,3,5,8,13)
c("Country")
Country = c("Brazil", "China","India","Switzerland","USA")
LifeExpectancy = c(74,76,65,83,79)
Country[1]
LifeExpectancy[1]
as.data.frame(Country,LifeExpectancy)
seq(0,100,2)
CountryData = data.frame(Country,LifeExpectancy)
CountryData
CountryData$Population = c(199000,1390000,1240000,7997,318000)
data = read_csv("WHO.csv")
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/DFO.R
\name{DFO_plot2}
\alias{DFO_plot2}
\title{Deparment of Fisheries and Oceans default plot 2}
\usage{
DFO_plot2(MSEobj, nam = NA, panel = T, Bcut = 50, Ycut = 50)
}
\arguments{
\item{MSEobj}{An object of class MSE}
\item{nam}{Title of plot}
\item{panel}{Should the plots be organized in many panels in a single figure}
\item{Bcut}{The cutoff biomass for satisficing (relative to BMSY)}
\item{Ycut}{the cutoff yield for satisficing (relative to reference yield)}
}
\value{
A table of performance metrics.
}
\description{
A preliminary plot for returning trade-offs plots and performance table for
probability of obtaining half reference (FMSY) yield and probability of biomass
dropping below 50 per cent BMSY
}
\author{
T. Carruthers
}
| /man/DFO_plot2.Rd | no_license | Lijiuqi/DLMtool | R | false | true | 822 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/DFO.R
\name{DFO_plot2}
\alias{DFO_plot2}
\title{Deparment of Fisheries and Oceans default plot 2}
\usage{
DFO_plot2(MSEobj, nam = NA, panel = T, Bcut = 50, Ycut = 50)
}
\arguments{
\item{MSEobj}{An object of class MSE}
\item{nam}{Title of plot}
\item{panel}{Should the plots be organized in many panels in a single figure}
\item{Bcut}{The cutoff biomass for satisficing (relative to BMSY)}
\item{Ycut}{the cutoff yield for satisficing (relative to reference yield)}
}
\value{
A table of performance metrics.
}
\description{
A preliminary plot for returning trade-offs plots and performance table for
probability of obtaining half reference (FMSY) yield and probability of biomass
dropping below 50 per cent BMSY
}
\author{
T. Carruthers
}
|
#Austin Water Quality Case Study
#Load Packages
library(tidyverse)
library(lubridate)
library(stringr)
#Read in the dataset
water <- read_csv('http://594442.youcanlearnit.net/austinwater.csv')
#Take a look
glimpse(water)
#Overwriting water tibble with only columns required
water <- tibble('siteName' = water$SITE_NAME,
'siteType' = water$SITE_TYPE,
'sampleTime' = water$SAMPLE_DATE,
'parameterType' = water$PARAM_TYPE,
'parameter' = water$PARAMETER,
'result' = water$RESULT,
'unit' = water$UNIT)
glimpse(water)
# Too many rows
############################
#Filtering the Dataset
############################
#Taking a look at parameter column to see values of requirement
unique(water$parameter)
# We can see Alkalinity/Hardness/pH and Conventional are values of our interest
unique(water$parameterType)
#Subset water table, keep only rows where parameterType is Alkalinity/Hardness/pH or Conventional
filtered_water <- subset(water, (parameterType == 'Alkalinity/Hardness/pH') | parameterType == 'Conventionals')
glimpse(filtered_water)
unique(filtered_water$parameter)
#Subsetting filteres_water table with only parameter values of PH and WATER TEMPERATURE
filtered_water <- subset(filtered_water, parameter == 'PH' | parameter == 'WATER TEMPERATURE')
glimpse(filtered_water)
summary(filtered_water)
##################################
# Correcting the Data Types
##################################
#Convert character types to Factors (Categorical variables)
filtered_water$siteType <- as.factor(filtered_water$siteType)
filtered_water$parameterType <- as.factor(filtered_water$parameterType)
filtered_water$parameter <- as.factor(filtered_water$parameter)
filtered_water$unit <- as.factor(filtered_water$unit)
#filtered_water$sampleTime
#Convert Sample Time to a Date Time Object
filtered_water$sampleTime <- mdy_hms(filtered_water$sampleTime)
#################################
#Correcting the data-entry Errors
#################################
summary(filtered_water)
#Checking the unit column, Has two strange unit measurements
subset(filtered_water, unit=='Feet')
convert <- which(filtered_water$unit=='Feet')
filtered_water$unit[convert] <- 'Deg. Fahrenheit'
glimpse(subset(filtered_water, unit=='MG/L' & parameter == 'PH'))
convert <- which(filtered_water$unit=='MG/L' & filtered_water$parameter == 'PH')
convert
filtered_water$unit[convert] <- 'Standard units'
glimpse(subset(filtered_water, unit=='MG/L'))
convert <- which(filtered_water$unit=='MG/L' & filtered_water$result > 70)
convert
filtered_water$unit[convert] <- 'Deg. Fahrenheit'
glimpse(subset(filtered_water, unit=='MG/L'))
convert <- which(filtered_water$unit=='MG/L')
filtered_water$unit[convert] <- 'Deg. Celsius'
summary(filtered_water)
######################################
#Identifying and removing the Outliers
######################################
ggplot(data = filtered_water, mapping = aes(x= sampleTime, y = result)) +
geom_point()
glimpse(subset(filtered_water, result > 1000000))
remove <- which(filtered_water$result > 1000000 | is.na(filtered_water$result))
filtered_water <- filtered_water[-remove,]
summary(filtered_water)
glimpse(subset(filtered_water, result > 1000))
remove <- which(filtered_water$result > 1000)
filtered_water <- filtered_water[-remove,]
summary(filtered_water)
ggplot(data= filtered_water, mapping = aes(x=unit, y = result)) +
geom_boxplot()
convert <- which(filtered_water$result > 60 & filtered_water$unit == 'Deg. Celsius')
filtered_water$unit[convert] <- 'Deg. Fahrenheit'
##################################################
#Converting temperature from Fahrenheit to Celsius
##################################################
fahrenheit <- which(filtered_water$unit == 'Deg. Fahrenheit')
filtered_water$result[fahrenheit] <- (filtered_water$result[fahrenheit] - 32) *(5.0/9.0)
filtered_water$unit[fahrenheit] <- 'Deg. Celsius'
summary(filtered_water)
#Removing the empty factor levels
filtered_water$unit <- droplevels(filtered_water$unit)
#############################################
#Widening the dataset
#############################################
filtered_water <- filtered_water[, -c(4,7)]
summary(filtered_water)
filtered_water_wide <- spread(filtered_water, parameter, result)
##error duplicate identifiers
dup_check <- filtered_water[,-5]
duplicates <- which(duplicated(dup_check))
#Removing the Duplicates
filtered_water <- filtered_water[-duplicates,]
filtered_water_wide <- spread(filtered_water, parameter, result)
glimpse(filtered_water_wide)
colnames(filtered_water_wide)[4] <- 'pH'
colnames(filtered_water_wide)[5] <- 'temperature'
| /austinWaterQuality.R | no_license | vinikalra/Data-Wrangling-in-R | R | false | false | 4,727 | r | #Austin Water Quality Case Study
#Load Packages
library(tidyverse)
library(lubridate)
library(stringr)
#Read in the dataset
water <- read_csv('http://594442.youcanlearnit.net/austinwater.csv')
#Take a look
glimpse(water)
#Overwriting water tibble with only columns required
water <- tibble('siteName' = water$SITE_NAME,
'siteType' = water$SITE_TYPE,
'sampleTime' = water$SAMPLE_DATE,
'parameterType' = water$PARAM_TYPE,
'parameter' = water$PARAMETER,
'result' = water$RESULT,
'unit' = water$UNIT)
glimpse(water)
# Too many rows
############################
#Filtering the Dataset
############################
#Taking a look at parameter column to see values of requirement
unique(water$parameter)
# We can see Alkalinity/Hardness/pH and Conventional are values of our interest
unique(water$parameterType)
#Subset water table, keep only rows where parameterType is Alkalinity/Hardness/pH or Conventional
filtered_water <- subset(water, (parameterType == 'Alkalinity/Hardness/pH') | parameterType == 'Conventionals')
glimpse(filtered_water)
unique(filtered_water$parameter)
#Subsetting filteres_water table with only parameter values of PH and WATER TEMPERATURE
filtered_water <- subset(filtered_water, parameter == 'PH' | parameter == 'WATER TEMPERATURE')
glimpse(filtered_water)
summary(filtered_water)
##################################
# Correcting the Data Types
##################################
#Convert character types to Factors (Categorical variables)
filtered_water$siteType <- as.factor(filtered_water$siteType)
filtered_water$parameterType <- as.factor(filtered_water$parameterType)
filtered_water$parameter <- as.factor(filtered_water$parameter)
filtered_water$unit <- as.factor(filtered_water$unit)
#filtered_water$sampleTime
#Convert Sample Time to a Date Time Object
filtered_water$sampleTime <- mdy_hms(filtered_water$sampleTime)
#################################
#Correcting the data-entry Errors
#################################
summary(filtered_water)
#Checking the unit column, Has two strange unit measurements
subset(filtered_water, unit=='Feet')
convert <- which(filtered_water$unit=='Feet')
filtered_water$unit[convert] <- 'Deg. Fahrenheit'
glimpse(subset(filtered_water, unit=='MG/L' & parameter == 'PH'))
convert <- which(filtered_water$unit=='MG/L' & filtered_water$parameter == 'PH')
convert
filtered_water$unit[convert] <- 'Standard units'
glimpse(subset(filtered_water, unit=='MG/L'))
convert <- which(filtered_water$unit=='MG/L' & filtered_water$result > 70)
convert
filtered_water$unit[convert] <- 'Deg. Fahrenheit'
glimpse(subset(filtered_water, unit=='MG/L'))
convert <- which(filtered_water$unit=='MG/L')
filtered_water$unit[convert] <- 'Deg. Celsius'
summary(filtered_water)
######################################
#Identifying and removing the Outliers
######################################
ggplot(data = filtered_water, mapping = aes(x= sampleTime, y = result)) +
geom_point()
glimpse(subset(filtered_water, result > 1000000))
remove <- which(filtered_water$result > 1000000 | is.na(filtered_water$result))
filtered_water <- filtered_water[-remove,]
summary(filtered_water)
glimpse(subset(filtered_water, result > 1000))
remove <- which(filtered_water$result > 1000)
filtered_water <- filtered_water[-remove,]
summary(filtered_water)
ggplot(data= filtered_water, mapping = aes(x=unit, y = result)) +
geom_boxplot()
convert <- which(filtered_water$result > 60 & filtered_water$unit == 'Deg. Celsius')
filtered_water$unit[convert] <- 'Deg. Fahrenheit'
##################################################
#Converting temperature from Fahrenheit to Celsius
##################################################
fahrenheit <- which(filtered_water$unit == 'Deg. Fahrenheit')
filtered_water$result[fahrenheit] <- (filtered_water$result[fahrenheit] - 32) *(5.0/9.0)
filtered_water$unit[fahrenheit] <- 'Deg. Celsius'
summary(filtered_water)
#Removing the empty factor levels
filtered_water$unit <- droplevels(filtered_water$unit)
#############################################
#Widening the dataset
#############################################
filtered_water <- filtered_water[, -c(4,7)]
summary(filtered_water)
filtered_water_wide <- spread(filtered_water, parameter, result)
##error duplicate identifiers
dup_check <- filtered_water[,-5]
duplicates <- which(duplicated(dup_check))
#Removing the Duplicates
filtered_water <- filtered_water[-duplicates,]
filtered_water_wide <- spread(filtered_water, parameter, result)
glimpse(filtered_water_wide)
colnames(filtered_water_wide)[4] <- 'pH'
colnames(filtered_water_wide)[5] <- 'temperature'
|
makeVizData <- function(dataSets){
names(dataSets) <- toupper(names(dataSets))
seuratObject <- dataSets[["SEURAT"]]
monocleObject <- dataSets[["MONOCLE"]]
cellBarcodes <- getCellBarcodes(seurat = seuratObject, monocle = monocleObject)
featureList <- getFeatures(seurat = seuratObject, monocle = monocleObject)
sceObject <- initializeSCE(cellBCs = cellBarcodes, features = featureList)
sceObject <- addSeuratData(sceObject, seuratObject)
sceObject <- addMonocleData(sceObject, monocleObject)
return(sceObject)
}
getCellBarcodes <- function(seurat, monocle){
barcodes <- union(seurat@cell.names, colnames(monocle))
}
getFeatures <- function(seurat, monocle){
features <- union(rownames(seurat@scale.data), rownames(monocle))
}
initializeSCE <- function(cellBCs, features){
cellDF <- data.frame(row.names = cellBCs)
featureDF <- data.frame(row.names = features)
sce <- SingleCellExperiment::SingleCellExperiment(colData = cellDF, rowData = featureDF)
return(sce)
}
addExprData <- function(sce, exprMat, datName){
exprMat <- Matrix(exprMat, sparse = TRUE)
missingGenes <- rownames(sce)[ ! rownames(sce) %in% rownames(exprMat) ]
missingCells <- colnames(sce)[ ! colnames(sce) %in% colnames(exprMat) ]
cellMat <- Matrix( nrow = nrow(exprMat), ncol = length(missingCells))
geneMat <- Matrix( nrow = length(missingGenes), ncol = ncol(exprMat))
exprMat <- cbind(exprMat, cellMat)
exprMat <- rbind(exprMat, geneMat)
sce@assays$data[[datName]] <- exprMat
return(sce)
}
makeDataMatrix <- function(data2add, cellBCs, features){
data2add <- Matrix::Matrix(data2add, sparse = TRUE)
missingBCs <- cellBCs[! cellBCs %in% colnames(data2add)]
missingFeatures <- features[! features %in% rownames(data2add)]
if(length(missingBCs) > 0){
bcMat <- Matrix::Matrix( nrow = nrow(data2add), ncol = length(missingBCs))
data2add <- cbind(data2add, bcMat)
}
if(length(missingFeatures) > 0){
featureMat <- Matrix::Matrix( nrow = length(missingFeatures), ncol = ncol(data2add))
data2add <- rbind(data2add, featureMat)
}
return(data2add)
}
addSeuratData <- function(sce, seurat){
barcodeList <- colnames(sce)
featureList <- rownames(sce)
colnames(seurat@meta.data) <- paste0(colnames(seurat@meta.data), ".Seurat")
sce@assays$data[["RawData.Seurat"]] <- makeDataMatrix(dat = seurat@raw.data, cellBCs = barcodeList, features = featureList)
sce@assays$data[["ScaleData.Seurat"]] <- makeDataMatrix(dat = seurat@scale.data, cellBCs = barcodeList, features = featureList)
sce@colData[rownames(seurat@meta.data), colnames(seurat@meta.data)] <- seurat@meta.data[rownames(seurat@meta.data), colnames(seurat@meta.data)]
cellEmbeddings <- lapply(names(seurat@dr), function(id){seurat@dr[[id]]@cell.embeddings})
names(cellEmbeddings) <- paste0(names(seurat@dr), ".Seurat")
sce@reducedDims@listData <- cellEmbeddings
return(sce)
}
addMonocleData <- function(sce, monocleObj){
barcodeList <- colnames(sce)
featureList <- rownames(sce)
sce@reducedDims@listData[["Pseudotime.Monocle"]] <- Matrix::Matrix(data = t(monocleObj@reducedDimS), sparse = TRUE)
sce@assays$data[["Counts.Monocle"]] <- makeDataMatrix(dat = monocleObj@assayData$exprs, cellBCs = barcodeList, features = featureList)
names(monocleObj@phenoData@data) <- paste0(names(monocleObj@phenoData@data), ".Monocle")
sce@colData[rownames(monocleObj@phenoData@data), colnames(monocleObj@phenoData@data)] <- monocleObj@phenoData@data[rownames(monocleObj@phenoData@data), colnames(monocleObj@phenoData@data)]
return(sce)
}
| /R/dataTools.R | no_license | morris-lab/CellTagViz | R | false | false | 3,752 | r |
makeVizData <- function(dataSets){
names(dataSets) <- toupper(names(dataSets))
seuratObject <- dataSets[["SEURAT"]]
monocleObject <- dataSets[["MONOCLE"]]
cellBarcodes <- getCellBarcodes(seurat = seuratObject, monocle = monocleObject)
featureList <- getFeatures(seurat = seuratObject, monocle = monocleObject)
sceObject <- initializeSCE(cellBCs = cellBarcodes, features = featureList)
sceObject <- addSeuratData(sceObject, seuratObject)
sceObject <- addMonocleData(sceObject, monocleObject)
return(sceObject)
}
getCellBarcodes <- function(seurat, monocle){
barcodes <- union(seurat@cell.names, colnames(monocle))
}
getFeatures <- function(seurat, monocle){
features <- union(rownames(seurat@scale.data), rownames(monocle))
}
initializeSCE <- function(cellBCs, features){
cellDF <- data.frame(row.names = cellBCs)
featureDF <- data.frame(row.names = features)
sce <- SingleCellExperiment::SingleCellExperiment(colData = cellDF, rowData = featureDF)
return(sce)
}
addExprData <- function(sce, exprMat, datName){
exprMat <- Matrix(exprMat, sparse = TRUE)
missingGenes <- rownames(sce)[ ! rownames(sce) %in% rownames(exprMat) ]
missingCells <- colnames(sce)[ ! colnames(sce) %in% colnames(exprMat) ]
cellMat <- Matrix( nrow = nrow(exprMat), ncol = length(missingCells))
geneMat <- Matrix( nrow = length(missingGenes), ncol = ncol(exprMat))
exprMat <- cbind(exprMat, cellMat)
exprMat <- rbind(exprMat, geneMat)
sce@assays$data[[datName]] <- exprMat
return(sce)
}
makeDataMatrix <- function(data2add, cellBCs, features){
data2add <- Matrix::Matrix(data2add, sparse = TRUE)
missingBCs <- cellBCs[! cellBCs %in% colnames(data2add)]
missingFeatures <- features[! features %in% rownames(data2add)]
if(length(missingBCs) > 0){
bcMat <- Matrix::Matrix( nrow = nrow(data2add), ncol = length(missingBCs))
data2add <- cbind(data2add, bcMat)
}
if(length(missingFeatures) > 0){
featureMat <- Matrix::Matrix( nrow = length(missingFeatures), ncol = ncol(data2add))
data2add <- rbind(data2add, featureMat)
}
return(data2add)
}
addSeuratData <- function(sce, seurat){
barcodeList <- colnames(sce)
featureList <- rownames(sce)
colnames(seurat@meta.data) <- paste0(colnames(seurat@meta.data), ".Seurat")
sce@assays$data[["RawData.Seurat"]] <- makeDataMatrix(dat = seurat@raw.data, cellBCs = barcodeList, features = featureList)
sce@assays$data[["ScaleData.Seurat"]] <- makeDataMatrix(dat = seurat@scale.data, cellBCs = barcodeList, features = featureList)
sce@colData[rownames(seurat@meta.data), colnames(seurat@meta.data)] <- seurat@meta.data[rownames(seurat@meta.data), colnames(seurat@meta.data)]
cellEmbeddings <- lapply(names(seurat@dr), function(id){seurat@dr[[id]]@cell.embeddings})
names(cellEmbeddings) <- paste0(names(seurat@dr), ".Seurat")
sce@reducedDims@listData <- cellEmbeddings
return(sce)
}
addMonocleData <- function(sce, monocleObj){
barcodeList <- colnames(sce)
featureList <- rownames(sce)
sce@reducedDims@listData[["Pseudotime.Monocle"]] <- Matrix::Matrix(data = t(monocleObj@reducedDimS), sparse = TRUE)
sce@assays$data[["Counts.Monocle"]] <- makeDataMatrix(dat = monocleObj@assayData$exprs, cellBCs = barcodeList, features = featureList)
names(monocleObj@phenoData@data) <- paste0(names(monocleObj@phenoData@data), ".Monocle")
sce@colData[rownames(monocleObj@phenoData@data), colnames(monocleObj@phenoData@data)] <- monocleObj@phenoData@data[rownames(monocleObj@phenoData@data), colnames(monocleObj@phenoData@data)]
return(sce)
}
|
c DCNF-Autarky [version 0.0.1].
c Copyright (c) 2018-2019 Swansea University.
c
c Input Clause Count: 9161
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 8153
c
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 8153
c
c Input Parameter (command line, file):
c input filename QBFLIB/QBF_1.0/NuSMV_diam_qdimacs/Dme1/dme1_7.qdimacs
c output filename /tmp/dcnfAutarky.dimacs
c autarky level 1
c conformity level 0
c encoding type 2
c no.of var 2933
c no.of clauses 9161
c no.of taut cls 0
c
c Output Parameters:
c remaining no.of clauses 8153
c
c QBFLIB/QBF_1.0/NuSMV_diam_qdimacs/Dme1/dme1_7.qdimacs 2933 9161 E1 [180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378] 0 1253 805 8153 RED
| /code/dcnf-ankit-optimized/Results/QBFLIB-2018/E1/Experiments/QBF_1.0/NuSMV_diam_qdimacs/Dme1/dme1_7/dme1_7.R | no_license | arey0pushpa/dcnf-autarky | R | false | false | 4,494 | r | c DCNF-Autarky [version 0.0.1].
c Copyright (c) 2018-2019 Swansea University.
c
c Input Clause Count: 9161
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 8153
c
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 8153
c
c Input Parameter (command line, file):
c input filename QBFLIB/QBF_1.0/NuSMV_diam_qdimacs/Dme1/dme1_7.qdimacs
c output filename /tmp/dcnfAutarky.dimacs
c autarky level 1
c conformity level 0
c encoding type 2
c no.of var 2933
c no.of clauses 9161
c no.of taut cls 0
c
c Output Parameters:
c remaining no.of clauses 8153
c
c QBFLIB/QBF_1.0/NuSMV_diam_qdimacs/Dme1/dme1_7.qdimacs 2933 9161 E1 [180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009 1010 1011 1012 1013 1014 1015 1016 1017 1018 1019 1020 1075 1076 1077 1078 1079 1080 1081 1082 1083 1084 1085 1086 1087 1088 1089 1090 1091 1092 1093 1094 1095 1096 1097 1098 1099 1100 1101 1102 1103 1104 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128 1129 1130 1131 1132 1133 1134 1135 1136 1137 1138 1139 1140 1141 1142 1143 1144 1145 1146 1147 1148 1149 1150 1151 1152 1153 1154 1155 1156 1157 1158 1159 1160 1161 1162 1163 1164 1165 1166 1167 1168 1169 1170 1171 1172 1173 1174 1175 1176 1177 1178 1179 1180 1181 1182 1183 1184 1185 1186 1187 1188 1189 1190 1191 1192 1193 1194 1195 1196 1197 1198 1199 1254 1255 1256 1257 1258 1259 1260 1261 1262 1263 1264 1265 1266 1267 1268 1269 1270 1271 1272 1273 1274 1275 1276 1277 1278 1279 1280 1281 1282 1283 1284 1285 1286 1287 1288 1289 1290 1291 1292 1293 1294 1295 1296 1297 1298 1299 1300 1301 1302 1303 1304 1305 1306 1307 1308 1309 1310 1311 1312 1313 1314 1315 1316 1317 1318 1319 1320 1321 1322 1323 1324 1325 1326 1327 1328 1329 1330 1331 1332 1333 1334 1335 1336 1337 1338 1339 1340 1341 1342 1343 1344 1345 1346 1347 1348 1349 1350 1351 1352 1353 1354 1355 1356 1357 1358 1359 1360 1361 1362 1363 1364 1365 1366 1367 1368 1369 1370 1371 1372 1373 1374 1375 1376 1377 1378] 0 1253 805 8153 RED
|
library(data.table)
library(lubridate)
library(RPostgres)
username <- "gsb25" # WRDS USERNAME
from <- as.Date("2016-1-1")
conn <- dbConnect(Postgres(),
host = "wrds-pgdata.wharton.upenn.edu",
port = 9737,
dbname = "wrds",
user = username,
sslmode = "require")
from.crsp <- paste0("'", from, "'")
to.crsp <- paste0("'", to.crsp, "'")
# Extract CRSP data
SQL.crsp <- paste("SELECT a.permno, a.permco, a.date, b.shrcd AS share_code,
b.exchcd AS exchange_code, a.ret, a.retx, a.prc AS price,
a.vol AS volume, a.shrout AS shares_out,
CASE WHEN (a.hsiccd BETWEEN 6000 and 6999) THEN 1 ELSE 0 END AS is_financial
FROM crsp.dsf AS a
LEFT join crsp.msenames AS b
ON a.permno = b.permno
AND b.namedt <= a.date
AND a.date <= b.nameendt
WHERE a.date BETWEEN", from.crsp,
"AND", to.crsp,
"AND b.exchcd BETWEEN 1 AND 3",
sep = " ")
res <- dbSendQuery(conn, SQL.crsp)
crsp <- data.table(dbFetch(res))
dbClearResult(res)
crsp <- na.omit(crsp, cols = c("ret", "retx"))
# Extract delisted return
SQL.delret <- "SELECT permno, dlret AS delist_ret, dlstdt AS date
FROM crsp.dsedelist"
res <- dbSendQuery(conn, SQL.delret)
delret <- data.table(dbFetch(res))
dbClearResult(res)
# Merge crsp and delisted returns
crsp <- merge(crsp, delret, on = c("permno", "date"), all.x = TRUE)
set(crsp, which(is.na(crsp[["delist_ret"]])), "delist_ret", 0)
# Calculate returns from delisting returns
crsp[, adj_ret := (1 + ret) * (1 + delist_ret) - 1]
crsp[, adj_retx := (1 + retx) * (1 + delist_ret) - 1]
crsp[, delist_ret := NULL]
crsp <- crsp[order(date, permco)]
rets <- na.omit(crsp[, c("date", "permno", "adj_ret", "adj_retx", "rebalance_date")], cols = c("adj_ret", "adj_retx"))
# Split data by rebalance date for cross-sectional regressions. Note: Could wait to do this in cross_sectional_regression.R
# Get start and end dates for index time series
from.ind <- paste0("'", from %m-% months(preceding + 1), "'")
to.ind <- to.crsp
# Obtain returns for CRSP value-weighted index
SQL.ind <- paste("SELECT date, vwretd AS ind_ret, LAG(vwretd, 1) OVER(ORDER BY date) as lag_ind_ret
FROM crsp.msi
WHERE date BETWEEN", from.ind,
"AND", to.ind,
sep = " ")
res <- dbSendQuery(conn, SQL.ind)
ind <- as.data.table(dbFetch(res))[-1]
dbClearResult(res)
# Change dates to end of month and add lagged index variable
ind[, date := end_of_month(date)]
# Start and end dates for risk-free rate time series
from.rf <- from.crsp
to.rf <- to.crsp
SQL.rf <- paste("SELECT caldt AS date, t90ret AS risk_free
FROM crsp.mcti
WHERE caldt BETWEEN", from.rf,
"AND", to.rf,
sep = " ")
res <- dbSendQuery(conn, SQL.rf)
rf <- as.data.table(dbFetch(res))
dbClearResult(res)
# Change date to end of month
rf[, date := end_of_month(date)]
# Merge index data and risk-free rates
market.dt <- merge(ind, rf, by = "date", all.x = TRUE)
# Merge individual stock returns with market data
x$market.dt <- merge(rets, market.dt, by = "date", all.x = TRUE)
class(x) <- "eapr"
| /inst/scripts/daily_download.R | no_license | GregoryBrownson/EAPR | R | false | false | 3,411 | r | library(data.table)
library(lubridate)
library(RPostgres)
username <- "gsb25" # WRDS USERNAME
from <- as.Date("2016-1-1")
conn <- dbConnect(Postgres(),
host = "wrds-pgdata.wharton.upenn.edu",
port = 9737,
dbname = "wrds",
user = username,
sslmode = "require")
from.crsp <- paste0("'", from, "'")
to.crsp <- paste0("'", to.crsp, "'")
# Extract CRSP data
SQL.crsp <- paste("SELECT a.permno, a.permco, a.date, b.shrcd AS share_code,
b.exchcd AS exchange_code, a.ret, a.retx, a.prc AS price,
a.vol AS volume, a.shrout AS shares_out,
CASE WHEN (a.hsiccd BETWEEN 6000 and 6999) THEN 1 ELSE 0 END AS is_financial
FROM crsp.dsf AS a
LEFT join crsp.msenames AS b
ON a.permno = b.permno
AND b.namedt <= a.date
AND a.date <= b.nameendt
WHERE a.date BETWEEN", from.crsp,
"AND", to.crsp,
"AND b.exchcd BETWEEN 1 AND 3",
sep = " ")
res <- dbSendQuery(conn, SQL.crsp)
crsp <- data.table(dbFetch(res))
dbClearResult(res)
crsp <- na.omit(crsp, cols = c("ret", "retx"))
# Extract delisted return
SQL.delret <- "SELECT permno, dlret AS delist_ret, dlstdt AS date
FROM crsp.dsedelist"
res <- dbSendQuery(conn, SQL.delret)
delret <- data.table(dbFetch(res))
dbClearResult(res)
# Merge crsp and delisted returns
crsp <- merge(crsp, delret, on = c("permno", "date"), all.x = TRUE)
set(crsp, which(is.na(crsp[["delist_ret"]])), "delist_ret", 0)
# Calculate returns from delisting returns
crsp[, adj_ret := (1 + ret) * (1 + delist_ret) - 1]
crsp[, adj_retx := (1 + retx) * (1 + delist_ret) - 1]
crsp[, delist_ret := NULL]
crsp <- crsp[order(date, permco)]
rets <- na.omit(crsp[, c("date", "permno", "adj_ret", "adj_retx", "rebalance_date")], cols = c("adj_ret", "adj_retx"))
# Split data by rebalance date for cross-sectional regressions. Note: Could wait to do this in cross_sectional_regression.R
# Get start and end dates for index time series
from.ind <- paste0("'", from %m-% months(preceding + 1), "'")
to.ind <- to.crsp
# Obtain returns for CRSP value-weighted index
SQL.ind <- paste("SELECT date, vwretd AS ind_ret, LAG(vwretd, 1) OVER(ORDER BY date) as lag_ind_ret
FROM crsp.msi
WHERE date BETWEEN", from.ind,
"AND", to.ind,
sep = " ")
res <- dbSendQuery(conn, SQL.ind)
ind <- as.data.table(dbFetch(res))[-1]
dbClearResult(res)
# Change dates to end of month and add lagged index variable
ind[, date := end_of_month(date)]
# Start and end dates for risk-free rate time series
from.rf <- from.crsp
to.rf <- to.crsp
SQL.rf <- paste("SELECT caldt AS date, t90ret AS risk_free
FROM crsp.mcti
WHERE caldt BETWEEN", from.rf,
"AND", to.rf,
sep = " ")
res <- dbSendQuery(conn, SQL.rf)
rf <- as.data.table(dbFetch(res))
dbClearResult(res)
# Change date to end of month
rf[, date := end_of_month(date)]
# Merge index data and risk-free rates
market.dt <- merge(ind, rf, by = "date", all.x = TRUE)
# Merge individual stock returns with market data
x$market.dt <- merge(rets, market.dt, by = "date", all.x = TRUE)
class(x) <- "eapr"
|
library(dplyr)
library(tidyr)
library(ggplot2)
library(ggpubr)
####Primera aproximación a los datos#####
Canes <- read.csv("~/Documents/Osmias/Canes.csv", sep=";", quote="\"", stringsAsFactors=FALSE)
Canes<-Canes[c(1:1716),]
Datos_distancia_intertegular <- read.csv("~/Documents/Osmias/Datos_distancia_intertegular.csv", sep=";")
Datos_species_distancia_2 <- read.csv("~/Documents/Osmias/Datos_species_distancia_2.csv", sep=";", comment.char="#")
Datos_species_distancia <- read.csv("~/Documents/Osmias/Datos_species_distancia.csv", sep=";")
Sexo_specie_osmias <- read.csv("~/Documents/Osmias/Sexo_specie_osmias .csv", sep=";")
Sexo_specie_osmias_reubicacion <- read.csv("~/Documents/Osmias/Sexo_specie_osmias_reubicacion.csv", sep=";")
Sexo_specie_prueba_filtrado<-read.csv("~/Documents/Osmias/Sexo_specie_prueba_filtrado.csv", sep=";")
colnames(Sexo_specie_osmias_reubicacion)[1]<-"ID"
str(Canes)
extract_forCanes<-Canes[,c(1,3,4)]
#Primero monto un bucle para poner cada una de las clasificaciones de NS para poder hacer
#un merge con los datos de temperatura y de paisaje de los bombus.
code<-NULL
for (n in 1:nrow(Canes)) {
if (Canes$Location[n]== "vivero" ) {
code[n]<-3} else if (Canes$Location[n]== "UPO" ) {
code[n]<-8}else if (Canes$Location[n]== "Rocina" ) {
code[n]<-9}else if (Canes$Location[n]== "Martinazo" ) {
code[n]<-10}else if (Canes$Location[n]== "hinojos" ) {
code[n]<-7}else if (Canes$Location[n]== "guadiamar" ) {
code[n]<-11}else if (Canes$Location[n]== "dehesa nueva" ) {
code[n]<-6}else if (Canes$Location[n]== "Dehesa de abajo" ) {
code[n]<-2}else if (Canes$Location[n]== "coria del rio" ) {
code[n]<-4}else if (Canes$Location[n]== "choza huerta tejada" ) {
code[n]<-1}else if (Canes$Location[n]== "Charena" ) {
code[n]<-5}}
CODE_ORIENTATION<-NULL
for (n in 1:nrow(Canes)) {
if (Canes$Orientation[n]== "norte" ) {
CODE_ORIENTATION[n]<-"N"} else if (Canes$Orientation[n]== "sur" ) {
CODE_ORIENTATION[n]<-"S"}}
Code<-paste(code,CODE_ORIENTATION, sep="")
Canes<-cbind(Canes,Code)
View(Canes)
combined<-anti_join(Sexo_specie_osmias_reubicacion,Sexo_specie_osmias, by="Specie")
specie_sexo_unico<-unique(Sexo_specie_prueba_filtrado)
specie_sexo_unico_solo_numeros<-unique(Sexo_specie_prueba_filtrado$ID)
str(specie_sexo_unico)
str(Datos_distancia_intertegular)
Datos_species_distancia<-Datos_species_distancia[,-3]
colnames(Datos_species_distancia)[1]<-"ID"
Datos_specie<-merge(Datos_species_distancia, Datos_species_distancia_2, by="ID")
colnames(Datos_specie)[2]<-"IT"
colnames(Datos_specie)[5]<-"Specie"
Datos_specie<-Datos_specie[,c(-3,-4,-7)]
Datos_distancia_intertegular<-Datos_distancia_intertegular[,c(1:2)]
Datos_specie$IT<-as.numeric(Datos_specie$IT)
Sexo_specie_it<-full_join(specie_sexo_unico, Datos_distancia_intertegular, by=NULL)
Datos_specie<-Datos_specie[,c(1,4,3,2)]
Sexo_specie_it[,3]
#comprobando por el número de caña y por el tipo completo los datos que faltan de la especie en 8
for (n in 1:nrow(Sexo_specie_it)) {
if ( Sexo_specie_it$ID[n]== "75.6" || Sexo_specie_it$ID[n]== "68.4"
|| Sexo_specie_it$ID[n]== "124.2" || Sexo_specie_it$ID[n]== "114.4" ) {
Sexo_specie_it$Specie[n]<-"Osmia bicornis"} else if
( Sexo_specie_it$ID[n]== "58.14" || Sexo_specie_it$ID[n]== "3a.6"
|| Sexo_specie_it$ID[n]== "49.8" || Sexo_specie_it$ID[n]== "23.2"
|| Sexo_specie_it$ID[n]== "23.1") {
Sexo_specie_it$Specie[n]<-"Osmia latreillei"} else if
( Sexo_specie_it$ID[n]== "129.13") {
Sexo_specie_it$Specie[n]<-"Osmia brevicornis"}else if
( Sexo_specie_it$ID[n]== "5a.2") {
Sexo_specie_it$Specie[n]<-"Osmia submicans"}}
#Añado esta columna para identificar donde están los duplicados
b <- Datos_specie[,1]
b <- b==1
b <- cumsum(b)
Datos_specie<-cbind(Datos_specie,b)
b <- Sexo_specie_it[,1]
b <- b==3
b <- cumsum(b)
b<-replace(b,b=="0","1")
Sexo_specie_it<-cbind(Sexo_specie_it,b)
#Reordeno el dataframe por sus columnas para que puedan encajarse con un rbind
str(b)
Datos_individuales<-rbind(Datos_specie, Sexo_specie_it)
str(Datos_individuales)
#Curro se llevo dos especies a identiicar que no están pasadas. Importante localizarlas.
str(specie_sexo_unico)
str(Datos_distancia_intertegular)
str(Datos_specie)
str(Sexo_specie_it)
#junto los dos dataframes
str(Sexo_specie_it)
Sexo_specie_final<-unique(Datos_individuales)
View
View(Sexo_specie_final)
str(Sexo_specie_final_ID)
#Encontrar los duplicados por si existen posibles errores al pasar los datos
#Esto es solo una prueba!
number_of_duplicates<-Sexo_specie_final[duplicated(Sexo_specie_final$ID),]
View(Sexo_specie_final)
View(Sexo_specie_it)
View(Datos_specie)
table(number_of_duplicates$ID)
exclusivos_datos_specie<-anti_join(Datos_specie, Sexo_specie_it, by="ID")
exclusivos_totales_prueba<-rbind(exclusivos_datos_specie, Sexo_specie_it)
str(exclusivos_totales_prueba)
Sexo_specie_final_prueba<-unique(exclusivos_totales_prueba)
str(Sexo_specie_final_prueba)
number_of_duplicates_prueba<-Sexo_specie_final_prueba[duplicated(Sexo_specie_final_prueba$ID),]
#De los datos repetidos en el caso de que sean erroneos el sexo, se revisan, en el caso de que sean medidas repetidas
#realizo la media. En este caso sería en los números 11.5, 11.4, 11.3,11.2,11.1.
Cribado_Osmias_sexo_specie<-subset(Sexo_specie_final_prueba, ID!=128.2 & ID!=94.1 & ID!=86.3 & ID!=75.12 &
ID!=61.7& ID!=58.1& ID!=152.3& ID!=129.18& ID!=112.10& ID!=11.4& ID!=11.3& ID!=103.1)
#Hago las medias a los que sí se puede
#CUIDADO, SI ESTÁ GGPLOT2 CARGADO TIENE INCOMPATIBILIDADES CON DPLYR Y SOLO HACE LA MEDIA DE UNA FILA.
#DESCARGAR PLYR.
Sexo_specie_final_prueba_medias<-Cribado_Osmias_sexo_specie%>%
filter(ID=="11.5" | ID =="11.2" | ID=="11.1") %>%
group_by(ID, Sex, Specie)%>%
summarise (mean_it= mean (IT))
Sexo_specie_final_prueba_medias<-as.data.frame(Sexo_specie_final_prueba_medias)
#Hago por caña tanto media IT de macho como IT de hembra, para unirlo al dataframe anterior
Sexo_specie_final_prueba_completo_a_unir_a_medias<-Cribado_Osmias_sexo_specie%>%
filter(ID!="11.5" | ID !="11.2" | ID!="11.1")
Sexo_specie_final_prueba_completo_a_unir_a_medias<-as.data.frame(Sexo_specie_final_prueba_completo_a_unir_a_medias)
Numbers_per_specie <-Sexo_specie_final_prueba_completo_a_unir_a_medias%>%
group_by(Specie) %>%
summarise(no_rows = length(Specie))
Number_per_specie<-as.data.frame(Numbers_per_specie)
Sexo_specie_final_prueba_completo_a_unir_a_medias<-Sexo_specie_final_prueba_completo_a_unir_a_medias[,c(-5)]
colnames(Sexo_specie_final_prueba_medias)[4]<-"IT"
Sexo_specie_final<-rbind(Sexo_specie_final_prueba_medias,Sexo_specie_final_prueba_completo_a_unir_a_medias)
#En el caso de que los duplicados coincidan en todo menos en su número, hago medias de su IT
#debido a que son medidas repetidas.
#En el caso a que no coincidan he de revisarlas.
#Separo por caña la medida y la especie
Sexo_specie_it_total_cane<-Sexo_specie_final %>% separate(ID,
c("number", "Individual"))
Sexo_specie_it_total_cane$number<-as.numeric(Sexo_specie_it_total_cane$number)
write.csv(Sexo_specie_it_total_cane, "Sexo_specie_it_total_cane.csv")
#Medias por caña y por especie
Media_sexo_specie_caña<-Sexo_specie_it_total_cane %>% group_by(number,Sex,Specie) %>% filter(!is.na(IT)) %>% summarise (mean_it = mean(IT))
Media_sexo_specie_caña_total<-Sexo_specie_it_total_cane %>% group_by(number,Specie) %>% filter(!is.na(IT)) %>% summarise (mean_it = mean(IT))
View(Media_sexo_specie_caña)
#Solo voy a trabajar con el de Osmias bicornis de momento
Osmia_bicornis<-subset(Media_sexo_specie_caña, Specie=="Osmia bicornis")
Osmia_bicornis_media_total<-subset(Media_sexo_specie_caña_total, Specie=="Osmia bicornis")
Osmia_bicornis_splitted<-split(Osmia_bicornis, Osmia_bicornis$Sex, drop=TRUE)
Osmia_bicornis_splitted<-as.data.frame(Osmia_bicornis_splitted)
Osmia_bicornis_female<-Osmia_bicornis_splitted[[1]]
Osmia_bicornis_male<-Osmia_bicornis_splitted[[2]]
Osmia_bicornis_male<-as.data.frame(Osmia_bicornis_male)
Osmia_bicornis_female<-as.data.frame(Osmia_bicornis_female)
colnames(Osmia_bicornis_male)[4]<-"mean_male"
colnames(Osmia_bicornis_female)[4]<-"mean_female"
Mean_male<-Osmia_bicornis_male[,c(1,4)]
Mean_female<-Osmia_bicornis_female[,c(1,4)]
Mean_total<-Osmia_bicornis_media_total[,c(1,3)]
merge1<-full_join(Mean_male,Mean_female, by="number")
Dataframe_mean_male_female_total_osmia_bicornis<-full_join(merge1,Mean_total, by="number")
colnames(Dataframe_mean_male_female_total_osmia_bicornis)[4]<-"mean_it_bicornis"
Dataframe_mean_male_female_total_osmia_bicornis<-full_join(merge1,Mean_total, by="number")
Dataframe_total_osmia_bicornis<-merge(Dataframe_mean_male_female_total_osmia_bicornis,Canes, by="number")
head(Dataframe_total_osmia_bicornis)
View(Dataframe_total_osmia_bicornis)
#Lo mismo para el resto de especies
Osmia_latreillei<-subset(Media_sexo_specie_caña, Specie=="Osmia latreillei")
Osmia_latreillei_media_total<-subset(Media_sexo_specie_caña_total, Specie=="Osmia latreillei")
Osmia_latreillei_splitted<-split(Osmia_latreillei, Osmia_latreillei$Sex, drop=TRUE)
Osmia_latreillei_splitted<-as.data.frame(Osmia_latreillei_splitted)
Osmia_latreillei_female<-Osmia_latreillei_splitted[[1]]
Osmia_latreillei_male<-Osmia_latreillei_splitted[[2]]
Osmia_latreillei_male<-as.data.frame(Osmia_latreillei_male)
Osmia_latreillei_female<-as.data.frame(Osmia_latreillei_female)
colnames(Osmia_latreillei_male)[4]<-"mean_male_latreillei"
colnames(Osmia_latreillei_female)[4]<-"mean_female_latreillei"
Mean_male_latreillei<-Osmia_latreillei_male[,c(1,4)]
Mean_female_latreillei<-Osmia_latreillei_female[,c(1,4)]
Mean_total_latreillei<-Osmia_latreillei_media_total[,c(1,3)]
merge_latreillei<-full_join(Mean_male_latreillei,Mean_female_latreillei, by="number")
Dataframe_mean_male_female_total_osmia_latreillei<-full_join(merge_latreillei,Mean_total_latreillei, by="number")
colnames(Dataframe_mean_male_female_total_osmia_latreillei)[4]<-"mean_it_latreillei"
Dataframe_total_osmia_latreillei<-merge(Dataframe_mean_male_female_total_osmia_latreillei,Canes, by="number")
Dataframe_total_osmia_bicornis_latreillei<-left_join(Dataframe_total_osmia_bicornis,Dataframe_total_osmia_latreillei, by="number")
head(Dataframe_total_osmia_bicornis_latreillei)
str(Dataframe_total_osmia_bicornis_latreillei)
View(Dataframe_total_osmia_bicornis)
#!!!!!!!No sé como hacer para que estén correlacionados las medidas de los machos y hembras para ver si se corresponden con la caña.
Dataframe_total_osmia_bicornis$vestib<-as.numeric(Dataframe_total_osmia_bicornis$vestib)
Dataframe_total_osmia_bicornis$vestib3<-as.numeric(Dataframe_total_osmia_bicornis$vestib3)
#Voy a tener que transformar los NAs a cero en los vestibulos y el resto porque no los capta.
#Medir el volumen que ocuparon, 25cm de caña vacías versus el tamaño de diámetro y total ocupado (vestíbulos, chambers, tapón..)
Osmia_summarise_NA_cero<-Dataframe_total_osmia_bicornis %>% mutate(vestib =if_else(is.na(vestib),0,vestib),
vestib2 =if_else(is.na(vestib2),0,vestib2),
vestib3 =if_else(is.na(vestib3),0,vestib3),
vestib4 =if_else(is.na(vestib4),0,vestib4),
cap.s =if_else(is.na(cap.s),0,cap.s),
final.space =if_else(is.na(final.space),0,final.space))
View(Osmia_summarise_NA_cero)
View(Osmia_summarise_vestib)
Osmia_summarise_vestib<-Osmia_summarise_NA_cero %>%
mutate (internal_space=vestib+vestib2+vestib3+vestib4+final.space, buffer_space= cap.s + internal_space )
Osmia_summarise_size<-Osmia_summarise_vestib %>% group_by(number,a.b) %>% filter(!is.na(size)) %>%
summarise (number_cells = length(size), mean_cells=mean(size))
str(Dataframe_total_osmia_bicornis)
str(Dataframe_total_osmia_bicornis)
Osmia_summarise_size<-as.data.frame(Osmia_summarise_size)
Osmia_summarise_size$number<-as.numeric(Osmia_summarise_size$number)
Osmia_bicornis_semifinal<-full_join(Osmia_summarise_vestib, Osmia_summarise_size, by=c("number","a.b"))
str(Osmia_bicornis_semifinal)
Osmia_bicornis_semifinal2_aborts<-Osmia_bicornis_semifinal %>% group_by(number,a.b) %>% filter(!is.na(Aborts)) %>%
summarise (number_of_aborts = sum(Aborts))
Osmia_bicornis_semifinal2_aborts<-as.data.frame(Osmia_bicornis_semifinal2_aborts)
Osmia_bicornis_semifinal<-Osmia_bicornis_semifinal[,c(-17,-18,-20,-21,-24,-26,-27,-28,-29,-30,-31,-32)]
head(Osmia_bicornis_semifinal)
Osmia_bicornis_semifinal2_aborts_to_join<-Osmia_bicornis_semifinal %>% group_by(number,a.b)
head(Osmia_bicornis_semifinal2_aborts_to_join)
View(Osmia_bicornis_semifinal2_aborts_to_join)
Osmia_bicornis_semifinal2_aborts_to_join<-unique(Osmia_bicornis_semifinal2_aborts_to_join)
Code<-paste(code,CODE_ORIENTATION, sep="")
Osmia_bicornis_semifinal2_aborts_to_join<-as.data.frame(Osmia_bicornis_semifinal2_aborts_to_join)
dataframe_final_bombus2 <- read.csv("~/Documents/bombus/dataframe_final_bombus2.csv")
data_from_bombus_to_join_osmia<-dataframe_final_bombus2[,c(3,13,75,76,77)]
write.csv(data_from_bombus_to_join_osmia, "data_from_bombus_to_join_osmia.csv")
abundance_percent_1km<-read.csv("~/Documents/Osmias/abundancia_percent_osmia_dataframe.csv")
abundance_percent_1km<-abundance_percent_1km[,c(5,6,7)]
colnames(abundance_percent_1km)[3]<-"Code"
colnames(data_from_bombus_to_join_osmia)[1]<-"Code"
data_from_bombus_to_join_osmia<-merge(data_from_bombus_to_join_osmia, abundance_percent_1km, by="Code")
data_from_bombus_to_join_osmia<-unique(data_from_bombus_to_join_osmia)
dataframe_osmia_bicornis_semifinal<-merge(Osmia_bicornis_semifinal2_aborts_to_join, data_from_bombus_to_join_osmia, by="Code")
dataframe_osmia_bicornis_semifinal_aborts<-merge(dataframe_osmia_bicornis_semifinal, Osmia_bicornis_semifinal2_aborts, by= c("number","a.b"))
dataframe_osmia_bicornis_semifinal_summary<-dataframe_osmia_bicornis_semifinal %>% group_by(Code)
write.csv(dataframe_osmia_bicornis_semifinal,"dataframe_osmia_bicornis_semifinal.csv")
str(dataframe_osmia_bicornis_semifinal)
#Algunos ploteos sin hacer summary por code.
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "abundancia_relativa", y = "number_cells",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "abundancia_realtiva", ylab = "number_cells")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "abundancia_relativa", y = "mean_cells",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_realtiva", ylab = "number_cells")
print(p)
p<-ggscatter(Osmia_bicornis_semifinal, x = "abundancia_relativa", y = "number_cells",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "abundancia_realtiva", ylab = "number_cells")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "riqueza_relativa", y = "number_cells",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "riqueza relativa", ylab = "Intmales_mean")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "temp_mean", y = "buffer_space",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_mean", ylab = "Intmales_mean")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "abundancia_relativa", y = "mean_it",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_relativa", ylab = "mean_it_total")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "temp_mean", y = "mean_it",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_mean", ylab = "Int_mean")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "temp_range", y = "buffer_space",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_range", ylab = "buffer_space")
print(p)
View(dataframe_osmia_bicornis_semifinal_aborts)
p<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "abundancia_relativa", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "abundancia_relativa", ylab = "number_of_aborts")
print(p)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "buffer_space", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "buffer_space", ylab = "number_cells")
print(z)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "buffer_space", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "buffer_space", ylab = "number_cells")
print(z)
str(dataframe_osmia_bicornis_semifinal_aborts)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "buffer_space", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "buffer_space", ylab = "number_cells")
print(z)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "buffer_space", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "buffer_space", ylab = "number_cells")
print(z)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "cap.s", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "cap.s", ylab = "number_of_aborts")
print(z)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "cap.s", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "cap.s", ylab = "number_cells")
print(z)
#####Con todo el dataframe completo de osmia bicornis para no tener solo medias###
Osmia_bicornis_todo<-subset(Sexo_specie_it_total_cane, Specie=="Osmia bicornis")
Osmia_bicornis_splitted_todo<-split(Osmia_bicornis_todo, Osmia_bicornis_todo$Sex, drop=TRUE)
Osmia_bicornis_splitted_todo<-as.data.frame(Osmia_bicornis_splitted_todo)
Osmia_bicornis_female<-Osmia_bicornis_splitted_todo[[1]]
Osmia_bicornis_male<-Osmia_bicornis_splitted_todo[[2]]
Osmia_bicornis_male<-as.data.frame(Osmia_bicornis_male)
Osmia_bicornis_female<-as.data.frame(Osmia_bicornis_female)
merge1<-rbind(Osmia_bicornis_male,Osmia_bicornis_female)
Dataframe_total_osmia_bicornis_todo<-merge(merge1,Canes, by="number")
head(Dataframe_total_osmia_bicornis_todo)
Dataframe_osmia_bicornis_todo<-merge(Dataframe_total_osmia_bicornis_todo, data_from_bombus_to_join_osmia, by="Code")
head(Dataframe_osmia_bicornis_todo)
Dataframe_osmia_bicornis_todo<-Dataframe_osmia_bicornis_todo[,c(1,2,3,4,5,6,35,36,37,38)]
Dataframe_osmia_bicornis_todo<-unique(Dataframe_osmia_bicornis_todo)
View(Dataframe_osmia_bicornis_todo)
bicornis_a<-Dataframe_osmia_bicornis_todo[Dataframe_osmia_bicornis_todo$Sex == "Male",]
bicornis_f<-Dataframe_osmia_bicornis_todo[Dataframe_osmia_bicornis_todo$Sex == "Female",]
str(Dataframe_osmia_bicornis_todo)
###!!!He incluido los valores que ponia David que no eran muy precisos.
p<-ggscatter(bicornis_a, x = "abundancia_relativa", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_realtiva", ylab = "IT")
print(p)
p<-ggscatter(bicornis_a, x = "temp_mean", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_mean", ylab = "IT")
print(p)
p<-ggscatter(bicornis_f, x = "abundancia_relativa", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_realtiva", ylab = "IT")
print(p)
p<-ggscatter(bicornis_f, x = "temp_mean", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_mean", ylab = "IT")
print(p)
p<-ggscatter(bicornis_f, x = "abundancia_relativa", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_realtiva", ylab = "IT")
print(p)
p<-ggscatter(bicornis_f, x = "temp_range", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "temp_range", ylab = "IT")
print(p)
p<-ggscatter(bicornis_a, x = "temp_range", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "temp_range", ylab = "IT")
print(p)
Grupo_sexo_specie_caña_orientation_osmia_bicornis<-Dataframe_total_osmias %>% group_by(Orientation) %>% filter(!is.na(Mean_total))
d<-ggplot(Dataframe_total_osmia_bicornis,aes(Orientation,Mean_total))
print(d)
Grupo_sexo_specie_caña<-Dataframe_total_osmias %>% group_by(Location) %>% filter(!is.na(size)) %>% summarise (number_cells = n(size))
Grupo_sexo_specie_caña<-Dataframe_total_osmias %>% group_by(Orientation) %>% filter(!is.na(size)) %>% summarise (number_cells = n(size))
d+geom_boxplot()+geom_jitter()
str(Dataframe_total_osmias)
Grupo_sexo_specie_caña<-Dataframe_total_osmias %>% group_by(number,Sex,Specie) %>% filter(!is.na(IT)) %>% summarise (mean_it = mean(IT))
Grupo_sexo_specie_caña<-Dataframe_total_osmias %>% group_by(number,Sex) %>% filter(!is.na("ID")) %>% summarise (count_ID= count(ID))
str(Sexo_specie_osmias)
str(Datos_distancia_intertegular)
str(Canes)
head(Datos_specie)
head(Datos_distancia_intertegular)
head(Sexo_specie_it)
str(Datos_specie)
str(Sexo_specie_it)
head(Canes)
str(Canes)
head(Datos_distancia_intertegular)
View(Sexo_specie_it)
#Transformar todos los caracteres como factores de manera rápida
Canes<-as.data.frame(unclass(Canes))
levels(Canes$l)
Canes$Aborts<-as.numeric(Canes$Aborts)
summary(Canes)
Aborts_per_zone<-Canes %>% group_by(Location,Orientation) %>% filter(!is.na(Aborts)) %>% summarise (sum_aborts = sum(Aborts))
#Esto significa que a las abejas se les da peor en estos lugares
Aborts_mean<-Canes %>% group_by(Location) %>% filter(!is.na(Aborts)) %>% summarise (mean_aborts = mean(Aborts)) %>% arrange(mean_aborts)
Aborts_median<-Canes %>% group_by(Location) %>% filter(!is.na(Aborts)) %>% summarise (median_aborts = median(Aborts)) %>% arrange(median_aborts)
Aborts_mean_byt<-Canes %>% group_by(Location,Orientation) %>% filter(!is.na(Aborts)) %>% summarise (mean_aborts = mean(Aborts)) %>% arrange(mean_aborts)
a<-ggplot(Aborts_median,aes(Location,median_aborts))
b<-ggplot(Aborts_per_zone,aes(Orientation,sum_aborts))
c<-ggplot(Aborts_mean_byt,aes(Orientation,mean_aborts))
d<-ggplot(Grupo_sexo_specie_caña,aes(Orientation,mean_it))
a+geom_point(aes(reorder(Location, median_aborts), y=median_aborts))
b+geom_boxplot()+geom_jitter()
c+geom_boxplot()+geom_jitter()
d+geom_boxplot()+geom_jitter()
Celdas_orientation<-Canes %>% group_by(Location,Orientation) %>% filter(!is.na(Aborts)) %>% summarise (sum_aborts = sum(Aborts))
Celdas_orientation<-Canes %>% group_by(number,Orientation, Location) %>% filter(!is.na(size)) %>% filter(specie="Osmia bicornis") %>% count(size)
Celdas_orientation2<-Celdas_orientation %>% group_by(number,Location,Orientation) %>% summarise (sum_cells=sum(n))
Celdas_location<-Celdas_orientation2%>% group_by(Location,Orientation) %>% summarise (sum_cells=mean(sum_cells))
Celdas_location_mean<-Celdas_orientation2%>% group_by(Location,Orientation) %>% summarise (mean_cells=mean(sum_cells))
#Solo para osmia bicornis
Celdas_orientation<-Canes %>% group_by(number,Orientation, Location) %>% filter(!is.na(size)) %>% count(size)
Celdas_orientation2<-Celdas_orientation %>% group_by(number,Location,Orientation) %>% summarise (sum_cells=sum(n))
Celdas_location<-Celdas_orientation2%>% group_by(Location,Orientation) %>% summarise (sum_cells=mean(sum_cells))
Celdas_location_mean<-Celdas_orientation2%>% group_by(Location,Orientation) %>% summarise (mean_cells=mean(sum_cells))
z_orientation<-ggplot(Celdas_location,aes(Orientation, sum_cells))
y_orientation<-ggplot(Celdas_location_mean,aes(Orientation, mean_cells))
z_orientation+geom_boxplot()+geom_jitter()
y_orientation+geom_boxplot()+geom_jitter()
z<-ggplot(Celdas_location,aes(Location, sum_cells))
y<-ggplot(Celdas_location_mean,aes(Location, mean_cells))
b<-ggplot(Aborts_per_zone,aes(Orientation,sum_aborts))
c<-ggplot(Aborts_mean_byt,aes(Orientation,mean_aborts))
d<-ggplot(Grupo_sexo_specie_caña,aes(Orientation,mean_it))
a+geom_point(aes(reorder(Location, median_aborts), y=median_aborts))
b+geom_boxplot()+geom_jitter()
c+geom_boxplot()+geom_jitter()
d+geom_boxplot()+geom_jitter()
z+geom_boxplot(aes(reorder(Location, sum_cells), y=sum_cells))+geom_jitter()
y+geom_boxplot(aes(reorder(Location, mean_cells), y=mean_cells))+geom_jitter()
head(Canes)
str(Canes)
############# ##########
| /Osmias.R | no_license | cztrello/Osmias | R | false | false | 26,606 | r | library(dplyr)
library(tidyr)
library(ggplot2)
library(ggpubr)
####Primera aproximación a los datos#####
Canes <- read.csv("~/Documents/Osmias/Canes.csv", sep=";", quote="\"", stringsAsFactors=FALSE)
Canes<-Canes[c(1:1716),]
Datos_distancia_intertegular <- read.csv("~/Documents/Osmias/Datos_distancia_intertegular.csv", sep=";")
Datos_species_distancia_2 <- read.csv("~/Documents/Osmias/Datos_species_distancia_2.csv", sep=";", comment.char="#")
Datos_species_distancia <- read.csv("~/Documents/Osmias/Datos_species_distancia.csv", sep=";")
Sexo_specie_osmias <- read.csv("~/Documents/Osmias/Sexo_specie_osmias .csv", sep=";")
Sexo_specie_osmias_reubicacion <- read.csv("~/Documents/Osmias/Sexo_specie_osmias_reubicacion.csv", sep=";")
Sexo_specie_prueba_filtrado<-read.csv("~/Documents/Osmias/Sexo_specie_prueba_filtrado.csv", sep=";")
colnames(Sexo_specie_osmias_reubicacion)[1]<-"ID"
str(Canes)
extract_forCanes<-Canes[,c(1,3,4)]
#Primero monto un bucle para poner cada una de las clasificaciones de NS para poder hacer
#un merge con los datos de temperatura y de paisaje de los bombus.
code<-NULL
for (n in 1:nrow(Canes)) {
if (Canes$Location[n]== "vivero" ) {
code[n]<-3} else if (Canes$Location[n]== "UPO" ) {
code[n]<-8}else if (Canes$Location[n]== "Rocina" ) {
code[n]<-9}else if (Canes$Location[n]== "Martinazo" ) {
code[n]<-10}else if (Canes$Location[n]== "hinojos" ) {
code[n]<-7}else if (Canes$Location[n]== "guadiamar" ) {
code[n]<-11}else if (Canes$Location[n]== "dehesa nueva" ) {
code[n]<-6}else if (Canes$Location[n]== "Dehesa de abajo" ) {
code[n]<-2}else if (Canes$Location[n]== "coria del rio" ) {
code[n]<-4}else if (Canes$Location[n]== "choza huerta tejada" ) {
code[n]<-1}else if (Canes$Location[n]== "Charena" ) {
code[n]<-5}}
CODE_ORIENTATION<-NULL
for (n in 1:nrow(Canes)) {
if (Canes$Orientation[n]== "norte" ) {
CODE_ORIENTATION[n]<-"N"} else if (Canes$Orientation[n]== "sur" ) {
CODE_ORIENTATION[n]<-"S"}}
Code<-paste(code,CODE_ORIENTATION, sep="")
Canes<-cbind(Canes,Code)
View(Canes)
combined<-anti_join(Sexo_specie_osmias_reubicacion,Sexo_specie_osmias, by="Specie")
specie_sexo_unico<-unique(Sexo_specie_prueba_filtrado)
specie_sexo_unico_solo_numeros<-unique(Sexo_specie_prueba_filtrado$ID)
str(specie_sexo_unico)
str(Datos_distancia_intertegular)
Datos_species_distancia<-Datos_species_distancia[,-3]
colnames(Datos_species_distancia)[1]<-"ID"
Datos_specie<-merge(Datos_species_distancia, Datos_species_distancia_2, by="ID")
colnames(Datos_specie)[2]<-"IT"
colnames(Datos_specie)[5]<-"Specie"
Datos_specie<-Datos_specie[,c(-3,-4,-7)]
Datos_distancia_intertegular<-Datos_distancia_intertegular[,c(1:2)]
Datos_specie$IT<-as.numeric(Datos_specie$IT)
Sexo_specie_it<-full_join(specie_sexo_unico, Datos_distancia_intertegular, by=NULL)
Datos_specie<-Datos_specie[,c(1,4,3,2)]
Sexo_specie_it[,3]
#comprobando por el número de caña y por el tipo completo los datos que faltan de la especie en 8
for (n in 1:nrow(Sexo_specie_it)) {
if ( Sexo_specie_it$ID[n]== "75.6" || Sexo_specie_it$ID[n]== "68.4"
|| Sexo_specie_it$ID[n]== "124.2" || Sexo_specie_it$ID[n]== "114.4" ) {
Sexo_specie_it$Specie[n]<-"Osmia bicornis"} else if
( Sexo_specie_it$ID[n]== "58.14" || Sexo_specie_it$ID[n]== "3a.6"
|| Sexo_specie_it$ID[n]== "49.8" || Sexo_specie_it$ID[n]== "23.2"
|| Sexo_specie_it$ID[n]== "23.1") {
Sexo_specie_it$Specie[n]<-"Osmia latreillei"} else if
( Sexo_specie_it$ID[n]== "129.13") {
Sexo_specie_it$Specie[n]<-"Osmia brevicornis"}else if
( Sexo_specie_it$ID[n]== "5a.2") {
Sexo_specie_it$Specie[n]<-"Osmia submicans"}}
#Añado esta columna para identificar donde están los duplicados
b <- Datos_specie[,1]
b <- b==1
b <- cumsum(b)
Datos_specie<-cbind(Datos_specie,b)
b <- Sexo_specie_it[,1]
b <- b==3
b <- cumsum(b)
b<-replace(b,b=="0","1")
Sexo_specie_it<-cbind(Sexo_specie_it,b)
#Reordeno el dataframe por sus columnas para que puedan encajarse con un rbind
str(b)
Datos_individuales<-rbind(Datos_specie, Sexo_specie_it)
str(Datos_individuales)
#Curro se llevo dos especies a identiicar que no están pasadas. Importante localizarlas.
str(specie_sexo_unico)
str(Datos_distancia_intertegular)
str(Datos_specie)
str(Sexo_specie_it)
#junto los dos dataframes
str(Sexo_specie_it)
Sexo_specie_final<-unique(Datos_individuales)
View
View(Sexo_specie_final)
str(Sexo_specie_final_ID)
#Encontrar los duplicados por si existen posibles errores al pasar los datos
#Esto es solo una prueba!
number_of_duplicates<-Sexo_specie_final[duplicated(Sexo_specie_final$ID),]
View(Sexo_specie_final)
View(Sexo_specie_it)
View(Datos_specie)
table(number_of_duplicates$ID)
exclusivos_datos_specie<-anti_join(Datos_specie, Sexo_specie_it, by="ID")
exclusivos_totales_prueba<-rbind(exclusivos_datos_specie, Sexo_specie_it)
str(exclusivos_totales_prueba)
Sexo_specie_final_prueba<-unique(exclusivos_totales_prueba)
str(Sexo_specie_final_prueba)
number_of_duplicates_prueba<-Sexo_specie_final_prueba[duplicated(Sexo_specie_final_prueba$ID),]
#De los datos repetidos en el caso de que sean erroneos el sexo, se revisan, en el caso de que sean medidas repetidas
#realizo la media. En este caso sería en los números 11.5, 11.4, 11.3,11.2,11.1.
Cribado_Osmias_sexo_specie<-subset(Sexo_specie_final_prueba, ID!=128.2 & ID!=94.1 & ID!=86.3 & ID!=75.12 &
ID!=61.7& ID!=58.1& ID!=152.3& ID!=129.18& ID!=112.10& ID!=11.4& ID!=11.3& ID!=103.1)
#Hago las medias a los que sí se puede
#CUIDADO, SI ESTÁ GGPLOT2 CARGADO TIENE INCOMPATIBILIDADES CON DPLYR Y SOLO HACE LA MEDIA DE UNA FILA.
#DESCARGAR PLYR.
Sexo_specie_final_prueba_medias<-Cribado_Osmias_sexo_specie%>%
filter(ID=="11.5" | ID =="11.2" | ID=="11.1") %>%
group_by(ID, Sex, Specie)%>%
summarise (mean_it= mean (IT))
Sexo_specie_final_prueba_medias<-as.data.frame(Sexo_specie_final_prueba_medias)
#Hago por caña tanto media IT de macho como IT de hembra, para unirlo al dataframe anterior
Sexo_specie_final_prueba_completo_a_unir_a_medias<-Cribado_Osmias_sexo_specie%>%
filter(ID!="11.5" | ID !="11.2" | ID!="11.1")
Sexo_specie_final_prueba_completo_a_unir_a_medias<-as.data.frame(Sexo_specie_final_prueba_completo_a_unir_a_medias)
Numbers_per_specie <-Sexo_specie_final_prueba_completo_a_unir_a_medias%>%
group_by(Specie) %>%
summarise(no_rows = length(Specie))
Number_per_specie<-as.data.frame(Numbers_per_specie)
Sexo_specie_final_prueba_completo_a_unir_a_medias<-Sexo_specie_final_prueba_completo_a_unir_a_medias[,c(-5)]
colnames(Sexo_specie_final_prueba_medias)[4]<-"IT"
Sexo_specie_final<-rbind(Sexo_specie_final_prueba_medias,Sexo_specie_final_prueba_completo_a_unir_a_medias)
#En el caso de que los duplicados coincidan en todo menos en su número, hago medias de su IT
#debido a que son medidas repetidas.
#En el caso a que no coincidan he de revisarlas.
#Separo por caña la medida y la especie
Sexo_specie_it_total_cane<-Sexo_specie_final %>% separate(ID,
c("number", "Individual"))
Sexo_specie_it_total_cane$number<-as.numeric(Sexo_specie_it_total_cane$number)
write.csv(Sexo_specie_it_total_cane, "Sexo_specie_it_total_cane.csv")
#Medias por caña y por especie
Media_sexo_specie_caña<-Sexo_specie_it_total_cane %>% group_by(number,Sex,Specie) %>% filter(!is.na(IT)) %>% summarise (mean_it = mean(IT))
Media_sexo_specie_caña_total<-Sexo_specie_it_total_cane %>% group_by(number,Specie) %>% filter(!is.na(IT)) %>% summarise (mean_it = mean(IT))
View(Media_sexo_specie_caña)
#Solo voy a trabajar con el de Osmias bicornis de momento
Osmia_bicornis<-subset(Media_sexo_specie_caña, Specie=="Osmia bicornis")
Osmia_bicornis_media_total<-subset(Media_sexo_specie_caña_total, Specie=="Osmia bicornis")
Osmia_bicornis_splitted<-split(Osmia_bicornis, Osmia_bicornis$Sex, drop=TRUE)
Osmia_bicornis_splitted<-as.data.frame(Osmia_bicornis_splitted)
Osmia_bicornis_female<-Osmia_bicornis_splitted[[1]]
Osmia_bicornis_male<-Osmia_bicornis_splitted[[2]]
Osmia_bicornis_male<-as.data.frame(Osmia_bicornis_male)
Osmia_bicornis_female<-as.data.frame(Osmia_bicornis_female)
colnames(Osmia_bicornis_male)[4]<-"mean_male"
colnames(Osmia_bicornis_female)[4]<-"mean_female"
Mean_male<-Osmia_bicornis_male[,c(1,4)]
Mean_female<-Osmia_bicornis_female[,c(1,4)]
Mean_total<-Osmia_bicornis_media_total[,c(1,3)]
merge1<-full_join(Mean_male,Mean_female, by="number")
Dataframe_mean_male_female_total_osmia_bicornis<-full_join(merge1,Mean_total, by="number")
colnames(Dataframe_mean_male_female_total_osmia_bicornis)[4]<-"mean_it_bicornis"
Dataframe_mean_male_female_total_osmia_bicornis<-full_join(merge1,Mean_total, by="number")
Dataframe_total_osmia_bicornis<-merge(Dataframe_mean_male_female_total_osmia_bicornis,Canes, by="number")
head(Dataframe_total_osmia_bicornis)
View(Dataframe_total_osmia_bicornis)
#Lo mismo para el resto de especies
Osmia_latreillei<-subset(Media_sexo_specie_caña, Specie=="Osmia latreillei")
Osmia_latreillei_media_total<-subset(Media_sexo_specie_caña_total, Specie=="Osmia latreillei")
Osmia_latreillei_splitted<-split(Osmia_latreillei, Osmia_latreillei$Sex, drop=TRUE)
Osmia_latreillei_splitted<-as.data.frame(Osmia_latreillei_splitted)
Osmia_latreillei_female<-Osmia_latreillei_splitted[[1]]
Osmia_latreillei_male<-Osmia_latreillei_splitted[[2]]
Osmia_latreillei_male<-as.data.frame(Osmia_latreillei_male)
Osmia_latreillei_female<-as.data.frame(Osmia_latreillei_female)
colnames(Osmia_latreillei_male)[4]<-"mean_male_latreillei"
colnames(Osmia_latreillei_female)[4]<-"mean_female_latreillei"
Mean_male_latreillei<-Osmia_latreillei_male[,c(1,4)]
Mean_female_latreillei<-Osmia_latreillei_female[,c(1,4)]
Mean_total_latreillei<-Osmia_latreillei_media_total[,c(1,3)]
merge_latreillei<-full_join(Mean_male_latreillei,Mean_female_latreillei, by="number")
Dataframe_mean_male_female_total_osmia_latreillei<-full_join(merge_latreillei,Mean_total_latreillei, by="number")
colnames(Dataframe_mean_male_female_total_osmia_latreillei)[4]<-"mean_it_latreillei"
Dataframe_total_osmia_latreillei<-merge(Dataframe_mean_male_female_total_osmia_latreillei,Canes, by="number")
Dataframe_total_osmia_bicornis_latreillei<-left_join(Dataframe_total_osmia_bicornis,Dataframe_total_osmia_latreillei, by="number")
head(Dataframe_total_osmia_bicornis_latreillei)
str(Dataframe_total_osmia_bicornis_latreillei)
View(Dataframe_total_osmia_bicornis)
#!!!!!!!No sé como hacer para que estén correlacionados las medidas de los machos y hembras para ver si se corresponden con la caña.
Dataframe_total_osmia_bicornis$vestib<-as.numeric(Dataframe_total_osmia_bicornis$vestib)
Dataframe_total_osmia_bicornis$vestib3<-as.numeric(Dataframe_total_osmia_bicornis$vestib3)
#Voy a tener que transformar los NAs a cero en los vestibulos y el resto porque no los capta.
#Medir el volumen que ocuparon, 25cm de caña vacías versus el tamaño de diámetro y total ocupado (vestíbulos, chambers, tapón..)
Osmia_summarise_NA_cero<-Dataframe_total_osmia_bicornis %>% mutate(vestib =if_else(is.na(vestib),0,vestib),
vestib2 =if_else(is.na(vestib2),0,vestib2),
vestib3 =if_else(is.na(vestib3),0,vestib3),
vestib4 =if_else(is.na(vestib4),0,vestib4),
cap.s =if_else(is.na(cap.s),0,cap.s),
final.space =if_else(is.na(final.space),0,final.space))
View(Osmia_summarise_NA_cero)
View(Osmia_summarise_vestib)
Osmia_summarise_vestib<-Osmia_summarise_NA_cero %>%
mutate (internal_space=vestib+vestib2+vestib3+vestib4+final.space, buffer_space= cap.s + internal_space )
Osmia_summarise_size<-Osmia_summarise_vestib %>% group_by(number,a.b) %>% filter(!is.na(size)) %>%
summarise (number_cells = length(size), mean_cells=mean(size))
str(Dataframe_total_osmia_bicornis)
str(Dataframe_total_osmia_bicornis)
Osmia_summarise_size<-as.data.frame(Osmia_summarise_size)
Osmia_summarise_size$number<-as.numeric(Osmia_summarise_size$number)
Osmia_bicornis_semifinal<-full_join(Osmia_summarise_vestib, Osmia_summarise_size, by=c("number","a.b"))
str(Osmia_bicornis_semifinal)
Osmia_bicornis_semifinal2_aborts<-Osmia_bicornis_semifinal %>% group_by(number,a.b) %>% filter(!is.na(Aborts)) %>%
summarise (number_of_aborts = sum(Aborts))
Osmia_bicornis_semifinal2_aborts<-as.data.frame(Osmia_bicornis_semifinal2_aborts)
Osmia_bicornis_semifinal<-Osmia_bicornis_semifinal[,c(-17,-18,-20,-21,-24,-26,-27,-28,-29,-30,-31,-32)]
head(Osmia_bicornis_semifinal)
Osmia_bicornis_semifinal2_aborts_to_join<-Osmia_bicornis_semifinal %>% group_by(number,a.b)
head(Osmia_bicornis_semifinal2_aborts_to_join)
View(Osmia_bicornis_semifinal2_aborts_to_join)
Osmia_bicornis_semifinal2_aborts_to_join<-unique(Osmia_bicornis_semifinal2_aborts_to_join)
Code<-paste(code,CODE_ORIENTATION, sep="")
Osmia_bicornis_semifinal2_aborts_to_join<-as.data.frame(Osmia_bicornis_semifinal2_aborts_to_join)
dataframe_final_bombus2 <- read.csv("~/Documents/bombus/dataframe_final_bombus2.csv")
data_from_bombus_to_join_osmia<-dataframe_final_bombus2[,c(3,13,75,76,77)]
write.csv(data_from_bombus_to_join_osmia, "data_from_bombus_to_join_osmia.csv")
abundance_percent_1km<-read.csv("~/Documents/Osmias/abundancia_percent_osmia_dataframe.csv")
abundance_percent_1km<-abundance_percent_1km[,c(5,6,7)]
colnames(abundance_percent_1km)[3]<-"Code"
colnames(data_from_bombus_to_join_osmia)[1]<-"Code"
data_from_bombus_to_join_osmia<-merge(data_from_bombus_to_join_osmia, abundance_percent_1km, by="Code")
data_from_bombus_to_join_osmia<-unique(data_from_bombus_to_join_osmia)
dataframe_osmia_bicornis_semifinal<-merge(Osmia_bicornis_semifinal2_aborts_to_join, data_from_bombus_to_join_osmia, by="Code")
dataframe_osmia_bicornis_semifinal_aborts<-merge(dataframe_osmia_bicornis_semifinal, Osmia_bicornis_semifinal2_aborts, by= c("number","a.b"))
dataframe_osmia_bicornis_semifinal_summary<-dataframe_osmia_bicornis_semifinal %>% group_by(Code)
write.csv(dataframe_osmia_bicornis_semifinal,"dataframe_osmia_bicornis_semifinal.csv")
str(dataframe_osmia_bicornis_semifinal)
#Algunos ploteos sin hacer summary por code.
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "abundancia_relativa", y = "number_cells",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "abundancia_realtiva", ylab = "number_cells")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "abundancia_relativa", y = "mean_cells",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_realtiva", ylab = "number_cells")
print(p)
p<-ggscatter(Osmia_bicornis_semifinal, x = "abundancia_relativa", y = "number_cells",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "abundancia_realtiva", ylab = "number_cells")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "riqueza_relativa", y = "number_cells",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "riqueza relativa", ylab = "Intmales_mean")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "temp_mean", y = "buffer_space",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_mean", ylab = "Intmales_mean")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "abundancia_relativa", y = "mean_it",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_relativa", ylab = "mean_it_total")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "temp_mean", y = "mean_it",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_mean", ylab = "Int_mean")
print(p)
p<-ggscatter(dataframe_osmia_bicornis_semifinal, x = "temp_range", y = "buffer_space",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_range", ylab = "buffer_space")
print(p)
View(dataframe_osmia_bicornis_semifinal_aborts)
p<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "abundancia_relativa", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "abundancia_relativa", ylab = "number_of_aborts")
print(p)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "buffer_space", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "buffer_space", ylab = "number_cells")
print(z)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "buffer_space", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "buffer_space", ylab = "number_cells")
print(z)
str(dataframe_osmia_bicornis_semifinal_aborts)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "buffer_space", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "buffer_space", ylab = "number_cells")
print(z)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "buffer_space", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "buffer_space", ylab = "number_cells")
print(z)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "cap.s", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "cap.s", ylab = "number_of_aborts")
print(z)
z<-ggscatter(dataframe_osmia_bicornis_semifinal_aborts, x = "cap.s", y = "number_of_aborts",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "cap.s", ylab = "number_cells")
print(z)
#####Con todo el dataframe completo de osmia bicornis para no tener solo medias###
Osmia_bicornis_todo<-subset(Sexo_specie_it_total_cane, Specie=="Osmia bicornis")
Osmia_bicornis_splitted_todo<-split(Osmia_bicornis_todo, Osmia_bicornis_todo$Sex, drop=TRUE)
Osmia_bicornis_splitted_todo<-as.data.frame(Osmia_bicornis_splitted_todo)
Osmia_bicornis_female<-Osmia_bicornis_splitted_todo[[1]]
Osmia_bicornis_male<-Osmia_bicornis_splitted_todo[[2]]
Osmia_bicornis_male<-as.data.frame(Osmia_bicornis_male)
Osmia_bicornis_female<-as.data.frame(Osmia_bicornis_female)
merge1<-rbind(Osmia_bicornis_male,Osmia_bicornis_female)
Dataframe_total_osmia_bicornis_todo<-merge(merge1,Canes, by="number")
head(Dataframe_total_osmia_bicornis_todo)
Dataframe_osmia_bicornis_todo<-merge(Dataframe_total_osmia_bicornis_todo, data_from_bombus_to_join_osmia, by="Code")
head(Dataframe_osmia_bicornis_todo)
Dataframe_osmia_bicornis_todo<-Dataframe_osmia_bicornis_todo[,c(1,2,3,4,5,6,35,36,37,38)]
Dataframe_osmia_bicornis_todo<-unique(Dataframe_osmia_bicornis_todo)
View(Dataframe_osmia_bicornis_todo)
bicornis_a<-Dataframe_osmia_bicornis_todo[Dataframe_osmia_bicornis_todo$Sex == "Male",]
bicornis_f<-Dataframe_osmia_bicornis_todo[Dataframe_osmia_bicornis_todo$Sex == "Female",]
str(Dataframe_osmia_bicornis_todo)
###!!!He incluido los valores que ponia David que no eran muy precisos.
p<-ggscatter(bicornis_a, x = "abundancia_relativa", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_realtiva", ylab = "IT")
print(p)
p<-ggscatter(bicornis_a, x = "temp_mean", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_mean", ylab = "IT")
print(p)
p<-ggscatter(bicornis_f, x = "abundancia_relativa", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_realtiva", ylab = "IT")
print(p)
p<-ggscatter(bicornis_f, x = "temp_mean", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "temp_mean", ylab = "IT")
print(p)
p<-ggscatter(bicornis_f, x = "abundancia_relativa", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "spearman",
xlab = "abundancia_realtiva", ylab = "IT")
print(p)
p<-ggscatter(bicornis_f, x = "temp_range", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "temp_range", ylab = "IT")
print(p)
p<-ggscatter(bicornis_a, x = "temp_range", y = "IT",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "temp_range", ylab = "IT")
print(p)
Grupo_sexo_specie_caña_orientation_osmia_bicornis<-Dataframe_total_osmias %>% group_by(Orientation) %>% filter(!is.na(Mean_total))
d<-ggplot(Dataframe_total_osmia_bicornis,aes(Orientation,Mean_total))
print(d)
Grupo_sexo_specie_caña<-Dataframe_total_osmias %>% group_by(Location) %>% filter(!is.na(size)) %>% summarise (number_cells = n(size))
Grupo_sexo_specie_caña<-Dataframe_total_osmias %>% group_by(Orientation) %>% filter(!is.na(size)) %>% summarise (number_cells = n(size))
d+geom_boxplot()+geom_jitter()
str(Dataframe_total_osmias)
Grupo_sexo_specie_caña<-Dataframe_total_osmias %>% group_by(number,Sex,Specie) %>% filter(!is.na(IT)) %>% summarise (mean_it = mean(IT))
Grupo_sexo_specie_caña<-Dataframe_total_osmias %>% group_by(number,Sex) %>% filter(!is.na("ID")) %>% summarise (count_ID= count(ID))
str(Sexo_specie_osmias)
str(Datos_distancia_intertegular)
str(Canes)
head(Datos_specie)
head(Datos_distancia_intertegular)
head(Sexo_specie_it)
str(Datos_specie)
str(Sexo_specie_it)
head(Canes)
str(Canes)
head(Datos_distancia_intertegular)
View(Sexo_specie_it)
#Transformar todos los caracteres como factores de manera rápida
Canes<-as.data.frame(unclass(Canes))
levels(Canes$l)
Canes$Aborts<-as.numeric(Canes$Aborts)
summary(Canes)
Aborts_per_zone<-Canes %>% group_by(Location,Orientation) %>% filter(!is.na(Aborts)) %>% summarise (sum_aborts = sum(Aborts))
#Esto significa que a las abejas se les da peor en estos lugares
Aborts_mean<-Canes %>% group_by(Location) %>% filter(!is.na(Aborts)) %>% summarise (mean_aborts = mean(Aborts)) %>% arrange(mean_aborts)
Aborts_median<-Canes %>% group_by(Location) %>% filter(!is.na(Aborts)) %>% summarise (median_aborts = median(Aborts)) %>% arrange(median_aborts)
Aborts_mean_byt<-Canes %>% group_by(Location,Orientation) %>% filter(!is.na(Aborts)) %>% summarise (mean_aborts = mean(Aborts)) %>% arrange(mean_aborts)
a<-ggplot(Aborts_median,aes(Location,median_aborts))
b<-ggplot(Aborts_per_zone,aes(Orientation,sum_aborts))
c<-ggplot(Aborts_mean_byt,aes(Orientation,mean_aborts))
d<-ggplot(Grupo_sexo_specie_caña,aes(Orientation,mean_it))
a+geom_point(aes(reorder(Location, median_aborts), y=median_aborts))
b+geom_boxplot()+geom_jitter()
c+geom_boxplot()+geom_jitter()
d+geom_boxplot()+geom_jitter()
Celdas_orientation<-Canes %>% group_by(Location,Orientation) %>% filter(!is.na(Aborts)) %>% summarise (sum_aborts = sum(Aborts))
Celdas_orientation<-Canes %>% group_by(number,Orientation, Location) %>% filter(!is.na(size)) %>% filter(specie="Osmia bicornis") %>% count(size)
Celdas_orientation2<-Celdas_orientation %>% group_by(number,Location,Orientation) %>% summarise (sum_cells=sum(n))
Celdas_location<-Celdas_orientation2%>% group_by(Location,Orientation) %>% summarise (sum_cells=mean(sum_cells))
Celdas_location_mean<-Celdas_orientation2%>% group_by(Location,Orientation) %>% summarise (mean_cells=mean(sum_cells))
#Solo para osmia bicornis
Celdas_orientation<-Canes %>% group_by(number,Orientation, Location) %>% filter(!is.na(size)) %>% count(size)
Celdas_orientation2<-Celdas_orientation %>% group_by(number,Location,Orientation) %>% summarise (sum_cells=sum(n))
Celdas_location<-Celdas_orientation2%>% group_by(Location,Orientation) %>% summarise (sum_cells=mean(sum_cells))
Celdas_location_mean<-Celdas_orientation2%>% group_by(Location,Orientation) %>% summarise (mean_cells=mean(sum_cells))
z_orientation<-ggplot(Celdas_location,aes(Orientation, sum_cells))
y_orientation<-ggplot(Celdas_location_mean,aes(Orientation, mean_cells))
z_orientation+geom_boxplot()+geom_jitter()
y_orientation+geom_boxplot()+geom_jitter()
z<-ggplot(Celdas_location,aes(Location, sum_cells))
y<-ggplot(Celdas_location_mean,aes(Location, mean_cells))
b<-ggplot(Aborts_per_zone,aes(Orientation,sum_aborts))
c<-ggplot(Aborts_mean_byt,aes(Orientation,mean_aborts))
d<-ggplot(Grupo_sexo_specie_caña,aes(Orientation,mean_it))
a+geom_point(aes(reorder(Location, median_aborts), y=median_aborts))
b+geom_boxplot()+geom_jitter()
c+geom_boxplot()+geom_jitter()
d+geom_boxplot()+geom_jitter()
z+geom_boxplot(aes(reorder(Location, sum_cells), y=sum_cells))+geom_jitter()
y+geom_boxplot(aes(reorder(Location, mean_cells), y=mean_cells))+geom_jitter()
head(Canes)
str(Canes)
############# ##########
|
\name{association.4.6}
\alias{association.4.6}
\docType{data}
\title{Simulated data for association plots, set 2}
\description{Simulated data set.}
\usage{data("association.4.6")}
\format{
A data frame with 121 observations on the following 4 variables.
\describe{
\item{\code{x2}}{a numeric vector}
\item{\code{y4}}{a numeric vector}
\item{\code{y5}}{a numeric vector}
\item{\code{y6}}{a numeric vector}
}
}
\examples{
data(association.4.6)
plot(association.4.6$x2, association.4.6$y4)
}
\keyword{datasets}
| /openintro/man/association.4.6.Rd | no_license | mine-cetinkaya-rundel/openintro-r-package | R | false | false | 530 | rd | \name{association.4.6}
\alias{association.4.6}
\docType{data}
\title{Simulated data for association plots, set 2}
\description{Simulated data set.}
\usage{data("association.4.6")}
\format{
A data frame with 121 observations on the following 4 variables.
\describe{
\item{\code{x2}}{a numeric vector}
\item{\code{y4}}{a numeric vector}
\item{\code{y5}}{a numeric vector}
\item{\code{y6}}{a numeric vector}
}
}
\examples{
data(association.4.6)
plot(association.4.6$x2, association.4.6$y4)
}
\keyword{datasets}
|
## ----xaringan-themer, include=FALSE, warning=FALSE----------------------------
library(xaringanthemer)
style_mono_accent(
base_color = "#1c5253",
text_font_google = google_font("Montserrat", "300", "300i"),
code_font_google = google_font("Fira Mono")
)
## ----echo=FALSE---------------------------------------------------------------
step_by_step_eq <- function(eqlist, before="", after="", title=" ") {
# drop slide pauses in before content
before_inner <- gsub("--", "", before)
for (i in 2:length(eqlist)) {
eqlist[i] <- paste0(eqlist[i-1],"\\\\\n",eqlist[i])
}
out <- ""
for (i in 1:length(eqlist)) {
if (i > 1) out <- paste0(out, "count:false", "\n")
out <- paste0(out, "# ", title, "\n") # print title
# print before content
if (i == 1) out <- paste0(out, before, "\n") else out <- paste0(out, before_inner, "\n")
out <- paste0(out, "$$\n\\begin{aligned}\n",eqlist[[i]],"\n\\end{aligned}\n$$\n\n") # print equation
if (i < length(eqlist)) out <- paste0(out, "---\n\n")
}
out <- paste0(out, after, "\n") # print after content
out <- paste0(out, "---\n")
cat(out)
}
## ----echo=FALSE, results="asis"-----------------------------------------------
title <- "Parallel Trends Assumption"
before <- "## Parallel Trends Assumption
$$\\E[\\Delta Y_{t^*}(0) | D=1] = \\E[\\Delta Y_{t^*}(0) | D=0]$$
--
<br><br>
Recovering $ATT$ under parallel trends:
"
eqlist <- list("ATT &= \\E[Y_{t^*}(1) | D=1] - \\E[Y_{t^*}(0) | D=1] \\hspace{150pt}",
"&= \\E[Y_{t^*}(1) - Y_{t^*-1}(0) | D=1] - \\E[Y_{t^*}(0) - Y_{t^*-1}(0) | D=1]",
"&= \\E[Y_{t^*}(1) - Y_{t^*-1}(0) | D=1] - \\E[Y_{t^*}(0) - Y_{t^*-1}(0) | D=0]",
"&= \\E[\\Delta Y_{t^*} | D=1] - \\E[\\Delta Y_{t^*} | D=0]")
after <- "
--
which is where difference in differences gets its name
"
step_by_step_eq(before=before,
eqlist=eqlist,
after=after,
title=title)
## ----echo=FALSE, results="asis"-----------------------------------------------
title <- "Estimation"
before <- "Given the above discussion, estimation of the $ATT$ is very easy.
--
"
eqlist <- list("\\widehat{ATT} &= \\hat{\\E}[\\Delta Y_{t^*} | D=1] - \\hat{\\E}[\\Delta Y_{t^*}|D=0] \\hspace{150pt}",
"&= \\frac{1}{n} \\sum_{i=1}^n \\frac{D_i}{\\hat{p}} \\Delta Y_{it^*} - \\frac{1}{n} \\sum_{i=1}^n \\frac{(1-D_i)}{(1-\\hat{p})} \\Delta Y_{it^*}")
after <- "
--
Or, even more easily, run the following <span class=\"alert\">two-way fixed effects regression (TWFE):</span>
$$Y_{it} = \\theta_t + \\eta_i + \\alpha D_{it} + v_{it}$$
--
<span class=\"alert-blue\">Pros:</span> Economists know a lot about this sort of regression and you can just read off standard errors, etc.
"
step_by_step_eq(title=title,
before=before,
eqlist=eqlist,
after=after)
| /Courses/FEEM/modern_did_session1.R | permissive | bcallaway11/bcallaway11.github.io | R | false | false | 2,919 | r | ## ----xaringan-themer, include=FALSE, warning=FALSE----------------------------
library(xaringanthemer)
style_mono_accent(
base_color = "#1c5253",
text_font_google = google_font("Montserrat", "300", "300i"),
code_font_google = google_font("Fira Mono")
)
## ----echo=FALSE---------------------------------------------------------------
step_by_step_eq <- function(eqlist, before="", after="", title=" ") {
# drop slide pauses in before content
before_inner <- gsub("--", "", before)
for (i in 2:length(eqlist)) {
eqlist[i] <- paste0(eqlist[i-1],"\\\\\n",eqlist[i])
}
out <- ""
for (i in 1:length(eqlist)) {
if (i > 1) out <- paste0(out, "count:false", "\n")
out <- paste0(out, "# ", title, "\n") # print title
# print before content
if (i == 1) out <- paste0(out, before, "\n") else out <- paste0(out, before_inner, "\n")
out <- paste0(out, "$$\n\\begin{aligned}\n",eqlist[[i]],"\n\\end{aligned}\n$$\n\n") # print equation
if (i < length(eqlist)) out <- paste0(out, "---\n\n")
}
out <- paste0(out, after, "\n") # print after content
out <- paste0(out, "---\n")
cat(out)
}
## ----echo=FALSE, results="asis"-----------------------------------------------
title <- "Parallel Trends Assumption"
before <- "## Parallel Trends Assumption
$$\\E[\\Delta Y_{t^*}(0) | D=1] = \\E[\\Delta Y_{t^*}(0) | D=0]$$
--
<br><br>
Recovering $ATT$ under parallel trends:
"
eqlist <- list("ATT &= \\E[Y_{t^*}(1) | D=1] - \\E[Y_{t^*}(0) | D=1] \\hspace{150pt}",
"&= \\E[Y_{t^*}(1) - Y_{t^*-1}(0) | D=1] - \\E[Y_{t^*}(0) - Y_{t^*-1}(0) | D=1]",
"&= \\E[Y_{t^*}(1) - Y_{t^*-1}(0) | D=1] - \\E[Y_{t^*}(0) - Y_{t^*-1}(0) | D=0]",
"&= \\E[\\Delta Y_{t^*} | D=1] - \\E[\\Delta Y_{t^*} | D=0]")
after <- "
--
which is where difference in differences gets its name
"
step_by_step_eq(before=before,
eqlist=eqlist,
after=after,
title=title)
## ----echo=FALSE, results="asis"-----------------------------------------------
title <- "Estimation"
before <- "Given the above discussion, estimation of the $ATT$ is very easy.
--
"
eqlist <- list("\\widehat{ATT} &= \\hat{\\E}[\\Delta Y_{t^*} | D=1] - \\hat{\\E}[\\Delta Y_{t^*}|D=0] \\hspace{150pt}",
"&= \\frac{1}{n} \\sum_{i=1}^n \\frac{D_i}{\\hat{p}} \\Delta Y_{it^*} - \\frac{1}{n} \\sum_{i=1}^n \\frac{(1-D_i)}{(1-\\hat{p})} \\Delta Y_{it^*}")
after <- "
--
Or, even more easily, run the following <span class=\"alert\">two-way fixed effects regression (TWFE):</span>
$$Y_{it} = \\theta_t + \\eta_i + \\alpha D_{it} + v_{it}$$
--
<span class=\"alert-blue\">Pros:</span> Economists know a lot about this sort of regression and you can just read off standard errors, etc.
"
step_by_step_eq(title=title,
before=before,
eqlist=eqlist,
after=after)
|
# Load function definiton
source("load_ensemble.R")
library(PEcAn.all)
library(ncdf4.helpers)
#pecan.id <- 1432 ## DALEC, 1000 member ensemble -- Betsy; original
#pecan.id <- 1436 ## DALEC, 200 member ensemble -- Alexey; more recent
pecan.id <- 1437 ## SIPNET, 500 member ensemble -- Alexey
#pecan.id <- 1347 ## Linkages; Ann(?); Complete ensemble analysis
output.dir <- paste0("/fs/data2/output/PEcAn_100000", pecan.id)
settings <- xmlToList(xmlParse(file.path(output.dir,"pecan.xml")))
runs <- list.files(settings$modeloutdir, full.names = TRUE)
# get the output variable names, probably not the best way to do it but meh
nc <- nc_open(dir(runs[1], full.names=TRUE)[1])
vars <- nc.get.variable.list(nc)
nc_close(nc)
print(vars)
ensemble.out <- load_ensemble(outdir = output.dir, settings = settings, variable = vars)
# Simple plot of all points
plot(GPP ~ temperate.coniferous.Amax, data=ensemble.out)
#----------
# Alexey's Notes
## Load ensemble samples
# ensemble.all.files <- list.files(output.dir, "ensemble")
# ensemble.file <- list.files(output.dir, "ensemble.samples")
# load(file.path(output.dir, ensemble.file))
## "ensemble.output" contains the following:
## ensemble.output -- List containing a single named value ("AGB")
## "ensemble.samples" contains the following:
## ens.run.ids -- Run IDs, not paired with anything, but probably in same order as samples
## ens.ensemble.id -- Just one value; probably in case there are multiple ensembles?
## ens.samples -- For each PFT, data frame of sampled parameter values. Not linked to run IDs, but presumably in same order?
## pft.names -- Names of each PFT
## trait.names -- List of names of traits | /example.R | no_license | ashiklom/Global-sensitivity-analysis | R | false | false | 1,699 | r | # Load function definiton
source("load_ensemble.R")
library(PEcAn.all)
library(ncdf4.helpers)
#pecan.id <- 1432 ## DALEC, 1000 member ensemble -- Betsy; original
#pecan.id <- 1436 ## DALEC, 200 member ensemble -- Alexey; more recent
pecan.id <- 1437 ## SIPNET, 500 member ensemble -- Alexey
#pecan.id <- 1347 ## Linkages; Ann(?); Complete ensemble analysis
output.dir <- paste0("/fs/data2/output/PEcAn_100000", pecan.id)
settings <- xmlToList(xmlParse(file.path(output.dir,"pecan.xml")))
runs <- list.files(settings$modeloutdir, full.names = TRUE)
# get the output variable names, probably not the best way to do it but meh
nc <- nc_open(dir(runs[1], full.names=TRUE)[1])
vars <- nc.get.variable.list(nc)
nc_close(nc)
print(vars)
ensemble.out <- load_ensemble(outdir = output.dir, settings = settings, variable = vars)
# Simple plot of all points
plot(GPP ~ temperate.coniferous.Amax, data=ensemble.out)
#----------
# Alexey's Notes
## Load ensemble samples
# ensemble.all.files <- list.files(output.dir, "ensemble")
# ensemble.file <- list.files(output.dir, "ensemble.samples")
# load(file.path(output.dir, ensemble.file))
## "ensemble.output" contains the following:
## ensemble.output -- List containing a single named value ("AGB")
## "ensemble.samples" contains the following:
## ens.run.ids -- Run IDs, not paired with anything, but probably in same order as samples
## ens.ensemble.id -- Just one value; probably in case there are multiple ensembles?
## ens.samples -- For each PFT, data frame of sampled parameter values. Not linked to run IDs, but presumably in same order?
## pft.names -- Names of each PFT
## trait.names -- List of names of traits |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/check_csv.R
\name{check_csv}
\alias{check_csv}
\title{Check the metadata csv files and write only the 'good' entries.}
\usage{
check_csv(file_type = "OrgDb", bioc_version = "3.9",
eu_version = "44", verbose = FALSE)
}
\arguments{
\item{file_type}{Is this an OrgDB, GRanges, TxDb, OrganismDbi, or BSGenome dataset?}
\item{bioc_version}{Which bioconductor version is this for?}
\item{eu_version}{Which eupathdb version is this for?}
\item{verbose}{Print some information about what happened?}
}
\description{
While we are at it, put the failed entries into their own csv file so that I
can step through and look for why they failed.
}
| /man/check_csv.Rd | no_license | hupef/EuPathDB | R | false | true | 716 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/check_csv.R
\name{check_csv}
\alias{check_csv}
\title{Check the metadata csv files and write only the 'good' entries.}
\usage{
check_csv(file_type = "OrgDb", bioc_version = "3.9",
eu_version = "44", verbose = FALSE)
}
\arguments{
\item{file_type}{Is this an OrgDB, GRanges, TxDb, OrganismDbi, or BSGenome dataset?}
\item{bioc_version}{Which bioconductor version is this for?}
\item{eu_version}{Which eupathdb version is this for?}
\item{verbose}{Print some information about what happened?}
}
\description{
While we are at it, put the failed entries into their own csv file so that I
can step through and look for why they failed.
}
|
# load the distance matrix
library(rethinking)
library(bayesplot)
data(islandsDistMatrix)
# display (measured in thousands of km)
Dmat <- islandsDistMatrix
colnames(Dmat) <- c("Ml","Ti","SC","Ya","Fi","Tr","Ch","Mn","To","Ha")
round(Dmat,1)
data(Kline2) # load the ordinary data, now with coordinates
d <- Kline2
d$society <- 1:10 # index observations
14.38
dat_list <- list(
T = d$total_tools,
P = d$population,
society = d$society,
Dmat=islandsDistMatrix )
m14.7 <- ulam(
alist(
T ~ dpois(lambda),
lambda <- (a*P^b/g)*exp(k[society]),
vector[10]:k ~ multi_normal( 0 , SIGMA ),
matrix[10,10]:SIGMA <- cov_GPL2( Dmat , etasq , rhosq , 0.01 ),
c(a,b,g) ~ dexp( 1 ),
etasq ~ dexp( 2 ),
rhosq ~ dexp( 0.5 )
), data=dat_list , chains=4 , cores=4 , iter=2000 )
post <- extract.samples(m14.7)
mcmc_areas(post2[,1:11])
post2=as.matrix(m14.7@stanfit)
canary_title1 <- ggtitle("Posterior distributions for Mc Elreaths' Island models",
"with medians and 95% intervals")
mcmc_areas(post2,
pars = c("a", "b", #"b2", "b3",
"k[1]", "k[2]", "k[3]", "k[4]", "k[5]", "k[6]", "k[7]",
"etasq","rhosq"),
,prob = 0.95) + canary_title1
| /mc_elreath.R | no_license | DjentMachine/PhD-1 | R | false | false | 1,284 | r | # load the distance matrix
library(rethinking)
library(bayesplot)
data(islandsDistMatrix)
# display (measured in thousands of km)
Dmat <- islandsDistMatrix
colnames(Dmat) <- c("Ml","Ti","SC","Ya","Fi","Tr","Ch","Mn","To","Ha")
round(Dmat,1)
data(Kline2) # load the ordinary data, now with coordinates
d <- Kline2
d$society <- 1:10 # index observations
14.38
dat_list <- list(
T = d$total_tools,
P = d$population,
society = d$society,
Dmat=islandsDistMatrix )
m14.7 <- ulam(
alist(
T ~ dpois(lambda),
lambda <- (a*P^b/g)*exp(k[society]),
vector[10]:k ~ multi_normal( 0 , SIGMA ),
matrix[10,10]:SIGMA <- cov_GPL2( Dmat , etasq , rhosq , 0.01 ),
c(a,b,g) ~ dexp( 1 ),
etasq ~ dexp( 2 ),
rhosq ~ dexp( 0.5 )
), data=dat_list , chains=4 , cores=4 , iter=2000 )
post <- extract.samples(m14.7)
mcmc_areas(post2[,1:11])
post2=as.matrix(m14.7@stanfit)
canary_title1 <- ggtitle("Posterior distributions for Mc Elreaths' Island models",
"with medians and 95% intervals")
mcmc_areas(post2,
pars = c("a", "b", #"b2", "b3",
"k[1]", "k[2]", "k[3]", "k[4]", "k[5]", "k[6]", "k[7]",
"etasq","rhosq"),
,prob = 0.95) + canary_title1
|
source("get_epower_data.R")
png(file="plot4.png",width=480,height=480)
par(mfrow=c(2,2))
with(df3,{
plot(Datetime,Global_active_power,type='l',
ylab="Global Active Power")
plot(Datetime,Voltage,type='l')
plot(Datetime,Sub_metering_1,type='l',col="Black",ylab="Energy sub metering")
lines(Datetime,Sub_metering_2,col="Red")
lines(Datetime,Sub_metering_3,col="Blue")
legend("topright",
lty=1,
col = c("Black","Red","Blue"),
legend = c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
plot(Datetime,Global_reactive_power,type='l')
})
dev.off() | /plot4.R | no_license | sandeepbm/ExData_Plotting1 | R | false | false | 660 | r | source("get_epower_data.R")
png(file="plot4.png",width=480,height=480)
par(mfrow=c(2,2))
with(df3,{
plot(Datetime,Global_active_power,type='l',
ylab="Global Active Power")
plot(Datetime,Voltage,type='l')
plot(Datetime,Sub_metering_1,type='l',col="Black",ylab="Energy sub metering")
lines(Datetime,Sub_metering_2,col="Red")
lines(Datetime,Sub_metering_3,col="Blue")
legend("topright",
lty=1,
col = c("Black","Red","Blue"),
legend = c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
plot(Datetime,Global_reactive_power,type='l')
})
dev.off() |
| pc = 0xc001 | a = 0xff | x = 0x00 | y = 0x00 | sp = 0x00fd | p[NV-BDIZC] = 10110100 |
| pc = 0xc003 | a = 0x01 | x = 0x00 | y = 0x00 | sp = 0x00fd | p[NV-BDIZC] = 00110100 |
| pc = 0xc005 | a = 0x01 | x = 0x00 | y = 0x00 | sp = 0x00fd | p[NV-BDIZC] = 00110111 |
| pc = 0xc007 | a = 0x01 | x = 0x00 | y = 0x00 | sp = 0x00fd | p[NV-BDIZC] = 00110111 |
| /res/AND.r | no_license | LucasRodolfo/MC861 | R | false | false | 352 | r | | pc = 0xc001 | a = 0xff | x = 0x00 | y = 0x00 | sp = 0x00fd | p[NV-BDIZC] = 10110100 |
| pc = 0xc003 | a = 0x01 | x = 0x00 | y = 0x00 | sp = 0x00fd | p[NV-BDIZC] = 00110100 |
| pc = 0xc005 | a = 0x01 | x = 0x00 | y = 0x00 | sp = 0x00fd | p[NV-BDIZC] = 00110111 |
| pc = 0xc007 | a = 0x01 | x = 0x00 | y = 0x00 | sp = 0x00fd | p[NV-BDIZC] = 00110111 |
|
loadData <- function() {
columns <- c(rep('character',2), rep('numeric', 7))
power <- read.table('household_power_consumption.txt', sep = ';',
header = TRUE, stringsAsFactors = FALSE)
power <- subset(power, Date == '1/2/2007' | Date == '2/2/2007')
sapply(3:9, function(i) {
power[,i] <<- as.numeric(power[,i])
})
moments <- paste(power[,1], power[,2])
parsedMoments <- strptime(moments, '%d/%m/%Y %H:%M:%S')
cbind(parsedMoments, power[3:9])
}
plot4 <- function(data, toFile = FALSE) {
if (toFile) png(filename = "plot4.png", width = 480, height = 480, bg =rgb(0,0,0,0))
par(mfrow = c(2,2))
plot(data[,1], data[,2], type = "l",
ylab = "Global Active Power",
xlab = "")
plot(data[,1], data[,4], type = "l", ylab=names(data)[4], xlab = "datetime")
plot(data[,1], data[,6], type="n", xlab="", ylab="Energy sub metering")
lines(data[,1], data[,6])
lines(data[,1], data[,7], col = "red")
lines(data[,1], data[,8], col = "blue")
legend("topright",legend=names(data)[6:8], col=c("black", "red", "blue"), lty=c(1,1))
plot(data[,1], data[,3], type = "l", ylab=names(data)[3], xlab = "datetime")
if (toFile) dev.off()
}
power <- loadData()
plot4(power, TRUE) | /ExploratoryDataAnalysis/Assignment1/plot4.R | no_license | hazam/datasciencecoursera | R | false | false | 1,281 | r | loadData <- function() {
columns <- c(rep('character',2), rep('numeric', 7))
power <- read.table('household_power_consumption.txt', sep = ';',
header = TRUE, stringsAsFactors = FALSE)
power <- subset(power, Date == '1/2/2007' | Date == '2/2/2007')
sapply(3:9, function(i) {
power[,i] <<- as.numeric(power[,i])
})
moments <- paste(power[,1], power[,2])
parsedMoments <- strptime(moments, '%d/%m/%Y %H:%M:%S')
cbind(parsedMoments, power[3:9])
}
plot4 <- function(data, toFile = FALSE) {
if (toFile) png(filename = "plot4.png", width = 480, height = 480, bg =rgb(0,0,0,0))
par(mfrow = c(2,2))
plot(data[,1], data[,2], type = "l",
ylab = "Global Active Power",
xlab = "")
plot(data[,1], data[,4], type = "l", ylab=names(data)[4], xlab = "datetime")
plot(data[,1], data[,6], type="n", xlab="", ylab="Energy sub metering")
lines(data[,1], data[,6])
lines(data[,1], data[,7], col = "red")
lines(data[,1], data[,8], col = "blue")
legend("topright",legend=names(data)[6:8], col=c("black", "red", "blue"), lty=c(1,1))
plot(data[,1], data[,3], type = "l", ylab=names(data)[3], xlab = "datetime")
if (toFile) dev.off()
}
power <- loadData()
plot4(power, TRUE) |
library(sfsmisc)
### Name: toLatex.numeric
### Title: LaTeX or Sweave friendly Formatting of Numbers
### Aliases: toLatex.numeric
### Keywords: misc
### ** Examples
xx <- pi * 10^(-9:9)
format(xx)
formatC(xx)
toLatex(xx) #-> scientific = TRUE is chosen
toLatex(xx, scientific=FALSE)
sapply(xx, toLatex)
sapply(xx, toLatex, digits = 2)
| /data/genthat_extracted_code/sfsmisc/examples/toLatex.numeric.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 345 | r | library(sfsmisc)
### Name: toLatex.numeric
### Title: LaTeX or Sweave friendly Formatting of Numbers
### Aliases: toLatex.numeric
### Keywords: misc
### ** Examples
xx <- pi * 10^(-9:9)
format(xx)
formatC(xx)
toLatex(xx) #-> scientific = TRUE is chosen
toLatex(xx, scientific=FALSE)
sapply(xx, toLatex)
sapply(xx, toLatex, digits = 2)
|
test_that("check that join didn't introduce NAs or otherwise cull information", {
source("../R/shared_residues.R")
source("../R/read_mafft_map.R")
source("../R/read_feature_csv.R")
map <- read_mafft_map("O24426_CHLRE.fasta.map")
feat <- read_feature_csv("../inputs/protein_features/atp_binding.csv")
df <- combine_alignment_and_feature(mafft_map = map, feature_df = feat)
expect_equal(nrow(feat), nrow(df))
expect_equal(nrow(df[complete.cases(df), ]), nrow(df))
})
| /tests/test-combine_alignment_and_feature.R | permissive | Arcadia-Science/2022-actin-prediction | R | false | false | 481 | r | test_that("check that join didn't introduce NAs or otherwise cull information", {
source("../R/shared_residues.R")
source("../R/read_mafft_map.R")
source("../R/read_feature_csv.R")
map <- read_mafft_map("O24426_CHLRE.fasta.map")
feat <- read_feature_csv("../inputs/protein_features/atp_binding.csv")
df <- combine_alignment_and_feature(mafft_map = map, feature_df = feat)
expect_equal(nrow(feat), nrow(df))
expect_equal(nrow(df[complete.cases(df), ]), nrow(df))
})
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ld_matrix.R
\name{ld_matrix_local}
\alias{ld_matrix_local}
\title{Get LD matrix using local plink binary and reference dataset}
\usage{
ld_matrix_local(variants, bfile, plink_bin, with_alleles = TRUE)
}
\arguments{
\item{variants}{List of variants (rsids)}
\item{bfile}{Path to bed/bim/fam ld reference panel}
\item{plink_bin}{Specify path to plink binary. Default = \code{NULL}.
See \url{https://github.com/explodecomputer/plinkbinr} for convenient access to plink binaries}
\item{with_alleles}{Whether to append the allele names to the SNP names.
Default: \code{TRUE}}
}
\value{
data frame
}
\description{
Get LD matrix using local plink binary and reference dataset
}
| /man/ld_matrix_local.Rd | permissive | MRCIEU/ieugwasr | R | false | true | 752 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ld_matrix.R
\name{ld_matrix_local}
\alias{ld_matrix_local}
\title{Get LD matrix using local plink binary and reference dataset}
\usage{
ld_matrix_local(variants, bfile, plink_bin, with_alleles = TRUE)
}
\arguments{
\item{variants}{List of variants (rsids)}
\item{bfile}{Path to bed/bim/fam ld reference panel}
\item{plink_bin}{Specify path to plink binary. Default = \code{NULL}.
See \url{https://github.com/explodecomputer/plinkbinr} for convenient access to plink binaries}
\item{with_alleles}{Whether to append the allele names to the SNP names.
Default: \code{TRUE}}
}
\value{
data frame
}
\description{
Get LD matrix using local plink binary and reference dataset
}
|
################## HEADER #######################
# Company : Stevens
# Project : CS 513 Final Project
# Purpose : Perform C5.0 to predict arrest likelihood
# First Name : Justin
# Last Name : Tsang
# Id :
# Date : October 29, 2018
# Comments : NULLs and outliers removed
rm(list=ls())
#################################################
###### Load data #####
file_path <- "/Users/justint/Documents/2018-Fall/CS-513/Project/1_remove_null_outlier/1-Categorized.csv"
# df <- read.csv(
# file=file_path,
# header=TRUE,
# sep=",",
# na.strings=c(""),
# stringsAsFactors = FALSE
# )
df <- read.csv(
file=file_path,
header=TRUE,
sep=",",
na.strings=c("(null)", "", "V", "("),
stringsAsFactors = FALSE
)
# features <- c("STOP_WAS_INITIATED", "ISSUING_OFFICER_RANK", "SUPERVISING_OFFICER_RANK", "SUSPECTED_CRIME_DESCRIPTION",
# "FRISKED_FLAG", "SEARCHED_FLAG", "OTHER_CONTRABAND_FLAG", "FIREARM_FLAG", "KNIFE_CUTTER_FLAG",
# "OTHER_WEAPON_FLAG", "WEAPON_FOUND_FLAG", "PHYSICAL_FORCE_HANDCUFF_SUSPECT_FLAG",
# "BACKROUND_CIRCUMSTANCES_VIOLENT_CRIME_FLAG", "BACKROUND_CIRCUMSTANCES_SUSPECT_KNOWN_TO_CARRY_WEAPON_FLAG",
# "SUSPECTS_ACTIONS_CONCEALED_POSSESSION_WEAPON_FLAG", "SUSPECTS_ACTIONS_DRUG_TRANSACTIONS_FLAG",
# "SUSPECTS_ACTIONS_IDENTIFY_CRIME_PATTERN_FLAG",
# "CATEGORIZED_SUSPECT_REPORTED_AGE", "SUSPECT_SEX", "SUSPECT_RACE_DESCRIPTION", "CATEGORIZED_SUSPECT_HEIGHT", "CATEGORIZED_SUSPECT_WEIGHT",
# "STOP_LOCATION_PRECINCT")
features <- c("STOP_WAS_INITIATED", "ISSUING_OFFICER_RANK", "SUPERVISING_OFFICER_RANK", "SUSPECTED_CRIME_DESCRIPTION",
"FRISKED_FLAG", "SEARCHED_FLAG", "OTHER_CONTRABAND_FLAG", "FIREARM_FLAG", "KNIFE_CUTTER_FLAG",
"OTHER_WEAPON_FLAG", "WEAPON_FOUND_FLAG", "PHYSICAL_FORCE_HANDCUFF_SUSPECT_FLAG",
"BACKROUND_CIRCUMSTANCES_VIOLENT_CRIME_FLAG", "BACKROUND_CIRCUMSTANCES_SUSPECT_KNOWN_TO_CARRY_WEAPON_FLAG",
"SUSPECTS_ACTIONS_CONCEALED_POSSESSION_WEAPON_FLAG", "SUSPECTS_ACTIONS_DRUG_TRANSACTIONS_FLAG",
"SUSPECTS_ACTIONS_IDENTIFY_CRIME_PATTERN_FLAG",
"CATEGORIZED_SUSPECT_REPORTED_AGE", "SUSPECT_SEX", "SUSPECT_RACE_DESCRIPTION", "CATEGORIZED_SUSPECT_HEIGHT", "CATEGORIZED_SUSPECT_WEIGHT",
"STOP_LOCATION_PRECINCT")
dependent <- c("SUSPECT_ARRESTED_FLAG")
ranks <- c("POF", "POM", "DT1", "DT2", "DT3", "DTS", "SSA", "SGT", "SDS", "LSA", "LT", "CPT", "DI", "LCD")
sqf_df <- df[c(features, dependent)]
##### CLEANUP DATA #####
library(modeest)
for (feature in features) {
na_rows <- is.na(sqf_df[, feature])
if (feature == "FIREARM_FLAG" || feature == "KNIFE_CUTTER_FLAG" || feature == "OTHER_WEAPON_FLAG" || feature == "WEAPON_FOUND_FLAG" ||
feature == "PHYSICAL_FORCE_HANDCUFF_SUSPECT_FLAG" || feature == "BACKROUND_CIRCUMSTANCES_VIOLENT_CRIME_FLAG" ||
feature == "BACKROUND_CIRCUMSTANCES_SUSPECT_KNOWN_TO_CARRY_WEAPON_FLAG" || feature == "SUSPECTS_ACTIONS_CONCEALED_POSSESSION_WEAPON_FLAG" ||
feature == "SUSPECTS_ACTIONS_DRUG_TRANSACTIONS_FLAG" || feature == "SUSPECTS_ACTIONS_IDENTIFY_CRIME_PATTERN_FLAG") {
sqf_df[na_rows, feature] <- "N"
}
# } else if (feature == "SUSPECT_REPORTED_AGE") {
# mode_age <- mlv(sqf_df[, feature], method="mfv", na.rm=TRUE) # most frequent value
# sqf_df[na_rows, feature] <- mode_age$M
# } else if (feature == "SUSPECT_SEX") {
# sqf_df[sqf_df$SUSPECT_SEX == "MALE" | sqf_df$SUSPECT_SEX == "FEMALE", "SUSPECT_SEX"]
# }
}
sqf_df = na.omit(sqf_df) # Remove any rows with missing value
##### Cast to correct data type #####
for (feature in c(features, dependent)) {
# Should be factor
if (feature == "FRISKED_FLAG" || feature == "SEARCHED_FLAG" || feature == "OTHER_CONTRABAND_FLAG" ||
feature == "FIREARM_FLAG" || feature == "KNIFE_CUTTER_FLAG" || feature == "OTHER_WEAPON_FLAG" || feature == "WEAPON_FOUND_FLAG" ||
feature == "PHYSICAL_FORCE_HANDCUFF_SUSPECT_FLAG" || feature == "BACKROUND_CIRCUMSTANCES_VIOLENT_CRIME_FLAG" ||
feature == "BACKROUND_CIRCUMSTANCES_SUSPECT_KNOWN_TO_CARRY_WEAPON_FLAG" || feature == "SUSPECTS_ACTIONS_CONCEALED_POSSESSION_WEAPON_FLAG" ||
feature == "SUSPECTS_ACTIONS_DRUG_TRANSACTIONS_FLAG" || feature == "SUSPECTS_ACTIONS_IDENTIFY_CRIME_PATTERN_FLAG" ||
feature == "SUSPECT_ARRESTED_FLAG") {
sqf_df[, feature] <- factor(sqf_df[, feature], levels = c("Y", "N"))
} else if (feature == "ISSUING_OFFICER_RANK" || feature == "SUPERVISING_OFFICER_RANK") {
sqf_df[, feature] <- factor(sqf_df[, feature], ranks)
} else if (feature == "STOP_WAS_INITIATED" || feature == "SUSPECTED_CRIME_DESCRIPTION") {
sqf_df[, feature] <- factor(sqf_df[, feature])
} else if (feature == "SUSPECT_SEX") {
sqf_df[, feature] <- factor(sqf_df[, feature], levels = c("MALE", "FEMALE"))
} else if (feature == "SUSPECT_RACE_DESCRIPTION" || feature == "CATEGORIZED_SUSPECT_REPORTED_AGE" ||
feature == "CATEGORIZED_SUSPECT_HEIGHT" || feature == "CATEGORIZED_SUSPECT_WEIGHT" ||
feature == "STOP_LOCATION_PRECINCT") {
sqf_df[, feature] <- factor(sqf_df[, feature])
}
}
##### Split data ######
df_rows <- nrow(sqf_df)
idx <- sample(x=df_rows, size=as.integer(0.20*df_rows))
test <- sqf_df[idx, ]
training <- sqf_df[-idx, ]
##### Install packages #####
# install.packages('e1071', dependencies = TRUE)
library(class)
library(e1071)
##### Main function #####
class(sqf_df)
prop.table
# Get table of percentage for class and survived
##### Naive bayes #####
nBayes_arrest <- naiveBayes(
SUSPECT_ARRESTED_FLAG ~ .,
data=training
)
##### Predict tests ####
# Use predict function to predict
predict_arrest <- predict(nBayes_arrest, test, type="class")
test_arrest <- test$SUSPECT_ARRESTED_FLAG
table_k <- table(test_arrest, predict_arrest)
accuracy_k <- sum(diag(table_k)) / sum(table_k)
print("Table Naive Bayes")
print(table_k)
print(paste("Accuracy: ", accuracy_k))
| /sqf-datamining/2_estimate_nulls/archive/naive_bayes.R | no_license | jollyliu66/data-mining | R | false | false | 6,147 | r | ################## HEADER #######################
# Company : Stevens
# Project : CS 513 Final Project
# Purpose : Perform C5.0 to predict arrest likelihood
# First Name : Justin
# Last Name : Tsang
# Id :
# Date : October 29, 2018
# Comments : NULLs and outliers removed
rm(list=ls())
#################################################
###### Load data #####
file_path <- "/Users/justint/Documents/2018-Fall/CS-513/Project/1_remove_null_outlier/1-Categorized.csv"
# df <- read.csv(
# file=file_path,
# header=TRUE,
# sep=",",
# na.strings=c(""),
# stringsAsFactors = FALSE
# )
df <- read.csv(
file=file_path,
header=TRUE,
sep=",",
na.strings=c("(null)", "", "V", "("),
stringsAsFactors = FALSE
)
# features <- c("STOP_WAS_INITIATED", "ISSUING_OFFICER_RANK", "SUPERVISING_OFFICER_RANK", "SUSPECTED_CRIME_DESCRIPTION",
# "FRISKED_FLAG", "SEARCHED_FLAG", "OTHER_CONTRABAND_FLAG", "FIREARM_FLAG", "KNIFE_CUTTER_FLAG",
# "OTHER_WEAPON_FLAG", "WEAPON_FOUND_FLAG", "PHYSICAL_FORCE_HANDCUFF_SUSPECT_FLAG",
# "BACKROUND_CIRCUMSTANCES_VIOLENT_CRIME_FLAG", "BACKROUND_CIRCUMSTANCES_SUSPECT_KNOWN_TO_CARRY_WEAPON_FLAG",
# "SUSPECTS_ACTIONS_CONCEALED_POSSESSION_WEAPON_FLAG", "SUSPECTS_ACTIONS_DRUG_TRANSACTIONS_FLAG",
# "SUSPECTS_ACTIONS_IDENTIFY_CRIME_PATTERN_FLAG",
# "CATEGORIZED_SUSPECT_REPORTED_AGE", "SUSPECT_SEX", "SUSPECT_RACE_DESCRIPTION", "CATEGORIZED_SUSPECT_HEIGHT", "CATEGORIZED_SUSPECT_WEIGHT",
# "STOP_LOCATION_PRECINCT")
features <- c("STOP_WAS_INITIATED", "ISSUING_OFFICER_RANK", "SUPERVISING_OFFICER_RANK", "SUSPECTED_CRIME_DESCRIPTION",
"FRISKED_FLAG", "SEARCHED_FLAG", "OTHER_CONTRABAND_FLAG", "FIREARM_FLAG", "KNIFE_CUTTER_FLAG",
"OTHER_WEAPON_FLAG", "WEAPON_FOUND_FLAG", "PHYSICAL_FORCE_HANDCUFF_SUSPECT_FLAG",
"BACKROUND_CIRCUMSTANCES_VIOLENT_CRIME_FLAG", "BACKROUND_CIRCUMSTANCES_SUSPECT_KNOWN_TO_CARRY_WEAPON_FLAG",
"SUSPECTS_ACTIONS_CONCEALED_POSSESSION_WEAPON_FLAG", "SUSPECTS_ACTIONS_DRUG_TRANSACTIONS_FLAG",
"SUSPECTS_ACTIONS_IDENTIFY_CRIME_PATTERN_FLAG",
"CATEGORIZED_SUSPECT_REPORTED_AGE", "SUSPECT_SEX", "SUSPECT_RACE_DESCRIPTION", "CATEGORIZED_SUSPECT_HEIGHT", "CATEGORIZED_SUSPECT_WEIGHT",
"STOP_LOCATION_PRECINCT")
dependent <- c("SUSPECT_ARRESTED_FLAG")
ranks <- c("POF", "POM", "DT1", "DT2", "DT3", "DTS", "SSA", "SGT", "SDS", "LSA", "LT", "CPT", "DI", "LCD")
sqf_df <- df[c(features, dependent)]
##### CLEANUP DATA #####
library(modeest)
for (feature in features) {
na_rows <- is.na(sqf_df[, feature])
if (feature == "FIREARM_FLAG" || feature == "KNIFE_CUTTER_FLAG" || feature == "OTHER_WEAPON_FLAG" || feature == "WEAPON_FOUND_FLAG" ||
feature == "PHYSICAL_FORCE_HANDCUFF_SUSPECT_FLAG" || feature == "BACKROUND_CIRCUMSTANCES_VIOLENT_CRIME_FLAG" ||
feature == "BACKROUND_CIRCUMSTANCES_SUSPECT_KNOWN_TO_CARRY_WEAPON_FLAG" || feature == "SUSPECTS_ACTIONS_CONCEALED_POSSESSION_WEAPON_FLAG" ||
feature == "SUSPECTS_ACTIONS_DRUG_TRANSACTIONS_FLAG" || feature == "SUSPECTS_ACTIONS_IDENTIFY_CRIME_PATTERN_FLAG") {
sqf_df[na_rows, feature] <- "N"
}
# } else if (feature == "SUSPECT_REPORTED_AGE") {
# mode_age <- mlv(sqf_df[, feature], method="mfv", na.rm=TRUE) # most frequent value
# sqf_df[na_rows, feature] <- mode_age$M
# } else if (feature == "SUSPECT_SEX") {
# sqf_df[sqf_df$SUSPECT_SEX == "MALE" | sqf_df$SUSPECT_SEX == "FEMALE", "SUSPECT_SEX"]
# }
}
sqf_df = na.omit(sqf_df) # Remove any rows with missing value
##### Cast to correct data type #####
for (feature in c(features, dependent)) {
# Should be factor
if (feature == "FRISKED_FLAG" || feature == "SEARCHED_FLAG" || feature == "OTHER_CONTRABAND_FLAG" ||
feature == "FIREARM_FLAG" || feature == "KNIFE_CUTTER_FLAG" || feature == "OTHER_WEAPON_FLAG" || feature == "WEAPON_FOUND_FLAG" ||
feature == "PHYSICAL_FORCE_HANDCUFF_SUSPECT_FLAG" || feature == "BACKROUND_CIRCUMSTANCES_VIOLENT_CRIME_FLAG" ||
feature == "BACKROUND_CIRCUMSTANCES_SUSPECT_KNOWN_TO_CARRY_WEAPON_FLAG" || feature == "SUSPECTS_ACTIONS_CONCEALED_POSSESSION_WEAPON_FLAG" ||
feature == "SUSPECTS_ACTIONS_DRUG_TRANSACTIONS_FLAG" || feature == "SUSPECTS_ACTIONS_IDENTIFY_CRIME_PATTERN_FLAG" ||
feature == "SUSPECT_ARRESTED_FLAG") {
sqf_df[, feature] <- factor(sqf_df[, feature], levels = c("Y", "N"))
} else if (feature == "ISSUING_OFFICER_RANK" || feature == "SUPERVISING_OFFICER_RANK") {
sqf_df[, feature] <- factor(sqf_df[, feature], ranks)
} else if (feature == "STOP_WAS_INITIATED" || feature == "SUSPECTED_CRIME_DESCRIPTION") {
sqf_df[, feature] <- factor(sqf_df[, feature])
} else if (feature == "SUSPECT_SEX") {
sqf_df[, feature] <- factor(sqf_df[, feature], levels = c("MALE", "FEMALE"))
} else if (feature == "SUSPECT_RACE_DESCRIPTION" || feature == "CATEGORIZED_SUSPECT_REPORTED_AGE" ||
feature == "CATEGORIZED_SUSPECT_HEIGHT" || feature == "CATEGORIZED_SUSPECT_WEIGHT" ||
feature == "STOP_LOCATION_PRECINCT") {
sqf_df[, feature] <- factor(sqf_df[, feature])
}
}
##### Split data ######
df_rows <- nrow(sqf_df)
idx <- sample(x=df_rows, size=as.integer(0.20*df_rows))
test <- sqf_df[idx, ]
training <- sqf_df[-idx, ]
##### Install packages #####
# install.packages('e1071', dependencies = TRUE)
library(class)
library(e1071)
##### Main function #####
class(sqf_df)
prop.table
# Get table of percentage for class and survived
##### Naive bayes #####
nBayes_arrest <- naiveBayes(
SUSPECT_ARRESTED_FLAG ~ .,
data=training
)
##### Predict tests ####
# Use predict function to predict
predict_arrest <- predict(nBayes_arrest, test, type="class")
test_arrest <- test$SUSPECT_ARRESTED_FLAG
table_k <- table(test_arrest, predict_arrest)
accuracy_k <- sum(diag(table_k)) / sum(table_k)
print("Table Naive Bayes")
print(table_k)
print(paste("Accuracy: ", accuracy_k))
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{rse}
\alias{rse}
\title{The Rosenberg Self-Esteem Scale}
\format{A data frame with 1000 participants who responded to 10 rating scale items in an interactive version of the Rosenberg Self-Esteem Scale (Rosenberg, 1965). There are also additional demographic items about the participants:
\describe{
\item{Q1}{I feel that I am a person of worth, at least on an equal plane with others.}
\item{Q2}{I feel that I have a number of good qualities.}
\item{Q3}{All in all, I am inclined to feel that I am a failure.}
\item{Q4}{I am able to do things as well as most other people.}
\item{Q5}{I feel I do not have much to be proud of.}
\item{Q6}{I take a positive attitude toward myself.}
\item{Q7}{On the whole, I am satisfied with myself.}
\item{Q8}{I wish I could have more respect for myself.}
\item{Q9}{I certainly feel useless at times.}
\item{Q10}{At times, I think I am no good at all.}
\item{gender}{Chosen from a drop down list (1=male, 2=female, 3=other; 0=none was chosen)}
\item{age}{Entered as a free response. (0=response that could not be converted to integer)}
\item{source}{How the user came to the web page of the RSE scale (1=Front page of personality website, 2=Google search, 3=other)}
\item{country}{Inferred from technical information using MaxMind GeoLite}
\item{person}{Participant identifier}
}}
\source{
The The Rosenberg Self-Esteem Scale is available at \url{http://personality-testing.info/tests/RSE.php}.
}
\usage{
rse
}
\description{
The RSE data set was obtained via online with an interactive version of the Rosenberg Self-Esteem Scale (Rosenberg, 1965). Individuals were informed at the start of the test that their data would be saved. When they completed the scale, they were asked to confirm that the responses they had given were accurate and could be used for research, only those who confirmed are included in this dataset. A random sample of 1000 participants who completed all of the items in the scale were included in the RSE data set. All of the 10 rating scale items were rated on a 4-point scale (i.e., 1=strongly disagree, 2=disagree, 3=agree, and 4=strongly agree). Items 3, 5, 8, 9 and 10 were reversed-coded in order to place all the items in the same direction. That is, higher scores indicate higher self-esteem.
}
\references{
Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ: Princeton University Press.
}
\keyword{dataset}
| /man/rse.Rd | no_license | cddesja/REPM | R | false | true | 2,541 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{rse}
\alias{rse}
\title{The Rosenberg Self-Esteem Scale}
\format{A data frame with 1000 participants who responded to 10 rating scale items in an interactive version of the Rosenberg Self-Esteem Scale (Rosenberg, 1965). There are also additional demographic items about the participants:
\describe{
\item{Q1}{I feel that I am a person of worth, at least on an equal plane with others.}
\item{Q2}{I feel that I have a number of good qualities.}
\item{Q3}{All in all, I am inclined to feel that I am a failure.}
\item{Q4}{I am able to do things as well as most other people.}
\item{Q5}{I feel I do not have much to be proud of.}
\item{Q6}{I take a positive attitude toward myself.}
\item{Q7}{On the whole, I am satisfied with myself.}
\item{Q8}{I wish I could have more respect for myself.}
\item{Q9}{I certainly feel useless at times.}
\item{Q10}{At times, I think I am no good at all.}
\item{gender}{Chosen from a drop down list (1=male, 2=female, 3=other; 0=none was chosen)}
\item{age}{Entered as a free response. (0=response that could not be converted to integer)}
\item{source}{How the user came to the web page of the RSE scale (1=Front page of personality website, 2=Google search, 3=other)}
\item{country}{Inferred from technical information using MaxMind GeoLite}
\item{person}{Participant identifier}
}}
\source{
The The Rosenberg Self-Esteem Scale is available at \url{http://personality-testing.info/tests/RSE.php}.
}
\usage{
rse
}
\description{
The RSE data set was obtained via online with an interactive version of the Rosenberg Self-Esteem Scale (Rosenberg, 1965). Individuals were informed at the start of the test that their data would be saved. When they completed the scale, they were asked to confirm that the responses they had given were accurate and could be used for research, only those who confirmed are included in this dataset. A random sample of 1000 participants who completed all of the items in the scale were included in the RSE data set. All of the 10 rating scale items were rated on a 4-point scale (i.e., 1=strongly disagree, 2=disagree, 3=agree, and 4=strongly agree). Items 3, 5, 8, 9 and 10 were reversed-coded in order to place all the items in the same direction. That is, higher scores indicate higher self-esteem.
}
\references{
Rosenberg, M. (1965). Society and the adolescent self-image. Princeton, NJ: Princeton University Press.
}
\keyword{dataset}
|
#' Create a GitHub pull request.
#'
#' @inheritParams template_github_request
#' @param title Title for the pull request.
#' @param head Branch to merge from (the branch you are wanting merged).
#' @param base The "master" branch or the branch you want your changes to be
#' merged into (the branch you want to merge into).
#' @param body The description and details of the pull request.
#'
#' @return Nothing.
#'
gh_new_pull_request <- function(repo, title, head, base, body) {
in_development()
# https://developer.github.com/v3/pulls/#create-a-pull-request
# https://developer.github.com/v3/pulls/#alternative-input
}
#' Make edits to existing GitHub Pull Requests.
#'
#' @inheritParams template_github_request
#' @inheritParams gh_new_pull_request
#' @param number The existing pull request number.
#' @param state Whether to open or close the pull request.
#' @param base Whether to change the base ("master") branch to merge into.
#'
#' @return Nothing.
#'
gh_edit_pull_request <- function(repo, number, title, body, state, base) {
in_development()
# https://developer.github.com/v3/pulls/#update-a-pull-request
}
#' Close a currently open GitHub Pull Request.
#'
#' @inheritParams gh_edit_pull_request
#'
#' @return Nothing.
#'
gh_close_pull_request <- function(repo, number) {
in_development()
# gh_edit_pull_request(repo = repo, number = number, state = "closed")
}
#' List all Pull Requests in a GitHub repository.
#'
#' @inheritParams template_github_request
#'
#' @return A tidied [tibble][tibble::tibble-package] or a `gh_response` class as
#' a list.
#'
gh_list_pull_requests <- function(repo) {
in_development()
# https://developer.github.com/v3/pulls/#list-pull-requests
}
| /R/pull-requests.R | permissive | rostools/githubr | R | false | false | 1,732 | r |
#' Create a GitHub pull request.
#'
#' @inheritParams template_github_request
#' @param title Title for the pull request.
#' @param head Branch to merge from (the branch you are wanting merged).
#' @param base The "master" branch or the branch you want your changes to be
#' merged into (the branch you want to merge into).
#' @param body The description and details of the pull request.
#'
#' @return Nothing.
#'
gh_new_pull_request <- function(repo, title, head, base, body) {
in_development()
# https://developer.github.com/v3/pulls/#create-a-pull-request
# https://developer.github.com/v3/pulls/#alternative-input
}
#' Make edits to existing GitHub Pull Requests.
#'
#' @inheritParams template_github_request
#' @inheritParams gh_new_pull_request
#' @param number The existing pull request number.
#' @param state Whether to open or close the pull request.
#' @param base Whether to change the base ("master") branch to merge into.
#'
#' @return Nothing.
#'
gh_edit_pull_request <- function(repo, number, title, body, state, base) {
in_development()
# https://developer.github.com/v3/pulls/#update-a-pull-request
}
#' Close a currently open GitHub Pull Request.
#'
#' @inheritParams gh_edit_pull_request
#'
#' @return Nothing.
#'
gh_close_pull_request <- function(repo, number) {
in_development()
# gh_edit_pull_request(repo = repo, number = number, state = "closed")
}
#' List all Pull Requests in a GitHub repository.
#'
#' @inheritParams template_github_request
#'
#' @return A tidied [tibble][tibble::tibble-package] or a `gh_response` class as
#' a list.
#'
gh_list_pull_requests <- function(repo) {
in_development()
# https://developer.github.com/v3/pulls/#list-pull-requests
}
|
library(tidyverse)
library(readxl)
library(janitor)
files <- list.files("inst/data-raw", pattern = "PRESIDENCIAL", recursive = TRUE, full.names = TRUE)
filenames <- list.files("inst/data-raw", pattern = "PRESIDENCIAL", recursive = TRUE) %>% tools::file_path_sans_ext() %>% str_remove(".*/")
listado_informacion <- tibble(files = files,
filenames = filenames) %>%
separate(filenames, into = c("eleccion", "tipo_cargo"), extra = "merge") %>%
mutate(ballotage = str_starts(tipo_cargo, "2V"),
tipo_cargo = str_remove(tipo_cargo, "^2V_")) %>%
separate(tipo_cargo, into = c("tipo_dato", "tipo_eleccion")) %>%
arrange(tipo_dato, eleccion)
padrones <- listado_informacion %>%
filter(tipo_dato == "Padron") %>%
mutate(data = map(files, read_excel) %>% map(clean_names)) %>%
select(eleccion, data) %>%
unnest(data)
autoridades <- listado_informacion %>%
filter(tipo_dato == "Autoridades") %>%
mutate(data = map(files, read_excel) %>% map(clean_names)) %>%
select(eleccion, ballotage, data) %>%
unnest(data)
candidatos <- listado_informacion %>%
filter(tipo_dato == "Candidatos") %>%
mutate(data = map(files, read_excel) %>% map(clean_names)) %>%
select(eleccion, ballotage, data) %>%
unnest(data)
# TODO: ISP presidencial
# listado_informacion %>%
# filter(tipo_dato == "ISP") %>%
# mutate(data = map(files, read_excel, col_types = "text") %>% map(clean_names) %>% map(names)) %>%
# select(eleccion, ballotage, data) %>%
# unnest(data) %>%
# count(data)
| /inst/script/limpieza_presidencial.R | permissive | calderonsamuel/infogob | R | false | false | 1,509 | r | library(tidyverse)
library(readxl)
library(janitor)
files <- list.files("inst/data-raw", pattern = "PRESIDENCIAL", recursive = TRUE, full.names = TRUE)
filenames <- list.files("inst/data-raw", pattern = "PRESIDENCIAL", recursive = TRUE) %>% tools::file_path_sans_ext() %>% str_remove(".*/")
listado_informacion <- tibble(files = files,
filenames = filenames) %>%
separate(filenames, into = c("eleccion", "tipo_cargo"), extra = "merge") %>%
mutate(ballotage = str_starts(tipo_cargo, "2V"),
tipo_cargo = str_remove(tipo_cargo, "^2V_")) %>%
separate(tipo_cargo, into = c("tipo_dato", "tipo_eleccion")) %>%
arrange(tipo_dato, eleccion)
padrones <- listado_informacion %>%
filter(tipo_dato == "Padron") %>%
mutate(data = map(files, read_excel) %>% map(clean_names)) %>%
select(eleccion, data) %>%
unnest(data)
autoridades <- listado_informacion %>%
filter(tipo_dato == "Autoridades") %>%
mutate(data = map(files, read_excel) %>% map(clean_names)) %>%
select(eleccion, ballotage, data) %>%
unnest(data)
candidatos <- listado_informacion %>%
filter(tipo_dato == "Candidatos") %>%
mutate(data = map(files, read_excel) %>% map(clean_names)) %>%
select(eleccion, ballotage, data) %>%
unnest(data)
# TODO: ISP presidencial
# listado_informacion %>%
# filter(tipo_dato == "ISP") %>%
# mutate(data = map(files, read_excel, col_types = "text") %>% map(clean_names) %>% map(names)) %>%
# select(eleccion, ballotage, data) %>%
# unnest(data) %>%
# count(data)
|
/genefunc.R | no_license | apkide93/tryon | R | false | false | 2,498 | r | ||
library(ape)
library(phangorn)
source("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/MO_Ternary/MO_ternary_function.R")
parameter_setting = expand.grid(m10=c(1),
alpha=c(0.05,0.1,0.2,0.4),
sites =c(20,40,80))
set.seed(1000)
All_loc_dat=data.frame(matrix(ncol = 3, nrow = 0))
for(paraInd in 8:dim(parameter_setting)[1]){
m10 = parameter_setting[paraInd,1]
alpha = parameter_setting[paraInd,2]
beta = parameter_setting[paraInd,2]
numSites= parameter_setting[paraInd,3]
alpha01= alpha
alpha02= alpha*beta/2
beta10= beta/2
beta12= beta/2
gamma20 =0
gamma21 =0
sequencing_error_model=matrix(c(1-alpha01-alpha02,alpha01,alpha02,
beta10,1-beta10-beta12,beta12,
gamma20,gamma21,1-gamma21-gamma21),nrow=3,byrow = TRUE)
print(sequencing_error_model)
unit_theta = 1
unit_gamma = 0
unit_mu = 0
number_br = 100
number_cell = 51
if (m10 < 0.1)
{
m10_str = sprintf('00%s', m10*100)
} else if (m10 >= 0.1 & m10<1){
m10_str = sprintf('0%s', m10*10)
} else{
m10_str = sprintf('%s', m10)
}
if (alpha < 0.1)
{
alpha_str = sprintf('0%s', alpha*100)
} else
{
alpha_str = sprintf('%s', alpha*10)
}
if (beta < 0.1)
{
beta_str = sprintf('0%s', beta*100)
} else
{
beta_str = sprintf('%s', beta*10)
}
#ternary_folder_form_result = sprintf('/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/MO_Ternary/Ternary_alpha0%s_beta0%s_%s_result',alpha_str, beta_str,numSites)
#dir.create(ternary_folder_form_result)
location_acc = data.frame(matrix(ncol = 3, nrow = 100))
for (indexn in 1:20){
print(indexn)
if (indexn < 10){trueTree_form=sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/trees_dir/trees.000%s",alpha_str,alpha_str,numSites,indexn)
}else if( indexn>=10 & indexn<100){
trueTree_form=sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/trees_dir/trees.00%s",alpha_str,alpha_str,numSites,indexn)
}else{
trueTree_form=sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/trees_dir/trees.0%s",alpha_str,alpha_str,numSites,indexn)
}
scanned_trueTree_form = scan(file=trueTree_form,what=character(), n = -1, sep = "")
trueTree = read.tree(text=scanned_trueTree_form)
sampletr_original= trueTree
sampletr = sampletr_original
sampletr$edge.length= sampletr_original$edge.length*1000
#ternary_folder_form_result_br = sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/true_haplotypes_dir/br_collapsed_true_hap%s.csv",alpha_str,alpha_str,numSites,indexn)
obs_ternary_folder_form_result_br = sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/snv_haplotypes_dir/scaled_finite_br_collapsed_snv_hap%s_%s.csv",alpha_str,alpha_str,numSites,indexn, m10_str)
mat_obs_form_0_1_2 = read.csv(obs_ternary_folder_form_result_br)
ternary_prob_matrix_all_0_1_all=c()
ternary_prob_matrix_01_0_1_all=c()
ternary_prob_matrix_02_0_1_all=c()
ternary_prob_matrix_012_0_1_all=c()
if (dim(mat_obs_form_0_1_2)[1]>0){
ternary_prob_matrix_all_0_1=c()
ternary_prob_matrix_01_0_1=c()
ternary_prob_matrix_02_0_1=c()
ternary_prob_matrix_012_0_1=c()
normal_genotype_0_1_2 = rep(0,dim(mat_obs_form_0_1_2)[1])
mutation_genotype_0_1_2 = rep(2,dim(mat_obs_form_0_1_2)[1])
initial_obs_0_1 = data.frame(mat_obs_form_0_1_2)
for (i in 1:dim(initial_obs_0_1)[1]){
print(i)
#rd_unit_theta <- rbeta(10, (10^7)*unit_theta, (10^7)*(1-unit_theta))
#rd_unit_gamma <- rbeta(10, (10^14)*unit_gamma, (10^14)*(1-unit_gamma))
#rd_unit_mu <- rbeta(10, 100*unit_mu, 100*(1-unit_mu))
generate_prob_br <- generate_prob(sequencing_error_model,unit_theta,unit_gamma,unit_mu,number_br,number_cell,
normal_genotype_0_1_2[i],mutation_genotype_0_1_2[i],initial_obs_0_1[i,],sampletr)
generate_prob_br_0_1_single <- c(generate_prob_br[,1],rep(0,number_br-dim(generate_prob_br)[1]))
generate_prob_br_0_2_single <- c(generate_prob_br[,2],rep(0,number_br-dim(generate_prob_br)[1]))
generate_prob_br_0_1_2_single <- c(generate_prob_br[,3],rep(0,number_br-dim(generate_prob_br)[1]))
generate_prob_br_all_single <- c(generate_prob_br[,4],rep(0,number_br-dim(generate_prob_br)[1]))
generate_prob_br_0_1=generate_prob_br_0_1_single
generate_prob_br_0_2=generate_prob_br_0_2_single
generate_prob_br_0_1_2=generate_prob_br_0_1_2_single
generate_prob_br_all=generate_prob_br_all_single
ternary_prob_matrix_all_0_1 = rbind(ternary_prob_matrix_all_0_1,generate_prob_br_all)
ternary_prob_matrix_01_0_1 = rbind(ternary_prob_matrix_01_0_1,generate_prob_br_0_1)
ternary_prob_matrix_02_0_1 = rbind(ternary_prob_matrix_02_0_1,generate_prob_br_0_2)
ternary_prob_matrix_012_0_1 = rbind(ternary_prob_matrix_012_0_1,generate_prob_br_0_1_2)
}
ternary_prob_matrix_all_0_1_all=rbind(ternary_prob_matrix_all_0_1_all,data.frame(ternary_prob_matrix_all_0_1))
}
ternary_prob_matrix_all_0_1_2_out = sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/MO_Ternary/Ternary_alpha0%s_beta0%s_%s_result/scaled_all_ternary_prob_matrix_all_0_1_out_matrix%s_mu%s.csv",alpha_str,alpha_str,numSites,indexn,m10_str)
selected_br=c()
for(i in 1:dim(ternary_prob_matrix_all_0_1)[1]){
selected_br[i]=which.max(ternary_prob_matrix_all_0_1[i,])
}
ternary_prob_matrix_all_0_1_rownames=cbind(selected_br,ternary_prob_matrix_all_0_1)
location_acc[indexn,]=c(sum(ternary_prob_matrix_all_0_1_rownames[,1]==ternary_prob_matrix_all_0_1_rownames[,3]),dim(ternary_prob_matrix_all_0_1_rownames)[1],alpha)
write.csv(ternary_prob_matrix_all_0_1_rownames,file=ternary_prob_matrix_all_0_1_2_out)
}
#All_loc_dat=rbind(All_loc_dat,location_acc)
}
#write.csv(All_loc_dat,file="/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/MO_Ternary/All_MO_ternary_location_accuracy.csv")
| /simulation/FiguresS10_Scenarios11_12/Scenario12_result/MO_Ternary/MO_ternary_cellcoal_finite_mu1_copy2.R | no_license | DavidSimone/MO | R | false | false | 6,994 | r | library(ape)
library(phangorn)
source("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/MO_Ternary/MO_ternary_function.R")
parameter_setting = expand.grid(m10=c(1),
alpha=c(0.05,0.1,0.2,0.4),
sites =c(20,40,80))
set.seed(1000)
All_loc_dat=data.frame(matrix(ncol = 3, nrow = 0))
for(paraInd in 8:dim(parameter_setting)[1]){
m10 = parameter_setting[paraInd,1]
alpha = parameter_setting[paraInd,2]
beta = parameter_setting[paraInd,2]
numSites= parameter_setting[paraInd,3]
alpha01= alpha
alpha02= alpha*beta/2
beta10= beta/2
beta12= beta/2
gamma20 =0
gamma21 =0
sequencing_error_model=matrix(c(1-alpha01-alpha02,alpha01,alpha02,
beta10,1-beta10-beta12,beta12,
gamma20,gamma21,1-gamma21-gamma21),nrow=3,byrow = TRUE)
print(sequencing_error_model)
unit_theta = 1
unit_gamma = 0
unit_mu = 0
number_br = 100
number_cell = 51
if (m10 < 0.1)
{
m10_str = sprintf('00%s', m10*100)
} else if (m10 >= 0.1 & m10<1){
m10_str = sprintf('0%s', m10*10)
} else{
m10_str = sprintf('%s', m10)
}
if (alpha < 0.1)
{
alpha_str = sprintf('0%s', alpha*100)
} else
{
alpha_str = sprintf('%s', alpha*10)
}
if (beta < 0.1)
{
beta_str = sprintf('0%s', beta*100)
} else
{
beta_str = sprintf('%s', beta*10)
}
#ternary_folder_form_result = sprintf('/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/MO_Ternary/Ternary_alpha0%s_beta0%s_%s_result',alpha_str, beta_str,numSites)
#dir.create(ternary_folder_form_result)
location_acc = data.frame(matrix(ncol = 3, nrow = 100))
for (indexn in 1:20){
print(indexn)
if (indexn < 10){trueTree_form=sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/trees_dir/trees.000%s",alpha_str,alpha_str,numSites,indexn)
}else if( indexn>=10 & indexn<100){
trueTree_form=sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/trees_dir/trees.00%s",alpha_str,alpha_str,numSites,indexn)
}else{
trueTree_form=sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/trees_dir/trees.0%s",alpha_str,alpha_str,numSites,indexn)
}
scanned_trueTree_form = scan(file=trueTree_form,what=character(), n = -1, sep = "")
trueTree = read.tree(text=scanned_trueTree_form)
sampletr_original= trueTree
sampletr = sampletr_original
sampletr$edge.length= sampletr_original$edge.length*1000
#ternary_folder_form_result_br = sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/true_haplotypes_dir/br_collapsed_true_hap%s.csv",alpha_str,alpha_str,numSites,indexn)
obs_ternary_folder_form_result_br = sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/results_alpha_0%s_beta_0%s_%s/snv_haplotypes_dir/scaled_finite_br_collapsed_snv_hap%s_%s.csv",alpha_str,alpha_str,numSites,indexn, m10_str)
mat_obs_form_0_1_2 = read.csv(obs_ternary_folder_form_result_br)
ternary_prob_matrix_all_0_1_all=c()
ternary_prob_matrix_01_0_1_all=c()
ternary_prob_matrix_02_0_1_all=c()
ternary_prob_matrix_012_0_1_all=c()
if (dim(mat_obs_form_0_1_2)[1]>0){
ternary_prob_matrix_all_0_1=c()
ternary_prob_matrix_01_0_1=c()
ternary_prob_matrix_02_0_1=c()
ternary_prob_matrix_012_0_1=c()
normal_genotype_0_1_2 = rep(0,dim(mat_obs_form_0_1_2)[1])
mutation_genotype_0_1_2 = rep(2,dim(mat_obs_form_0_1_2)[1])
initial_obs_0_1 = data.frame(mat_obs_form_0_1_2)
for (i in 1:dim(initial_obs_0_1)[1]){
print(i)
#rd_unit_theta <- rbeta(10, (10^7)*unit_theta, (10^7)*(1-unit_theta))
#rd_unit_gamma <- rbeta(10, (10^14)*unit_gamma, (10^14)*(1-unit_gamma))
#rd_unit_mu <- rbeta(10, 100*unit_mu, 100*(1-unit_mu))
generate_prob_br <- generate_prob(sequencing_error_model,unit_theta,unit_gamma,unit_mu,number_br,number_cell,
normal_genotype_0_1_2[i],mutation_genotype_0_1_2[i],initial_obs_0_1[i,],sampletr)
generate_prob_br_0_1_single <- c(generate_prob_br[,1],rep(0,number_br-dim(generate_prob_br)[1]))
generate_prob_br_0_2_single <- c(generate_prob_br[,2],rep(0,number_br-dim(generate_prob_br)[1]))
generate_prob_br_0_1_2_single <- c(generate_prob_br[,3],rep(0,number_br-dim(generate_prob_br)[1]))
generate_prob_br_all_single <- c(generate_prob_br[,4],rep(0,number_br-dim(generate_prob_br)[1]))
generate_prob_br_0_1=generate_prob_br_0_1_single
generate_prob_br_0_2=generate_prob_br_0_2_single
generate_prob_br_0_1_2=generate_prob_br_0_1_2_single
generate_prob_br_all=generate_prob_br_all_single
ternary_prob_matrix_all_0_1 = rbind(ternary_prob_matrix_all_0_1,generate_prob_br_all)
ternary_prob_matrix_01_0_1 = rbind(ternary_prob_matrix_01_0_1,generate_prob_br_0_1)
ternary_prob_matrix_02_0_1 = rbind(ternary_prob_matrix_02_0_1,generate_prob_br_0_2)
ternary_prob_matrix_012_0_1 = rbind(ternary_prob_matrix_012_0_1,generate_prob_br_0_1_2)
}
ternary_prob_matrix_all_0_1_all=rbind(ternary_prob_matrix_all_0_1_all,data.frame(ternary_prob_matrix_all_0_1))
}
ternary_prob_matrix_all_0_1_2_out = sprintf("/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/MO_Ternary/Ternary_alpha0%s_beta0%s_%s_result/scaled_all_ternary_prob_matrix_all_0_1_out_matrix%s_mu%s.csv",alpha_str,alpha_str,numSites,indexn,m10_str)
selected_br=c()
for(i in 1:dim(ternary_prob_matrix_all_0_1)[1]){
selected_br[i]=which.max(ternary_prob_matrix_all_0_1[i,])
}
ternary_prob_matrix_all_0_1_rownames=cbind(selected_br,ternary_prob_matrix_all_0_1)
location_acc[indexn,]=c(sum(ternary_prob_matrix_all_0_1_rownames[,1]==ternary_prob_matrix_all_0_1_rownames[,3]),dim(ternary_prob_matrix_all_0_1_rownames)[1],alpha)
write.csv(ternary_prob_matrix_all_0_1_rownames,file=ternary_prob_matrix_all_0_1_2_out)
}
#All_loc_dat=rbind(All_loc_dat,location_acc)
}
#write.csv(All_loc_dat,file="/fs/project/kubatko.2-temp/gao.957/workspace/cellcoal-master/cellcoal_50tips/MO_Ternary/All_MO_ternary_location_accuracy.csv")
|
#' A simple learning and prediction workflow
#'
#' @param train a data frame for training
#' @param test a data frame for testing
#' @param time the name of the column in \code{train} and
#' \code{test} containing time-stamps
#' @param site_id the name of the column in \code{train} and
#' \code{test} containing location IDs
#' @param form a formula describing the model to learn
#' @param model the name of the algorithm to use
#' @param handleNAs string indicating how to deal with NAs.
#' If "centralImput", training observations with at least 80\%
#' of non-NA columns, will have their NAs substituted by the mean
#' value and testing observatiosn will have their NAs filled in with
#' mean value regardless.
#' @param min_train a minimum number of observations that must be
#' left to train a model. If there are not enough observations,
#' predictions will be \code{NA}. Default is 2.
#' @param nORp a maximum number or fraction of columns with missing
#' values above which a row will be removed from train before
#' learning the model. Only works if \code{handleNAs} was
#' set to centralImputation. Default is 0.2.
#' @param ... other parameters to feed to \code{model}
#'
#' @return a data frame containing time-stamps, location IDs,
#' true values and predicted values
#'
#' @export
simple_workflow <- function(train, test, form, model="lm",
handleNAs=NULL, min_train=2, nORp = 0.2,
time="time", site_id="site", ...){
dotargs <- list(...)
# get true values
trues <- responseValues(form, test)
col.inds <- which(colnames(train) %in% c(time, site_id))
# correct default mtry if model is ranger and there is no argument given
if(model=="ranger" & !("mtry" %in% dotargs) & is.numeric(trues))
dotargs$mtry <- max(floor(ncol(train[,-col.inds])/3), 1)
# pre-process NAs
if(!is.null(handleNAs)){
if(handleNAs=="centralImput"){
idxs <- DMwR2::manyNAs(train, nORp = nORp)
if(length(idxs)) train <- train[-idxs, ]
if(anyNA(train)) train <- DMwR2::centralImputation(train)
if(anyNA(test)) test <- DMwR2::centralImputation(test)
}
}
if(nrow(train)>=min_train){
# train model
m <- do.call(model, c(list(form, train[,-col.inds]), dotargs))
# make predictions
preds <- if(model!="ranger") predict(m, test[,-col.inds]) else predict(m, test[,-col.inds])$predictions
# prepare result object
res <- data.frame(time=test[[time]], site_id=test[[site_id]],
trues=trues, preds=preds)
}else{
warning("nrow(train)<min_train", call. = FALSE)
res <- data.frame(time=test[[time]], site_id=test[[site_id]],
trues=trues, preds=as.numeric(NA))
}
colnames(res)[1:2] <- c(time, site_id)
res
}
#' Evalute the results of a predictive workflow
#'
#' Calculate evaluation metrics from the raw results of a workflow
#' @param wfRes a data frame (or list of data frames) containing the results of
#' a predictive workflow with columns \code{trues} and \code{preds} containing
#' the real and predicted values, respectively
#' @param eval.function the function to be used to calculate error metrics from \code{wfRes}
#' @param .keptTrain a Boolean indicating whether \code{.keepTrain} was
#' set to \code{TRUE} in calls to estimation methods. Only useful
#' if evaluation metrics need training data.
#' @param ... parameters to pass to \code{eval.function}
#'
#' @return The results (or a list of results) of \code{eval.function} applied to
#' the data frame (or list of data frames) in \code{wfRes}
#'
#' @export
evaluate <- function(wfRes,
eval.function = get("regressionMetrics", asNamespace("performanceEstimation")),
.keptTrain = TRUE,
...){
if(!.keptTrain){
if(!("results" %in% names(wfRes)))
fold.res <- t(sapply(wfRes, function(x)
eval.function(trues=x$results$trues,
preds=x$results$preds, ...)))
else fold.res <- t(eval.function(trues=wfRes$results$trues,
preds=wfRes$results$preds, ...))
}else{
if(!("results" %in% names(wfRes)))
fold.res <- t(sapply(wfRes, function(x)
eval.function(trues=x$results$trues,
preds=x$results$preds,
y_train=x$train[,3], ...)))
else fold.res <- t(eval.function(trues=wfRes$results$trues,
preds=wfRes$results$preds,
y_train=wfRes$train[,3], ...))
}
fold.res
}
#' Estimate error using a chosen method
#'
#' @param data a data frame
#' @param form a formula for learning
#' @param estimator the name of an error estimator function
#' @param est.pars a named list of arguments to feed to \code{estimator}
#' @param workflow the name of the workflow to use for making predictions
#' @param wf.pars a named list of arguments to feed to \code{workflow}
#' @param evaluator the name of the function to use to calculate evaluation results
#' @param eval.pars a named list of arguments to feed to \code{evaluator}
#' @param seed a seed to set before performing estimates
#'
#' @return The results of \code{evaluator} after applying \code{estimator} to the
#' learning task
#'
#' @export
estimates <- function(data, form, estimator="kf_xval",
est.pars = list(nfolds=10,
fold.alloc.proc="Trand_SPrand"),
workflow = "simple_workflow", wf.pars=NULL,
evaluator = "evaluate", eval.pars=NULL,
seed=1234){
if(!is.null(seed)) set.seed(1234)
res <- do.call(estimator, c(list(data=data, form=form,
FUN=get(workflow, mode="function")),
est.pars, wf.pars))
est.res <- do.call(evaluator, c(list(wfRes=res), eval.pars))
list(evalRes = est.res, rawRes = res)
}
| /R/workflows.R | no_license | SkanderHn/Evaluation-procedures-for-forecasting-with-spatio-temporal-data | R | false | false | 5,951 | r | #' A simple learning and prediction workflow
#'
#' @param train a data frame for training
#' @param test a data frame for testing
#' @param time the name of the column in \code{train} and
#' \code{test} containing time-stamps
#' @param site_id the name of the column in \code{train} and
#' \code{test} containing location IDs
#' @param form a formula describing the model to learn
#' @param model the name of the algorithm to use
#' @param handleNAs string indicating how to deal with NAs.
#' If "centralImput", training observations with at least 80\%
#' of non-NA columns, will have their NAs substituted by the mean
#' value and testing observatiosn will have their NAs filled in with
#' mean value regardless.
#' @param min_train a minimum number of observations that must be
#' left to train a model. If there are not enough observations,
#' predictions will be \code{NA}. Default is 2.
#' @param nORp a maximum number or fraction of columns with missing
#' values above which a row will be removed from train before
#' learning the model. Only works if \code{handleNAs} was
#' set to centralImputation. Default is 0.2.
#' @param ... other parameters to feed to \code{model}
#'
#' @return a data frame containing time-stamps, location IDs,
#' true values and predicted values
#'
#' @export
simple_workflow <- function(train, test, form, model="lm",
handleNAs=NULL, min_train=2, nORp = 0.2,
time="time", site_id="site", ...){
dotargs <- list(...)
# get true values
trues <- responseValues(form, test)
col.inds <- which(colnames(train) %in% c(time, site_id))
# correct default mtry if model is ranger and there is no argument given
if(model=="ranger" & !("mtry" %in% dotargs) & is.numeric(trues))
dotargs$mtry <- max(floor(ncol(train[,-col.inds])/3), 1)
# pre-process NAs
if(!is.null(handleNAs)){
if(handleNAs=="centralImput"){
idxs <- DMwR2::manyNAs(train, nORp = nORp)
if(length(idxs)) train <- train[-idxs, ]
if(anyNA(train)) train <- DMwR2::centralImputation(train)
if(anyNA(test)) test <- DMwR2::centralImputation(test)
}
}
if(nrow(train)>=min_train){
# train model
m <- do.call(model, c(list(form, train[,-col.inds]), dotargs))
# make predictions
preds <- if(model!="ranger") predict(m, test[,-col.inds]) else predict(m, test[,-col.inds])$predictions
# prepare result object
res <- data.frame(time=test[[time]], site_id=test[[site_id]],
trues=trues, preds=preds)
}else{
warning("nrow(train)<min_train", call. = FALSE)
res <- data.frame(time=test[[time]], site_id=test[[site_id]],
trues=trues, preds=as.numeric(NA))
}
colnames(res)[1:2] <- c(time, site_id)
res
}
#' Evalute the results of a predictive workflow
#'
#' Calculate evaluation metrics from the raw results of a workflow
#' @param wfRes a data frame (or list of data frames) containing the results of
#' a predictive workflow with columns \code{trues} and \code{preds} containing
#' the real and predicted values, respectively
#' @param eval.function the function to be used to calculate error metrics from \code{wfRes}
#' @param .keptTrain a Boolean indicating whether \code{.keepTrain} was
#' set to \code{TRUE} in calls to estimation methods. Only useful
#' if evaluation metrics need training data.
#' @param ... parameters to pass to \code{eval.function}
#'
#' @return The results (or a list of results) of \code{eval.function} applied to
#' the data frame (or list of data frames) in \code{wfRes}
#'
#' @export
evaluate <- function(wfRes,
eval.function = get("regressionMetrics", asNamespace("performanceEstimation")),
.keptTrain = TRUE,
...){
if(!.keptTrain){
if(!("results" %in% names(wfRes)))
fold.res <- t(sapply(wfRes, function(x)
eval.function(trues=x$results$trues,
preds=x$results$preds, ...)))
else fold.res <- t(eval.function(trues=wfRes$results$trues,
preds=wfRes$results$preds, ...))
}else{
if(!("results" %in% names(wfRes)))
fold.res <- t(sapply(wfRes, function(x)
eval.function(trues=x$results$trues,
preds=x$results$preds,
y_train=x$train[,3], ...)))
else fold.res <- t(eval.function(trues=wfRes$results$trues,
preds=wfRes$results$preds,
y_train=wfRes$train[,3], ...))
}
fold.res
}
#' Estimate error using a chosen method
#'
#' @param data a data frame
#' @param form a formula for learning
#' @param estimator the name of an error estimator function
#' @param est.pars a named list of arguments to feed to \code{estimator}
#' @param workflow the name of the workflow to use for making predictions
#' @param wf.pars a named list of arguments to feed to \code{workflow}
#' @param evaluator the name of the function to use to calculate evaluation results
#' @param eval.pars a named list of arguments to feed to \code{evaluator}
#' @param seed a seed to set before performing estimates
#'
#' @return The results of \code{evaluator} after applying \code{estimator} to the
#' learning task
#'
#' @export
estimates <- function(data, form, estimator="kf_xval",
est.pars = list(nfolds=10,
fold.alloc.proc="Trand_SPrand"),
workflow = "simple_workflow", wf.pars=NULL,
evaluator = "evaluate", eval.pars=NULL,
seed=1234){
if(!is.null(seed)) set.seed(1234)
res <- do.call(estimator, c(list(data=data, form=form,
FUN=get(workflow, mode="function")),
est.pars, wf.pars))
est.res <- do.call(evaluator, c(list(wfRes=res), eval.pars))
list(evalRes = est.res, rawRes = res)
}
|
\name{fit.GPDb}
\alias{fit.GPDb}
\title{
Fit Generalized Pareto Model B
}
\description{
fits a generalized Pareto distribution to threshold exceedances using nlminb()
rather than nlmin()
}
\usage{
fit.GPDb(data, threshold=NA, nextremes=NA, method="ml", information="observed")
}
\arguments{
\item{data}{
data vector or times series
}
\item{threshold}{
a threshold value (either this or "nextremes" must be given but not both)
}
\item{nextremes}{
the number of upper extremes to be used (either this or "threshold" must be given but not both)
}
\item{method}{
whether parameters should be estimated by the maximum likelihood method "ml" or
the probability-weighted moments method "pwm"
}
\item{information}{
whether standard errors should be calculated with "observed" or "expected" information.
This only applies to maximum likelihood method; for "pwm" method "expected" information
is used if possible.
}
}
\value{
a list containing parameter estimates, standard errors and details of the fit
}
\details{
see page 278 of QRM; this function uses "nlminb" for ML
}
\section{References}{
Parameter and quantile estimation for the generalized Pareto distribution,
JRM Hosking and JR Wallis, Technometrics 29(3), pages 339-349, 1987.
}
\seealso{
\code{\link{fit.GPD}},
\code{\link{fit.GEV}},
\code{\link{RiskMeasures}}
}
\examples{
data(danish);
losses <- seriesData(danish);
mod <- fit.GPDb(losses,threshold=10);
mod;
}
\keyword{methods}
| /man/fit.GPDb.Rd | no_license | cran/QRMlib | R | false | false | 1,517 | rd | \name{fit.GPDb}
\alias{fit.GPDb}
\title{
Fit Generalized Pareto Model B
}
\description{
fits a generalized Pareto distribution to threshold exceedances using nlminb()
rather than nlmin()
}
\usage{
fit.GPDb(data, threshold=NA, nextremes=NA, method="ml", information="observed")
}
\arguments{
\item{data}{
data vector or times series
}
\item{threshold}{
a threshold value (either this or "nextremes" must be given but not both)
}
\item{nextremes}{
the number of upper extremes to be used (either this or "threshold" must be given but not both)
}
\item{method}{
whether parameters should be estimated by the maximum likelihood method "ml" or
the probability-weighted moments method "pwm"
}
\item{information}{
whether standard errors should be calculated with "observed" or "expected" information.
This only applies to maximum likelihood method; for "pwm" method "expected" information
is used if possible.
}
}
\value{
a list containing parameter estimates, standard errors and details of the fit
}
\details{
see page 278 of QRM; this function uses "nlminb" for ML
}
\section{References}{
Parameter and quantile estimation for the generalized Pareto distribution,
JRM Hosking and JR Wallis, Technometrics 29(3), pages 339-349, 1987.
}
\seealso{
\code{\link{fit.GPD}},
\code{\link{fit.GEV}},
\code{\link{RiskMeasures}}
}
\examples{
data(danish);
losses <- seriesData(danish);
mod <- fit.GPDb(losses,threshold=10);
mod;
}
\keyword{methods}
|
# debugging counting NGramLM class
library(hash)
trim <- function (x) gsub("^\\s+|\\s+$", "", x)
# arpa ngram format = http://www.speech.sri.com/projects/srilm/manpages/ngram-format.5.html
readARPA <- function(mypath) {
con <- file(mypath, "r", blocking = FALSE)
arpa <- list(ngrams=c())
# regular expression used for parsing
blank.rex <- "^[[:blank:]]*$"
ngram.rex <- "^[[:blank:]]*ngram[[:blank:]]+(?<N>[[:digit:]]+)[[:blank:]]*=[[:blank:]]*(?<ngrams>[[:digit:]]+)"
data.rex <- "(?<N>[[:digit:]]+)-grams:"
n.rex <- "[[:digit:]]+"
state = "begin"
while(TRUE) {
line <-readLines(con,n=1)
if( state == "begin")
if( line != "\\data\\") next # wait until we find a \data\ section
else { state <- "counts"; next }
if(grepl(blank.rex,line,perl=TRUE)) next # skip blank lines
if(line == "\\end\\") {
state <- "end"
break
}
if( state == "counts") {
# get data from "ngram n=NNN" line
if(grepl(ngram.rex,line,perl=TRUE)) {
result <- regexpr(ngram.rex, line,perl=TRUE)
st <- attr(result, "capture.start")[1,]
len <- attr(result, "capture.length")[1,]
n <-substring(line,st[1],st[1]+len[1]-1) # put this in a list
ngrams <-substring(line,st[2],st[2]+len[2]-1)
arpa[["ngrams"]] <- c(arpa[["ngrams"]], as.double(ngrams))
names(arpa[["ngrams"]])[length(arpa[["ngrams"]])] <- n
next
}
# get data from "\N-grams:" line
if(grepl(data.rex,line,perl=TRUE)) {
result <- regexpr(data.rex, line, perl=TRUE)
st <- attr(result, "capture.start")[1,]
len <- attr(result, "capture.length")[1,]
n<-substring(line,st[1],st[1]+len[1]-1) # put this in a list
state <- n
arpa[[paste0('p',state)]] <- hash()
arpa[[paste0('bow',state)]] <- hash()
next
}
}
# read data from ngram section
if(grepl("[[:digit:]]+", state, perl=TRUE)) { # read ngram data
n <- as.numeric(state)
# "\N-grams:" line - end of present section, start new section
if(grepl(data.rex,line,perl=TRUE)) {
result <- regexpr(data.rex, line, perl=TRUE)
st <- attr(result, "capture.start")[1,]
len <- attr(result, "capture.length")[1,]
n<-substring(line,st[1],st[1]+len[1]-1) # put this in a list
state <- n
arpa[[paste0('p',state)]] <- hash()
arpa[[paste0('bow',state)]] <- hash()
next
}
# read a data from "p w1 w2 ... [bow]" line, implemented only for ngrams without bow
tokens <- unlist(strsplit(line," "))
if(length(tokens) < 2) {
print("Syntax error in ")
print(line)
break
}
ng <- paste(tokens[2:(2+n-1)],collapse=" ")
arpa[[paste0('p',state)]][[ng]] <- as.double(tokens[1])
if(length(tokens) == n+2)
arpa[[paste0('bow',state)]][[ng]] <- as.double(tokens[length(tokens)])
next
}
print("Syntax error in:")
print(line)
break
}
close(con)
return(arpa)
}
# testing read an ARPA format LM
setwd("~/Documents/Courses/DataScience/CapStone")
mypath <- "python/smallBigram.LM"
start.time <- Sys.time()
small <- readARPA(mypath)
Sys.time() - start.time
str(small)
small$ngrams
length(small[['p1']])
length(small[['bow1']])
length(small[['p2']])
length(small[['bow2']])
source("NGramLM.R")
mypath <- "python/microUnigram.LM"
start.time <- Sys.time()
micro <- loadNGramLM(mypath)
Sys.time() - start.time
str(micro)
micro$ngrams
length(micro[['p1']])
length(micro[['bow1']])
length(micro[['p2']])
small <- VCorpus(DirSource("data/small/en_US/"))
small <- tm_map(small, content_transformer(function(x) iconv(x, from="latin1", to="ASCII", sub="")))
small <- tm_map(small, content_transformer(tolower))
small <- tm_map(small, removePunctuation, preserve_intra_word_dashes = TRUE)
small <- tm_map(small, removeNumbers)
small <- tm_map(small, stripWhitespace)
mod2 <- trainNGramLM(small,N=2,threshold=0,debug=TRUE)
str(mod2)
| /task3/arpaformat.R | no_license | guidogallopyn/TextPrediction | R | false | false | 4,092 | r |
# debugging counting NGramLM class
library(hash)
trim <- function (x) gsub("^\\s+|\\s+$", "", x)
# arpa ngram format = http://www.speech.sri.com/projects/srilm/manpages/ngram-format.5.html
readARPA <- function(mypath) {
con <- file(mypath, "r", blocking = FALSE)
arpa <- list(ngrams=c())
# regular expression used for parsing
blank.rex <- "^[[:blank:]]*$"
ngram.rex <- "^[[:blank:]]*ngram[[:blank:]]+(?<N>[[:digit:]]+)[[:blank:]]*=[[:blank:]]*(?<ngrams>[[:digit:]]+)"
data.rex <- "(?<N>[[:digit:]]+)-grams:"
n.rex <- "[[:digit:]]+"
state = "begin"
while(TRUE) {
line <-readLines(con,n=1)
if( state == "begin")
if( line != "\\data\\") next # wait until we find a \data\ section
else { state <- "counts"; next }
if(grepl(blank.rex,line,perl=TRUE)) next # skip blank lines
if(line == "\\end\\") {
state <- "end"
break
}
if( state == "counts") {
# get data from "ngram n=NNN" line
if(grepl(ngram.rex,line,perl=TRUE)) {
result <- regexpr(ngram.rex, line,perl=TRUE)
st <- attr(result, "capture.start")[1,]
len <- attr(result, "capture.length")[1,]
n <-substring(line,st[1],st[1]+len[1]-1) # put this in a list
ngrams <-substring(line,st[2],st[2]+len[2]-1)
arpa[["ngrams"]] <- c(arpa[["ngrams"]], as.double(ngrams))
names(arpa[["ngrams"]])[length(arpa[["ngrams"]])] <- n
next
}
# get data from "\N-grams:" line
if(grepl(data.rex,line,perl=TRUE)) {
result <- regexpr(data.rex, line, perl=TRUE)
st <- attr(result, "capture.start")[1,]
len <- attr(result, "capture.length")[1,]
n<-substring(line,st[1],st[1]+len[1]-1) # put this in a list
state <- n
arpa[[paste0('p',state)]] <- hash()
arpa[[paste0('bow',state)]] <- hash()
next
}
}
# read data from ngram section
if(grepl("[[:digit:]]+", state, perl=TRUE)) { # read ngram data
n <- as.numeric(state)
# "\N-grams:" line - end of present section, start new section
if(grepl(data.rex,line,perl=TRUE)) {
result <- regexpr(data.rex, line, perl=TRUE)
st <- attr(result, "capture.start")[1,]
len <- attr(result, "capture.length")[1,]
n<-substring(line,st[1],st[1]+len[1]-1) # put this in a list
state <- n
arpa[[paste0('p',state)]] <- hash()
arpa[[paste0('bow',state)]] <- hash()
next
}
# read a data from "p w1 w2 ... [bow]" line, implemented only for ngrams without bow
tokens <- unlist(strsplit(line," "))
if(length(tokens) < 2) {
print("Syntax error in ")
print(line)
break
}
ng <- paste(tokens[2:(2+n-1)],collapse=" ")
arpa[[paste0('p',state)]][[ng]] <- as.double(tokens[1])
if(length(tokens) == n+2)
arpa[[paste0('bow',state)]][[ng]] <- as.double(tokens[length(tokens)])
next
}
print("Syntax error in:")
print(line)
break
}
close(con)
return(arpa)
}
# testing read an ARPA format LM
setwd("~/Documents/Courses/DataScience/CapStone")
mypath <- "python/smallBigram.LM"
start.time <- Sys.time()
small <- readARPA(mypath)
Sys.time() - start.time
str(small)
small$ngrams
length(small[['p1']])
length(small[['bow1']])
length(small[['p2']])
length(small[['bow2']])
source("NGramLM.R")
mypath <- "python/microUnigram.LM"
start.time <- Sys.time()
micro <- loadNGramLM(mypath)
Sys.time() - start.time
str(micro)
micro$ngrams
length(micro[['p1']])
length(micro[['bow1']])
length(micro[['p2']])
small <- VCorpus(DirSource("data/small/en_US/"))
small <- tm_map(small, content_transformer(function(x) iconv(x, from="latin1", to="ASCII", sub="")))
small <- tm_map(small, content_transformer(tolower))
small <- tm_map(small, removePunctuation, preserve_intra_word_dashes = TRUE)
small <- tm_map(small, removeNumbers)
small <- tm_map(small, stripWhitespace)
mod2 <- trainNGramLM(small,N=2,threshold=0,debug=TRUE)
str(mod2)
|
context("test-fmt_table1")
test_that("fmt_table1 creates output without error/warning (no by var)", {
expect_error(
purrr::map(list(mtcars, iris), ~ fmt_table1(.x)),
NA
)
expect_warning(
purrr::map(list(mtcars, iris), ~ fmt_table1(.x)),
NA
)
})
test_that("fmt_table1 creates output without error/warning (with by var)", {
expect_error(
fmt_table1(mtcars, by = "am"),
NA
)
expect_warning(
fmt_table1(mtcars, by = "am"),
NA
)
})
| /tests/testthat/test-table1.R | permissive | shijianasdf/clintable | R | false | false | 478 | r | context("test-fmt_table1")
test_that("fmt_table1 creates output without error/warning (no by var)", {
expect_error(
purrr::map(list(mtcars, iris), ~ fmt_table1(.x)),
NA
)
expect_warning(
purrr::map(list(mtcars, iris), ~ fmt_table1(.x)),
NA
)
})
test_that("fmt_table1 creates output without error/warning (with by var)", {
expect_error(
fmt_table1(mtcars, by = "am"),
NA
)
expect_warning(
fmt_table1(mtcars, by = "am"),
NA
)
})
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/max-reps-tables.R
\name{max_reps_tables}
\alias{max_reps_tables}
\alias{get_max_reps}
\alias{get_max_perc_1RM}
\alias{get_predicted_1RM}
\title{Max-Reps Tables}
\usage{
get_max_reps(perc_1RM, RIR = 0, type = "grinding")
get_max_perc_1RM(max_reps, RIR = 0, type = "grinding")
get_predicted_1RM(weight, reps, RIR = 0, type = "grinding")
}
\arguments{
\item{perc_1RM}{Percent of 1RM}
\item{RIR}{Reps In Reserve}
\item{type}{Type of max rep table. Options are grinding (Default) and ballistic.}
\item{max_reps}{Maximum repetitions}
\item{weight}{Weight used}
\item{reps}{Number of repetitions done}
}
\description{
Family of functions to represent max reps tables
}
\section{Functions}{
\itemize{
\item \code{get_max_reps}: Get maximum number of repetitions
\item \code{get_max_perc_1RM}: Get maximum \%1RM
\item \code{get_predicted_1RM}: Get predicted 1RM
}}
\examples{
# Get max reps that can be done with 75\% 1RM
get_max_reps(0.75)
# Get max reps that can be done with 80\% 1RM with 2 reps in reserve
get_max_reps(
perc_1RM = 0.8,
RIR = 3
)
# Get max reps that can be done with 90\% 1RM with 2 reps in reserve
# using ballistic table
get_max_reps(
perc_1RM = 0.9,
RIR = 2,
type = "ballistic"
)
# Get max \%1RM to be used when doing 5 reps to failure
get_max_perc_1RM(5)
# Get max \%1RM to be used when doing 3 reps with 2 reps in reserve
get_max_perc_1RM(
max_reps = 3,
RIR = 2
)
# Get max \%1RM to be used when doing 2 reps with 3 reps in reserve
# using ballistic table
get_max_perc_1RM(
max_reps = 3,
RIR = 2,
type = "ballistic"
)
# Get predicted 1RM when lifting 100kg for 5 reps to failure
get_predicted_1RM(
weight = 100,
reps = 5
)
# Get predicted 1RM when lifting 120kg for 3 reps with 2 reps in reserve
get_predicted_1RM(
weight = 120,
reps = 3,
RIR = 2
)
# Get predicted 1RM when lifting 120kg for 2 reps with 1 reps in reserve
# using ballistic table
get_predicted_1RM(
weight = 120,
reps = 2,
RIR = 1,
type = "ballistic"
)
}
| /man/max_reps_tables.Rd | permissive | Ed-Rubi0/STM | R | false | true | 2,069 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/max-reps-tables.R
\name{max_reps_tables}
\alias{max_reps_tables}
\alias{get_max_reps}
\alias{get_max_perc_1RM}
\alias{get_predicted_1RM}
\title{Max-Reps Tables}
\usage{
get_max_reps(perc_1RM, RIR = 0, type = "grinding")
get_max_perc_1RM(max_reps, RIR = 0, type = "grinding")
get_predicted_1RM(weight, reps, RIR = 0, type = "grinding")
}
\arguments{
\item{perc_1RM}{Percent of 1RM}
\item{RIR}{Reps In Reserve}
\item{type}{Type of max rep table. Options are grinding (Default) and ballistic.}
\item{max_reps}{Maximum repetitions}
\item{weight}{Weight used}
\item{reps}{Number of repetitions done}
}
\description{
Family of functions to represent max reps tables
}
\section{Functions}{
\itemize{
\item \code{get_max_reps}: Get maximum number of repetitions
\item \code{get_max_perc_1RM}: Get maximum \%1RM
\item \code{get_predicted_1RM}: Get predicted 1RM
}}
\examples{
# Get max reps that can be done with 75\% 1RM
get_max_reps(0.75)
# Get max reps that can be done with 80\% 1RM with 2 reps in reserve
get_max_reps(
perc_1RM = 0.8,
RIR = 3
)
# Get max reps that can be done with 90\% 1RM with 2 reps in reserve
# using ballistic table
get_max_reps(
perc_1RM = 0.9,
RIR = 2,
type = "ballistic"
)
# Get max \%1RM to be used when doing 5 reps to failure
get_max_perc_1RM(5)
# Get max \%1RM to be used when doing 3 reps with 2 reps in reserve
get_max_perc_1RM(
max_reps = 3,
RIR = 2
)
# Get max \%1RM to be used when doing 2 reps with 3 reps in reserve
# using ballistic table
get_max_perc_1RM(
max_reps = 3,
RIR = 2,
type = "ballistic"
)
# Get predicted 1RM when lifting 100kg for 5 reps to failure
get_predicted_1RM(
weight = 100,
reps = 5
)
# Get predicted 1RM when lifting 120kg for 3 reps with 2 reps in reserve
get_predicted_1RM(
weight = 120,
reps = 3,
RIR = 2
)
# Get predicted 1RM when lifting 120kg for 2 reps with 1 reps in reserve
# using ballistic table
get_predicted_1RM(
weight = 120,
reps = 2,
RIR = 1,
type = "ballistic"
)
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/forecast_output_validator.R
\name{forecast_output_validator}
\alias{forecast_output_validator}
\title{forecast_output_validator}
\usage{
forecast_output_validator(
forecast_file,
grouping_variables = c("siteID", "time"),
target_variables = c("oxygen", "temperature", "richness", "abundance", "nee", "le",
"vswc", "gcc_90", "rcc_90", "ixodes_scapularis", "ambloyomma_americanum"),
theme_names = c("aquatics", "beetles", "phenology", "terrestrial_30min",
"terrestrial_daily", "ticks")
)
}
\arguments{
\item{forecast_file}{Your forecast csv or nc file}
\item{grouping_variables}{Grouping variables}
\item{target_variables}{Possible target variables}
\item{theme_names}{valid EFI theme names}
}
\description{
forecast_output_validator
}
\examples{
forecast_file <- system.file("extdata/aquatics-2021-02-01-EFInull.csv.gz",
package = "neon4cast")
forecast_output_validator(forecast_file)
}
| /man/forecast_output_validator.Rd | permissive | dlebauer/neon4cast | R | false | true | 1,015 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/forecast_output_validator.R
\name{forecast_output_validator}
\alias{forecast_output_validator}
\title{forecast_output_validator}
\usage{
forecast_output_validator(
forecast_file,
grouping_variables = c("siteID", "time"),
target_variables = c("oxygen", "temperature", "richness", "abundance", "nee", "le",
"vswc", "gcc_90", "rcc_90", "ixodes_scapularis", "ambloyomma_americanum"),
theme_names = c("aquatics", "beetles", "phenology", "terrestrial_30min",
"terrestrial_daily", "ticks")
)
}
\arguments{
\item{forecast_file}{Your forecast csv or nc file}
\item{grouping_variables}{Grouping variables}
\item{target_variables}{Possible target variables}
\item{theme_names}{valid EFI theme names}
}
\description{
forecast_output_validator
}
\examples{
forecast_file <- system.file("extdata/aquatics-2021-02-01-EFInull.csv.gz",
package = "neon4cast")
forecast_output_validator(forecast_file)
}
|
## load data
hpc <- read.csv("~/R projects/coursera/data/household_power_consumption.txt",
header=T, sep=';', na.strings="?", nrows=2075259, check.names=F,
stringsAsFactors=F, comment.char="", quote='\"')
hpc1 <- subset(hpc, Date %in% c("1/2/2007","2/2/2007"))
hpc1$Date <- as.Date(hpc1$Date, format="%d/%m/%Y")
dt <- paste(as.Date(hpc1$Date), hpc1$Time)
hpc1$dt <- as.POSIXct(dt)
## make plot2
with(hpc1,{
plot(Global_active_power~dt, type="l", ylab="Global Active Power (kilowatts)")
})
## make png
dev.copy(png, file="plot2.png", height=480, width=480)
dev.off() | /plot2.R | no_license | cjpurington/Exploratory-Data-Project-1 | R | false | false | 607 | r | ## load data
hpc <- read.csv("~/R projects/coursera/data/household_power_consumption.txt",
header=T, sep=';', na.strings="?", nrows=2075259, check.names=F,
stringsAsFactors=F, comment.char="", quote='\"')
hpc1 <- subset(hpc, Date %in% c("1/2/2007","2/2/2007"))
hpc1$Date <- as.Date(hpc1$Date, format="%d/%m/%Y")
dt <- paste(as.Date(hpc1$Date), hpc1$Time)
hpc1$dt <- as.POSIXct(dt)
## make plot2
with(hpc1,{
plot(Global_active_power~dt, type="l", ylab="Global Active Power (kilowatts)")
})
## make png
dev.copy(png, file="plot2.png", height=480, width=480)
dev.off() |
GPLearn <- function(preprocData, TF = NULL, targets = NULL,
useGpdisim = !is.null(TF), randomize = FALSE,
addPriors = FALSE, fixedParams = FALSE,
initParams = NULL, initialZero = TRUE,
fixComps = NULL, dontOptimise = FALSE,
allowNegativeSensitivities = FALSE, quiet = FALSE,
gpsimOptions = NULL, allArgs = NULL) {
if (!is.null(allArgs)) {
for (i in seq(along=allArgs))
assign(names(allArgs)[[i]], allArgs[[i]])
}
if (class(preprocData) != "ExpressionTimeSeries")
stop("preprocData should be an instance of ExpressionTimeSeries. Please use processData or processRawData to create one.")
if (is.null(targets))
stop("no target genes specified, unable to define a model")
if (is.list(targets))
targets <- unlist(targets)
if (useGpdisim)
genes <- c(TF, targets)
else
genes <- targets
if (class(try(preprocData[genes,], silent=TRUE)) == "try-error") {
if (! all(TF %in% featureNames(preprocData))) {
stop("unknown TF")
} else {
stop("unknown target gene")
}
}
# The preprocessed data is searched for the data of the specified genes.
newData <- .getProcessedData(preprocData[genes,])
y <- newData$y
yvar <- newData$yvar
times <- newData$times
Nrep <- length(y)
options <- list(includeNoise=0, optimiser="SCG")
if (any(yvar[[1]] == 0))
options$includeNoise = 1
if (!is.null(gpsimOptions)) {
for (i in seq(along=gpsimOptions))
options[[names(gpsimOptions)[[i]]]] <- gpsimOptions[[i]]
}
if (addPriors)
options$addPriors <- TRUE
if (!initialZero && useGpdisim)
options$timeSkew <- 1000.0
#options$gaussianInitial <- TRUE
if (useGpdisim) {
Ngenes <- length(genes) - 1
options$fix$names <- "di_variance"
options$fix$value <- expTransform(c(1), "xtoa")
}
else {
Ngenes <- length(genes)
if (allowNegativeSensitivities) {
options$fix$names <- "sim1_sensitivity"
options$fix$value <- 1
}
else {
options$fix$names <- "sim1_variance"
options$fix$value <- expTransform(c(1), "xtoa")
}
}
Ntf <- 1
if (initialZero && !useGpdisim) {
options$proteinPrior <- list(values=array(0), times=array(0))
## Set the variance of the latent function to 1.
options$fix$names <- append(options$fix$names, "rbf1_variance")
options$fix$value <- append(options$fix$value, expTransform(1, "xtoa"))
}
# fixing first output sensitivity to fix the scaling
if (fixedParams && !is.null(initParams)) {
if (is.list(initParams))
initParams <- unlist(initParams)
if (!is.null(names(initParams))) {
options$fix$names <- names(initParams)
options$fix$value <- initParams
}
else {
I <- which(!is.na(initParams))
for (k in 1:length(I)) {
options$fix$index[k+1] <- I[k]
options$fix$value[k+1] <- initParams[I[k]]
}
}
}
if (! is.null(fixComps))
options$fixedBlocks <- fixComps
if (allowNegativeSensitivities)
options$isNegativeS <- TRUE
# initializing the model
if (useGpdisim)
model <- list(type="cgpdisim")
else
model <- list(type="cgpsim")
if (length(annotation(preprocData)) > 0)
dataAnnotation <- annotation(preprocData)
else
dataAnnotation <- NULL
for ( i in seq(length=Nrep) ) {
#repNames <- names(model$comp)
if (useGpdisim) {
model$comp[[i]] <- gpdisimCreate(Ngenes, Ntf, times, t(y[[i]]), t(yvar[[i]]), options, genes = featureNames(preprocData[genes,]), annotation=dataAnnotation)
}
else {
model$comp[[i]] <- gpsimCreate(Ngenes, Ntf, times, t(y[[i]]), t(yvar[[i]]), options, genes = featureNames(preprocData[genes,]), annotation=dataAnnotation)
}
#names(model$comp) <- c(repNames, paste("rep", i, sep=""))
#if (fixedParams) {
# model$comp[[i]]$kern <- multiKernFixBlocks(model$comp[[i]]$kern, fixComps)
#}
}
optOptions <- optimiDefaultOptions()
optOptions$maxit <- 3000
optOptions$optimiser <- "SCG"
if (quiet)
optOptions$display <- FALSE
if (randomize) {
a <- modelExtractParam(model)
#I <- a==0
n <- length(a)
a <- array(rnorm(n), dim = c(1, n))
#a[I] <- 0
model <- modelExpandParam(model, a)
}
if (!is.null(initParams)) {
if (!is.list(initParams) && all(is.finite(initParams))
&& is.null(names(initParams)))
model <- modelExpandParam(model, initParams)
else if (!is.null(names(initParams))) {
params <- modelExtractParam(model, only.values=FALSE)
for (i in seq(along=initParams)) {
j <- grep(names(initParams)[i], names(params))
if (length(j) > 0)
params[j] <- initParams[i]
else
warning(paste("Ignoring invalid initial parameter specification:", initParams$names[i]))
}
model <- modelExpandParam(model, params)
}
}
if (!dontOptimise) {
if (useGpdisim) {
message(paste(c("Optimising the model.\nTF: ", TF, "\nTargets: ", paste(targets, collapse=' '), "\n"), sep=" "))
} else {
message(paste(c("Optimising the model.\nTargets: ", paste(genes, collapse=' '), "\n"), sep=" "))
}
model <- modelOptimise(model, optOptions)
}
model <- modelUpdateProcesses(model)
model$allArgs <- list(TF=TF, targets=targets, useGpdisim=useGpdisim,
randomize=randomize, addPriors=addPriors,
fixedParams=fixedParams, initParams=initParams,
initialZero=initialZero, fixComps=fixComps,
dontOptimise=dontOptimise, gpsimOptions=gpsimOptions)
return (new("GPModel", model))
}
GPRankTargets <- function(preprocData, TF = NULL, knownTargets = NULL,
testTargets = NULL, filterLimit = 1.8,
returnModels = FALSE, options = NULL,
scoreSaveFile = NULL,
datasetName = "", experimentSet = "") {
if (class(preprocData) != "ExpressionTimeSeries")
stop("preprocData should be an instance of ExpressionTimeSeries. Please use processData or processRawData to create one.")
if (is.null(testTargets))
testTargets <- featureNames(preprocData)
if (is.list(testTargets))
testTargets <- unlist(testTargets)
if (is.list(knownTargets))
knownTargets <- unlist(knownTargets)
if (!is.null(TF) && !all(TF %in% featureNames(preprocData))) {
stop("unknown TF")
}
if (!all(knownTargets %in% featureNames(preprocData))) {
stop("unknown genes in knownTargets")
}
if (class(try(preprocData[testTargets,], silent=TRUE)) == "try-error") {
stop("unknown genes in testTargets")
}
if ('var.exprs' %in% assayDataElementNames(preprocData))
testTargets <- .filterTargets(preprocData[testTargets,], filterLimit)
else
testTargets <- featureNames(preprocData[testTargets,])
if (length(testTargets) < 1)
stop("No test targets passed filtering")
# The variable useGpdisim determines whether GPDISIM is used in creating the
# models. GPSIM (default) is used if no TF has been specified.
useGpdisim = !is.null(TF)
if (!useGpdisim && is.null(knownTargets))
stop("There are no known targets for GPSIM.")
if (useGpdisim)
numberOfKnownGenes <- length(knownTargets) + 1
else
numberOfKnownGenes <- length(knownTargets)
logLikelihoods <- rep(NA, length.out=length(testTargets))
baselogLikelihoods <- rep(NA, length.out=length(testTargets))
modelParams <- list()
modelArgs <- list()
if (returnModels)
rankedModels <- list()
genes <- list()
if (!is.null(options)) {
allArgs <- options
allArgs$useGpdisim <- useGpdisim
}
else
allArgs <- list(useGpdisim=useGpdisim)
if (!is.null(knownTargets) && length(knownTargets) > 0) {
baselineModel <- .formModel(preprocData, TF, knownTargets, allArgs=allArgs)
baselineParameters <- modelExtractParam(baselineModel$model,
only.values=FALSE)
sharedModel <- list(ll=baselineModel$ll,
params=baselineModel$params,
args=modelStruct(baselineModel$model)$allArgs)
allArgs$fixedParams <- TRUE
allArgs$initParams <- baselineParameters
J <- grep('Basal', names(allArgs$initParams))
names(allArgs$initParams)[J] <-
paste(names(allArgs$initParams)[J], '$', sep='')
allArgs$fixComps <- 1:numberOfKnownGenes
}
else {
baselineModel <- NULL
}
for (i in seq(along=testTargets)) {
returnData <- .formModel(preprocData, TF, knownTargets,
testTargets[i], allArgs = allArgs)
if (!is.finite(returnData$ll)) {
logLikelihoods[i] <- NA
modelParams[[i]] <- NA
modelArgs[[i]] <- NA
if (returnModels)
rankedModels[[i]] <- NA
}
else {
logLikelihoods[i] <- returnData$ll
modelParams[[i]] <- returnData$params
modelArgs[[i]] <- modelStruct(returnData$model)$allArgs
if (returnModels)
rankedModels[[i]] <- returnData$model
}
genes[[i]] <- testTargets[[i]]
testdata <- preprocData[testTargets[i],]
testdata$experiments <- rep(1, length(testdata$experiments))
newData <- .getProcessedData(testdata)
baselogLikelihoods[i] <- .baselineOptimise(newData$y[[1]], newData$yvar[[1]], list(includeNoise=(any(newData$yvar[[1]]==0))))
if (!is.null(scoreSaveFile)) {
scoreList <- new("scoreList", params = modelParams,
loglikelihoods = logLikelihoods,
baseloglikelihoods = baselogLikelihoods,
genes = genes, modelArgs = modelArgs,
knownTargets = knownTargets, TF = TF,
sharedModel = sharedModel,
datasetName = datasetName,
experimentSet = experimentSet)
save(scoreList, file=scoreSaveFile)
}
}
scoreList <- new("scoreList", params = modelParams,
loglikelihoods = logLikelihoods,
baseloglikelihoods = baselogLikelihoods,
genes = genes, modelArgs = modelArgs,
knownTargets = knownTargets, TF = TF,
sharedModel = sharedModel,
datasetName = datasetName,
experimentSet = experimentSet)
if (returnModels)
return (list(scores=scoreList, models=rankedModels))
else
return (scoreList)
}
GPRankTFs <- function(preprocData, TFs, targets,
filterLimit = 1.8,
returnModels = FALSE, options = NULL,
scoreSaveFile = NULL,
datasetName = "", experimentSet = "") {
if (class(preprocData) != "ExpressionTimeSeries")
stop("preprocData should be an instance of ExpressionTimeSeries. Please use processData or processRawData to create one.")
if (is.null(targets)) stop("No targets specified.")
if (is.list(targets))
targets <- unlist(targets)
numberOfTargets <- length(targets)
genes = c(TFs, targets)
if (class(try(preprocData[TFs,], silent=TRUE)) == "try-error") {
stop("unknown genes in TFs")
}
if (!all(targets %in% featureNames(preprocData))) {
stop("unknown genes in targets")
}
## Filtering the genes based on the calculated ratios. If the limit is 0, all genes are accepted.
if ('var.exprs' %in% assayDataElementNames(preprocData))
TFs <- .filterTargets(preprocData[TFs,], filterLimit)
else
TFs <- featureNames(preprocData[TFs,])
if (length(TFs) < 1)
stop("No TFs passed the filtering.")
logLikelihoods <- rep(NA, length.out=length(TFs))
modelParams <- list()
modelArgs <- list()
if (returnModels)
rankedModels <- list()
if (!is.null(options)) {
allArgs <- options
allArgs$useGpdisim <- TRUE
}
else
allArgs <- list(useGpdisim=TRUE)
numberOfTargets <- length(targets)
genes <- list()
for (i in 1:length(TFs)) {
returnData <- .formModel(preprocData, TF = TFs[i], targets, allArgs=allArgs)
if (!is.finite(returnData$ll)) {
logLikelihoods[i] <- NA
modelParams[[i]] <- NA
modelArgs[[i]] <- NA
if (returnModels)
rankedModels[[i]] <- NA
}
else {
logLikelihoods[i] <- returnData$ll
modelParams[[i]] <- returnData$params
modelArgs[[i]] <- modelStruct(returnData$model)$allArgs
if (returnModels)
rankedModels[[i]] <- returnData$model
}
genes[[i]] <- TFs[[i]]
if (!is.null(scoreSaveFile)) {
scoreList <- new("scoreList", params = modelParams,
loglikelihoods = logLikelihoods,
genes = genes, modelArgs = modelArgs,
knownTargets = targets, TF = '(see genes)',
datasetName = datasetName,
experimentSet = experimentSet)
save(scoreList, file=scoreSaveFile)
}
}
scoreList <- new("scoreList", params = modelParams,
loglikelihoods = logLikelihoods,
genes = genes, modelArgs = modelArgs,
knownTargets = targets, TF = '(see genes)',
datasetName = datasetName,
experimentSet = experimentSet)
if (returnModels)
return (list(scores=scoreList, models=rankedModels))
else
return (scoreList)
}
.formModel <- function(preprocData, TF = NULL, knownTargets = NULL,
testTarget = NULL, allArgs = NULL) {
if (!is.null(testTarget))
targets <- append(knownTargets, testTarget)
else
targets <- knownTargets
error1 <- TRUE
error2 <- TRUE
tryCatch({
model <- GPLearn(preprocData, TF, targets, allArgs = allArgs)
error1 <- FALSE
}, error = function(ex) {
warning("Stopped due to an error.\n")
})
if (error1) {
success <- FALSE
i <- 0
allArgs$randomize <- TRUE
while (!success && i < 10) {
tryCatch({
message("Trying again with different parameters.\n")
model <- GPLearn(preprocData, TF, targets, allArgs = allArgs)
success <- TRUE
error2 <- FALSE
}, error = function(ex) {
warning("Stopped due to an error.\n")
})
i <- i + 1
}
}
else {
error2 <- FALSE
}
if (error2) {
logLikelihood <- -Inf
params <- NA
rankedModel <- NA
}
else {
logLikelihood <- modelLogLikelihood(model)
params <- modelExtractParam(model)
rankedModel <- model
}
return (list(ll=logLikelihood, model=rankedModel, params=params))
}
generateModels <- function(preprocData, scores) {
models <- list()
# recreate the models for each gene in the scoreList
for (i in seq(along=params(scores))) {
args <- modelArgs(scores)[[i]]
args$initParams <- params(scores)[[i]]
args$dontOptimise <- TRUE
models[[i]] <- GPLearn(preprocData, allArgs=args)
}
return (models)
}
.getProcessedData <- function(data) {
times <- data$modeltime
experiments <- data$experiments
y <- assayDataElement(data, 'exprs')
if ('var.exprs' %in% assayDataElementNames(data))
yvar <- assayDataElement(data, 'var.exprs')
else
yvar <- 0 * y
scale <- sqrt(rowMeans(y^2))
scale[scale==0] <- 1
scaleMat <- scale %*% array(1, dim = c(1, ncol(data)))
y <- y / scaleMat
yvar <- yvar / scaleMat^2
ylist <- list()
yvarlist <- list()
expids <- unique(experiments)
for (i in seq(along=expids)) {
ylist[[i]] <- y[,experiments==expids[i], drop=FALSE]
yvarlist[[i]] <- yvar[,experiments==expids[i], drop=FALSE]
}
times <- times[experiments==expids[1]]
genes <- featureNames(data)
newData <- list(y = ylist, yvar = yvarlist, genes = genes, times = times)
return (newData)
}
.filterTargets <- function(data, filterLimit) {
y <- assayDataElement(data, 'exprs')
yvar <- assayDataElement(data, 'var.exprs')
zScores <- rowMeans(y / sqrt(yvar))
testTargets <- names(which(zScores > filterLimit))
return (testTargets)
}
| /R/GPrank.R | no_license | ahonkela/tigre | R | false | false | 16,060 | r | GPLearn <- function(preprocData, TF = NULL, targets = NULL,
useGpdisim = !is.null(TF), randomize = FALSE,
addPriors = FALSE, fixedParams = FALSE,
initParams = NULL, initialZero = TRUE,
fixComps = NULL, dontOptimise = FALSE,
allowNegativeSensitivities = FALSE, quiet = FALSE,
gpsimOptions = NULL, allArgs = NULL) {
if (!is.null(allArgs)) {
for (i in seq(along=allArgs))
assign(names(allArgs)[[i]], allArgs[[i]])
}
if (class(preprocData) != "ExpressionTimeSeries")
stop("preprocData should be an instance of ExpressionTimeSeries. Please use processData or processRawData to create one.")
if (is.null(targets))
stop("no target genes specified, unable to define a model")
if (is.list(targets))
targets <- unlist(targets)
if (useGpdisim)
genes <- c(TF, targets)
else
genes <- targets
if (class(try(preprocData[genes,], silent=TRUE)) == "try-error") {
if (! all(TF %in% featureNames(preprocData))) {
stop("unknown TF")
} else {
stop("unknown target gene")
}
}
# The preprocessed data is searched for the data of the specified genes.
newData <- .getProcessedData(preprocData[genes,])
y <- newData$y
yvar <- newData$yvar
times <- newData$times
Nrep <- length(y)
options <- list(includeNoise=0, optimiser="SCG")
if (any(yvar[[1]] == 0))
options$includeNoise = 1
if (!is.null(gpsimOptions)) {
for (i in seq(along=gpsimOptions))
options[[names(gpsimOptions)[[i]]]] <- gpsimOptions[[i]]
}
if (addPriors)
options$addPriors <- TRUE
if (!initialZero && useGpdisim)
options$timeSkew <- 1000.0
#options$gaussianInitial <- TRUE
if (useGpdisim) {
Ngenes <- length(genes) - 1
options$fix$names <- "di_variance"
options$fix$value <- expTransform(c(1), "xtoa")
}
else {
Ngenes <- length(genes)
if (allowNegativeSensitivities) {
options$fix$names <- "sim1_sensitivity"
options$fix$value <- 1
}
else {
options$fix$names <- "sim1_variance"
options$fix$value <- expTransform(c(1), "xtoa")
}
}
Ntf <- 1
if (initialZero && !useGpdisim) {
options$proteinPrior <- list(values=array(0), times=array(0))
## Set the variance of the latent function to 1.
options$fix$names <- append(options$fix$names, "rbf1_variance")
options$fix$value <- append(options$fix$value, expTransform(1, "xtoa"))
}
# fixing first output sensitivity to fix the scaling
if (fixedParams && !is.null(initParams)) {
if (is.list(initParams))
initParams <- unlist(initParams)
if (!is.null(names(initParams))) {
options$fix$names <- names(initParams)
options$fix$value <- initParams
}
else {
I <- which(!is.na(initParams))
for (k in 1:length(I)) {
options$fix$index[k+1] <- I[k]
options$fix$value[k+1] <- initParams[I[k]]
}
}
}
if (! is.null(fixComps))
options$fixedBlocks <- fixComps
if (allowNegativeSensitivities)
options$isNegativeS <- TRUE
# initializing the model
if (useGpdisim)
model <- list(type="cgpdisim")
else
model <- list(type="cgpsim")
if (length(annotation(preprocData)) > 0)
dataAnnotation <- annotation(preprocData)
else
dataAnnotation <- NULL
for ( i in seq(length=Nrep) ) {
#repNames <- names(model$comp)
if (useGpdisim) {
model$comp[[i]] <- gpdisimCreate(Ngenes, Ntf, times, t(y[[i]]), t(yvar[[i]]), options, genes = featureNames(preprocData[genes,]), annotation=dataAnnotation)
}
else {
model$comp[[i]] <- gpsimCreate(Ngenes, Ntf, times, t(y[[i]]), t(yvar[[i]]), options, genes = featureNames(preprocData[genes,]), annotation=dataAnnotation)
}
#names(model$comp) <- c(repNames, paste("rep", i, sep=""))
#if (fixedParams) {
# model$comp[[i]]$kern <- multiKernFixBlocks(model$comp[[i]]$kern, fixComps)
#}
}
optOptions <- optimiDefaultOptions()
optOptions$maxit <- 3000
optOptions$optimiser <- "SCG"
if (quiet)
optOptions$display <- FALSE
if (randomize) {
a <- modelExtractParam(model)
#I <- a==0
n <- length(a)
a <- array(rnorm(n), dim = c(1, n))
#a[I] <- 0
model <- modelExpandParam(model, a)
}
if (!is.null(initParams)) {
if (!is.list(initParams) && all(is.finite(initParams))
&& is.null(names(initParams)))
model <- modelExpandParam(model, initParams)
else if (!is.null(names(initParams))) {
params <- modelExtractParam(model, only.values=FALSE)
for (i in seq(along=initParams)) {
j <- grep(names(initParams)[i], names(params))
if (length(j) > 0)
params[j] <- initParams[i]
else
warning(paste("Ignoring invalid initial parameter specification:", initParams$names[i]))
}
model <- modelExpandParam(model, params)
}
}
if (!dontOptimise) {
if (useGpdisim) {
message(paste(c("Optimising the model.\nTF: ", TF, "\nTargets: ", paste(targets, collapse=' '), "\n"), sep=" "))
} else {
message(paste(c("Optimising the model.\nTargets: ", paste(genes, collapse=' '), "\n"), sep=" "))
}
model <- modelOptimise(model, optOptions)
}
model <- modelUpdateProcesses(model)
model$allArgs <- list(TF=TF, targets=targets, useGpdisim=useGpdisim,
randomize=randomize, addPriors=addPriors,
fixedParams=fixedParams, initParams=initParams,
initialZero=initialZero, fixComps=fixComps,
dontOptimise=dontOptimise, gpsimOptions=gpsimOptions)
return (new("GPModel", model))
}
GPRankTargets <- function(preprocData, TF = NULL, knownTargets = NULL,
testTargets = NULL, filterLimit = 1.8,
returnModels = FALSE, options = NULL,
scoreSaveFile = NULL,
datasetName = "", experimentSet = "") {
if (class(preprocData) != "ExpressionTimeSeries")
stop("preprocData should be an instance of ExpressionTimeSeries. Please use processData or processRawData to create one.")
if (is.null(testTargets))
testTargets <- featureNames(preprocData)
if (is.list(testTargets))
testTargets <- unlist(testTargets)
if (is.list(knownTargets))
knownTargets <- unlist(knownTargets)
if (!is.null(TF) && !all(TF %in% featureNames(preprocData))) {
stop("unknown TF")
}
if (!all(knownTargets %in% featureNames(preprocData))) {
stop("unknown genes in knownTargets")
}
if (class(try(preprocData[testTargets,], silent=TRUE)) == "try-error") {
stop("unknown genes in testTargets")
}
if ('var.exprs' %in% assayDataElementNames(preprocData))
testTargets <- .filterTargets(preprocData[testTargets,], filterLimit)
else
testTargets <- featureNames(preprocData[testTargets,])
if (length(testTargets) < 1)
stop("No test targets passed filtering")
# The variable useGpdisim determines whether GPDISIM is used in creating the
# models. GPSIM (default) is used if no TF has been specified.
useGpdisim = !is.null(TF)
if (!useGpdisim && is.null(knownTargets))
stop("There are no known targets for GPSIM.")
if (useGpdisim)
numberOfKnownGenes <- length(knownTargets) + 1
else
numberOfKnownGenes <- length(knownTargets)
logLikelihoods <- rep(NA, length.out=length(testTargets))
baselogLikelihoods <- rep(NA, length.out=length(testTargets))
modelParams <- list()
modelArgs <- list()
if (returnModels)
rankedModels <- list()
genes <- list()
if (!is.null(options)) {
allArgs <- options
allArgs$useGpdisim <- useGpdisim
}
else
allArgs <- list(useGpdisim=useGpdisim)
if (!is.null(knownTargets) && length(knownTargets) > 0) {
baselineModel <- .formModel(preprocData, TF, knownTargets, allArgs=allArgs)
baselineParameters <- modelExtractParam(baselineModel$model,
only.values=FALSE)
sharedModel <- list(ll=baselineModel$ll,
params=baselineModel$params,
args=modelStruct(baselineModel$model)$allArgs)
allArgs$fixedParams <- TRUE
allArgs$initParams <- baselineParameters
J <- grep('Basal', names(allArgs$initParams))
names(allArgs$initParams)[J] <-
paste(names(allArgs$initParams)[J], '$', sep='')
allArgs$fixComps <- 1:numberOfKnownGenes
}
else {
baselineModel <- NULL
}
for (i in seq(along=testTargets)) {
returnData <- .formModel(preprocData, TF, knownTargets,
testTargets[i], allArgs = allArgs)
if (!is.finite(returnData$ll)) {
logLikelihoods[i] <- NA
modelParams[[i]] <- NA
modelArgs[[i]] <- NA
if (returnModels)
rankedModels[[i]] <- NA
}
else {
logLikelihoods[i] <- returnData$ll
modelParams[[i]] <- returnData$params
modelArgs[[i]] <- modelStruct(returnData$model)$allArgs
if (returnModels)
rankedModels[[i]] <- returnData$model
}
genes[[i]] <- testTargets[[i]]
testdata <- preprocData[testTargets[i],]
testdata$experiments <- rep(1, length(testdata$experiments))
newData <- .getProcessedData(testdata)
baselogLikelihoods[i] <- .baselineOptimise(newData$y[[1]], newData$yvar[[1]], list(includeNoise=(any(newData$yvar[[1]]==0))))
if (!is.null(scoreSaveFile)) {
scoreList <- new("scoreList", params = modelParams,
loglikelihoods = logLikelihoods,
baseloglikelihoods = baselogLikelihoods,
genes = genes, modelArgs = modelArgs,
knownTargets = knownTargets, TF = TF,
sharedModel = sharedModel,
datasetName = datasetName,
experimentSet = experimentSet)
save(scoreList, file=scoreSaveFile)
}
}
scoreList <- new("scoreList", params = modelParams,
loglikelihoods = logLikelihoods,
baseloglikelihoods = baselogLikelihoods,
genes = genes, modelArgs = modelArgs,
knownTargets = knownTargets, TF = TF,
sharedModel = sharedModel,
datasetName = datasetName,
experimentSet = experimentSet)
if (returnModels)
return (list(scores=scoreList, models=rankedModels))
else
return (scoreList)
}
GPRankTFs <- function(preprocData, TFs, targets,
filterLimit = 1.8,
returnModels = FALSE, options = NULL,
scoreSaveFile = NULL,
datasetName = "", experimentSet = "") {
if (class(preprocData) != "ExpressionTimeSeries")
stop("preprocData should be an instance of ExpressionTimeSeries. Please use processData or processRawData to create one.")
if (is.null(targets)) stop("No targets specified.")
if (is.list(targets))
targets <- unlist(targets)
numberOfTargets <- length(targets)
genes = c(TFs, targets)
if (class(try(preprocData[TFs,], silent=TRUE)) == "try-error") {
stop("unknown genes in TFs")
}
if (!all(targets %in% featureNames(preprocData))) {
stop("unknown genes in targets")
}
## Filtering the genes based on the calculated ratios. If the limit is 0, all genes are accepted.
if ('var.exprs' %in% assayDataElementNames(preprocData))
TFs <- .filterTargets(preprocData[TFs,], filterLimit)
else
TFs <- featureNames(preprocData[TFs,])
if (length(TFs) < 1)
stop("No TFs passed the filtering.")
logLikelihoods <- rep(NA, length.out=length(TFs))
modelParams <- list()
modelArgs <- list()
if (returnModels)
rankedModels <- list()
if (!is.null(options)) {
allArgs <- options
allArgs$useGpdisim <- TRUE
}
else
allArgs <- list(useGpdisim=TRUE)
numberOfTargets <- length(targets)
genes <- list()
for (i in 1:length(TFs)) {
returnData <- .formModel(preprocData, TF = TFs[i], targets, allArgs=allArgs)
if (!is.finite(returnData$ll)) {
logLikelihoods[i] <- NA
modelParams[[i]] <- NA
modelArgs[[i]] <- NA
if (returnModels)
rankedModels[[i]] <- NA
}
else {
logLikelihoods[i] <- returnData$ll
modelParams[[i]] <- returnData$params
modelArgs[[i]] <- modelStruct(returnData$model)$allArgs
if (returnModels)
rankedModels[[i]] <- returnData$model
}
genes[[i]] <- TFs[[i]]
if (!is.null(scoreSaveFile)) {
scoreList <- new("scoreList", params = modelParams,
loglikelihoods = logLikelihoods,
genes = genes, modelArgs = modelArgs,
knownTargets = targets, TF = '(see genes)',
datasetName = datasetName,
experimentSet = experimentSet)
save(scoreList, file=scoreSaveFile)
}
}
scoreList <- new("scoreList", params = modelParams,
loglikelihoods = logLikelihoods,
genes = genes, modelArgs = modelArgs,
knownTargets = targets, TF = '(see genes)',
datasetName = datasetName,
experimentSet = experimentSet)
if (returnModels)
return (list(scores=scoreList, models=rankedModels))
else
return (scoreList)
}
.formModel <- function(preprocData, TF = NULL, knownTargets = NULL,
testTarget = NULL, allArgs = NULL) {
if (!is.null(testTarget))
targets <- append(knownTargets, testTarget)
else
targets <- knownTargets
error1 <- TRUE
error2 <- TRUE
tryCatch({
model <- GPLearn(preprocData, TF, targets, allArgs = allArgs)
error1 <- FALSE
}, error = function(ex) {
warning("Stopped due to an error.\n")
})
if (error1) {
success <- FALSE
i <- 0
allArgs$randomize <- TRUE
while (!success && i < 10) {
tryCatch({
message("Trying again with different parameters.\n")
model <- GPLearn(preprocData, TF, targets, allArgs = allArgs)
success <- TRUE
error2 <- FALSE
}, error = function(ex) {
warning("Stopped due to an error.\n")
})
i <- i + 1
}
}
else {
error2 <- FALSE
}
if (error2) {
logLikelihood <- -Inf
params <- NA
rankedModel <- NA
}
else {
logLikelihood <- modelLogLikelihood(model)
params <- modelExtractParam(model)
rankedModel <- model
}
return (list(ll=logLikelihood, model=rankedModel, params=params))
}
generateModels <- function(preprocData, scores) {
models <- list()
# recreate the models for each gene in the scoreList
for (i in seq(along=params(scores))) {
args <- modelArgs(scores)[[i]]
args$initParams <- params(scores)[[i]]
args$dontOptimise <- TRUE
models[[i]] <- GPLearn(preprocData, allArgs=args)
}
return (models)
}
.getProcessedData <- function(data) {
times <- data$modeltime
experiments <- data$experiments
y <- assayDataElement(data, 'exprs')
if ('var.exprs' %in% assayDataElementNames(data))
yvar <- assayDataElement(data, 'var.exprs')
else
yvar <- 0 * y
scale <- sqrt(rowMeans(y^2))
scale[scale==0] <- 1
scaleMat <- scale %*% array(1, dim = c(1, ncol(data)))
y <- y / scaleMat
yvar <- yvar / scaleMat^2
ylist <- list()
yvarlist <- list()
expids <- unique(experiments)
for (i in seq(along=expids)) {
ylist[[i]] <- y[,experiments==expids[i], drop=FALSE]
yvarlist[[i]] <- yvar[,experiments==expids[i], drop=FALSE]
}
times <- times[experiments==expids[1]]
genes <- featureNames(data)
newData <- list(y = ylist, yvar = yvarlist, genes = genes, times = times)
return (newData)
}
.filterTargets <- function(data, filterLimit) {
y <- assayDataElement(data, 'exprs')
yvar <- assayDataElement(data, 'var.exprs')
zScores <- rowMeans(y / sqrt(yvar))
testTargets <- names(which(zScores > filterLimit))
return (testTargets)
}
|
###Criminal Justice System Statistics
#https://www.gov.uk/government/statistics/criminal-justice-system-statistics-quarterly-december-2020
#Data downloaded by zip folders - mac finds the folders too large to open so download them via the terminal to obtain the CSVs
#Load libraries
pacman::p_load(tidyverse, rdrop2, lubridate, purrr, jsonlite, stringr, bbmap,bbplot2,readxl, sf,shadowtext,rgdal,gridExtra, scales, R.utils, googlesheets4, gridExtra,ggpubr, WriteXLS, forcats, janitor, zoo, httr, ggtext, ggrepel, urltools)
library(base)
#############################################################
#All court data - prosecutions and convictions
all_courts <- read_csv("~/Downloads/all_courts_2020.csv")
#Only rape offences and those committed by a person rather than a company
rape_offences <- all_courts %>% filter(grepl("Rape", Offence)) %>% filter(`Person/other` == "01: Person")
rape_offences_year <- rape_offences %>% select(c(Year, `Proceeded against`, Sentenced)) %>% group_by(Year) %>% summarise_all(sum) %>%
mutate(percent = Sentenced/`Proceeded against`)
sentence_graph <- ggplot(rape_offences_year,
aes(x = as.character(Year),
y = percent
)) +
geom_col(fill = '#D1700E') +
coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
labs(title="Less than half of defendants are sentenced",
subtitle = "Percentage of rape prosecutions that were sentenced") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
axis.line.x = element_blank(),
axis.ticks.x =element_blank(),
axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm')) +
geom_label(
aes(
x = as.character(Year),
y = percent,
label = paste0(format(round(percent*100)),"%")),
hjust = 1,
vjust = 0.5,
colour = '#ffffff',
fill = NA,
label.size = NA,
size = 7,
fontface = "bold")
sentence_graph
finalise_plot(
sentence_graph,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
height_pixels = 540,
save_filepath = paste0(
"~/Downloads/sentenced_graph.png"
)
)
#########################################################################
#Sentence outcomes
#Combine other sentence outcomes such as fines and compensation
rape_outcomes <- rape_offences %>% mutate(Other == `Absolute Discharge` + `Conditional Discharge` +
`Fine` + `Compensation (primary disposal)` +`Total Otherwise Dealt With`) %>%
select(c(Year,`Total Community Sentence`,`Suspended Sentence`, `Total Immediate Custody`, `Other`)) %>%
group_by(Year) %>% summarise_all(sum) %>% pivot_longer(!(Year))
rape_outcomes_plot <- ggplot(rape_outcomes,
aes(x = as.character(Year),
y = value,
fill = name,
)) +
geom_bar(position="stack", stat = "identity") +
#coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
#scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
scale_fill_manual(values = rev(bbc_pal('main', 4))) +
labs(title="Majority of defendants are sent to custody",
subtitle = "Sentence outcome for rape offences") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
#axis.line.x = element_blank(),
#axis.ticks.x =element_blank(),
#axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm'))
# geom_label(
# aes(
# x = as.character(Year),
# y = percent,
# label = paste0(format(round(percent*100)),"%")),
# hjust = 1,
# vjust = 0.5,
# colour = '#ffffff',
# fill = NA,
# label.size = NA,
# size = 7,
# fontface = "bold")
rape_outcomes_plot
finalise_plot(
rape_outcomes_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 1000,
save_filepath = paste0(
"~/Downloads/sentenced_outcomes_graph.png"
)
)
#####################################
#Custody by length
#Combine custody length into categories for the graph
custody_length <- rape_offences %>% select(c(Year, starts_with("Custody")))%>% pivot_longer(!(Year)) %>%
mutate(custody = case_when(name %in% c( "Custody - Up to and including 1 month", "Custody - Over 1 month and up to and including 2 months" ,
"Custody - Over 2 months and up to and including 3 months", "Custody - More than 3 months and under 6 months" ,
"Custody - 6 months", "Custody - More than 6 months and up to 9 months", "Custody - More than 9 months and under 12 months" ,
"Custody - 12 months", "Custody - More than 12 months and up to 18 months", "Custody - More than 18 months and up to 2 years") ~ "Less than 2\nyears",
name %in% c("Custody - More than 2 years and up to 3 years", "Custody - More than 3 years and under 4 years", "Custody - 4 years" ) ~ "Between 2 and 4\nyears",
name %in% c("Custody - More than 4 years and up to 5 years" , "Custody - More than 5 years and up to 6 years" ,"Custody - More than 6 years and up to 7 years",
"Custody - More than 7 years and up to 8 years", "Custody - More than 8 years and up to 9 years","Custody - More than 9 years and up to 10 years") ~ "Between 4 and 10\nyears",
name %in% c("Custody - Life", "Custody - Indeterminate Sentence", "Custody - More than 15 years and less than life") ~ "More than 15\nyears",
name %in% c("Custody - More than 10 years and up to 15 years") ~ "Between 10 and 15\nyears",
TRUE ~ name
)) %>% filter(Year == 2019) %>% group_by(custody) %>% summarise(value=sum(value))
custody_values <- factor(custody_length$custody, level =c('Less than 2\nyears', 'Between 2 and 4\nyears', 'Between 4 and 10\nyears', 'Between 10 and 15\nyears', 'More than 15\nyears' ))
custody_length_plot <- ggplot(custody_length,
aes(x = custody_values,
y = value,
)) +
geom_col(fill = '#D1700E') +
#coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
#scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
#scale_fill_manual(values = rev(bbc_pal('main', 14))) +
labs(title="High proportion sentenced to over 4 years",
subtitle = "Length of custodial sentences for rape in 2019") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
#axis.line.x = element_blank(),
#axis.ticks.x =element_blank(),
#axis.text.x = element_text(angle=90, hjust=1),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm'))
# geom_label(
# aes(
# x = as.character(Year),
# y = percent,
# label = paste0(format(round(percent*100)),"%")),
# hjust = 1,
# vjust = 0.5,
# colour = '#ffffff',
# fill = NA,
# label.size = NA,
# size = 7,
# fontface = "bold")
custody_length_plot
finalise_plot(
custody_length_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 1200,
save_filepath = paste0(
"~/Downloads/custody_length_graph.png"
)
)
##################################################################
###Areas based on where the offence is dealt with rather than where the offence was committed - add as a footnote?
#By police force area - prosecutions and convictions
pfa <- read_csv("~/Downloads/courts-by-pfa-2020.csv")
pa_rape <- pfa %>% filter(grepl("Rape", Offence)) %>% filter(`Type of Defendant` == "01: Person")
#Prosecutions are all cases at magistrates'
prosecutions <- pa_rape %>% filter(`Court Type` == "02: Magistrates Court") %>% group_by(`Police Force Area`, `Year of Appearance`) %>% summarise(prosecutions=n())
convictions <- pa_rape %>% filter(`Convicted/Not Convicted` == "01: Convicted") %>% group_by(`Police Force Area`, `Year of Appearance`) %>%
summarise(convictions =n())
#Only do the last 5 years
conviction_ratio <- left_join(prosecutions,convictions, by =c("Police Force Area", "Year of Appearance")) %>%
mutate(convictions = replace_na(convictions, 0)) %>% filter(`Year of Appearance` > "2015") %>%
select(!(`Year of Appearance`)) %>% summarise_all(sum) %>% mutate(percent = convictions/prosecutions) %>% arrange(percent)
conviction_ratio <-conviction_ratio[order(conviction_ratio$percent),]
ggplot(tips2, aes(x = reorder(day, -perc), y = perc)) + geom_bar(stat = "identity")
conviction_rate_plot <- ggplot(conviction_ratio,
aes(x = reorder(`Police Force Area`, percent),
y = percent,
)) +
#geom_col(fill = '#D1700E') +
geom_bar(stat = "identity", fill = '#D1700E') +
coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
#scale_fill_manual(values = rev(bbc_pal('main', 14))) +
labs(title="Conviction rate between 30-67% by Police Force Area",
subtitle = "Proportion of convictions for rape prosecutions in 2016 - 2020") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
axis.line.x = element_blank(),
axis.ticks.x =element_blank(),
axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm')) +
geom_label(
aes(
x = `Police Force Area`,
y = percent,
label = paste0(format(round(percent*100)),"%")),
hjust = 1,
vjust = 0.5,
colour = '#ffffff',
fill = NA,
label.size = NA,
size = 7,
fontface = "bold")
conviction_rate_plot
finalise_plot(
conviction_rate_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 1200,
height_pixels = 3000,
save_filepath = paste0(
"~/Downloads/conviction_by_PFA.png"
)
)
#####################################################
#Remanded by police prior to appearing at court
remands_police <- read_csv("~/Downloads/remands_magistrates_2020.csv")
#Filter by rape offences
remands_police_rape <- remands_police %>% filter(grepl("Rape", Offence)) %>%
group_by(`Year of Appearance`, `Remand status with Police`) %>% summarise(count = sum(Count))
remands_police_rape_per <- remands_police_rape %>% mutate(percent = count/sum(count))
remands_police_rape_per_plot <- ggplot(remands_police_rape_per,
aes(x = `Year of Appearance`,
y = percent,
group = `Remand status with Police`,
fill = `Remand status with Police`,
)) +
#geom_col(fill = '#D1700E') +
geom_bar(position = "dodge", stat = "identity") +
#coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
scale_fill_manual(values = rev(bbc_pal('main', 3))) +
labs(title="Not remanded after charge has increased since 2017",
subtitle = "Defendants’ remand status with Police prior to appearing at magistrates' court") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
axis.line.x = element_blank(),
#axis.ticks.x =element_blank(),
#axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm'))
# geom_label(
# aes(
# x = `Police Force Area`,
# y = percent,
# label = paste0(format(round(percent*100)),"%")),
# hjust = 1,
# vjust = 0.5,
# colour = '#ffffff',
# fill = NA,
# label.size = NA,
# size = 7,
# fontface = "bold")
remands_police_rape_per_plot
finalise_plot(
remands_police_rape_per_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 900,
save_filepath = paste0(
"~/Downloads/remands_police.png"
)
)
##########################################################
#Remands at the Crown Court
remands_cc <- read_csv("~/Downloads/remands_CC_2020.csv")
#Filter by rape offences
remands_cc_rape <- remands_cc %>% filter(grepl("Rape", Offence)) %>% filter(category == "01: Person") %>%
group_by(`Year of Appearance`, `Remand status at the Crown Court`) %>% summarise(count = sum(count))
remands_cc_rape_per <- remands_cc_rape %>% mutate(percent = count/sum(count))
remands_cc_rape_per_plot <- ggplot(remands_cc_rape_per,
aes(x = `Year of Appearance`,
y = percent,
group = `Remand status at the Crown Court`,
fill = `Remand status at the Crown Court`,
)) +
#geom_col(fill = '#D1700E') +
geom_bar(position = "dodge", stat = "identity") +
#coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
scale_fill_manual(values = rev(bbc_pal('main', 3))) +
labs(title="Custody increased from 2019",
subtitle = "Remanded status for defendants tried or sentenced at the Crown Court") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
axis.line.x = element_blank(),
#axis.ticks.x =element_blank(),
#axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm'))
# geom_label(
# aes(
# x = `Police Force Area`,
# y = percent,
# label = paste0(format(round(percent*100)),"%")),
# hjust = 1,
# vjust = 0.5,
# colour = '#ffffff',
# fill = NA,
# label.size = NA,
# size = 7,
# fontface = "bold")
remands_cc_rape_per_plot
finalise_plot(
remands_cc_rape_per_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 1000,
save_filepath = paste0(
"~/Downloads/remands_cc.png"
)
)
| /rape_prosecutions.R | no_license | KristinaGray2/test2 | R | false | false | 15,063 | r | ###Criminal Justice System Statistics
#https://www.gov.uk/government/statistics/criminal-justice-system-statistics-quarterly-december-2020
#Data downloaded by zip folders - mac finds the folders too large to open so download them via the terminal to obtain the CSVs
#Load libraries
pacman::p_load(tidyverse, rdrop2, lubridate, purrr, jsonlite, stringr, bbmap,bbplot2,readxl, sf,shadowtext,rgdal,gridExtra, scales, R.utils, googlesheets4, gridExtra,ggpubr, WriteXLS, forcats, janitor, zoo, httr, ggtext, ggrepel, urltools)
library(base)
#############################################################
#All court data - prosecutions and convictions
all_courts <- read_csv("~/Downloads/all_courts_2020.csv")
#Only rape offences and those committed by a person rather than a company
rape_offences <- all_courts %>% filter(grepl("Rape", Offence)) %>% filter(`Person/other` == "01: Person")
rape_offences_year <- rape_offences %>% select(c(Year, `Proceeded against`, Sentenced)) %>% group_by(Year) %>% summarise_all(sum) %>%
mutate(percent = Sentenced/`Proceeded against`)
sentence_graph <- ggplot(rape_offences_year,
aes(x = as.character(Year),
y = percent
)) +
geom_col(fill = '#D1700E') +
coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
labs(title="Less than half of defendants are sentenced",
subtitle = "Percentage of rape prosecutions that were sentenced") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
axis.line.x = element_blank(),
axis.ticks.x =element_blank(),
axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm')) +
geom_label(
aes(
x = as.character(Year),
y = percent,
label = paste0(format(round(percent*100)),"%")),
hjust = 1,
vjust = 0.5,
colour = '#ffffff',
fill = NA,
label.size = NA,
size = 7,
fontface = "bold")
sentence_graph
finalise_plot(
sentence_graph,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
height_pixels = 540,
save_filepath = paste0(
"~/Downloads/sentenced_graph.png"
)
)
#########################################################################
#Sentence outcomes
#Combine other sentence outcomes such as fines and compensation
rape_outcomes <- rape_offences %>% mutate(Other == `Absolute Discharge` + `Conditional Discharge` +
`Fine` + `Compensation (primary disposal)` +`Total Otherwise Dealt With`) %>%
select(c(Year,`Total Community Sentence`,`Suspended Sentence`, `Total Immediate Custody`, `Other`)) %>%
group_by(Year) %>% summarise_all(sum) %>% pivot_longer(!(Year))
rape_outcomes_plot <- ggplot(rape_outcomes,
aes(x = as.character(Year),
y = value,
fill = name,
)) +
geom_bar(position="stack", stat = "identity") +
#coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
#scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
scale_fill_manual(values = rev(bbc_pal('main', 4))) +
labs(title="Majority of defendants are sent to custody",
subtitle = "Sentence outcome for rape offences") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
#axis.line.x = element_blank(),
#axis.ticks.x =element_blank(),
#axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm'))
# geom_label(
# aes(
# x = as.character(Year),
# y = percent,
# label = paste0(format(round(percent*100)),"%")),
# hjust = 1,
# vjust = 0.5,
# colour = '#ffffff',
# fill = NA,
# label.size = NA,
# size = 7,
# fontface = "bold")
rape_outcomes_plot
finalise_plot(
rape_outcomes_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 1000,
save_filepath = paste0(
"~/Downloads/sentenced_outcomes_graph.png"
)
)
#####################################
#Custody by length
#Combine custody length into categories for the graph
custody_length <- rape_offences %>% select(c(Year, starts_with("Custody")))%>% pivot_longer(!(Year)) %>%
mutate(custody = case_when(name %in% c( "Custody - Up to and including 1 month", "Custody - Over 1 month and up to and including 2 months" ,
"Custody - Over 2 months and up to and including 3 months", "Custody - More than 3 months and under 6 months" ,
"Custody - 6 months", "Custody - More than 6 months and up to 9 months", "Custody - More than 9 months and under 12 months" ,
"Custody - 12 months", "Custody - More than 12 months and up to 18 months", "Custody - More than 18 months and up to 2 years") ~ "Less than 2\nyears",
name %in% c("Custody - More than 2 years and up to 3 years", "Custody - More than 3 years and under 4 years", "Custody - 4 years" ) ~ "Between 2 and 4\nyears",
name %in% c("Custody - More than 4 years and up to 5 years" , "Custody - More than 5 years and up to 6 years" ,"Custody - More than 6 years and up to 7 years",
"Custody - More than 7 years and up to 8 years", "Custody - More than 8 years and up to 9 years","Custody - More than 9 years and up to 10 years") ~ "Between 4 and 10\nyears",
name %in% c("Custody - Life", "Custody - Indeterminate Sentence", "Custody - More than 15 years and less than life") ~ "More than 15\nyears",
name %in% c("Custody - More than 10 years and up to 15 years") ~ "Between 10 and 15\nyears",
TRUE ~ name
)) %>% filter(Year == 2019) %>% group_by(custody) %>% summarise(value=sum(value))
custody_values <- factor(custody_length$custody, level =c('Less than 2\nyears', 'Between 2 and 4\nyears', 'Between 4 and 10\nyears', 'Between 10 and 15\nyears', 'More than 15\nyears' ))
custody_length_plot <- ggplot(custody_length,
aes(x = custody_values,
y = value,
)) +
geom_col(fill = '#D1700E') +
#coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
#scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
#scale_fill_manual(values = rev(bbc_pal('main', 14))) +
labs(title="High proportion sentenced to over 4 years",
subtitle = "Length of custodial sentences for rape in 2019") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
#axis.line.x = element_blank(),
#axis.ticks.x =element_blank(),
#axis.text.x = element_text(angle=90, hjust=1),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm'))
# geom_label(
# aes(
# x = as.character(Year),
# y = percent,
# label = paste0(format(round(percent*100)),"%")),
# hjust = 1,
# vjust = 0.5,
# colour = '#ffffff',
# fill = NA,
# label.size = NA,
# size = 7,
# fontface = "bold")
custody_length_plot
finalise_plot(
custody_length_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 1200,
save_filepath = paste0(
"~/Downloads/custody_length_graph.png"
)
)
##################################################################
###Areas based on where the offence is dealt with rather than where the offence was committed - add as a footnote?
#By police force area - prosecutions and convictions
pfa <- read_csv("~/Downloads/courts-by-pfa-2020.csv")
pa_rape <- pfa %>% filter(grepl("Rape", Offence)) %>% filter(`Type of Defendant` == "01: Person")
#Prosecutions are all cases at magistrates'
prosecutions <- pa_rape %>% filter(`Court Type` == "02: Magistrates Court") %>% group_by(`Police Force Area`, `Year of Appearance`) %>% summarise(prosecutions=n())
convictions <- pa_rape %>% filter(`Convicted/Not Convicted` == "01: Convicted") %>% group_by(`Police Force Area`, `Year of Appearance`) %>%
summarise(convictions =n())
#Only do the last 5 years
conviction_ratio <- left_join(prosecutions,convictions, by =c("Police Force Area", "Year of Appearance")) %>%
mutate(convictions = replace_na(convictions, 0)) %>% filter(`Year of Appearance` > "2015") %>%
select(!(`Year of Appearance`)) %>% summarise_all(sum) %>% mutate(percent = convictions/prosecutions) %>% arrange(percent)
conviction_ratio <-conviction_ratio[order(conviction_ratio$percent),]
ggplot(tips2, aes(x = reorder(day, -perc), y = perc)) + geom_bar(stat = "identity")
conviction_rate_plot <- ggplot(conviction_ratio,
aes(x = reorder(`Police Force Area`, percent),
y = percent,
)) +
#geom_col(fill = '#D1700E') +
geom_bar(stat = "identity", fill = '#D1700E') +
coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
#scale_fill_manual(values = rev(bbc_pal('main', 14))) +
labs(title="Conviction rate between 30-67% by Police Force Area",
subtitle = "Proportion of convictions for rape prosecutions in 2016 - 2020") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
axis.line.x = element_blank(),
axis.ticks.x =element_blank(),
axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm')) +
geom_label(
aes(
x = `Police Force Area`,
y = percent,
label = paste0(format(round(percent*100)),"%")),
hjust = 1,
vjust = 0.5,
colour = '#ffffff',
fill = NA,
label.size = NA,
size = 7,
fontface = "bold")
conviction_rate_plot
finalise_plot(
conviction_rate_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 1200,
height_pixels = 3000,
save_filepath = paste0(
"~/Downloads/conviction_by_PFA.png"
)
)
#####################################################
#Remanded by police prior to appearing at court
remands_police <- read_csv("~/Downloads/remands_magistrates_2020.csv")
#Filter by rape offences
remands_police_rape <- remands_police %>% filter(grepl("Rape", Offence)) %>%
group_by(`Year of Appearance`, `Remand status with Police`) %>% summarise(count = sum(Count))
remands_police_rape_per <- remands_police_rape %>% mutate(percent = count/sum(count))
remands_police_rape_per_plot <- ggplot(remands_police_rape_per,
aes(x = `Year of Appearance`,
y = percent,
group = `Remand status with Police`,
fill = `Remand status with Police`,
)) +
#geom_col(fill = '#D1700E') +
geom_bar(position = "dodge", stat = "identity") +
#coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
scale_fill_manual(values = rev(bbc_pal('main', 3))) +
labs(title="Not remanded after charge has increased since 2017",
subtitle = "Defendants’ remand status with Police prior to appearing at magistrates' court") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
axis.line.x = element_blank(),
#axis.ticks.x =element_blank(),
#axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm'))
# geom_label(
# aes(
# x = `Police Force Area`,
# y = percent,
# label = paste0(format(round(percent*100)),"%")),
# hjust = 1,
# vjust = 0.5,
# colour = '#ffffff',
# fill = NA,
# label.size = NA,
# size = 7,
# fontface = "bold")
remands_police_rape_per_plot
finalise_plot(
remands_police_rape_per_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 900,
save_filepath = paste0(
"~/Downloads/remands_police.png"
)
)
##########################################################
#Remands at the Crown Court
remands_cc <- read_csv("~/Downloads/remands_CC_2020.csv")
#Filter by rape offences
remands_cc_rape <- remands_cc %>% filter(grepl("Rape", Offence)) %>% filter(category == "01: Person") %>%
group_by(`Year of Appearance`, `Remand status at the Crown Court`) %>% summarise(count = sum(count))
remands_cc_rape_per <- remands_cc_rape %>% mutate(percent = count/sum(count))
remands_cc_rape_per_plot <- ggplot(remands_cc_rape_per,
aes(x = `Year of Appearance`,
y = percent,
group = `Remand status at the Crown Court`,
fill = `Remand status at the Crown Court`,
)) +
#geom_col(fill = '#D1700E') +
geom_bar(position = "dodge", stat = "identity") +
#coord_flip()+
geom_hline(yintercept = 0, size = 1, colour="#333333") +
scale_y_continuous(labels=scales::percent, limits =c(0,1)) +
bbc_style() +
reith_style() +
scale_fill_manual(values = rev(bbc_pal('main', 3))) +
labs(title="Custody increased from 2019",
subtitle = "Remanded status for defendants tried or sentenced at the Crown Court") +
theme(strip.text = element_text(margin = margin(b= 0.5, unit = 'cm')),
axis.line.x = element_blank(),
#axis.ticks.x =element_blank(),
#axis.text.x = element_blank(),
panel.spacing.x = unit(0.5, 'cm'),
panel.grid = element_blank(),
plot.margin = margin(l = 0.2, r = 1.2, unit = 'cm'))
# geom_label(
# aes(
# x = `Police Force Area`,
# y = percent,
# label = paste0(format(round(percent*100)),"%")),
# hjust = 1,
# vjust = 0.5,
# colour = '#ffffff',
# fill = NA,
# label.size = NA,
# size = 7,
# fontface = "bold")
remands_cc_rape_per_plot
finalise_plot(
remands_cc_rape_per_plot,
source = paste0('Source: Criminal Justice System Statistics, Ministry of Justice'),
tinify_file = F,
width_pixels = 1000,
save_filepath = paste0(
"~/Downloads/remands_cc.png"
)
)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/codedeploy_operations.R
\name{codedeploy_list_deployment_configs}
\alias{codedeploy_list_deployment_configs}
\title{Lists the deployment configurations with the IAM user or AWS account}
\usage{
codedeploy_list_deployment_configs(nextToken)
}
\arguments{
\item{nextToken}{An identifier returned from the previous \code{ListDeploymentConfigs} call.
It can be used to return the next set of deployment configurations in
the list.}
}
\description{
Lists the deployment configurations with the IAM user or AWS account.
}
\section{Request syntax}{
\preformatted{svc$list_deployment_configs(
nextToken = "string"
)
}
}
\keyword{internal}
| /paws/man/codedeploy_list_deployment_configs.Rd | permissive | johnnytommy/paws | R | false | true | 712 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/codedeploy_operations.R
\name{codedeploy_list_deployment_configs}
\alias{codedeploy_list_deployment_configs}
\title{Lists the deployment configurations with the IAM user or AWS account}
\usage{
codedeploy_list_deployment_configs(nextToken)
}
\arguments{
\item{nextToken}{An identifier returned from the previous \code{ListDeploymentConfigs} call.
It can be used to return the next set of deployment configurations in
the list.}
}
\description{
Lists the deployment configurations with the IAM user or AWS account.
}
\section{Request syntax}{
\preformatted{svc$list_deployment_configs(
nextToken = "string"
)
}
}
\keyword{internal}
|
#CODICE PER LE ANALISI PRELIMINARI SUI DATI
## | SETUP
#Librerie
library(dplyr)
library(tidyr)
library(tibble)
library(tidyverse)
library(naniar)
library(visdat)
#Pulizia ambiente di lavoro
rm(list = ls(all.names = TRUE)) # Pulizia degli oggetti creati
gc() # Pulizia della memoria RAM
## | IMPORTARE I DATI
#Import dei dati
path_ds_appalti = ".../Appalti2015.csv"
path_ds_oggettogare = ".../Oggettigare2015.csv"
path_ds_cigcup = ".../CigCup2015.csv"
path_ds_aggiudicatari = ".../Aggiudicatari2015.csv"
ds_appalti <- read.csv(file = path_ds_appalti, header = FALSE, sep = ";", na.strings = "", stringsAsFactors=FALSE)
ds_oggettogare <- read.csv(file = path_ds_oggettogare, header = FALSE, sep = ";", na.strings = "", stringsAsFactors=FALSE)
ds_cigcup <- read.csv(file = path_ds_cigcup, header = FALSE, sep = ";", na.strings = "", stringsAsFactors=FALSE)
ds_aggiudicatari <- read.csv(file = path_ds_aggiudicatari, header = FALSE, sep = ";", na.strings = "", stringsAsFactors=FALSE)
#Struttura dei dataframe
str(ds_appalti)
str(ds_oggettogare)
str(ds_cigcup)
str(ds_aggiudicatari)
## | AGGIUNTA HEADER
colnames(ds_appalti) <- c("CFStazapp","NomeStazapp","IDCentroCosto", "NomeCentroCosto", "DataPubblicazione", "DataScadenzaOfferta", "NumeroGara", "Cig", "CIGAccordoQuadro", "CPV", "DescrizioneCPV", "ImportoComplessivoGara", "NrLottoComponenti", "ImportoLotto", "CodiceSceltaContraente", "TipoSceltaContraente", "CodiceModalitaRealizzazione", "ModalitaRealizzazione", "CodicePrincipaleContratto", "OggettoPrincipaleContratto", "LuogoIstat", "LuogoNuts", "FlagEscluso", "MotivoEsclusione", "CodiceEsito", "Esito", "DataAggiudicazioneDefinitiva", "CriterioDiAggiudicazione", "ImportoDiAggiudicazione", "NumeroImpreseOfferenti", "RibassoAggiudicazione", "QeBaseImportoLavori", "QeBaseImportoServizi", "QeBaseImportoForniture", "QeBaseImportoSicurezza", "QeBaseUlterioriOneriNoRibasso", "QeBaseImportoProgettazione", "QeBaseSommeADisposizione", "DataStipulaContratto", "DataInizioEffettiva", "DataTermineContrattuale", "QeFineImportoLavori", "QeFineImportoServizi", "QeFineImportoForniture", "QeFineImportoSicurezza", "QeFineImportoProgettazione", "QeFineSommeADisposizione", "DataEffettivaUltimazione")
colnames(ds_oggettogare) <- c("NumeroGara", "OggettoGara", "Cig", "OggettoLotto")
colnames(ds_cigcup) <- c("Cig", "Cup")
colnames(ds_aggiudicatari) <- c("Cig", "CodiceFiscale", "DenominazioneAggiudicatario", "TipoAggiudicatario", "CodiceRuolo", "Ruolo", "CodiceGruppo", "FlagAggiudicatario")
## | MISSING DATA
vis_expect(ds_appalti, ~.x == "NULL")
vis_expect(ds_cigcup, ~.x == "NULL")
vis_expect(ds_oggettogare, ~.x == "NULL")
vis_expect(ds_aggiudicatari, ~.x == "NULL")
## | TYPE CHECKING
#Type checking su "ds_appalti"
#s_appalti$DataPubblicazione <- as.Date(ds_appalti$DataPubblicazione)
ds_appalti$DataScadenzaOfferta <- as.Date(ds_appalti$DataScadenzaOfferta)
ds_appalti$DataAggiudicazioneDefinitiva <- as.Date(ds_appalti$DataAggiudicazioneDefinitiva)
ds_appalti$DataStipulaContratto <- as.Date(ds_appalti$DataStipulaContratto)
ds_appalti$DataInizioEffettiva <- as.Date(ds_appalti$DataStipulaContratto)
ds_appalti$DataTermineContrattuale <- as.Date(ds_appalti$DataStipulaContratto)
ds_appalti$DataEffettivaUltimazione <- as.Date(ds_appalti$DataEffettivaUltimazione, format = "%y%d%m")
ds_appalti$NrLottoComponenti <- as.integer(ds_appalti$NrLottoComponenti)
ds_appalti$CodiceSceltaContraente <- as.integer(ds_appalti$CodiceSceltaContraente)
ds_appalti$CodiceModalitaRealizzazione <- as.integer(ds_appalti$CodiceModalitaRealizzazione)
ds_appalti$LuogoIstat <- as.integer(ds_appalti$LuogoIstat)
ds_appalti$CodiceEsito <- as.integer(ds_appalti$CodiceEsito)
ds_appalti$NumeroImpreseOfferenti <- as.integer(ds_appalti$NumeroImpreseOfferenti)
ds_appalti$ImportoComplessivoGara <- as.numeric(ds_appalti$ImportoComplessivoGara)
ds_appalti$ImportoDiAggiudicazione <- as.numeric(ds_appalti$ImportoDiAggiudicazione)
ds_appalti$RibassoAggiudicazione <- as.numeric(ds_appalti$RibassoAggiudicazione)
ds_appalti$QeBaseImportoServizi <- as.numeric(ds_appalti$QeBaseImportoServizi)
ds_appalti$QeBaseImportoLavori <- as.numeric(ds_appalti$QeBaseImportoLavori)
ds_appalti$QeBaseImportoForniture <- as.numeric(ds_appalti$QeBaseImportoForniture)
ds_appalti$QeBaseImportoSicurezza <- as.numeric(ds_appalti$QeBaseImportoSicurezza)
ds_appalti$QeBaseSommeADisposizione <- as.numeric(ds_appalti$QeBaseSommeADisposizione)
ds_appalti$QeFineImportoLavori <- as.numeric(ds_appalti$QeFineImportoLavori)
ds_appalti$QeFineImportoServizi <- as.numeric(ds_appalti$QeFineImportoServizi)
ds_appalti$QeFineImportoForniture <- as.numeric(ds_appalti$QeFineImportoForniture)
ds_appalti$QeFineImportoSicurezza <- as.numeric(ds_appalti$QeFineImportoSicurezza)
ds_appalti$QeFineImportoProgettazione <- as.numeric(ds_appalti$QeFineImportoProgettazione)
ds_appalti$QeFineSommeADisposizione <- as.numeric(ds_appalti$QeFineSommeADisposizione)
#Type checking su "ds_oggettogare"
ds_oggettogare$NumeroGara <- as.integer(ds_oggettogare$NumeroGara)
#Controllo che i type value siano corretti per tutte le colonne
glimpse(ds_appalti)
glimpse(ds_oggettogare)
glimpse(ds_cigcup)
glimpse(ds_aggiudicatari)
#Normalizzazione dei campi di testo dei dataset
#ds_appalti
ds_appalti$DescrizioneCPV <- tolower(ds_appalti$DescrizioneCPV)
ds_appalti$OggettoPrincipaleContratto <- tolower(ds_appalti$OggettoPrincipaleContratto)
ds_appalti$NomeStazapp <- tolower(ds_appalti$NomeStazapp)
ds_appalti$NomeCentroCosto <- tolower(ds_appalti$NomeCentroCosto)
#ds_aggiudicatari
ds_aggiudicatari$DenominazioneAggiudicatario <- tolower(ds_aggiudicatari$DenominazioneAggiudicatario)
#ds_oggettogare
ds_oggettogare$OggettoGara <- tolower(ds_oggettogare$OggettoGara)
## | GESTIONE ED ELIMINAZIONE "NA"
#Eliminazione NA
# Cancellazione righe che presentano il valore "NA" su tutte le colonne
ds_appalti <- ds_appalti[rowSums(is.na(ds_appalti)) != ncol(ds_appalti), ]
ds_oggettogare <- ds_oggettogare[rowSums(is.na(ds_oggettogare)) != ncol(ds_oggettogare), ]
ds_cigcup <- ds_cigcup[rowSums(is.na(ds_cigcup)) != ncol(ds_cigcup), ]
ds_aggiudicatari <- ds_aggiudicatari[rowSums(is.na(ds_aggiudicatari)) != ncol(ds_aggiudicatari), ]
ds_appalti <- ds_appalti[!is.na(ds_appalti$ImportoDiAggiudicazione),]
ds_appalti$ImportoDiAggiudicazione <- as.numeric(ds_appalti$ImportoDiAggiudicazione)
summary(ds_appalti$ImportoDiAggiudicazione)
## | CREAZIONE TABELLE FINALI
#Controllo sulla futura "tabella stazioni appaltanti"
if(nrow(ds_appalti) == sum(!is.na(ds_appalti$CFStazapp))){ # Il codice fiscale ci deve essere!
print("Controllo superato. Ogni riga di 'ds_appalti' possiede un codice fiscale")
} else {
print("Il controllo ha avuto esito negativo. Ci sono righe senza codice fiscale")
print("Si procede all'eliminazione delle righe senza codice fiscale")
ds_appalti_cf_ok <- ds_appalti %>% drop_na(ds_appalti$CFStazapp)
}
#Creazione della tabella stazioni appaltanti
tab_staz_appaltanti <- ds_appalti %>% select(CFStazapp, NomeStazapp, IDCentroCosto, NomeCentroCosto)
#Creazione della tabella aggiudicatari
tab_aggiudicatari <- ds_aggiudicatari %>% select(Cig, CodiceFiscale, DenominazioneAggiudicatario, TipoAggiudicatario)
#Creazione della tabella gare
#Verifica di quanti CIG ci sono in ds_appalti
ds = ds_appalti
# Il numero di CIG deve corrispondere al numero di righe del datataset.
# Non ci devono essere valori mancanti. Nel caso vanno eliminati.
if(nrow(ds) == sum(!is.na(ds$Cig))){
print("Controllo superato. Ogni riga possiede un CIG")
print(paste0("Il numero di CIG della tabella di input è pari a ",nrow(ds)))
ds_appalti_cig_ok = ds
} else {
print("Il controllo ha avuto esito negativo. Ci sono righe senza CIG")
print("Si procede all'eliminazione delle righe senza CIG")
ds_complete <- ds %>% drop_na(Cig)
print(paste0("Le righe iniziali erano pari a ",nrow(ds)))
print(paste0("Le righe senza CIG sono pari a ",(nrow(ds)-nrow(ds_complete))))
Sys.sleep(2)
cat("\n")
print("Rimozione delle righe prive di CIG...")
print("Costruzione del nuovo dataset...")
ds_appalti_cig_ok = ds_complete
Sys.sleep(2)
cat("\n")
print("Processo terminato!")
print("Il nuovo dataset contiene solo righe il cui CIG è presente")
cat("\n")
print("I dati sono stati salvati nel nuovo dataset 'ds_appalti_cig_ok'")
}
#Verifica di quanti CIG ci sono in ds_oggettogare
ds = ds_oggettogare
# Il numero di CIG deve corrispondere al numero di righe del datataset.
# Non ci devono essere valori mancanti. Nel caso vanno eliminati.
if(nrow(ds) == sum(!is.na(ds$Cig))){
print("Controllo superato. Ogni riga possiede un CIG")
print(paste0("Il numero di CIG della tabella di input è pari a ",nrow(ds)))
ds_oggettogare_cig_ok = ds
} else {
print("Il controllo ha avuto esito negativo. Ci sono righe senza CIG")
print("Si procede all'eliminazione delle righe senza CIG")
ds_complete <- ds %>% drop_na(Cig)
print(paste0("Le righe iniziali erano pari a ",nrow(ds)))
print(paste0("Le righe senza CIG sono pari a ",(nrow(ds)-nrow(ds_complete))))
Sys.sleep(2)
cat("\n")
print("Rimozione delle righe prive di CIG...")
print("Costruzione del nuovo dataset...")
ds_oggettogare_cig_ok = ds_complete
Sys.sleep(2)
cat("\n")
print("Processo terminato!")
print("Il nuovo dataset contiene solo righe il cui CIG è presente")
cat("\n")
print("I dati sono stati salvati nel nuovo dataset 'ds_oggettogare_cig_ok'")
}
# Eliminazione dei duplicati del dataset "ds_appalti_cig_ok" ("ds_appalti" privo di CIG mancanti)
appalti_con_cig_unico <- ds_appalti_cig_ok[!duplicated(ds_appalti_cig_ok$Cig), ]
# Eliminazione dei duplicati del dataset "ds_oggettogare_cig_ok" ("ds_oggettogare" privo di CIG mancanti)
oggettogare_con_cig_unico <- ds_oggettogare_cig_ok[!duplicated(ds_oggettogare_cig_ok$Cig), ]
# Join per costruire la tabella gare
temptab_gare1 <- appalti_con_cig_unico %>% select(NumeroGara, Cig, CIGAccordoQuadro, CPV, DescrizioneCPV, ImportoComplessivoGara, NrLottoComponenti, ImportoLotto, CodiceSceltaContraente, TipoSceltaContraente, DataPubblicazione, DataScadenzaOfferta, CFStazapp, CodiceEsito, Esito, DataAggiudicazioneDefinitiva, CriterioDiAggiudicazione, ImportoDiAggiudicazione, NumeroImpreseOfferenti, RibassoAggiudicazione, CodiceModalitaRealizzazione, ModalitaRealizzazione)
temptab_gare2 <- oggettogare_con_cig_unico %>% select(Cig, OggettoGara, OggettoLotto)
tab_gare <- inner_join(temptab_gare1, temptab_gare2)
## | SALVATAGGIO TABELLE FINALI
write.csv2(tab_staz_appaltanti,'.../tab_staz_appaltanti.csv', row.names=FALSE)
write.csv2(tab_gare,'.../tab_gare.csv', row.names=FALSE)
write.csv2(tab_aggiudicatari,'.../tab_aggiudicatari.csv', row.names=FALSE)
| /pulire_i_dati.R | no_license | mamatteo/analisi-corruzione-appalti-pubblici | R | false | false | 10,672 | r | #CODICE PER LE ANALISI PRELIMINARI SUI DATI
## | SETUP
#Librerie
library(dplyr)
library(tidyr)
library(tibble)
library(tidyverse)
library(naniar)
library(visdat)
#Pulizia ambiente di lavoro
rm(list = ls(all.names = TRUE)) # Pulizia degli oggetti creati
gc() # Pulizia della memoria RAM
## | IMPORTARE I DATI
#Import dei dati
path_ds_appalti = ".../Appalti2015.csv"
path_ds_oggettogare = ".../Oggettigare2015.csv"
path_ds_cigcup = ".../CigCup2015.csv"
path_ds_aggiudicatari = ".../Aggiudicatari2015.csv"
ds_appalti <- read.csv(file = path_ds_appalti, header = FALSE, sep = ";", na.strings = "", stringsAsFactors=FALSE)
ds_oggettogare <- read.csv(file = path_ds_oggettogare, header = FALSE, sep = ";", na.strings = "", stringsAsFactors=FALSE)
ds_cigcup <- read.csv(file = path_ds_cigcup, header = FALSE, sep = ";", na.strings = "", stringsAsFactors=FALSE)
ds_aggiudicatari <- read.csv(file = path_ds_aggiudicatari, header = FALSE, sep = ";", na.strings = "", stringsAsFactors=FALSE)
#Struttura dei dataframe
str(ds_appalti)
str(ds_oggettogare)
str(ds_cigcup)
str(ds_aggiudicatari)
## | AGGIUNTA HEADER
colnames(ds_appalti) <- c("CFStazapp","NomeStazapp","IDCentroCosto", "NomeCentroCosto", "DataPubblicazione", "DataScadenzaOfferta", "NumeroGara", "Cig", "CIGAccordoQuadro", "CPV", "DescrizioneCPV", "ImportoComplessivoGara", "NrLottoComponenti", "ImportoLotto", "CodiceSceltaContraente", "TipoSceltaContraente", "CodiceModalitaRealizzazione", "ModalitaRealizzazione", "CodicePrincipaleContratto", "OggettoPrincipaleContratto", "LuogoIstat", "LuogoNuts", "FlagEscluso", "MotivoEsclusione", "CodiceEsito", "Esito", "DataAggiudicazioneDefinitiva", "CriterioDiAggiudicazione", "ImportoDiAggiudicazione", "NumeroImpreseOfferenti", "RibassoAggiudicazione", "QeBaseImportoLavori", "QeBaseImportoServizi", "QeBaseImportoForniture", "QeBaseImportoSicurezza", "QeBaseUlterioriOneriNoRibasso", "QeBaseImportoProgettazione", "QeBaseSommeADisposizione", "DataStipulaContratto", "DataInizioEffettiva", "DataTermineContrattuale", "QeFineImportoLavori", "QeFineImportoServizi", "QeFineImportoForniture", "QeFineImportoSicurezza", "QeFineImportoProgettazione", "QeFineSommeADisposizione", "DataEffettivaUltimazione")
colnames(ds_oggettogare) <- c("NumeroGara", "OggettoGara", "Cig", "OggettoLotto")
colnames(ds_cigcup) <- c("Cig", "Cup")
colnames(ds_aggiudicatari) <- c("Cig", "CodiceFiscale", "DenominazioneAggiudicatario", "TipoAggiudicatario", "CodiceRuolo", "Ruolo", "CodiceGruppo", "FlagAggiudicatario")
## | MISSING DATA
vis_expect(ds_appalti, ~.x == "NULL")
vis_expect(ds_cigcup, ~.x == "NULL")
vis_expect(ds_oggettogare, ~.x == "NULL")
vis_expect(ds_aggiudicatari, ~.x == "NULL")
## | TYPE CHECKING
#Type checking su "ds_appalti"
#s_appalti$DataPubblicazione <- as.Date(ds_appalti$DataPubblicazione)
ds_appalti$DataScadenzaOfferta <- as.Date(ds_appalti$DataScadenzaOfferta)
ds_appalti$DataAggiudicazioneDefinitiva <- as.Date(ds_appalti$DataAggiudicazioneDefinitiva)
ds_appalti$DataStipulaContratto <- as.Date(ds_appalti$DataStipulaContratto)
ds_appalti$DataInizioEffettiva <- as.Date(ds_appalti$DataStipulaContratto)
ds_appalti$DataTermineContrattuale <- as.Date(ds_appalti$DataStipulaContratto)
ds_appalti$DataEffettivaUltimazione <- as.Date(ds_appalti$DataEffettivaUltimazione, format = "%y%d%m")
ds_appalti$NrLottoComponenti <- as.integer(ds_appalti$NrLottoComponenti)
ds_appalti$CodiceSceltaContraente <- as.integer(ds_appalti$CodiceSceltaContraente)
ds_appalti$CodiceModalitaRealizzazione <- as.integer(ds_appalti$CodiceModalitaRealizzazione)
ds_appalti$LuogoIstat <- as.integer(ds_appalti$LuogoIstat)
ds_appalti$CodiceEsito <- as.integer(ds_appalti$CodiceEsito)
ds_appalti$NumeroImpreseOfferenti <- as.integer(ds_appalti$NumeroImpreseOfferenti)
ds_appalti$ImportoComplessivoGara <- as.numeric(ds_appalti$ImportoComplessivoGara)
ds_appalti$ImportoDiAggiudicazione <- as.numeric(ds_appalti$ImportoDiAggiudicazione)
ds_appalti$RibassoAggiudicazione <- as.numeric(ds_appalti$RibassoAggiudicazione)
ds_appalti$QeBaseImportoServizi <- as.numeric(ds_appalti$QeBaseImportoServizi)
ds_appalti$QeBaseImportoLavori <- as.numeric(ds_appalti$QeBaseImportoLavori)
ds_appalti$QeBaseImportoForniture <- as.numeric(ds_appalti$QeBaseImportoForniture)
ds_appalti$QeBaseImportoSicurezza <- as.numeric(ds_appalti$QeBaseImportoSicurezza)
ds_appalti$QeBaseSommeADisposizione <- as.numeric(ds_appalti$QeBaseSommeADisposizione)
ds_appalti$QeFineImportoLavori <- as.numeric(ds_appalti$QeFineImportoLavori)
ds_appalti$QeFineImportoServizi <- as.numeric(ds_appalti$QeFineImportoServizi)
ds_appalti$QeFineImportoForniture <- as.numeric(ds_appalti$QeFineImportoForniture)
ds_appalti$QeFineImportoSicurezza <- as.numeric(ds_appalti$QeFineImportoSicurezza)
ds_appalti$QeFineImportoProgettazione <- as.numeric(ds_appalti$QeFineImportoProgettazione)
ds_appalti$QeFineSommeADisposizione <- as.numeric(ds_appalti$QeFineSommeADisposizione)
#Type checking su "ds_oggettogare"
ds_oggettogare$NumeroGara <- as.integer(ds_oggettogare$NumeroGara)
#Controllo che i type value siano corretti per tutte le colonne
glimpse(ds_appalti)
glimpse(ds_oggettogare)
glimpse(ds_cigcup)
glimpse(ds_aggiudicatari)
#Normalizzazione dei campi di testo dei dataset
#ds_appalti
ds_appalti$DescrizioneCPV <- tolower(ds_appalti$DescrizioneCPV)
ds_appalti$OggettoPrincipaleContratto <- tolower(ds_appalti$OggettoPrincipaleContratto)
ds_appalti$NomeStazapp <- tolower(ds_appalti$NomeStazapp)
ds_appalti$NomeCentroCosto <- tolower(ds_appalti$NomeCentroCosto)
#ds_aggiudicatari
ds_aggiudicatari$DenominazioneAggiudicatario <- tolower(ds_aggiudicatari$DenominazioneAggiudicatario)
#ds_oggettogare
ds_oggettogare$OggettoGara <- tolower(ds_oggettogare$OggettoGara)
## | GESTIONE ED ELIMINAZIONE "NA"
#Eliminazione NA
# Cancellazione righe che presentano il valore "NA" su tutte le colonne
ds_appalti <- ds_appalti[rowSums(is.na(ds_appalti)) != ncol(ds_appalti), ]
ds_oggettogare <- ds_oggettogare[rowSums(is.na(ds_oggettogare)) != ncol(ds_oggettogare), ]
ds_cigcup <- ds_cigcup[rowSums(is.na(ds_cigcup)) != ncol(ds_cigcup), ]
ds_aggiudicatari <- ds_aggiudicatari[rowSums(is.na(ds_aggiudicatari)) != ncol(ds_aggiudicatari), ]
ds_appalti <- ds_appalti[!is.na(ds_appalti$ImportoDiAggiudicazione),]
ds_appalti$ImportoDiAggiudicazione <- as.numeric(ds_appalti$ImportoDiAggiudicazione)
summary(ds_appalti$ImportoDiAggiudicazione)
## | CREAZIONE TABELLE FINALI
#Controllo sulla futura "tabella stazioni appaltanti"
if(nrow(ds_appalti) == sum(!is.na(ds_appalti$CFStazapp))){ # Il codice fiscale ci deve essere!
print("Controllo superato. Ogni riga di 'ds_appalti' possiede un codice fiscale")
} else {
print("Il controllo ha avuto esito negativo. Ci sono righe senza codice fiscale")
print("Si procede all'eliminazione delle righe senza codice fiscale")
ds_appalti_cf_ok <- ds_appalti %>% drop_na(ds_appalti$CFStazapp)
}
#Creazione della tabella stazioni appaltanti
tab_staz_appaltanti <- ds_appalti %>% select(CFStazapp, NomeStazapp, IDCentroCosto, NomeCentroCosto)
#Creazione della tabella aggiudicatari
tab_aggiudicatari <- ds_aggiudicatari %>% select(Cig, CodiceFiscale, DenominazioneAggiudicatario, TipoAggiudicatario)
#Creazione della tabella gare
#Verifica di quanti CIG ci sono in ds_appalti
ds = ds_appalti
# Il numero di CIG deve corrispondere al numero di righe del datataset.
# Non ci devono essere valori mancanti. Nel caso vanno eliminati.
if(nrow(ds) == sum(!is.na(ds$Cig))){
print("Controllo superato. Ogni riga possiede un CIG")
print(paste0("Il numero di CIG della tabella di input è pari a ",nrow(ds)))
ds_appalti_cig_ok = ds
} else {
print("Il controllo ha avuto esito negativo. Ci sono righe senza CIG")
print("Si procede all'eliminazione delle righe senza CIG")
ds_complete <- ds %>% drop_na(Cig)
print(paste0("Le righe iniziali erano pari a ",nrow(ds)))
print(paste0("Le righe senza CIG sono pari a ",(nrow(ds)-nrow(ds_complete))))
Sys.sleep(2)
cat("\n")
print("Rimozione delle righe prive di CIG...")
print("Costruzione del nuovo dataset...")
ds_appalti_cig_ok = ds_complete
Sys.sleep(2)
cat("\n")
print("Processo terminato!")
print("Il nuovo dataset contiene solo righe il cui CIG è presente")
cat("\n")
print("I dati sono stati salvati nel nuovo dataset 'ds_appalti_cig_ok'")
}
#Verifica di quanti CIG ci sono in ds_oggettogare
ds = ds_oggettogare
# Il numero di CIG deve corrispondere al numero di righe del datataset.
# Non ci devono essere valori mancanti. Nel caso vanno eliminati.
if(nrow(ds) == sum(!is.na(ds$Cig))){
print("Controllo superato. Ogni riga possiede un CIG")
print(paste0("Il numero di CIG della tabella di input è pari a ",nrow(ds)))
ds_oggettogare_cig_ok = ds
} else {
print("Il controllo ha avuto esito negativo. Ci sono righe senza CIG")
print("Si procede all'eliminazione delle righe senza CIG")
ds_complete <- ds %>% drop_na(Cig)
print(paste0("Le righe iniziali erano pari a ",nrow(ds)))
print(paste0("Le righe senza CIG sono pari a ",(nrow(ds)-nrow(ds_complete))))
Sys.sleep(2)
cat("\n")
print("Rimozione delle righe prive di CIG...")
print("Costruzione del nuovo dataset...")
ds_oggettogare_cig_ok = ds_complete
Sys.sleep(2)
cat("\n")
print("Processo terminato!")
print("Il nuovo dataset contiene solo righe il cui CIG è presente")
cat("\n")
print("I dati sono stati salvati nel nuovo dataset 'ds_oggettogare_cig_ok'")
}
# Eliminazione dei duplicati del dataset "ds_appalti_cig_ok" ("ds_appalti" privo di CIG mancanti)
appalti_con_cig_unico <- ds_appalti_cig_ok[!duplicated(ds_appalti_cig_ok$Cig), ]
# Eliminazione dei duplicati del dataset "ds_oggettogare_cig_ok" ("ds_oggettogare" privo di CIG mancanti)
oggettogare_con_cig_unico <- ds_oggettogare_cig_ok[!duplicated(ds_oggettogare_cig_ok$Cig), ]
# Join per costruire la tabella gare
temptab_gare1 <- appalti_con_cig_unico %>% select(NumeroGara, Cig, CIGAccordoQuadro, CPV, DescrizioneCPV, ImportoComplessivoGara, NrLottoComponenti, ImportoLotto, CodiceSceltaContraente, TipoSceltaContraente, DataPubblicazione, DataScadenzaOfferta, CFStazapp, CodiceEsito, Esito, DataAggiudicazioneDefinitiva, CriterioDiAggiudicazione, ImportoDiAggiudicazione, NumeroImpreseOfferenti, RibassoAggiudicazione, CodiceModalitaRealizzazione, ModalitaRealizzazione)
temptab_gare2 <- oggettogare_con_cig_unico %>% select(Cig, OggettoGara, OggettoLotto)
tab_gare <- inner_join(temptab_gare1, temptab_gare2)
## | SALVATAGGIO TABELLE FINALI
write.csv2(tab_staz_appaltanti,'.../tab_staz_appaltanti.csv', row.names=FALSE)
write.csv2(tab_gare,'.../tab_gare.csv', row.names=FALSE)
write.csv2(tab_aggiudicatari,'.../tab_aggiudicatari.csv', row.names=FALSE)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mega2genabeltst.R
\name{Mega2GenABELtst}
\alias{Mega2GenABELtst}
\title{compare two gwaa.data-class objects}
\usage{
Mega2GenABELtst(mega_ = mega, gwaa_ = srdta, full = TRUE, envir = ENV)
}
\arguments{
\item{mega_}{name of first gwaa.data-class object}
\item{gwaa_}{name of second gwaa.data-class object}
\item{full}{if TRUE convert genotypes to text as.character(gwaa_@gtdata)\cr and as.character(mega_@gtdata).
Then standardize the order for heterozygous alleles and finally compare.
This step is optional because it can be rather slow.}
\item{envir}{'environment' containing SQLite database and other globals}
}
\value{
None
}
\description{
Verify by fields, all the fields in two gwaa.data-class objects.
Show more detailed marker information iff the coding values are different. (When comparing
two gwaa.data-class objects, one native and one created via \bold{Mega2R} sometimes
when an allele frequency is .5 for both alleles, the allele order 1/2 vs 2/1 can not be
currently be determined.)
}
\examples{
\dontrun{
db = system.file("exdata", "seqsimm.db", package="Mega2R")
require("GenABEL")
ENV = read.Mega2DB(db)
y = Mega2ENVGenABEL()
Mega2GenABELtst(y, y, full = FALSE)
}
\dontrun{
# donttestcheck: if you have more time, try ...
x = Mega2GenABEL()
Mega2GenABELtst(x, y, full = FALSE)
}
}
| /fuzzedpackages/Mega2R/man/Mega2GenABELtst.Rd | no_license | akhikolla/testpackages | R | false | true | 1,388 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mega2genabeltst.R
\name{Mega2GenABELtst}
\alias{Mega2GenABELtst}
\title{compare two gwaa.data-class objects}
\usage{
Mega2GenABELtst(mega_ = mega, gwaa_ = srdta, full = TRUE, envir = ENV)
}
\arguments{
\item{mega_}{name of first gwaa.data-class object}
\item{gwaa_}{name of second gwaa.data-class object}
\item{full}{if TRUE convert genotypes to text as.character(gwaa_@gtdata)\cr and as.character(mega_@gtdata).
Then standardize the order for heterozygous alleles and finally compare.
This step is optional because it can be rather slow.}
\item{envir}{'environment' containing SQLite database and other globals}
}
\value{
None
}
\description{
Verify by fields, all the fields in two gwaa.data-class objects.
Show more detailed marker information iff the coding values are different. (When comparing
two gwaa.data-class objects, one native and one created via \bold{Mega2R} sometimes
when an allele frequency is .5 for both alleles, the allele order 1/2 vs 2/1 can not be
currently be determined.)
}
\examples{
\dontrun{
db = system.file("exdata", "seqsimm.db", package="Mega2R")
require("GenABEL")
ENV = read.Mega2DB(db)
y = Mega2ENVGenABEL()
Mega2GenABELtst(y, y, full = FALSE)
}
\dontrun{
# donttestcheck: if you have more time, try ...
x = Mega2GenABEL()
Mega2GenABELtst(x, y, full = FALSE)
}
}
|
## Shared functionality between application and presentation
simulateData <- function(choices, days, count, rate, boost, effectsize) {
# Determine Dates to be used for simulation
today <- Sys.Date()
startDate <- today - (days - 1)
dates <- seq.Date(startDate, today, by = 1)
num.choices <- length(choices)
total <- days * num.choices
# Generate the raw counts per day
counts <- matrix(data = rpois(total, count),
nrow = days,
ncol = num.choices)
# Calculate the success rate by converting percent to decimal and multiply
# by the count
successes <- matrix(rpois(total, count * rate / 100),
nrow = days,
ncol = num.choices)
# The choice that's selected for adjustment can now get a boost
index <- grep(boost, choices)
successes[,index] <- successes[,index] + successes[,index] * effectsize / 100
rates <- round(successes / counts * 100, 2)
# Manipulate the data so I can graph it how I want to
df <- data.frame(rates)
colnames(df) <- choices
df <- data.frame(date = dates, df)
df
}
run.ttest <- function(control, boost, data, choices) {
control.index <- grep(control, choices) + 1
boost.index <- grep(boost, choices) + 1
ttest <- t.test(data[,boost.index],
data[,control.index],
alternative = "greater")
ttest
}
parseChoices <- function(choices) {
choices <- strsplit(choices, split = ",")[[1]]
choices <- str_trim(choices)
choices
} | /app/common.R | permissive | daveaingram/experiment-planner | R | false | false | 1,607 | r | ## Shared functionality between application and presentation
simulateData <- function(choices, days, count, rate, boost, effectsize) {
# Determine Dates to be used for simulation
today <- Sys.Date()
startDate <- today - (days - 1)
dates <- seq.Date(startDate, today, by = 1)
num.choices <- length(choices)
total <- days * num.choices
# Generate the raw counts per day
counts <- matrix(data = rpois(total, count),
nrow = days,
ncol = num.choices)
# Calculate the success rate by converting percent to decimal and multiply
# by the count
successes <- matrix(rpois(total, count * rate / 100),
nrow = days,
ncol = num.choices)
# The choice that's selected for adjustment can now get a boost
index <- grep(boost, choices)
successes[,index] <- successes[,index] + successes[,index] * effectsize / 100
rates <- round(successes / counts * 100, 2)
# Manipulate the data so I can graph it how I want to
df <- data.frame(rates)
colnames(df) <- choices
df <- data.frame(date = dates, df)
df
}
run.ttest <- function(control, boost, data, choices) {
control.index <- grep(control, choices) + 1
boost.index <- grep(boost, choices) + 1
ttest <- t.test(data[,boost.index],
data[,control.index],
alternative = "greater")
ttest
}
parseChoices <- function(choices) {
choices <- strsplit(choices, split = ",")[[1]]
choices <- str_trim(choices)
choices
} |
stuff <- read.table("/Users/patrick/desktop/twinprimes.output.txt", header = TRUE)
nums = stuff[,1]
hist(nums, main="Histogram of Twin Primes (2 to 100,000)",
xlab="Twin prime pairs", breaks=500)
stuff <- read.table("/Users/patrick/desktop/twinprimes2.output.txt", header = TRUE)
nums2 = stuff[,1]
hist(nums2, main="Histogram of Twin Primes (3 to 100,000)",
xlab="Twin prime pairs", breaks=45)
| /r/histograms.R | no_license | patrickhsanders/twinprimes | R | false | false | 409 | r | stuff <- read.table("/Users/patrick/desktop/twinprimes.output.txt", header = TRUE)
nums = stuff[,1]
hist(nums, main="Histogram of Twin Primes (2 to 100,000)",
xlab="Twin prime pairs", breaks=500)
stuff <- read.table("/Users/patrick/desktop/twinprimes2.output.txt", header = TRUE)
nums2 = stuff[,1]
hist(nums2, main="Histogram of Twin Primes (3 to 100,000)",
xlab="Twin prime pairs", breaks=45)
|
# Exercise 5: dplyr grouped operations
# Install the `"nycflights13"` package. Load (`library()`) the package.
# You'll also need to load `dplyr`
#install.packages("nycflights13") # should be done already
library("nycflights13")
library("dplyr")
View(flights)
# What was the average departure delay in each month?
# Save this as a data frame `dep_delay_by_month`
# Hint: you'll have to perform a grouping operation then summarizing your data
avg_dep <- data.frame(flights %>%
group_by(month)) %>% summarize(mean(dep_delay, na.rm = TRUE),
stringAsFactors = FALSE)
print(avg_dep)
# Which month had the greatest average departure delay?
flights %>%
group_by(month) %>%
summarize(
avg_dep_delay = mean(dep_delay, na.rm = TRUE)
) %>%
filter(avg_dep_delay == max(avg_dep_delay))
# If your above data frame contains just two columns (e.g., "month", and "delay"
# in that order), you can create a scatterplot by passing that data frame to the
# `plot()` function
delay_by_month <- flights %>%
group_by(month) %>%
summariuze(
avg_dep_delay = mean(dep_delay, na.rm = TRUE)
)
plot(delay_by_month)
# To which destinations were the average arrival delays the highest?
# Hint: you'll have to perform a grouping operation then summarize your data
# You can use the `head()` function to view just the first few rows
flights %>%
group_by(dest) %>%
summarize(
avg_arr_delay = mean(arr_delay, na.rm = TRUE)
) %>%
filter(avg_arr_delay == max(avg_arr_delay, na.rm = TRUE))
# You can look up these airports in the `airports` data frame!
View(airports)
airports %>%
filter(faa == "CAE")
# Which city was flown to with the highest average speed?
flights %>%
group_by(dest) %>%
summarize(
avg_speed = mean(distance/air_time, na.rm = TRUE)
) %>%
filter(avg_speed == max(avg_speed, na.rm = TRUE))
| /chapter-11-exercises/exercise-5/exercise.R | permissive | emilyphantastic/book-exercises | R | false | false | 1,882 | r | # Exercise 5: dplyr grouped operations
# Install the `"nycflights13"` package. Load (`library()`) the package.
# You'll also need to load `dplyr`
#install.packages("nycflights13") # should be done already
library("nycflights13")
library("dplyr")
View(flights)
# What was the average departure delay in each month?
# Save this as a data frame `dep_delay_by_month`
# Hint: you'll have to perform a grouping operation then summarizing your data
avg_dep <- data.frame(flights %>%
group_by(month)) %>% summarize(mean(dep_delay, na.rm = TRUE),
stringAsFactors = FALSE)
print(avg_dep)
# Which month had the greatest average departure delay?
flights %>%
group_by(month) %>%
summarize(
avg_dep_delay = mean(dep_delay, na.rm = TRUE)
) %>%
filter(avg_dep_delay == max(avg_dep_delay))
# If your above data frame contains just two columns (e.g., "month", and "delay"
# in that order), you can create a scatterplot by passing that data frame to the
# `plot()` function
delay_by_month <- flights %>%
group_by(month) %>%
summariuze(
avg_dep_delay = mean(dep_delay, na.rm = TRUE)
)
plot(delay_by_month)
# To which destinations were the average arrival delays the highest?
# Hint: you'll have to perform a grouping operation then summarize your data
# You can use the `head()` function to view just the first few rows
flights %>%
group_by(dest) %>%
summarize(
avg_arr_delay = mean(arr_delay, na.rm = TRUE)
) %>%
filter(avg_arr_delay == max(avg_arr_delay, na.rm = TRUE))
# You can look up these airports in the `airports` data frame!
View(airports)
airports %>%
filter(faa == "CAE")
# Which city was flown to with the highest average speed?
flights %>%
group_by(dest) %>%
summarize(
avg_speed = mean(distance/air_time, na.rm = TRUE)
) %>%
filter(avg_speed == max(avg_speed, na.rm = TRUE))
|
# Install and load packages
package_names <- c("survey","dplyr","foreign","devtools")
lapply(package_names, function(x) if(!x %in% installed.packages()) install.packages(x))
lapply(package_names, require, character.only=T)
install_github("e-mitchell/meps_r_pkg/MEPS")
library(MEPS)
options(survey.lonely.psu="adjust")
# Load FYC file
FYC <- read.xport('C:/MEPS/.FYC..ssp');
year <- .year.
if(year <= 2001) FYC <- FYC %>% mutate(VARPSU = VARPSU.yy., VARSTR=VARSTR.yy.)
if(year <= 1998) FYC <- FYC %>% rename(PERWT.yy.F = WTDPER.yy.)
if(year == 1996) FYC <- FYC %>% mutate(AGE42X = AGE2X, AGE31X = AGE1X)
FYC <- FYC %>%
mutate_at(vars(starts_with("AGE")),funs(replace(., .< 0, NA))) %>%
mutate(AGELAST = coalesce(AGE.yy.X, AGE42X, AGE31X))
FYC$ind = 1
# Add aggregate event variables
FYC <- FYC %>% mutate(
HHTEXP.yy. = HHAEXP.yy. + HHNEXP.yy., # Home Health Agency + Independent providers
ERTEXP.yy. = ERFEXP.yy. + ERDEXP.yy., # Doctor + Facility Expenses for OP, ER, IP events
IPTEXP.yy. = IPFEXP.yy. + IPDEXP.yy.,
OPTEXP.yy. = OPFEXP.yy. + OPDEXP.yy., # All Outpatient
OPYEXP.yy. = OPVEXP.yy. + OPSEXP.yy., # Physician only
OPZEXP.yy. = OPOEXP.yy. + OPPEXP.yy., # Non-physician only
OMAEXP.yy. = VISEXP.yy. + OTHEXP.yy.) # Other medical equipment and services
FYC <- FYC %>% mutate(
TOTUSE.yy. = ((DVTOT.yy. > 0) + (RXTOT.yy. > 0) + (OBTOTV.yy. > 0) +
(OPTOTV.yy. > 0) + (ERTOT.yy. > 0) + (IPDIS.yy. > 0) +
(HHTOTD.yy. > 0) + (OMAEXP.yy. > 0))
)
# Perceived health status
if(year == 1996)
FYC <- FYC %>% mutate(RTHLTH53 = RTEHLTH2, RTHLTH42 = RTEHLTH2, RTHLTH31 = RTEHLTH1)
FYC <- FYC %>%
mutate_at(vars(starts_with("RTHLTH")), funs(replace(., .< 0, NA))) %>%
mutate(
health = coalesce(RTHLTH53, RTHLTH42, RTHLTH31),
health = recode_factor(health, .default = "Missing", .missing = "Missing",
"1" = "Excellent",
"2" = "Very good",
"3" = "Good",
"4" = "Fair",
"5" = "Poor"))
FYCdsgn <- svydesign(
id = ~VARPSU,
strata = ~VARSTR,
weights = ~PERWT.yy.F,
data = FYC,
nest = TRUE)
# Loop over event types
events <- c("TOT", "DVT", "RX", "OBV", "OBD", "OBO",
"OPT", "OPY", "OPZ", "ERT", "IPT", "HHT", "OMA")
results <- list()
for(ev in events) {
key <- paste0(ev, "EXP", ".yy.")
formula <- as.formula(sprintf("~%s", key))
results[[key]] <- svyby(formula, FUN = svyquantile, by = ~health, design = subset(FYCdsgn, FYC[[key]] > 0), quantiles=c(0.5), ci=T, method="constant")
}
print(results)
| /mepstrends/hc_use/json/code/r/medEXP__health__event__.r | permissive | RandomCriticalAnalysis/MEPS-summary-tables | R | false | false | 2,622 | r | # Install and load packages
package_names <- c("survey","dplyr","foreign","devtools")
lapply(package_names, function(x) if(!x %in% installed.packages()) install.packages(x))
lapply(package_names, require, character.only=T)
install_github("e-mitchell/meps_r_pkg/MEPS")
library(MEPS)
options(survey.lonely.psu="adjust")
# Load FYC file
FYC <- read.xport('C:/MEPS/.FYC..ssp');
year <- .year.
if(year <= 2001) FYC <- FYC %>% mutate(VARPSU = VARPSU.yy., VARSTR=VARSTR.yy.)
if(year <= 1998) FYC <- FYC %>% rename(PERWT.yy.F = WTDPER.yy.)
if(year == 1996) FYC <- FYC %>% mutate(AGE42X = AGE2X, AGE31X = AGE1X)
FYC <- FYC %>%
mutate_at(vars(starts_with("AGE")),funs(replace(., .< 0, NA))) %>%
mutate(AGELAST = coalesce(AGE.yy.X, AGE42X, AGE31X))
FYC$ind = 1
# Add aggregate event variables
FYC <- FYC %>% mutate(
HHTEXP.yy. = HHAEXP.yy. + HHNEXP.yy., # Home Health Agency + Independent providers
ERTEXP.yy. = ERFEXP.yy. + ERDEXP.yy., # Doctor + Facility Expenses for OP, ER, IP events
IPTEXP.yy. = IPFEXP.yy. + IPDEXP.yy.,
OPTEXP.yy. = OPFEXP.yy. + OPDEXP.yy., # All Outpatient
OPYEXP.yy. = OPVEXP.yy. + OPSEXP.yy., # Physician only
OPZEXP.yy. = OPOEXP.yy. + OPPEXP.yy., # Non-physician only
OMAEXP.yy. = VISEXP.yy. + OTHEXP.yy.) # Other medical equipment and services
FYC <- FYC %>% mutate(
TOTUSE.yy. = ((DVTOT.yy. > 0) + (RXTOT.yy. > 0) + (OBTOTV.yy. > 0) +
(OPTOTV.yy. > 0) + (ERTOT.yy. > 0) + (IPDIS.yy. > 0) +
(HHTOTD.yy. > 0) + (OMAEXP.yy. > 0))
)
# Perceived health status
if(year == 1996)
FYC <- FYC %>% mutate(RTHLTH53 = RTEHLTH2, RTHLTH42 = RTEHLTH2, RTHLTH31 = RTEHLTH1)
FYC <- FYC %>%
mutate_at(vars(starts_with("RTHLTH")), funs(replace(., .< 0, NA))) %>%
mutate(
health = coalesce(RTHLTH53, RTHLTH42, RTHLTH31),
health = recode_factor(health, .default = "Missing", .missing = "Missing",
"1" = "Excellent",
"2" = "Very good",
"3" = "Good",
"4" = "Fair",
"5" = "Poor"))
FYCdsgn <- svydesign(
id = ~VARPSU,
strata = ~VARSTR,
weights = ~PERWT.yy.F,
data = FYC,
nest = TRUE)
# Loop over event types
events <- c("TOT", "DVT", "RX", "OBV", "OBD", "OBO",
"OPT", "OPY", "OPZ", "ERT", "IPT", "HHT", "OMA")
results <- list()
for(ev in events) {
key <- paste0(ev, "EXP", ".yy.")
formula <- as.formula(sprintf("~%s", key))
results[[key]] <- svyby(formula, FUN = svyquantile, by = ~health, design = subset(FYCdsgn, FYC[[key]] > 0), quantiles=c(0.5), ci=T, method="constant")
}
print(results)
|
#!/usr/bin/env Rscript
args = commandArgs(trailingOnly=TRUE)
pop <- "Indian"
n.causal <- 100
specificity <- 0
snr <- 0.05
fold <- 1
pop <- args[1]
n.causal <- as.integer(args[2])
specificity <- as.integer(args[3])/100
snr <- as.integer(args[4])/100
fold <- as.integer(args[5])
## Load libraries
suppressMessages(library(tidyverse))
source("utils.R")
pop.label <- pop
if(length(str_split(pop, "-small")[[1]])==2) {
pop = str_split(pop, "-small")[[1]][1]
}
oak <- "/oak/stanford/groups/candes/matteo/transfer_knockoffs"
scratch <- "/scratch/groups/candes/matteo/transfer_knockoffs"
## Load partition information
in.file <- "/oak/stanford/groups/candes/popstruct/data/partitions/hap_chr1.txt"
Partitions <- read_delim(in.file, delim=" ")
Groups <- Partitions %>% mutate(Group = `res_5`) %>% select(SNP, Group)
## Load prior information from British samples
prior.file <- sprintf("%s/stats/lasso_res5_n%s_snr%s_s%s_%s_fold%d.txt", oak, n.causal, round(100*snr), round(100*specificity), "Everyone", fold)
Beta.prior <- read_delim(prior.file, delim=" ", col_types=cols()) %>%
mutate(Knockoff=endsWith(SNP, ".k"), SNP=str_replace(SNP, ".k", "")) %>%
full_join(Groups) %>%
mutate(Z = ifelse(is.na(Z), 0, Z), Beta = ifelse(is.na(Beta), 0, Beta)) %>%
group_by(CHR, Group, SNP) %>%
summarise(Z = sum(abs(Z)))
## Compute knockoff statistics
Stats.prior <- Beta.prior %>%
mutate(Knockoff=endsWith(SNP, ".k"), SNP=str_replace(SNP, ".k", "")) %>%
left_join(Groups) %>%
group_by(CHR, Group, Knockoff) %>%
summarise(SNP.lead=SNP[which.max(abs(Z))], Z=sum(abs(Z))) %>%
group_by(CHR, Group) %>%
summarise(SNP.lead=SNP.lead[1], W.prior=sum(Z[!Knockoff])-sum(Z[Knockoff])) %>%
ungroup() %>%
arrange(desc(abs(W.prior))) %>%
select(-SNP.lead)
## Load importance measures
stats.file <- sprintf("%s/stats/lasso_res5_n%s_snr%s_s%s_%s_fold%d.txt", oak, n.causal, round(100*snr), round(100*specificity), pop.label, fold)
Beta <- read_delim(stats.file, delim=" ")
## Compute knockoff statistics
Stats.new <- Beta %>%
mutate(Knockoff=endsWith(SNP, ".k"), SNP=str_replace(SNP, ".k", "")) %>%
left_join(Groups, by="SNP") %>%
group_by(CHR, Group, Knockoff) %>%
summarise(SNP.lead=SNP[which.max(abs(Z))], Z=sum(abs(Z))) %>%
group_by(CHR, Group) %>%
summarise(SNP=SNP.lead[1], W.new=sum(Z[!Knockoff])-sum(Z[Knockoff])) %>%
ungroup() %>%
arrange(desc(abs(W.new)))
## Load list of causal variants
in.file <- sprintf("%s/data/causal/causal_n%s_s%s.txt", oak, n.causal, round(100*specificity))
Causal <- read_delim(in.file, delim=" ") %>%
mutate(Causal = TRUE) %>%
mutate(SNP.c = SNP) %>%
select(CHR, Group, SNP.c, Causal)
##
theta.list <- seq(0.1,0.9,by=0.1)
Results <- do.call("rbind", lapply(theta.list, function(theta) {
## Combine stats
Stats <- left_join(Stats.new, Stats.prior)%>%
mutate(W = (1-theta)*abs(W.new) + theta*abs(W.prior), W = W*sign(W.new),
W = ifelse(is.na(W), 0, W)) %>%
arrange(desc(abs(W)))
## Apply knockoff filter
Discoveries <- Stats %>%
mutate(Threshold = knockoff.threshold(W, fdr=0.1, offset=1)) %>%
filter(W>Threshold)
## Evaluate discoveries
df <- Causal %>%
full_join(Discoveries) %>%
mutate(Causal = ifelse(is.na(Causal), FALSE, Causal)) %>%
mutate(Discovered = ifelse(is.na(W), FALSE, TRUE)) %>%
mutate(Theta = theta)
res.fdp <- df %>%
filter(Discovered) %>%
summarise(FDP = mean(!Causal)) %>%
mutate(FDP = ifelse(is.nan(FDP), 0, FDP))
res.pow <- df %>%
filter(Causal) %>%
summarise(Power = mean(Discovered))
cat(sprintf("Theta = %.2f, Power: %.2f, FDP: %.2f\n", theta, res.pow, res.fdp))
return(df)
}))
## Save list of discoveries
out.file <- sprintf("%s/discoveries_others/discoveries_transfer_2_res5_n%s_snr%s_s%s_%s_fold%d.txt",
oak, n.causal, round(100*snr), round(100*specificity), pop.label, fold)
Results %>% write_delim(out.file, delim=" ")
cat(sprintf("List of discoveries written to:\n %s\n", out.file))
| /numerical_experiments_gwas/filter_linear.R | no_license | lsn235711/transfer_knockoffs_code | R | false | false | 3,986 | r | #!/usr/bin/env Rscript
args = commandArgs(trailingOnly=TRUE)
pop <- "Indian"
n.causal <- 100
specificity <- 0
snr <- 0.05
fold <- 1
pop <- args[1]
n.causal <- as.integer(args[2])
specificity <- as.integer(args[3])/100
snr <- as.integer(args[4])/100
fold <- as.integer(args[5])
## Load libraries
suppressMessages(library(tidyverse))
source("utils.R")
pop.label <- pop
if(length(str_split(pop, "-small")[[1]])==2) {
pop = str_split(pop, "-small")[[1]][1]
}
oak <- "/oak/stanford/groups/candes/matteo/transfer_knockoffs"
scratch <- "/scratch/groups/candes/matteo/transfer_knockoffs"
## Load partition information
in.file <- "/oak/stanford/groups/candes/popstruct/data/partitions/hap_chr1.txt"
Partitions <- read_delim(in.file, delim=" ")
Groups <- Partitions %>% mutate(Group = `res_5`) %>% select(SNP, Group)
## Load prior information from British samples
prior.file <- sprintf("%s/stats/lasso_res5_n%s_snr%s_s%s_%s_fold%d.txt", oak, n.causal, round(100*snr), round(100*specificity), "Everyone", fold)
Beta.prior <- read_delim(prior.file, delim=" ", col_types=cols()) %>%
mutate(Knockoff=endsWith(SNP, ".k"), SNP=str_replace(SNP, ".k", "")) %>%
full_join(Groups) %>%
mutate(Z = ifelse(is.na(Z), 0, Z), Beta = ifelse(is.na(Beta), 0, Beta)) %>%
group_by(CHR, Group, SNP) %>%
summarise(Z = sum(abs(Z)))
## Compute knockoff statistics
Stats.prior <- Beta.prior %>%
mutate(Knockoff=endsWith(SNP, ".k"), SNP=str_replace(SNP, ".k", "")) %>%
left_join(Groups) %>%
group_by(CHR, Group, Knockoff) %>%
summarise(SNP.lead=SNP[which.max(abs(Z))], Z=sum(abs(Z))) %>%
group_by(CHR, Group) %>%
summarise(SNP.lead=SNP.lead[1], W.prior=sum(Z[!Knockoff])-sum(Z[Knockoff])) %>%
ungroup() %>%
arrange(desc(abs(W.prior))) %>%
select(-SNP.lead)
## Load importance measures
stats.file <- sprintf("%s/stats/lasso_res5_n%s_snr%s_s%s_%s_fold%d.txt", oak, n.causal, round(100*snr), round(100*specificity), pop.label, fold)
Beta <- read_delim(stats.file, delim=" ")
## Compute knockoff statistics
Stats.new <- Beta %>%
mutate(Knockoff=endsWith(SNP, ".k"), SNP=str_replace(SNP, ".k", "")) %>%
left_join(Groups, by="SNP") %>%
group_by(CHR, Group, Knockoff) %>%
summarise(SNP.lead=SNP[which.max(abs(Z))], Z=sum(abs(Z))) %>%
group_by(CHR, Group) %>%
summarise(SNP=SNP.lead[1], W.new=sum(Z[!Knockoff])-sum(Z[Knockoff])) %>%
ungroup() %>%
arrange(desc(abs(W.new)))
## Load list of causal variants
in.file <- sprintf("%s/data/causal/causal_n%s_s%s.txt", oak, n.causal, round(100*specificity))
Causal <- read_delim(in.file, delim=" ") %>%
mutate(Causal = TRUE) %>%
mutate(SNP.c = SNP) %>%
select(CHR, Group, SNP.c, Causal)
##
theta.list <- seq(0.1,0.9,by=0.1)
Results <- do.call("rbind", lapply(theta.list, function(theta) {
## Combine stats
Stats <- left_join(Stats.new, Stats.prior)%>%
mutate(W = (1-theta)*abs(W.new) + theta*abs(W.prior), W = W*sign(W.new),
W = ifelse(is.na(W), 0, W)) %>%
arrange(desc(abs(W)))
## Apply knockoff filter
Discoveries <- Stats %>%
mutate(Threshold = knockoff.threshold(W, fdr=0.1, offset=1)) %>%
filter(W>Threshold)
## Evaluate discoveries
df <- Causal %>%
full_join(Discoveries) %>%
mutate(Causal = ifelse(is.na(Causal), FALSE, Causal)) %>%
mutate(Discovered = ifelse(is.na(W), FALSE, TRUE)) %>%
mutate(Theta = theta)
res.fdp <- df %>%
filter(Discovered) %>%
summarise(FDP = mean(!Causal)) %>%
mutate(FDP = ifelse(is.nan(FDP), 0, FDP))
res.pow <- df %>%
filter(Causal) %>%
summarise(Power = mean(Discovered))
cat(sprintf("Theta = %.2f, Power: %.2f, FDP: %.2f\n", theta, res.pow, res.fdp))
return(df)
}))
## Save list of discoveries
out.file <- sprintf("%s/discoveries_others/discoveries_transfer_2_res5_n%s_snr%s_s%s_%s_fold%d.txt",
oak, n.causal, round(100*snr), round(100*specificity), pop.label, fold)
Results %>% write_delim(out.file, delim=" ")
cat(sprintf("List of discoveries written to:\n %s\n", out.file))
|
####################
## Author: Stephanie Teeple
## Date: 1 December 2018
## Summary: This file merges NIH ExPORTER
## project and publication data for upload
## into IRIS' VDE.
####################
rm(list = ls())
# libraries
# devtools::install_github("jayhesselberth/nihexporter")
# libraries(nihexporter) #ExPORTER data through 2016
devtools::install_github("ikashnitsky/sjrdata")
library(sjrdata)
library(dplyr)
library(tidyr)
# 1. Download NIH data from ExPORTER and merge each year's files together
# (projects, publications, and the linking tables). Add to list.
# For many rows, the SCImago variable 'Issn' actually contains two
# ISSNS - one print and one web version. The order is not consistent
# (some rows have web first, then print, and vice versa).
setwd("C:/Users/steeple/Dropbox/EPID_600/final_project_data")
merged <- NULL
pubs <- NULL
link <- NULL
data <- list()
for (i in 2001:2017) {
# Projects
temp <- tempfile()
urli <- paste0("https://exporter.nih.gov/CSVs/final/RePORTER_PRJ_C_FY", i, ".zip")
filenamei <- paste0("RePORTER_PRJ_C_FY", i, ".csv")
download.file(urli, temp, mode = "wb")
print(paste("download projects", i))
unzip(temp, filenamei)
print(paste("unzip projects", i))
projects <- read.csv(filenamei, sep = ",", header = TRUE, fill = TRUE,
comment.char = "", colClasses = "character", row.names = NULL)
projects <- select(projects, ACTIVITY, APPLICATION_ID, BUDGET_START, BUDGET_END,
CORE_PROJECT_NUM, FULL_PROJECT_NUM, FY, ORG_NAME, PI_IDS,
PROJECT_TITLE, STUDY_SECTION_NAME, SUPPORT_YEAR, TOTAL_COST)
projects <- filter(projects, grepl("R|F|K|T|P", ACTIVITY)) # Filter on the grants you're interested in
projects$long_PIs <- nchar(projects$PI_IDS) # Exclude projects with more than one PI
projects <- filter(projects, long_PIs <10)
projects$missing_cost <- nchar(projects$TOTAL_COST)
projects <- filter(projects, missing_cost > 0) # Exclude subprojects and projects with missing cost data
# Pubs
temp <- tempfile()
urli <- paste0("https://exporter.nih.gov/CSVs/final/RePORTER_PUB_C_", i, ".zip")
filenamei <- paste0("RePORTER_PUB_C_", i, ".csv")
download.file(urli, temp, mode = "wb")
print(paste("download pubs", i))
unzip(temp, filenamei)
print(paste("unzip pubs", i))
pubs <- read.csv(filenamei, sep = ",", header = TRUE, fill = TRUE,
comment.char = "", colClasses = "character", row.names = NULL)
pubs$ISSN <- gsub("-", "", pubs$ISSN)
pubs <- select(pubs, ISSN, JOURNAL_TITLE, PMC_ID, PMID, PUB_DATE, PUB_TITLE, PUB_YEAR)
# Links
temp <- tempfile()
urli <- paste0("https://exporter.nih.gov/CSVs/final/RePORTER_PUBLNK_C_", i, ".zip")
filenamei <- paste0("RePORTER_PUBLNK_C_", i, ".csv")
download.file(urli, temp, mode = "wb")
print(paste("download links", i))
unzip(temp, filenamei)
print(paste("unzip links", i))
link <- read.csv(filenamei, sep = ",", header = TRUE, fill = TRUE,
comment.char = "", colClasses = "character", row.names = NULL)
# SCImago
# TODO: pipe this
scimago <- filter(sjr_journals, sjr_journals$year == i)
scimago <- rename(scimago, impact_factor = cites_doc_2years)
scimago <- select(scimago, title, type, issn, impact_factor, year)
scimago <- separate(scimago, col = issn, into = c("ISSN_1", "ISSN_2"), sep = ", ")
scimago <- gather(scimago, key = orig_order, value = ISSN, ISSN_1, ISSN_2)
scimago <- scimago[!is.na(scimago$ISSN),]
# Merge
link <- inner_join(pubs, link, by = "PMID")
link <- inner_join(link, projects,
by = c("PROJECT_NUMBER" = "CORE_PROJECT_NUM"))
# NOTE - approximately 1/3 of publications do not merge with SCImago data.
# ~660,000 of the NIH Exporter pubs don't have an ISSN associated with them.
merged[[i]] <- left_join(link, scimago, by = "ISSN")
}
# 2. Rowbind each of the 17 merged files in list 'merged' together.
data <- bind_rows(merged)
rm(merged)
# NOTE: SCImago does not contain information for 6114 of the 14235 publications
# included in the NIH ExPorter data. The amounts to 1079202 observations of the
# original 3879361.
| /00_data_prep.R | no_license | stephteeple/BMIN503_Final_Project | R | false | false | 4,219 | r | ####################
## Author: Stephanie Teeple
## Date: 1 December 2018
## Summary: This file merges NIH ExPORTER
## project and publication data for upload
## into IRIS' VDE.
####################
rm(list = ls())
# libraries
# devtools::install_github("jayhesselberth/nihexporter")
# libraries(nihexporter) #ExPORTER data through 2016
devtools::install_github("ikashnitsky/sjrdata")
library(sjrdata)
library(dplyr)
library(tidyr)
# 1. Download NIH data from ExPORTER and merge each year's files together
# (projects, publications, and the linking tables). Add to list.
# For many rows, the SCImago variable 'Issn' actually contains two
# ISSNS - one print and one web version. The order is not consistent
# (some rows have web first, then print, and vice versa).
setwd("C:/Users/steeple/Dropbox/EPID_600/final_project_data")
merged <- NULL
pubs <- NULL
link <- NULL
data <- list()
for (i in 2001:2017) {
# Projects
temp <- tempfile()
urli <- paste0("https://exporter.nih.gov/CSVs/final/RePORTER_PRJ_C_FY", i, ".zip")
filenamei <- paste0("RePORTER_PRJ_C_FY", i, ".csv")
download.file(urli, temp, mode = "wb")
print(paste("download projects", i))
unzip(temp, filenamei)
print(paste("unzip projects", i))
projects <- read.csv(filenamei, sep = ",", header = TRUE, fill = TRUE,
comment.char = "", colClasses = "character", row.names = NULL)
projects <- select(projects, ACTIVITY, APPLICATION_ID, BUDGET_START, BUDGET_END,
CORE_PROJECT_NUM, FULL_PROJECT_NUM, FY, ORG_NAME, PI_IDS,
PROJECT_TITLE, STUDY_SECTION_NAME, SUPPORT_YEAR, TOTAL_COST)
projects <- filter(projects, grepl("R|F|K|T|P", ACTIVITY)) # Filter on the grants you're interested in
projects$long_PIs <- nchar(projects$PI_IDS) # Exclude projects with more than one PI
projects <- filter(projects, long_PIs <10)
projects$missing_cost <- nchar(projects$TOTAL_COST)
projects <- filter(projects, missing_cost > 0) # Exclude subprojects and projects with missing cost data
# Pubs
temp <- tempfile()
urli <- paste0("https://exporter.nih.gov/CSVs/final/RePORTER_PUB_C_", i, ".zip")
filenamei <- paste0("RePORTER_PUB_C_", i, ".csv")
download.file(urli, temp, mode = "wb")
print(paste("download pubs", i))
unzip(temp, filenamei)
print(paste("unzip pubs", i))
pubs <- read.csv(filenamei, sep = ",", header = TRUE, fill = TRUE,
comment.char = "", colClasses = "character", row.names = NULL)
pubs$ISSN <- gsub("-", "", pubs$ISSN)
pubs <- select(pubs, ISSN, JOURNAL_TITLE, PMC_ID, PMID, PUB_DATE, PUB_TITLE, PUB_YEAR)
# Links
temp <- tempfile()
urli <- paste0("https://exporter.nih.gov/CSVs/final/RePORTER_PUBLNK_C_", i, ".zip")
filenamei <- paste0("RePORTER_PUBLNK_C_", i, ".csv")
download.file(urli, temp, mode = "wb")
print(paste("download links", i))
unzip(temp, filenamei)
print(paste("unzip links", i))
link <- read.csv(filenamei, sep = ",", header = TRUE, fill = TRUE,
comment.char = "", colClasses = "character", row.names = NULL)
# SCImago
# TODO: pipe this
scimago <- filter(sjr_journals, sjr_journals$year == i)
scimago <- rename(scimago, impact_factor = cites_doc_2years)
scimago <- select(scimago, title, type, issn, impact_factor, year)
scimago <- separate(scimago, col = issn, into = c("ISSN_1", "ISSN_2"), sep = ", ")
scimago <- gather(scimago, key = orig_order, value = ISSN, ISSN_1, ISSN_2)
scimago <- scimago[!is.na(scimago$ISSN),]
# Merge
link <- inner_join(pubs, link, by = "PMID")
link <- inner_join(link, projects,
by = c("PROJECT_NUMBER" = "CORE_PROJECT_NUM"))
# NOTE - approximately 1/3 of publications do not merge with SCImago data.
# ~660,000 of the NIH Exporter pubs don't have an ISSN associated with them.
merged[[i]] <- left_join(link, scimago, by = "ISSN")
}
# 2. Rowbind each of the 17 merged files in list 'merged' together.
data <- bind_rows(merged)
rm(merged)
# NOTE: SCImago does not contain information for 6114 of the 14235 publications
# included in the NIH ExPorter data. The amounts to 1079202 observations of the
# original 3879361.
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/compute_pairwise_norm.R
\name{compute_pairwise_norm}
\alias{compute_pairwise_norm}
\title{Title Compute normalised mmpd}
\usage{
compute_pairwise_norm(
.data,
gran_x = NULL,
gran_facet = NULL,
response = NULL,
quantile_prob = seq(0.01, 0.99, 0.01),
dist_ordered = TRUE,
lambda = 0.67,
nperm = 100,
seed = 9000
)
}
\arguments{
\item{.data}{data for which mmpd needs to be calculated}
\item{gran_x}{granularities mapped across x levels}
\item{gran_facet}{granularities mapped across facetss}
\item{response}{univarite response variable}
\item{quantile_prob}{probabilities}
\item{dist_ordered}{if categories are ordered}
\item{nperm}{number of permutations for normalization}
\item{seed}{seed considered}
}
\value{
weighted pairwise distances normalized using permutation approach
}
\description{
Title Compute normalised mmpd
}
\examples{
library(tidyverse)
library(gravitas)
library(parallel)
sm <- smart_meter10 \%>\%
dplyr::filter(customer_id \%in\% c("10017936"))
gran_x <- "week_month"
gran_facet <- "wknd_wday"
v <- compute_pairwise_norm(sm, gran_x, gran_facet,
response = general_supply_kwh, nperm = 20, lambda = 0.9
)
# month of the year not working in this setup
}
| /man/compute_pairwise_norm.Rd | no_license | Sayani07/hakear | R | false | true | 1,279 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/compute_pairwise_norm.R
\name{compute_pairwise_norm}
\alias{compute_pairwise_norm}
\title{Title Compute normalised mmpd}
\usage{
compute_pairwise_norm(
.data,
gran_x = NULL,
gran_facet = NULL,
response = NULL,
quantile_prob = seq(0.01, 0.99, 0.01),
dist_ordered = TRUE,
lambda = 0.67,
nperm = 100,
seed = 9000
)
}
\arguments{
\item{.data}{data for which mmpd needs to be calculated}
\item{gran_x}{granularities mapped across x levels}
\item{gran_facet}{granularities mapped across facetss}
\item{response}{univarite response variable}
\item{quantile_prob}{probabilities}
\item{dist_ordered}{if categories are ordered}
\item{nperm}{number of permutations for normalization}
\item{seed}{seed considered}
}
\value{
weighted pairwise distances normalized using permutation approach
}
\description{
Title Compute normalised mmpd
}
\examples{
library(tidyverse)
library(gravitas)
library(parallel)
sm <- smart_meter10 \%>\%
dplyr::filter(customer_id \%in\% c("10017936"))
gran_x <- "week_month"
gran_facet <- "wknd_wday"
v <- compute_pairwise_norm(sm, gran_x, gran_facet,
response = general_supply_kwh, nperm = 20, lambda = 0.9
)
# month of the year not working in this setup
}
|
#!usr/bin/env Rscript
library(methods)
library(Matrix)
library(MASS)
#library(Rcpp)
library(lme4)
# Read in your data as an R dataframe
basedir <- c("/seastor/helenhelen/ISR_2015")
resultdir <- paste(basedir,"/me/tmap/results/ln",sep="/")
setwd(resultdir)
r.itemInfo <- matrix(data=NA, nr=2, nc=4)
## read data
#get data for each trial
item_file <- paste(basedir,"/me/tmap/data/item/ln.txt",sep="")
item_data <- read.table(item_file,header=FALSE)
load(paste("/home/helenhelen/DQ/project/gitrepo/ISR_2015/ROI_based/me/col_names.Rda",sep=""))
colnames(item_data) <- col_names
item_data$subid <- as.factor(item_data$subid)
item_data$pid <- as.factor(item_data$pid)
subdata <- item_data
itemInfo_actmean <- lmer(LANG_rsadiff~PRC_actmean+(1+PRC_actmean|subid),REML=FALSE,data=subdata)
itemInfo_actmean.null <- lmer(LANG_rsadiff~1+(1+PRC_actmean|subid),REML=FALSE,data=subdata)
itemInfo_di <- lmer(LANG_rsadiff~PRC_actmean+(1+PRC_rsadiff|subid),REML=FALSE,data=subdata)
itemInfo_di.null <- lmer(LANG_rsadiff~1+(1+PRC_rsadiff|subid),REML=FALSE,data=subdata)
mainEffect.itemInfo_actmean <- anova(itemInfo_actmean,itemInfo_actmean.null)
r.itemInfo[1,1]=mainEffect.itemInfo_actmean[2,6]
r.itemInfo[1,2]=mainEffect.itemInfo_actmean[2,7]
r.itemInfo[1,3]=mainEffect.itemInfo_actmean[2,8]
r.itemInfo[1,4]=fixef(itemInfo_actmean)[2];
mainEffect.itemInfo_di <- anova(itemInfo_di,itemInfo_di.null)
r.itemInfo[2,1]=mainEffect.itemInfo_di[2,6]
r.itemInfo[2,2]=mainEffect.itemInfo_di[2,7]
r.itemInfo[2,3]=mainEffect.itemInfo_di[2,8]
r.itemInfo[2,4]=fixef(itemInfo_di)[2];
write.matrix(r.itemInfo,file="itemInfso_LANG_PRC.txt",sep="\t")
| /ROI_based/me/ln/itemInfo_LANG_PRC.R | no_license | QQXiao/ISR_2015 | R | false | false | 1,619 | r | #!usr/bin/env Rscript
library(methods)
library(Matrix)
library(MASS)
#library(Rcpp)
library(lme4)
# Read in your data as an R dataframe
basedir <- c("/seastor/helenhelen/ISR_2015")
resultdir <- paste(basedir,"/me/tmap/results/ln",sep="/")
setwd(resultdir)
r.itemInfo <- matrix(data=NA, nr=2, nc=4)
## read data
#get data for each trial
item_file <- paste(basedir,"/me/tmap/data/item/ln.txt",sep="")
item_data <- read.table(item_file,header=FALSE)
load(paste("/home/helenhelen/DQ/project/gitrepo/ISR_2015/ROI_based/me/col_names.Rda",sep=""))
colnames(item_data) <- col_names
item_data$subid <- as.factor(item_data$subid)
item_data$pid <- as.factor(item_data$pid)
subdata <- item_data
itemInfo_actmean <- lmer(LANG_rsadiff~PRC_actmean+(1+PRC_actmean|subid),REML=FALSE,data=subdata)
itemInfo_actmean.null <- lmer(LANG_rsadiff~1+(1+PRC_actmean|subid),REML=FALSE,data=subdata)
itemInfo_di <- lmer(LANG_rsadiff~PRC_actmean+(1+PRC_rsadiff|subid),REML=FALSE,data=subdata)
itemInfo_di.null <- lmer(LANG_rsadiff~1+(1+PRC_rsadiff|subid),REML=FALSE,data=subdata)
mainEffect.itemInfo_actmean <- anova(itemInfo_actmean,itemInfo_actmean.null)
r.itemInfo[1,1]=mainEffect.itemInfo_actmean[2,6]
r.itemInfo[1,2]=mainEffect.itemInfo_actmean[2,7]
r.itemInfo[1,3]=mainEffect.itemInfo_actmean[2,8]
r.itemInfo[1,4]=fixef(itemInfo_actmean)[2];
mainEffect.itemInfo_di <- anova(itemInfo_di,itemInfo_di.null)
r.itemInfo[2,1]=mainEffect.itemInfo_di[2,6]
r.itemInfo[2,2]=mainEffect.itemInfo_di[2,7]
r.itemInfo[2,3]=mainEffect.itemInfo_di[2,8]
r.itemInfo[2,4]=fixef(itemInfo_di)[2];
write.matrix(r.itemInfo,file="itemInfso_LANG_PRC.txt",sep="\t")
|
library(raster)
# Get PTHA routines
ptha18 = new.env()
.file_nci = '/g/data/w85/tsunami/CODE/gadi/ptha/ptha_access/get_PTHA_results.R'
.file_home = '../../../../../../AustPTHA/CODE/ptha/ptha_access/get_PTHA_results.R'
source(ifelse(file.exists(.file_nci), .file_nci, .file_home), local=ptha18, chdir=TRUE)
#' Given max-depth matrices and scenario rates, compute rate that a
#' depth_threshold is exceeded.
#'
#' NA values in the max-depth matrix will be treated as dry.
#'
#' @param included_indices a vector of non-repeated indices in
#' 1:length(scenario_rates) giving the rasters to include. This CAN be used for
#' splitting the calculation in parallel (sum chunks in parallel, then sum the
#' final result in serial). A serial calculation would use 1:length(scenario_rates).
#' It often may be more efficient to use coarser-grained parallelism (i.e. calling
#' this routine in serial, with different cores treating different domains).
#' @param max_depth_files A list of rasters containing the max_depth (one for
#' each entry of tarred_multidomain_dirs).
#' @param scenario_rates A vector with the individual scenario rates for each
#' entry of max_depth_files
#' @param depth_threshold The function will compute the exceedance rate of
#' (depth > depth_threshold), where 'depth' might be some other quantity (whatever is
#' stored in the max_depth_files).
#' @param print_progress Print index as each raster is processed
#'
get_exceedance_rate_at_threshold_depth<-function(included_indices,
max_depth_files, scenario_rates, depth_threshold, print_progress=FALSE){
stopifnot(length(scenario_rates) == length(max_depth_files))
stopifnot(length(included_indices) == length(unique(included_indices)))
stopifnot( (min(included_indices) >= 1) &
(max(included_indices) <= length(max_depth_files)) )
stopifnot(length(depth_threshold) == 1)
local_max_depth_files = max_depth_files[included_indices]
local_scenario_rates = scenario_rates[included_indices]
# Sum [scenario_rate * above_depth_threshold] for all scenarios
#r1 = as.matrix(terra::rast(local_max_depth_files[1]), wide=TRUE)
r1 = as.matrix(raster(local_max_depth_files[1]))
#target_dim = terra::dim(r1)
target_dim = dim(r1)
rm(r1)
ex_rate = matrix(0, ncol=target_dim[2], nrow=target_dim[1])
for(i in 1:length(local_scenario_rates)){
if(print_progress) print(i)
# Read the raster
#x_mat = as.matrix(terra::rast(local_max_depth_files[i]), wide=TRUE)
x_mat = as.matrix(raster(local_max_depth_files[i]))
stopifnot(all(dim(x_mat) == dim(ex_rate)))
# Make a raster that is 1 where we exceed the depth threshold, and 0 elsewhere
# (including at NA sites)
out = 1 - (x_mat <= depth_threshold)
out[is.na(out)] = 0
# Sum the exceedance-rates
ex_rate = ex_rate + local_scenario_rates[i] * out
rm(out, x_mat)
}
gc()
return(ex_rate)
}
#' Exceedance-rates on one tile, for one source-zone representations.
#'
#' Compute 'exceedance-rate' raster for the given domain index and the
#' given 'scenarios_name' (which corresponds either to an unsegmented source-zone representation,
#' or a segment)
#'
#' @param input_domain_index_and_scenarios_name List of the form list(domain_index=3, scenarios_name='unsegmented_HS'),
#' where the domain_index is the integer in the domain tif file (inside a tar archive), and scenarios_name is one
#' entry of names(scenario_databases)
#' @param tarred_multidomain_dirs Vector of tarred multidomain directories
#' @param scenario_databases List with scenario databases
#' @param output_directory Directory in which tiffs will be saved, relative to working directory.
#' @param depth_thresholds_for_exceedance_rate_calculations Vector of depths for which we compute exceedance-rates.
#' @param raster_name_start leading characters of the rasters to be read in
#' each tarred raster dir, before the domain_index, e.g.'max_stage_domain_'
#' @param print_progress_debug Print index as each raster is processed
#' @return Zero on successful completions -- but as a side-effect it creates the exceedance-rate raster.
compute_exceedance_rates_on_tile<-function(
input_domain_index_and_scenarios_name,
tarred_multidomain_dirs,
scenario_databases,
output_directory,
depth_thresholds_for_exceedance_rate_calculations,
raster_name_start,
print_progress_debug=FALSE){
# To ease parallel calculation we take the input parameters in a list, and
# unpack here.
domain_index = input_domain_index_and_scenarios_name$domain_index
scenarios_name = input_domain_index_and_scenarios_name$scenarios_name
# Useful to have one of the rasters ready as a template [to help with data
# The raster depth file associated with each md_dir. There should only be one
# per md_dir. All rasters must have the same extent and resolution.
raster_files_one_domain = paste0('/vsitar/', dirname(tarred_multidomain_dirs),
"/raster_output_files.tar/", raster_name_start,
domain_index, ".tif")
raster_template = raster(raster_files_one_domain[1])
#raster_template = terra::rast(raster_files_one_domain[1])
# This text will appear in the filename of all output rasters
output_raster_name_tag = paste0('domain_', domain_index, '_', raster_name_start)
# Map the rows of the database to the rasters
ind = match(dirname(scenario_databases[[scenarios_name]]$md_dir),
gsub('/vsitar/', '', dirname(dirname(raster_files_one_domain)), fixed=TRUE) )
# Make rates for each raster.
scenario_rates = rep(0, length(raster_files_one_domain))
for(i in 1:length(ind)){
# Here we loop over the scenario_database, and add the rate from the
# table to scenario_rates. Notice this automatically treats double
# counts, etc. Here we are only treating a single segment (or single
# unsegmented) source at once
scenario_rates[ind[i]] = scenario_rates[ind[i]] +
scenario_databases[[scenarios_name]]$importance_sampling_scenario_rates_basic[i]
}
t0 = sum(scenario_rates)
t1 = sum(scenario_databases[[scenarios_name]]$importance_sampling_scenario_rates_basic)
errmess = 'Scenario rates do not sum to the same value: Issue with bookkeeping or file naming'
if(!isTRUE(all.equal(t0, t1))) stop(errmess)
# For each depth-threshold, make the exceedance-rate raster. NOTE: Could
# make calculations parallel over depth_threshold too.
for(depth_threshold in depth_thresholds_for_exceedance_rate_calculations){
tile_exceedance_rates = get_exceedance_rate_at_threshold_depth(
included_indices = seq(1, length(raster_files_one_domain)),
max_depth_files = raster_files_one_domain,
scenario_rates = scenario_rates,
depth_threshold=depth_threshold,
print_progress=print_progress_debug)
# For the raster output, it is nice to set regions that are never
# inundated to NA (genuinely NA regions that are not priority
# domain will also be NA)
tile_exceedance_rates[tile_exceedance_rates == 0] = NA
# Convert to a raster and write to file
#exrates_rast = terra::setValues(raster_template, tile_exceedance_rates)
exrates_rast = setValues(raster_template, tile_exceedance_rates)
raster_output_file = paste0(output_directory, '/',
scenarios_name, '_', output_raster_name_tag,
'_exceedance_rate_with_threshold_', depth_threshold,
'.tif')
#terra::writeRaster(exrates_rast, raster_output_file,
writeRaster(exrates_rast, raster_output_file,
options=c('COMPRESS=DEFLATE'), overwrite=TRUE)
rm(exrates_rast, tile_exceedance_rates)
gc()
}
rm(raster_template); gc()
return(0)
}
#' Given max-depth rasters and scenario rates, compute rate that a
#' depth_threshold is exceeded, and estimate the variance of the Monte-Carlo
#' error in that rate.
#'
#' NA values in the max-depth matrix will be treated as dry. Note in this
#' routine the inputs MUST be arranged by SAMPLED SCENARIOS. So there may be
#' repeated scenario_rasters. This arrangement is suitable for computing the
#' error variance.
#'
#' @param max_depth_files A list of rasters containing the max_depth (one for
#' each sampled scenario -- there can be repeats since we are sampling with replacement).
#' @param scenario_rates A vector with the individual scenario rates for each
#' entry of max_depth_files
#' @param scenario_magnitudes the magnitudes of each scenario.
#' @param depth_threshold The function will compute the exceedance rate of
#' (depth > depth_threshold), where 'depth' might be some other quantity (whatever is
#' stored in the max_depth_files).
#' @param print_progress Print index as each raster is processed
#'
get_exceedance_rate_and_error_variance_at_threshold_depth<-function(
max_depth_files, scenario_rates, scenario_magnitudes, depth_threshold,
print_progress=FALSE){
# Logical checks on arguments
stopifnot(length(scenario_rates) == length(max_depth_files))
stopifnot(length(scenario_rates) == length(scenario_magnitudes))
stopifnot(length(depth_threshold) == 1)
# Get unique Mw values, confirming they are evenly spaced
unique_Mw = ptha18$unique_sorted_with_check_for_even_spacing(scenario_magnitudes)
local_max_depth_files = max_depth_files
local_scenario_rates = scenario_rates
# Make space to store exceedance-rate, and exceedance_rate variance
r1 = as.matrix(raster(local_max_depth_files[1]))
target_dim = dim(r1)
rm(r1)
ex_rate = matrix(0, ncol=target_dim[2], nrow=target_dim[1])
ex_rate_variance = ex_rate
for(mw in unique_Mw){
k = which(scenario_magnitudes == mw)
num_Mw = length(k) # Number of sceanrios in this magnitude bin
# Within-bin values are needed for intermediate calculations
ex_rate_in_bin = matrix(0, ncol=target_dim[2], nrow=target_dim[1])
ex_rate_variance_in_bin = ex_rate_in_bin
# Compute the within bin exceedance-rate, Eq 17 in Davies et al. 2022
for(i in k){
if(print_progress) print(i)
# Read the raster
x_mat = as.matrix(raster(local_max_depth_files[i]))
stopifnot(all(dim(x_mat) == dim(ex_rate)))
# Make a raster that is 1 where we exceed the depth threshold, and 0 elsewhere
# (including at NA sites)
out = 1 - (x_mat <= depth_threshold)
out[is.na(out)] = 0
# Sum the exceedance-rates
ex_rate_in_bin = ex_rate_in_bin + local_scenario_rates[i] * out
}
# Add the rates in this magnitude bin to the total rates
ex_rate = ex_rate + ex_rate_in_bin
# Compute the within-bin variance of the error in the exceedance-rate
# Eq 20 in Davies et al. 2022
for(i in k){
if(print_progress) print(i)
# Read the raster
x_mat = as.matrix(raster(local_max_depth_files[i]))
stopifnot(all(dim(x_mat) == dim(ex_rate)))
# Make a raster that is 1 where we exceed the depth threshold, and 0 elsewhere
# (including at NA sites)
out = 1 - (x_mat <= depth_threshold)
out[is.na(out)] = 0
# This is rearranged from Eq 20 in Davies et al., 2022
ex_rate_variance_in_bin = ex_rate_variance_in_bin +
(local_scenario_rates[i] * out - ex_rate_in_bin/num_Mw)^2
}
# Add the variance in this bin to the total variance
ex_rate_variance = ex_rate_variance + ex_rate_variance_in_bin
gc()
}
return(list(exrate=ex_rate, exrate_var=ex_rate_variance))
}
#' Estimated Exceedance-rates and Monte-Carlo variance of the estimate, on one
#' tile, for one source-zone representations.
#'
#' Compute 'exceedance-rate' raster for the given domain index and the
#' given 'scenarios_name' (which corresponds either to an unsegmented source-zone representation,
#' or a segment). This gives an estimate of the true exceedance rate. Also
#' compute an estimate of the variance of the exceedance-rate error. This corresponds
#' to Equations 17 and 20 from
#' Davies, G.; Weber, R.; Wilson, K. & Cummins, P.
#' From offshore to onshore probabilistic tsunami hazard assessment via efficient Monte-Carlo sampling, 2021
#'
#' @param input_domain_index_and_scenarios_name List of the form list(domain_index=3, scenarios_name='unsegmented_HS'),
#' where the domain_index is the integer in the domain tif file (inside a tar archive), and scenarios_name is one
#' entry of names(scenario_databases)
#' @param tarred_multidomain_dirs Vector of tarred multidomain directories
#' @param scenario_databases List with scenario databases
#' @param output_directory Directory in which tiffs will be saved, relative to working directory.
#' @param depth_thresholds_for_exceedance_rate_calculations Vector of depths for which we compute exceedance-rates.
#' @param raster_name_start leading characters of the rasters to be read in
#' each tarred raster dir, before the domain_index, e.g.'max_stage_domain_'
#' @param print_progress_debug Print index as each raster is processed
#' @return Zero on successful completions -- but as a side-effect it creates the exceedance-rate raster.
compute_exceedance_rates_and_error_variance_on_tile<-function(
input_domain_index_and_scenarios_name,
tarred_multidomain_dirs,
scenario_databases,
output_directory,
depth_thresholds_for_exceedance_rate_calculations,
raster_name_start,
print_progress_debug=FALSE){
# To ease parallel calculation we take the input parameters in a list, and
# unpack here.
domain_index = input_domain_index_and_scenarios_name$domain_index
scenarios_name = input_domain_index_and_scenarios_name$scenarios_name
# Useful to have one of the rasters ready as a template [to help with data
# The raster depth file associated with each md_dir. There should only be one
# per md_dir. All rasters must have the same extent and resolution.
raster_files_one_domain = paste0('/vsitar/', dirname(tarred_multidomain_dirs),
"/raster_output_files.tar/", raster_name_start,
domain_index, ".tif")
raster_template = raster(raster_files_one_domain[1])
#raster_template = terra::rast(raster_files_one_domain[1])
# This text will appear in the filename of all output rasters
output_raster_name_tag = paste0('domain_', domain_index, '_', raster_name_start)
output_variance_raster_name_tag = paste0('domain_', domain_index, '_', raster_name_start, '_variance_of_')
# Map the rows of the database to the rasters
#ind = match(dirname(scenario_databases[[scenarios_name]]$md_dir),
# gsub('/vsitar/', '', dirname(dirname(raster_files_one_domain)), fixed=TRUE) )
# Map the rasters to rows of the database
ind = match(dirname(scenario_databases[[scenarios_name]]$md_dir),
gsub('/vsitar/', '', dirname(dirname(raster_files_one_domain)), fixed=TRUE))
scenario_rasters = raster_files_one_domain[ind]
scenario_rates = scenario_databases[[scenarios_name]]$importance_sampling_scenario_rates_basic
scenario_mw = scenario_databases[[scenarios_name]]$mw
# For each depth-threshold, make the exceedance-rate raster. NOTE: Could
# make calculations parallel over depth_threshold too.
for(depth_threshold in depth_thresholds_for_exceedance_rate_calculations){
tile_exceedance_rates_and_error_variance = get_exceedance_rate_and_error_variance_at_threshold_depth(
max_depth_files = scenario_rasters,
scenario_rates = scenario_rates,
scenario_magnitudes = scenario_mw,
depth_threshold=depth_threshold,
print_progress=print_progress_debug)
# For the raster output, it is nice to set regions that are never
# inundated to NA (genuinely NA regions that are not priority
# domain will also be NA)
null_regions = (tile_exceedance_rates_and_error_variance$exrate == 0)
tile_exceedance_rates_and_error_variance$exrate[null_regions] = NA
tile_exceedance_rates_and_error_variance$exrate_var[null_regions] = NA
rm(null_regions)
# Convert to a raster and write to file
#exrates_rast = terra::setValues(raster_template, tile_exceedance_rates_and_error_variance)
exrates_rast = setValues(raster_template, tile_exceedance_rates_and_error_variance$exrate)
raster_output_file = paste0(output_directory, '/',
scenarios_name, '_', output_raster_name_tag,
'_exceedance_rate_with_threshold_', depth_threshold,
'.tif')
#terra::writeRaster(exrates_rast, raster_output_file,
writeRaster(exrates_rast, raster_output_file,
options=c('COMPRESS=DEFLATE'), overwrite=TRUE)
exrates_var_rast = setValues(raster_template, tile_exceedance_rates_and_error_variance$exrate_var)
raster_output_file = paste0(output_directory, '/',
scenarios_name, '_', output_variance_raster_name_tag,
'_exceedance_rate_with_threshold_', depth_threshold,
'.tif')
#terra::writeRaster(exrates_rast, raster_output_file,
writeRaster(exrates_var_rast, raster_output_file,
options=c('COMPRESS=DEFLATE'), overwrite=TRUE)
rm(exrates_rast, exrates_var_rast, tile_exceedance_rates_and_error_variance)
gc()
}
rm(raster_template); gc()
return(0)
}
# Quick tests of the exceedance-rate calculations
.test_exceedance_rate_raster_calculations<-function(){
#
# Compare the max-stage exceedance-rate against an independently
# calculated value at a point, using code from '../max_stage_at_a_point'
#
# This test could be much improved -- it relies on a large set of existing simulated data.
#
input_domain_index_and_scenarios_name = list(
domain_index = 39, # Compare against a site on domain 39
scenarios_name = 'logic_tree_mean_curve_HS')
scenario_databases = list()
scenario_databases$logic_tree_mean_curve_HS = read.csv(
'../../sources/hazard/random_sunda2/random_scenarios_sunda2_logic_tree_mean_curve_HS.csv')
.local_find_matching_md_dir<-function(row_indices, tarred_multidomain_dirs){
# Make a string with the start of the SWALS output folder name (beneath
# ../../swals/OUTPUTS/...)
matching_string = paste0('ptha18_random_scenarios_sunda2_row_',
substring(as.character(1e+07 + row_indices), 2, 8), '_')
# Match with the tarred_multidomain_dirs, with NA if we don't match or get multiple matches
matching_ind = sapply(matching_string, f<-function(x){
p = grep(x, tarred_multidomain_dirs)
if(length(p) != 1) p = NA
return(p)})
if(any(is.na(matching_ind))) stop('Could not find simulation matching scenario')
return(tarred_multidomain_dirs[matching_ind])
}
#
tarred_multidomain_dirs = Sys.glob(
'../../swals/OUTPUTS/ptha18-BunburyBusseltonRevised-sealevel60cm/random_sunda2/ptha*/RUN*.tar')
for(i in 1:length(scenario_databases)){
scenario_databases[[i]]$md_dir = .local_find_matching_md_dir(
scenario_databases[[i]]$inds, tarred_multidomain_dirs)
}
# Get "max-stage-exceedance' of "MSL + 1.0"
MSL = 0.6
depth_thresholds_for_exceedance_rate_calculations = MSL + 1.0
output_directory = 'test_dir'
raster_name_start = 'max_stage_domain_'
dir.create(output_directory, showWarnings=FALSE)
#
# Test 1 -- basic exceedance-rate calculation
#
make_raster = compute_exceedance_rates_on_tile(
input_domain_index_and_scenarios_name,
tarred_multidomain_dirs,
scenario_databases,
output_directory,
depth_thresholds_for_exceedance_rate_calculations,
raster_name_start)
x = raster('test_dir/logic_tree_mean_curve_HS_domain_39_max_stage_domain__exceedance_rate_with_threshold_1.6.tif')
result = extract(x, matrix(c(113.0708085, -28.5679802), ncol=2))
# Previously I computed the exceedance-rates at this site using very different code, see here:
# ../max_stage_at_point/
# Visually the exceedance rate (for 1m above MSL) is very close to 1.0e-04.
# Actually it is 9.800535e-05
if(abs(result - 9.800535e-05) < 1e-10){
print('PASS')
}else{
print('FAIL')
}
#
# Test 2 -- exceedance-rate with variance. Here the exceedance-rate should
# be the same as the above up to floating-point differnces due to
# reordering of a sum, and we also get the error variance
output_directory = 'test_dir_2'
dir.create(output_directory, showWarnings=FALSE)
make_raster = compute_exceedance_rates_and_error_variance_on_tile(
input_domain_index_and_scenarios_name,
tarred_multidomain_dirs,
scenario_databases,
output_directory,
depth_thresholds_for_exceedance_rate_calculations,
raster_name_start)
xnew = raster('test_dir_2/logic_tree_mean_curve_HS_domain_39_max_stage_domain__exceedance_rate_with_threshold_1.6.tif')
xnew_var = raster('test_dir_2/logic_tree_mean_curve_HS_domain_39_max_stage_domain__variance_of__exceedance_rate_with_threshold_1.6.tif')
result = extract(xnew, matrix(c(113.0708085, -28.5679802), ncol=2))
# Previously I computed the exceedance-rates at this site using very different code, see here:
# ../max_stage_at_point/
# Visually the exceedance rate (for 1m above MSL) is very close to 1.0e-04.
# Actually it is 9.800535e-05
if(abs(result - 9.800535e-05) < 1e-10){
print('PASS')
}else{
print('FAIL')
}
# Check the exceedance-rates are just like before
if(all(abs(as.matrix(xnew - x)) <= 1.0e-08*as.matrix(xnew), na.rm=TRUE)){
print('PASS')
}else{
print('FAIL')
}
# From a separate calculation, the variance should be
# [1] 2.37206e-10
result = extract(xnew_var, matrix(c(113.0708085, -28.5679802), ncol=2))
if(abs(result - 2.37206e-10) < 1.0e-14){
print('PASS')
}else{
print('FAIL')
}
}
| /misc/SW_WA_2021_2024/bunbury_busselton/analysis/probabilistic_inundation/exceedance_rate_raster_calculations.R | permissive | GeoscienceAustralia/ptha | R | false | false | 22,448 | r | library(raster)
# Get PTHA routines
ptha18 = new.env()
.file_nci = '/g/data/w85/tsunami/CODE/gadi/ptha/ptha_access/get_PTHA_results.R'
.file_home = '../../../../../../AustPTHA/CODE/ptha/ptha_access/get_PTHA_results.R'
source(ifelse(file.exists(.file_nci), .file_nci, .file_home), local=ptha18, chdir=TRUE)
#' Given max-depth matrices and scenario rates, compute rate that a
#' depth_threshold is exceeded.
#'
#' NA values in the max-depth matrix will be treated as dry.
#'
#' @param included_indices a vector of non-repeated indices in
#' 1:length(scenario_rates) giving the rasters to include. This CAN be used for
#' splitting the calculation in parallel (sum chunks in parallel, then sum the
#' final result in serial). A serial calculation would use 1:length(scenario_rates).
#' It often may be more efficient to use coarser-grained parallelism (i.e. calling
#' this routine in serial, with different cores treating different domains).
#' @param max_depth_files A list of rasters containing the max_depth (one for
#' each entry of tarred_multidomain_dirs).
#' @param scenario_rates A vector with the individual scenario rates for each
#' entry of max_depth_files
#' @param depth_threshold The function will compute the exceedance rate of
#' (depth > depth_threshold), where 'depth' might be some other quantity (whatever is
#' stored in the max_depth_files).
#' @param print_progress Print index as each raster is processed
#'
get_exceedance_rate_at_threshold_depth<-function(included_indices,
max_depth_files, scenario_rates, depth_threshold, print_progress=FALSE){
stopifnot(length(scenario_rates) == length(max_depth_files))
stopifnot(length(included_indices) == length(unique(included_indices)))
stopifnot( (min(included_indices) >= 1) &
(max(included_indices) <= length(max_depth_files)) )
stopifnot(length(depth_threshold) == 1)
local_max_depth_files = max_depth_files[included_indices]
local_scenario_rates = scenario_rates[included_indices]
# Sum [scenario_rate * above_depth_threshold] for all scenarios
#r1 = as.matrix(terra::rast(local_max_depth_files[1]), wide=TRUE)
r1 = as.matrix(raster(local_max_depth_files[1]))
#target_dim = terra::dim(r1)
target_dim = dim(r1)
rm(r1)
ex_rate = matrix(0, ncol=target_dim[2], nrow=target_dim[1])
for(i in 1:length(local_scenario_rates)){
if(print_progress) print(i)
# Read the raster
#x_mat = as.matrix(terra::rast(local_max_depth_files[i]), wide=TRUE)
x_mat = as.matrix(raster(local_max_depth_files[i]))
stopifnot(all(dim(x_mat) == dim(ex_rate)))
# Make a raster that is 1 where we exceed the depth threshold, and 0 elsewhere
# (including at NA sites)
out = 1 - (x_mat <= depth_threshold)
out[is.na(out)] = 0
# Sum the exceedance-rates
ex_rate = ex_rate + local_scenario_rates[i] * out
rm(out, x_mat)
}
gc()
return(ex_rate)
}
#' Exceedance-rates on one tile, for one source-zone representations.
#'
#' Compute 'exceedance-rate' raster for the given domain index and the
#' given 'scenarios_name' (which corresponds either to an unsegmented source-zone representation,
#' or a segment)
#'
#' @param input_domain_index_and_scenarios_name List of the form list(domain_index=3, scenarios_name='unsegmented_HS'),
#' where the domain_index is the integer in the domain tif file (inside a tar archive), and scenarios_name is one
#' entry of names(scenario_databases)
#' @param tarred_multidomain_dirs Vector of tarred multidomain directories
#' @param scenario_databases List with scenario databases
#' @param output_directory Directory in which tiffs will be saved, relative to working directory.
#' @param depth_thresholds_for_exceedance_rate_calculations Vector of depths for which we compute exceedance-rates.
#' @param raster_name_start leading characters of the rasters to be read in
#' each tarred raster dir, before the domain_index, e.g.'max_stage_domain_'
#' @param print_progress_debug Print index as each raster is processed
#' @return Zero on successful completions -- but as a side-effect it creates the exceedance-rate raster.
compute_exceedance_rates_on_tile<-function(
input_domain_index_and_scenarios_name,
tarred_multidomain_dirs,
scenario_databases,
output_directory,
depth_thresholds_for_exceedance_rate_calculations,
raster_name_start,
print_progress_debug=FALSE){
# To ease parallel calculation we take the input parameters in a list, and
# unpack here.
domain_index = input_domain_index_and_scenarios_name$domain_index
scenarios_name = input_domain_index_and_scenarios_name$scenarios_name
# Useful to have one of the rasters ready as a template [to help with data
# The raster depth file associated with each md_dir. There should only be one
# per md_dir. All rasters must have the same extent and resolution.
raster_files_one_domain = paste0('/vsitar/', dirname(tarred_multidomain_dirs),
"/raster_output_files.tar/", raster_name_start,
domain_index, ".tif")
raster_template = raster(raster_files_one_domain[1])
#raster_template = terra::rast(raster_files_one_domain[1])
# This text will appear in the filename of all output rasters
output_raster_name_tag = paste0('domain_', domain_index, '_', raster_name_start)
# Map the rows of the database to the rasters
ind = match(dirname(scenario_databases[[scenarios_name]]$md_dir),
gsub('/vsitar/', '', dirname(dirname(raster_files_one_domain)), fixed=TRUE) )
# Make rates for each raster.
scenario_rates = rep(0, length(raster_files_one_domain))
for(i in 1:length(ind)){
# Here we loop over the scenario_database, and add the rate from the
# table to scenario_rates. Notice this automatically treats double
# counts, etc. Here we are only treating a single segment (or single
# unsegmented) source at once
scenario_rates[ind[i]] = scenario_rates[ind[i]] +
scenario_databases[[scenarios_name]]$importance_sampling_scenario_rates_basic[i]
}
t0 = sum(scenario_rates)
t1 = sum(scenario_databases[[scenarios_name]]$importance_sampling_scenario_rates_basic)
errmess = 'Scenario rates do not sum to the same value: Issue with bookkeeping or file naming'
if(!isTRUE(all.equal(t0, t1))) stop(errmess)
# For each depth-threshold, make the exceedance-rate raster. NOTE: Could
# make calculations parallel over depth_threshold too.
for(depth_threshold in depth_thresholds_for_exceedance_rate_calculations){
tile_exceedance_rates = get_exceedance_rate_at_threshold_depth(
included_indices = seq(1, length(raster_files_one_domain)),
max_depth_files = raster_files_one_domain,
scenario_rates = scenario_rates,
depth_threshold=depth_threshold,
print_progress=print_progress_debug)
# For the raster output, it is nice to set regions that are never
# inundated to NA (genuinely NA regions that are not priority
# domain will also be NA)
tile_exceedance_rates[tile_exceedance_rates == 0] = NA
# Convert to a raster and write to file
#exrates_rast = terra::setValues(raster_template, tile_exceedance_rates)
exrates_rast = setValues(raster_template, tile_exceedance_rates)
raster_output_file = paste0(output_directory, '/',
scenarios_name, '_', output_raster_name_tag,
'_exceedance_rate_with_threshold_', depth_threshold,
'.tif')
#terra::writeRaster(exrates_rast, raster_output_file,
writeRaster(exrates_rast, raster_output_file,
options=c('COMPRESS=DEFLATE'), overwrite=TRUE)
rm(exrates_rast, tile_exceedance_rates)
gc()
}
rm(raster_template); gc()
return(0)
}
#' Given max-depth rasters and scenario rates, compute rate that a
#' depth_threshold is exceeded, and estimate the variance of the Monte-Carlo
#' error in that rate.
#'
#' NA values in the max-depth matrix will be treated as dry. Note in this
#' routine the inputs MUST be arranged by SAMPLED SCENARIOS. So there may be
#' repeated scenario_rasters. This arrangement is suitable for computing the
#' error variance.
#'
#' @param max_depth_files A list of rasters containing the max_depth (one for
#' each sampled scenario -- there can be repeats since we are sampling with replacement).
#' @param scenario_rates A vector with the individual scenario rates for each
#' entry of max_depth_files
#' @param scenario_magnitudes the magnitudes of each scenario.
#' @param depth_threshold The function will compute the exceedance rate of
#' (depth > depth_threshold), where 'depth' might be some other quantity (whatever is
#' stored in the max_depth_files).
#' @param print_progress Print index as each raster is processed
#'
get_exceedance_rate_and_error_variance_at_threshold_depth<-function(
max_depth_files, scenario_rates, scenario_magnitudes, depth_threshold,
print_progress=FALSE){
# Logical checks on arguments
stopifnot(length(scenario_rates) == length(max_depth_files))
stopifnot(length(scenario_rates) == length(scenario_magnitudes))
stopifnot(length(depth_threshold) == 1)
# Get unique Mw values, confirming they are evenly spaced
unique_Mw = ptha18$unique_sorted_with_check_for_even_spacing(scenario_magnitudes)
local_max_depth_files = max_depth_files
local_scenario_rates = scenario_rates
# Make space to store exceedance-rate, and exceedance_rate variance
r1 = as.matrix(raster(local_max_depth_files[1]))
target_dim = dim(r1)
rm(r1)
ex_rate = matrix(0, ncol=target_dim[2], nrow=target_dim[1])
ex_rate_variance = ex_rate
for(mw in unique_Mw){
k = which(scenario_magnitudes == mw)
num_Mw = length(k) # Number of sceanrios in this magnitude bin
# Within-bin values are needed for intermediate calculations
ex_rate_in_bin = matrix(0, ncol=target_dim[2], nrow=target_dim[1])
ex_rate_variance_in_bin = ex_rate_in_bin
# Compute the within bin exceedance-rate, Eq 17 in Davies et al. 2022
for(i in k){
if(print_progress) print(i)
# Read the raster
x_mat = as.matrix(raster(local_max_depth_files[i]))
stopifnot(all(dim(x_mat) == dim(ex_rate)))
# Make a raster that is 1 where we exceed the depth threshold, and 0 elsewhere
# (including at NA sites)
out = 1 - (x_mat <= depth_threshold)
out[is.na(out)] = 0
# Sum the exceedance-rates
ex_rate_in_bin = ex_rate_in_bin + local_scenario_rates[i] * out
}
# Add the rates in this magnitude bin to the total rates
ex_rate = ex_rate + ex_rate_in_bin
# Compute the within-bin variance of the error in the exceedance-rate
# Eq 20 in Davies et al. 2022
for(i in k){
if(print_progress) print(i)
# Read the raster
x_mat = as.matrix(raster(local_max_depth_files[i]))
stopifnot(all(dim(x_mat) == dim(ex_rate)))
# Make a raster that is 1 where we exceed the depth threshold, and 0 elsewhere
# (including at NA sites)
out = 1 - (x_mat <= depth_threshold)
out[is.na(out)] = 0
# This is rearranged from Eq 20 in Davies et al., 2022
ex_rate_variance_in_bin = ex_rate_variance_in_bin +
(local_scenario_rates[i] * out - ex_rate_in_bin/num_Mw)^2
}
# Add the variance in this bin to the total variance
ex_rate_variance = ex_rate_variance + ex_rate_variance_in_bin
gc()
}
return(list(exrate=ex_rate, exrate_var=ex_rate_variance))
}
#' Estimated Exceedance-rates and Monte-Carlo variance of the estimate, on one
#' tile, for one source-zone representations.
#'
#' Compute 'exceedance-rate' raster for the given domain index and the
#' given 'scenarios_name' (which corresponds either to an unsegmented source-zone representation,
#' or a segment). This gives an estimate of the true exceedance rate. Also
#' compute an estimate of the variance of the exceedance-rate error. This corresponds
#' to Equations 17 and 20 from
#' Davies, G.; Weber, R.; Wilson, K. & Cummins, P.
#' From offshore to onshore probabilistic tsunami hazard assessment via efficient Monte-Carlo sampling, 2021
#'
#' @param input_domain_index_and_scenarios_name List of the form list(domain_index=3, scenarios_name='unsegmented_HS'),
#' where the domain_index is the integer in the domain tif file (inside a tar archive), and scenarios_name is one
#' entry of names(scenario_databases)
#' @param tarred_multidomain_dirs Vector of tarred multidomain directories
#' @param scenario_databases List with scenario databases
#' @param output_directory Directory in which tiffs will be saved, relative to working directory.
#' @param depth_thresholds_for_exceedance_rate_calculations Vector of depths for which we compute exceedance-rates.
#' @param raster_name_start leading characters of the rasters to be read in
#' each tarred raster dir, before the domain_index, e.g.'max_stage_domain_'
#' @param print_progress_debug Print index as each raster is processed
#' @return Zero on successful completions -- but as a side-effect it creates the exceedance-rate raster.
compute_exceedance_rates_and_error_variance_on_tile<-function(
input_domain_index_and_scenarios_name,
tarred_multidomain_dirs,
scenario_databases,
output_directory,
depth_thresholds_for_exceedance_rate_calculations,
raster_name_start,
print_progress_debug=FALSE){
# To ease parallel calculation we take the input parameters in a list, and
# unpack here.
domain_index = input_domain_index_and_scenarios_name$domain_index
scenarios_name = input_domain_index_and_scenarios_name$scenarios_name
# Useful to have one of the rasters ready as a template [to help with data
# The raster depth file associated with each md_dir. There should only be one
# per md_dir. All rasters must have the same extent and resolution.
raster_files_one_domain = paste0('/vsitar/', dirname(tarred_multidomain_dirs),
"/raster_output_files.tar/", raster_name_start,
domain_index, ".tif")
raster_template = raster(raster_files_one_domain[1])
#raster_template = terra::rast(raster_files_one_domain[1])
# This text will appear in the filename of all output rasters
output_raster_name_tag = paste0('domain_', domain_index, '_', raster_name_start)
output_variance_raster_name_tag = paste0('domain_', domain_index, '_', raster_name_start, '_variance_of_')
# Map the rows of the database to the rasters
#ind = match(dirname(scenario_databases[[scenarios_name]]$md_dir),
# gsub('/vsitar/', '', dirname(dirname(raster_files_one_domain)), fixed=TRUE) )
# Map the rasters to rows of the database
ind = match(dirname(scenario_databases[[scenarios_name]]$md_dir),
gsub('/vsitar/', '', dirname(dirname(raster_files_one_domain)), fixed=TRUE))
scenario_rasters = raster_files_one_domain[ind]
scenario_rates = scenario_databases[[scenarios_name]]$importance_sampling_scenario_rates_basic
scenario_mw = scenario_databases[[scenarios_name]]$mw
# For each depth-threshold, make the exceedance-rate raster. NOTE: Could
# make calculations parallel over depth_threshold too.
for(depth_threshold in depth_thresholds_for_exceedance_rate_calculations){
tile_exceedance_rates_and_error_variance = get_exceedance_rate_and_error_variance_at_threshold_depth(
max_depth_files = scenario_rasters,
scenario_rates = scenario_rates,
scenario_magnitudes = scenario_mw,
depth_threshold=depth_threshold,
print_progress=print_progress_debug)
# For the raster output, it is nice to set regions that are never
# inundated to NA (genuinely NA regions that are not priority
# domain will also be NA)
null_regions = (tile_exceedance_rates_and_error_variance$exrate == 0)
tile_exceedance_rates_and_error_variance$exrate[null_regions] = NA
tile_exceedance_rates_and_error_variance$exrate_var[null_regions] = NA
rm(null_regions)
# Convert to a raster and write to file
#exrates_rast = terra::setValues(raster_template, tile_exceedance_rates_and_error_variance)
exrates_rast = setValues(raster_template, tile_exceedance_rates_and_error_variance$exrate)
raster_output_file = paste0(output_directory, '/',
scenarios_name, '_', output_raster_name_tag,
'_exceedance_rate_with_threshold_', depth_threshold,
'.tif')
#terra::writeRaster(exrates_rast, raster_output_file,
writeRaster(exrates_rast, raster_output_file,
options=c('COMPRESS=DEFLATE'), overwrite=TRUE)
exrates_var_rast = setValues(raster_template, tile_exceedance_rates_and_error_variance$exrate_var)
raster_output_file = paste0(output_directory, '/',
scenarios_name, '_', output_variance_raster_name_tag,
'_exceedance_rate_with_threshold_', depth_threshold,
'.tif')
#terra::writeRaster(exrates_rast, raster_output_file,
writeRaster(exrates_var_rast, raster_output_file,
options=c('COMPRESS=DEFLATE'), overwrite=TRUE)
rm(exrates_rast, exrates_var_rast, tile_exceedance_rates_and_error_variance)
gc()
}
rm(raster_template); gc()
return(0)
}
# Quick tests of the exceedance-rate calculations
.test_exceedance_rate_raster_calculations<-function(){
#
# Compare the max-stage exceedance-rate against an independently
# calculated value at a point, using code from '../max_stage_at_a_point'
#
# This test could be much improved -- it relies on a large set of existing simulated data.
#
input_domain_index_and_scenarios_name = list(
domain_index = 39, # Compare against a site on domain 39
scenarios_name = 'logic_tree_mean_curve_HS')
scenario_databases = list()
scenario_databases$logic_tree_mean_curve_HS = read.csv(
'../../sources/hazard/random_sunda2/random_scenarios_sunda2_logic_tree_mean_curve_HS.csv')
.local_find_matching_md_dir<-function(row_indices, tarred_multidomain_dirs){
# Make a string with the start of the SWALS output folder name (beneath
# ../../swals/OUTPUTS/...)
matching_string = paste0('ptha18_random_scenarios_sunda2_row_',
substring(as.character(1e+07 + row_indices), 2, 8), '_')
# Match with the tarred_multidomain_dirs, with NA if we don't match or get multiple matches
matching_ind = sapply(matching_string, f<-function(x){
p = grep(x, tarred_multidomain_dirs)
if(length(p) != 1) p = NA
return(p)})
if(any(is.na(matching_ind))) stop('Could not find simulation matching scenario')
return(tarred_multidomain_dirs[matching_ind])
}
#
tarred_multidomain_dirs = Sys.glob(
'../../swals/OUTPUTS/ptha18-BunburyBusseltonRevised-sealevel60cm/random_sunda2/ptha*/RUN*.tar')
for(i in 1:length(scenario_databases)){
scenario_databases[[i]]$md_dir = .local_find_matching_md_dir(
scenario_databases[[i]]$inds, tarred_multidomain_dirs)
}
# Get "max-stage-exceedance' of "MSL + 1.0"
MSL = 0.6
depth_thresholds_for_exceedance_rate_calculations = MSL + 1.0
output_directory = 'test_dir'
raster_name_start = 'max_stage_domain_'
dir.create(output_directory, showWarnings=FALSE)
#
# Test 1 -- basic exceedance-rate calculation
#
make_raster = compute_exceedance_rates_on_tile(
input_domain_index_and_scenarios_name,
tarred_multidomain_dirs,
scenario_databases,
output_directory,
depth_thresholds_for_exceedance_rate_calculations,
raster_name_start)
x = raster('test_dir/logic_tree_mean_curve_HS_domain_39_max_stage_domain__exceedance_rate_with_threshold_1.6.tif')
result = extract(x, matrix(c(113.0708085, -28.5679802), ncol=2))
# Previously I computed the exceedance-rates at this site using very different code, see here:
# ../max_stage_at_point/
# Visually the exceedance rate (for 1m above MSL) is very close to 1.0e-04.
# Actually it is 9.800535e-05
if(abs(result - 9.800535e-05) < 1e-10){
print('PASS')
}else{
print('FAIL')
}
#
# Test 2 -- exceedance-rate with variance. Here the exceedance-rate should
# be the same as the above up to floating-point differnces due to
# reordering of a sum, and we also get the error variance
output_directory = 'test_dir_2'
dir.create(output_directory, showWarnings=FALSE)
make_raster = compute_exceedance_rates_and_error_variance_on_tile(
input_domain_index_and_scenarios_name,
tarred_multidomain_dirs,
scenario_databases,
output_directory,
depth_thresholds_for_exceedance_rate_calculations,
raster_name_start)
xnew = raster('test_dir_2/logic_tree_mean_curve_HS_domain_39_max_stage_domain__exceedance_rate_with_threshold_1.6.tif')
xnew_var = raster('test_dir_2/logic_tree_mean_curve_HS_domain_39_max_stage_domain__variance_of__exceedance_rate_with_threshold_1.6.tif')
result = extract(xnew, matrix(c(113.0708085, -28.5679802), ncol=2))
# Previously I computed the exceedance-rates at this site using very different code, see here:
# ../max_stage_at_point/
# Visually the exceedance rate (for 1m above MSL) is very close to 1.0e-04.
# Actually it is 9.800535e-05
if(abs(result - 9.800535e-05) < 1e-10){
print('PASS')
}else{
print('FAIL')
}
# Check the exceedance-rates are just like before
if(all(abs(as.matrix(xnew - x)) <= 1.0e-08*as.matrix(xnew), na.rm=TRUE)){
print('PASS')
}else{
print('FAIL')
}
# From a separate calculation, the variance should be
# [1] 2.37206e-10
result = extract(xnew_var, matrix(c(113.0708085, -28.5679802), ncol=2))
if(abs(result - 2.37206e-10) < 1.0e-14){
print('PASS')
}else{
print('FAIL')
}
}
|
gaps.gen <- function(data, potential.centers, k = 6, init = "forgy", iter = 6)
{
#Randomly select potential centers and set them to be centroids
centroids <- gaps.init(data, potential.centers, init, k)
#Store the centroid cluster of each data point in this vector
cluster <- rep(0, nrow(data))
#The current epoch
cur_iter <- 1
#While the current epoch is not equal to the maximum epoch
while(cur_iter <= iter)
{
#Increment the current epoch
cur_iter <- cur_iter + 1
#For each data point in the list of data points
for (i in 1:nrow(data))
{
#Stores the smallest distance calculated so far
min.dist <- 1000000000
#For each centroid in the list of centroids
for (j in 1:k)
{
#Calculate the distance
centroid.dist <- gaps.dist(data[i,], centroids[j,], weighted = TRUE)
#If the distance is smaller than the minimum distance variable
if (centroid.dist <= min.dist)
{
#Set the cluster of the data point to the index of the centroid
cluster[i] <- j
#Set the minimum distance variable to the distance to the centroid
min.dist <- centroid.dist
}
}
}
#For each centroid in the list of centroids
for (i in 1:k)
{
x.coords <- 0.0
y.coords <- 0.0
total.in.cluster <- 0
#For each data point in the list of data points
for (j in 1:nrow(data))
{
#If the data point is in the cluster
if (cluster[j] == i)
{
#Add the current coordinates number to the x.coords and y.coords variable
x.coords <- x.coords + data[j,]$X * data[j,]$COUNT
y.coords <- y.coords + data[j,]$Y * data[j,]$COUNT
#Increment the total.in.cluster variable by 1 times the count (weight)
total.in.cluster <- total.in.cluster + 1 * data[j,]$COUNT
}
}
#If there is more than one data point in the cluster
if (total.in.cluster > 0)
{
#Calculate the mean of the x.coords and y.coords variable
centroids[i,]$X <- x.coords / total.in.cluster
centroids[i,]$Y <- y.coords / total.in.cluster
}
}
#For each centroid in the list of centroids
for (i in 1:k)
{
x.coords <- 0.0
y.coords <- 0.0
min.dist <- 10000000
#For each data point in the list of data points
for (j in 1:nrow(potential.centers))
{
potential.center.dist <- gaps.dist(centroids[i,], potential.centers[j,])
if (potential.center.dist < min.dist)
{
min.dist <- potential.center.dist
x.coords <- potential.centers[j,]$X
y.coords <- potential.centers[j,]$Y
centroids[i,]$NAME <- potential.centers[j,]$NAME
}
}
centroids[i,]$X <- x.coords
centroids[i,]$Y <- y.coords
}
}
#Return the centroids to the user
return(centroids)
}
gaps.init <- function(data, potential.centers, init, k)
{
#If the user wants to use the Forgy Method to initialize
if (init == "forgy")
{
#Randomly select potential centers from
return(potential.centers[sample(nrow(potential.centers),k),])
}
}
gaps.calc.dists <- function(data, centroids, weighted = FALSE)
{
distances <- rep(0, nrow(data))
for (i in 1:nrow(data))
{
min.dist <- 1000000000
for (j in 1:nrow(centroids))
{
potential.dist <- gaps.dist(data[i,], centroids[j,], weighted = weighted)
if (potential.dist < min.dist)
{
min.dist <- potential.dist
}
}
distances[i] <- min.dist
}
return(distances)
}
gaps.inert <- function(data, centroids, metric = "mi", weighted = FALSE)
{
distances <- rep(0,sum(data$COUNT))
iter <- 0
for (i in 1:nrow(data))
{
min.dist <- 1000000000
for (j in 1:nrow(centroids))
{
potential.dist <- gaps.dist(data[i,], centroids[j,], weighted = weighted)
if (potential.dist < min.dist)
{
min.dist <- potential.dist
}
}
for (k in 1:data[i,]$COUNT)
{
iter <- iter + 1
distances[iter] <- min.dist
}
}
return(distances)
}
gaps.dist <- function(loc.one, loc.two, metric = "mi", weighted = FALSE)
{
lat1 <- loc.one$Y * pi / 180.0
lat2 <- loc.two$Y * pi / 180.0
long1 <- loc.one$X * pi / 180.0
long2 <- loc.two$X * pi / 180.0
d <- acos(sin(lat1)*sin(lat2) + cos(lat1)*cos(lat2) * cos(long2-long1))
if (is.nan(d))
{
return(0)
}
if (metric == "mi")
{
dist <- d * 3958.8
}
else
{
dist <- d * 6371.0
}
if (weighted == TRUE)
{
return(dist * loc.one$COUNT)
}
return(dist)
}
gaps <- function(data, potential.centers, k = 6, init = "forgy", generations = 20, epoch = 6)
{
cur_generation <- 0
fitness.score <- Inf
opt.centers <- NULL
while(cur_generation < generations)
{
cur_generation <- cur_generation + 1
potential.opt.centers <- gaps.gen(data, potential.centers, k = k, init = init, iter = epoch)
distances <- gaps.calc.dists(data, potential.opt.centers, weighted = TRUE)
new.fitness.score <- sum(distances * distances)
if (new.fitness.score < fitness.score)
{
fitness.score <- new.fitness.score
opt.centers <- potential.opt.centers
}
}
return(opt.centers)
} | /gaps.R | no_license | deino475/GAPS-Algo | R | false | false | 5,294 | r | gaps.gen <- function(data, potential.centers, k = 6, init = "forgy", iter = 6)
{
#Randomly select potential centers and set them to be centroids
centroids <- gaps.init(data, potential.centers, init, k)
#Store the centroid cluster of each data point in this vector
cluster <- rep(0, nrow(data))
#The current epoch
cur_iter <- 1
#While the current epoch is not equal to the maximum epoch
while(cur_iter <= iter)
{
#Increment the current epoch
cur_iter <- cur_iter + 1
#For each data point in the list of data points
for (i in 1:nrow(data))
{
#Stores the smallest distance calculated so far
min.dist <- 1000000000
#For each centroid in the list of centroids
for (j in 1:k)
{
#Calculate the distance
centroid.dist <- gaps.dist(data[i,], centroids[j,], weighted = TRUE)
#If the distance is smaller than the minimum distance variable
if (centroid.dist <= min.dist)
{
#Set the cluster of the data point to the index of the centroid
cluster[i] <- j
#Set the minimum distance variable to the distance to the centroid
min.dist <- centroid.dist
}
}
}
#For each centroid in the list of centroids
for (i in 1:k)
{
x.coords <- 0.0
y.coords <- 0.0
total.in.cluster <- 0
#For each data point in the list of data points
for (j in 1:nrow(data))
{
#If the data point is in the cluster
if (cluster[j] == i)
{
#Add the current coordinates number to the x.coords and y.coords variable
x.coords <- x.coords + data[j,]$X * data[j,]$COUNT
y.coords <- y.coords + data[j,]$Y * data[j,]$COUNT
#Increment the total.in.cluster variable by 1 times the count (weight)
total.in.cluster <- total.in.cluster + 1 * data[j,]$COUNT
}
}
#If there is more than one data point in the cluster
if (total.in.cluster > 0)
{
#Calculate the mean of the x.coords and y.coords variable
centroids[i,]$X <- x.coords / total.in.cluster
centroids[i,]$Y <- y.coords / total.in.cluster
}
}
#For each centroid in the list of centroids
for (i in 1:k)
{
x.coords <- 0.0
y.coords <- 0.0
min.dist <- 10000000
#For each data point in the list of data points
for (j in 1:nrow(potential.centers))
{
potential.center.dist <- gaps.dist(centroids[i,], potential.centers[j,])
if (potential.center.dist < min.dist)
{
min.dist <- potential.center.dist
x.coords <- potential.centers[j,]$X
y.coords <- potential.centers[j,]$Y
centroids[i,]$NAME <- potential.centers[j,]$NAME
}
}
centroids[i,]$X <- x.coords
centroids[i,]$Y <- y.coords
}
}
#Return the centroids to the user
return(centroids)
}
gaps.init <- function(data, potential.centers, init, k)
{
#If the user wants to use the Forgy Method to initialize
if (init == "forgy")
{
#Randomly select potential centers from
return(potential.centers[sample(nrow(potential.centers),k),])
}
}
gaps.calc.dists <- function(data, centroids, weighted = FALSE)
{
distances <- rep(0, nrow(data))
for (i in 1:nrow(data))
{
min.dist <- 1000000000
for (j in 1:nrow(centroids))
{
potential.dist <- gaps.dist(data[i,], centroids[j,], weighted = weighted)
if (potential.dist < min.dist)
{
min.dist <- potential.dist
}
}
distances[i] <- min.dist
}
return(distances)
}
gaps.inert <- function(data, centroids, metric = "mi", weighted = FALSE)
{
distances <- rep(0,sum(data$COUNT))
iter <- 0
for (i in 1:nrow(data))
{
min.dist <- 1000000000
for (j in 1:nrow(centroids))
{
potential.dist <- gaps.dist(data[i,], centroids[j,], weighted = weighted)
if (potential.dist < min.dist)
{
min.dist <- potential.dist
}
}
for (k in 1:data[i,]$COUNT)
{
iter <- iter + 1
distances[iter] <- min.dist
}
}
return(distances)
}
gaps.dist <- function(loc.one, loc.two, metric = "mi", weighted = FALSE)
{
lat1 <- loc.one$Y * pi / 180.0
lat2 <- loc.two$Y * pi / 180.0
long1 <- loc.one$X * pi / 180.0
long2 <- loc.two$X * pi / 180.0
d <- acos(sin(lat1)*sin(lat2) + cos(lat1)*cos(lat2) * cos(long2-long1))
if (is.nan(d))
{
return(0)
}
if (metric == "mi")
{
dist <- d * 3958.8
}
else
{
dist <- d * 6371.0
}
if (weighted == TRUE)
{
return(dist * loc.one$COUNT)
}
return(dist)
}
gaps <- function(data, potential.centers, k = 6, init = "forgy", generations = 20, epoch = 6)
{
cur_generation <- 0
fitness.score <- Inf
opt.centers <- NULL
while(cur_generation < generations)
{
cur_generation <- cur_generation + 1
potential.opt.centers <- gaps.gen(data, potential.centers, k = k, init = init, iter = epoch)
distances <- gaps.calc.dists(data, potential.opt.centers, weighted = TRUE)
new.fitness.score <- sum(distances * distances)
if (new.fitness.score < fitness.score)
{
fitness.score <- new.fitness.score
opt.centers <- potential.opt.centers
}
}
return(opt.centers)
} |
test_that("summary list works", {
headers <- c("Name", "Date of birth", "Contact information", "Contact details")
info <- c(
"Sarah Philips",
"5 January 1978",
"72 Guild Street <br> London <br> SE23 6FH",
"07700 900457 <br> sarah.phillips@example.com")
summary_check <- gov_summary("sumID", headers, info, action = FALSE)
expect_equal(length(summary_check$children[[1]]), 4)
summary_check <- gov_summary(
"sumID", headers, info, action = TRUE, border = FALSE)
expect_identical(
summary_check$children[[1]]$Name$children[[3]][[3]][[1]][[1]],
"button"
)
})
| /tests/testthat/test-summary.R | no_license | moj-analytical-services/shinyGovstyle | R | false | false | 612 | r | test_that("summary list works", {
headers <- c("Name", "Date of birth", "Contact information", "Contact details")
info <- c(
"Sarah Philips",
"5 January 1978",
"72 Guild Street <br> London <br> SE23 6FH",
"07700 900457 <br> sarah.phillips@example.com")
summary_check <- gov_summary("sumID", headers, info, action = FALSE)
expect_equal(length(summary_check$children[[1]]), 4)
summary_check <- gov_summary(
"sumID", headers, info, action = TRUE, border = FALSE)
expect_identical(
summary_check$children[[1]]$Name$children[[3]][[3]][[1]][[1]],
"button"
)
})
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RebuildPlot.R
\name{DoProjectPlots}
\alias{DoProjectPlots}
\title{Make plots from Rebuilder program}
\usage{
DoProjectPlots(
dirn = "C:/myfiles/",
fileN = c("res.csv"),
Titles = "",
ncols = 200,
Plots = list(1:25),
Options = list(c(1:9)),
LegLoc = "bottomright",
yearmax = -1,
Outlines = c(2, 2),
OutlineMulti = c(2, 2),
AllTraj = c(1, 2, 3, 4),
AllInd = c(1, 2, 3, 4, 5, 6, 7),
BioType = "Spawning biomass",
CatchUnit = "(mt)",
BioUnit = "(mt)",
BioScalar = 1,
ColorsUsed = "default",
Labels = "default",
pdf = FALSE,
pwidth = 6.5,
pheight = 5,
lwd = 2
)
}
\arguments{
\item{dirn}{Directory (or vector of directories) where rebuilder output
files are stored.}
\item{fileN}{Vector of filenames containing rebuilder output.
Default=c("res.csv").}
\item{Titles}{Titles for plots when using multiple filenames. Default="".}
\item{ncols}{Number of columns to read in output file (fileN). Default=200.}
\item{Plots}{List to get specific plots (currently 1 through 8).
Default=list(1:25). If there are multiple files, supply a list of vectors,
e.g. list(c(1,5),c(2:5))}
\item{Options}{List to get specific strategies in the trajectory plots.
Default=list(c(1:9)).If there are multiple files, supply a list of vectors,
e.g. list(c(1,5),c(2:5))}
\item{LegLoc}{Location for the legend (for plots with a legend).
Default="bottomright".}
\item{yearmax}{Maximum year to show in the plots. Set negative to show all
years. Default=-1.}
\item{Outlines}{Number of rows, columns for some of the plots.
Default=c(2,2).}
\item{OutlineMulti}{Number of rows, columns for other plots.
Default=c(2,2).}
\item{AllTraj}{Vector of trajectories to show. Default=c(1,2,3,4).}
\item{AllInd}{Vector of individual plots to show. Default=c(1,2,3,4,5,6,7).}
\item{BioType}{Label for biomass type. Default="Spawning biomass".}
\item{CatchUnit}{Units of catch. Default="(mt)".}
\item{BioUnit}{Units of biomass. Default="(mt)".}
\item{BioScalar}{Scalar for biomass plot. Default=1.}
\item{ColorsUsed}{Optional vector for alternative line colors.
Default="default".}
\item{Labels}{Optional vector for alternative legend labels.
Default="default".}
\item{pdf}{Option to send figures to pdf file instead of plot window in
Rgui. Default=FALSE.}
\item{pwidth}{Width of the plot window or PDF file (in inches). Default=7.}
\item{pheight}{Height of the plot window or PDF file (in inches). Default=7.}
\item{lwd}{Line width for many of the plot elements. Default=2.}
}
\description{
Make a set of plots based on output from Andre Punt's Rebuilder program.
}
\examples{
\dontrun{
# example with one file
DoProjectPlots(
dirn = "c:/myfiles/", Plots = 1:8,
Options = c(1, 2, 3, 4, 5, 9), LegLoc = "bottomleft"
)
# example with multiple files
# Plots - set to get specific plots
# Options - set to get specific strategies in the trajectory plots
Titles <- c("Res1", "Res2", "Res3")
Plots <- list(c(1:9), c(6:7))
Options <- list(c(7:9, 3), c(5, 7))
DoProjectPlots(
fileN = c("res1.csv", "res2.csv"), Titles = Titles, Plots = Plots,
Options = Options, LegLoc = "bottomleft", yearmax = -1,
Outlines = c(2, 2), OutlineMulti = c(3, 3), AllTraj = c(1:4),
AllInd = c(1:7), BioType = "Spawning numbers", BioUnit = "(lb)",
BioScalar = 1000, CatchUnit = "(lb)",
ColorsUse = rep(c("red", "blue"), 5),
Labels = c("A", "B", "C", "D", "E", "F")
)
}
}
\author{
Andre Punt, Ian Taylor
}
| /man/DoProjectPlots.Rd | no_license | allanhicks/r4ss | R | false | true | 3,490 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RebuildPlot.R
\name{DoProjectPlots}
\alias{DoProjectPlots}
\title{Make plots from Rebuilder program}
\usage{
DoProjectPlots(
dirn = "C:/myfiles/",
fileN = c("res.csv"),
Titles = "",
ncols = 200,
Plots = list(1:25),
Options = list(c(1:9)),
LegLoc = "bottomright",
yearmax = -1,
Outlines = c(2, 2),
OutlineMulti = c(2, 2),
AllTraj = c(1, 2, 3, 4),
AllInd = c(1, 2, 3, 4, 5, 6, 7),
BioType = "Spawning biomass",
CatchUnit = "(mt)",
BioUnit = "(mt)",
BioScalar = 1,
ColorsUsed = "default",
Labels = "default",
pdf = FALSE,
pwidth = 6.5,
pheight = 5,
lwd = 2
)
}
\arguments{
\item{dirn}{Directory (or vector of directories) where rebuilder output
files are stored.}
\item{fileN}{Vector of filenames containing rebuilder output.
Default=c("res.csv").}
\item{Titles}{Titles for plots when using multiple filenames. Default="".}
\item{ncols}{Number of columns to read in output file (fileN). Default=200.}
\item{Plots}{List to get specific plots (currently 1 through 8).
Default=list(1:25). If there are multiple files, supply a list of vectors,
e.g. list(c(1,5),c(2:5))}
\item{Options}{List to get specific strategies in the trajectory plots.
Default=list(c(1:9)).If there are multiple files, supply a list of vectors,
e.g. list(c(1,5),c(2:5))}
\item{LegLoc}{Location for the legend (for plots with a legend).
Default="bottomright".}
\item{yearmax}{Maximum year to show in the plots. Set negative to show all
years. Default=-1.}
\item{Outlines}{Number of rows, columns for some of the plots.
Default=c(2,2).}
\item{OutlineMulti}{Number of rows, columns for other plots.
Default=c(2,2).}
\item{AllTraj}{Vector of trajectories to show. Default=c(1,2,3,4).}
\item{AllInd}{Vector of individual plots to show. Default=c(1,2,3,4,5,6,7).}
\item{BioType}{Label for biomass type. Default="Spawning biomass".}
\item{CatchUnit}{Units of catch. Default="(mt)".}
\item{BioUnit}{Units of biomass. Default="(mt)".}
\item{BioScalar}{Scalar for biomass plot. Default=1.}
\item{ColorsUsed}{Optional vector for alternative line colors.
Default="default".}
\item{Labels}{Optional vector for alternative legend labels.
Default="default".}
\item{pdf}{Option to send figures to pdf file instead of plot window in
Rgui. Default=FALSE.}
\item{pwidth}{Width of the plot window or PDF file (in inches). Default=7.}
\item{pheight}{Height of the plot window or PDF file (in inches). Default=7.}
\item{lwd}{Line width for many of the plot elements. Default=2.}
}
\description{
Make a set of plots based on output from Andre Punt's Rebuilder program.
}
\examples{
\dontrun{
# example with one file
DoProjectPlots(
dirn = "c:/myfiles/", Plots = 1:8,
Options = c(1, 2, 3, 4, 5, 9), LegLoc = "bottomleft"
)
# example with multiple files
# Plots - set to get specific plots
# Options - set to get specific strategies in the trajectory plots
Titles <- c("Res1", "Res2", "Res3")
Plots <- list(c(1:9), c(6:7))
Options <- list(c(7:9, 3), c(5, 7))
DoProjectPlots(
fileN = c("res1.csv", "res2.csv"), Titles = Titles, Plots = Plots,
Options = Options, LegLoc = "bottomleft", yearmax = -1,
Outlines = c(2, 2), OutlineMulti = c(3, 3), AllTraj = c(1:4),
AllInd = c(1:7), BioType = "Spawning numbers", BioUnit = "(lb)",
BioScalar = 1000, CatchUnit = "(lb)",
ColorsUse = rep(c("red", "blue"), 5),
Labels = c("A", "B", "C", "D", "E", "F")
)
}
}
\author{
Andre Punt, Ian Taylor
}
|
\name{hyp}
\alias{hyp}
\alias{dhyp}
\alias{phyp}
\alias{qhyp}
\alias{rhyp}
\title{Hyperbolic distribution}
\description{
Density, distribution function, quantile function
and random generation for the hyperbolic distribution.
}
\usage{
dhyp(x, alpha = 1, beta = 0, delta = 1, mu = 0,
pm = c("1", "2", "3", "4"), log = FALSE)
phyp(q, alpha = 1, beta = 0, delta = 1, mu = 0,
pm = c("1", "2", "3", "4"), \dots)
qhyp(p, alpha = 1, beta = 0, delta = 1, mu = 0,
pm = c("1", "2", "3", "4"), \dots)
rhyp(n, alpha = 1, beta = 0, delta = 1, mu = 0,
pm = c("1", "2", "3", "4"))
}
\arguments{
\item{alpha, beta, delta, mu}{
shape parameter \code{alpha};
skewness parameter \code{beta}, \code{abs(beta)} is in the
range (0, alpha);
scale parameter \code{delta}, \code{delta} must be zero or
positive;
location parameter \code{mu}, by default 0.
These is the meaning of the parameters in the first
parameterization \code{pm=1} which is the default
parameterization selection.
In the second parameterization, \code{pm=2} \code{alpha}
and \code{beta} take the meaning of the shape parameters
(usually named) \code{zeta} and \code{rho}.
In the third parameterization, \code{pm=3} \code{alpha}
and \code{beta} take the meaning of the shape parameters
(usually named) \code{xi} and \code{chi}.
In the fourth parameterization, \code{pm=4} \code{alpha}
and \code{beta} take the meaning of the shape parameters
(usually named) \code{a.bar} and \code{b.bar}.
}
\item{n}{
number of observations.
}
\item{p}{
a numeric vector of probabilities.
}
\item{pm}{
an integer value between \code{1} and \code{4} for the
selection of the parameterization. The default takes the
first parameterization.
}
\item{x, q}{
a numeric vector of quantiles.
}
\item{log}{
a logical, if TRUE, probabilities \code{p} are given as
\code{log(p)}.
}
\item{\dots}{
arguments to be passed to the function \code{integrate}.
}
}
\value{
All values for the \code{*hyp} functions are numeric vectors:
\code{d*} returns the density,
\code{p*} returns the distribution function,
\code{q*} returns the quantile function, and
\code{r*} generates random deviates.
All values have attributes named \code{"param"} listing
the values of the distributional parameters.
}
\details{
The generator \code{rhyp} is based on the HYP algorithm given
by Atkinson (1982).
}
\author{
David Scott for code implemented from \R's
contributed package \code{HyperbolicDist}.
}
\references{
Atkinson, A.C. (1982);
\emph{The simulation of generalized inverse Gaussian and hyperbolic
random variables},
SIAM J. Sci. Stat. Comput. 3, 502--515.
Barndorff-Nielsen O. (1977);
\emph{Exponentially decreasing distributions for the logarithm of
particle size},
Proc. Roy. Soc. Lond., A353, 401--419.
Barndorff-Nielsen O., Blaesild, P. (1983);
\emph{Hyperbolic distributions. In Encyclopedia of Statistical
Sciences},
Eds., Johnson N.L., Kotz S. and Read C.B.,
Vol. 3, pp. 700--707. New York: Wiley.
Raible S. (2000);
\emph{Levy Processes in Finance: Theory, Numerics and Empirical Facts},
PhD Thesis, University of Freiburg, Germany, 161 pages.
}
\examples{
## hyp -
set.seed(1953)
r = rhyp(5000, alpha = 1, beta = 0.3, delta = 1)
plot(r, type = "l", col = "steelblue",
main = "hyp: alpha=1 beta=0.3 delta=1")
## hyp -
# Plot empirical density and compare with true density:
hist(r, n = 25, probability = TRUE, border = "white", col = "steelblue")
x = seq(-5, 5, 0.25)
lines(x, dhyp(x, alpha = 1, beta = 0.3, delta = 1))
## hyp -
# Plot df and compare with true df:
plot(sort(r), (1:5000/5000), main = "Probability", col = "steelblue")
lines(x, phyp(x, alpha = 1, beta = 0.3, delta = 1))
## hyp -
# Compute Quantiles:
qhyp(phyp(seq(-5, 5, 1), alpha = 1, beta = 0.3, delta = 1),
alpha = 1, beta = 0.3, delta = 1)
}
\keyword{distribution}
| /man/dist-hyp.Rd | no_license | cran/fBasics | R | false | false | 4,335 | rd | \name{hyp}
\alias{hyp}
\alias{dhyp}
\alias{phyp}
\alias{qhyp}
\alias{rhyp}
\title{Hyperbolic distribution}
\description{
Density, distribution function, quantile function
and random generation for the hyperbolic distribution.
}
\usage{
dhyp(x, alpha = 1, beta = 0, delta = 1, mu = 0,
pm = c("1", "2", "3", "4"), log = FALSE)
phyp(q, alpha = 1, beta = 0, delta = 1, mu = 0,
pm = c("1", "2", "3", "4"), \dots)
qhyp(p, alpha = 1, beta = 0, delta = 1, mu = 0,
pm = c("1", "2", "3", "4"), \dots)
rhyp(n, alpha = 1, beta = 0, delta = 1, mu = 0,
pm = c("1", "2", "3", "4"))
}
\arguments{
\item{alpha, beta, delta, mu}{
shape parameter \code{alpha};
skewness parameter \code{beta}, \code{abs(beta)} is in the
range (0, alpha);
scale parameter \code{delta}, \code{delta} must be zero or
positive;
location parameter \code{mu}, by default 0.
These is the meaning of the parameters in the first
parameterization \code{pm=1} which is the default
parameterization selection.
In the second parameterization, \code{pm=2} \code{alpha}
and \code{beta} take the meaning of the shape parameters
(usually named) \code{zeta} and \code{rho}.
In the third parameterization, \code{pm=3} \code{alpha}
and \code{beta} take the meaning of the shape parameters
(usually named) \code{xi} and \code{chi}.
In the fourth parameterization, \code{pm=4} \code{alpha}
and \code{beta} take the meaning of the shape parameters
(usually named) \code{a.bar} and \code{b.bar}.
}
\item{n}{
number of observations.
}
\item{p}{
a numeric vector of probabilities.
}
\item{pm}{
an integer value between \code{1} and \code{4} for the
selection of the parameterization. The default takes the
first parameterization.
}
\item{x, q}{
a numeric vector of quantiles.
}
\item{log}{
a logical, if TRUE, probabilities \code{p} are given as
\code{log(p)}.
}
\item{\dots}{
arguments to be passed to the function \code{integrate}.
}
}
\value{
All values for the \code{*hyp} functions are numeric vectors:
\code{d*} returns the density,
\code{p*} returns the distribution function,
\code{q*} returns the quantile function, and
\code{r*} generates random deviates.
All values have attributes named \code{"param"} listing
the values of the distributional parameters.
}
\details{
The generator \code{rhyp} is based on the HYP algorithm given
by Atkinson (1982).
}
\author{
David Scott for code implemented from \R's
contributed package \code{HyperbolicDist}.
}
\references{
Atkinson, A.C. (1982);
\emph{The simulation of generalized inverse Gaussian and hyperbolic
random variables},
SIAM J. Sci. Stat. Comput. 3, 502--515.
Barndorff-Nielsen O. (1977);
\emph{Exponentially decreasing distributions for the logarithm of
particle size},
Proc. Roy. Soc. Lond., A353, 401--419.
Barndorff-Nielsen O., Blaesild, P. (1983);
\emph{Hyperbolic distributions. In Encyclopedia of Statistical
Sciences},
Eds., Johnson N.L., Kotz S. and Read C.B.,
Vol. 3, pp. 700--707. New York: Wiley.
Raible S. (2000);
\emph{Levy Processes in Finance: Theory, Numerics and Empirical Facts},
PhD Thesis, University of Freiburg, Germany, 161 pages.
}
\examples{
## hyp -
set.seed(1953)
r = rhyp(5000, alpha = 1, beta = 0.3, delta = 1)
plot(r, type = "l", col = "steelblue",
main = "hyp: alpha=1 beta=0.3 delta=1")
## hyp -
# Plot empirical density and compare with true density:
hist(r, n = 25, probability = TRUE, border = "white", col = "steelblue")
x = seq(-5, 5, 0.25)
lines(x, dhyp(x, alpha = 1, beta = 0.3, delta = 1))
## hyp -
# Plot df and compare with true df:
plot(sort(r), (1:5000/5000), main = "Probability", col = "steelblue")
lines(x, phyp(x, alpha = 1, beta = 0.3, delta = 1))
## hyp -
# Compute Quantiles:
qhyp(phyp(seq(-5, 5, 1), alpha = 1, beta = 0.3, delta = 1),
alpha = 1, beta = 0.3, delta = 1)
}
\keyword{distribution}
|
/8 - funcion predicciones en base de datos ensemble minimo - glm.R | no_license | juanfranciscomorales/Forward-Stepwise-glm-test-LRT---modelos-clasificadores | R | false | false | 3,044 | r | ||
# Y=audit quality
# D=big4
# X=controls variables
rm(list=ls())
setwd("D:\\报表")
#install.packages("readxl")
library(readxl)
data <- read_excel("dataForML.xlsx")
y <- data$RM
d <- data$big4
x="lnA+ATURN+MKT+roa+LEV+curr"
sed <- 123 # default=123
setwd("E:\\R-econometrics\\myDML")
source("dml_boost.R")
dml_boost <- dml_boost(data=data,y,x,d,sed=123)
| /MyDML_boost.R | no_license | lixiongyang/DoubleMachineLearning | R | false | false | 384 | r | # Y=audit quality
# D=big4
# X=controls variables
rm(list=ls())
setwd("D:\\报表")
#install.packages("readxl")
library(readxl)
data <- read_excel("dataForML.xlsx")
y <- data$RM
d <- data$big4
x="lnA+ATURN+MKT+roa+LEV+curr"
sed <- 123 # default=123
setwd("E:\\R-econometrics\\myDML")
source("dml_boost.R")
dml_boost <- dml_boost(data=data,y,x,d,sed=123)
|
cdata( ~ var.boot, data = Boys.boot)
cdata( ~ var.boot, data = Boys.boot)[1:2] %>% sqrt()
| /inst/snippet/boot-boys04.R | no_license | rpruim/fastR2 | R | false | false | 91 | r | cdata( ~ var.boot, data = Boys.boot)
cdata( ~ var.boot, data = Boys.boot)[1:2] %>% sqrt()
|
#' @title TableSubgroupGLM: Sub-group analysis table for GLM.
#' @description Sub-group analysis table for GLM.
#' @param formula formula with survival analysis.
#' @param var_subgroup 1 sub-group variable for analysis, Default: NULL
#' @param var_cov Variables for additional adjust, Default: NULL
#' @param data Data or svydesign in survey package.
#' @param family family, "gaussian" or "binomial"
#' @param decimal.estimate Decimal for estimate, Default: 2
#' @param decimal.percent Decimal for percent, Default: 1
#' @param decimal.pvalue Decimal for pvalue, Default: 3
#' @return Sub-group analysis table.
#' @details This result is used to make forestplot.
#' @examples
#' library(survival);library(dplyr)
#' lung %>%
#' mutate(status = as.integer(status == 1),
#' sex = factor(sex),
#' kk = factor(as.integer(pat.karno >= 70))) -> lung
#' TableSubgroupGLM(status ~ sex, data = lung, family = "binomial")
#' TableSubgroupGLM(status ~ sex, var_subgroup = "kk", data = lung, family = "binomial")
#'
#' ## survey design
#' library(survey)
#' data.design <- svydesign(id = ~1, data = lung)
#' TableSubgroupGLM(status ~ sex, data = data.design, family = "binomial")
#' TableSubgroupGLM(status ~ sex, var_subgroup = "kk", data = data.design, family = "binomial")
#' @seealso
#' \code{\link[purrr]{safely}},\code{\link[purrr]{map}},\code{\link[purrr]{map2}}
#' \code{\link[stats]{glm}}
#' \code{\link[survey]{svyglm}}
#' \code{\link[stats]{confint}}
#' @rdname TableSubgroupGLM
#' @export
#' @importFrom purrr possibly map_dbl map map2
#' @importFrom dplyr group_split select filter mutate bind_cols
#' @importFrom magrittr %>%
#' @importFrom survey svyglm
#' @importFrom stats glm confint coefficients anova gaussian quasibinomial
#' @importFrom utils tail
TableSubgroupGLM <- function(formula, var_subgroup = NULL, var_cov = NULL, data, family = "binomial", decimal.estimate = 2, decimal.percent = 1, decimal.pvalue = 3){
. <- NULL
if (length(formula[[3]]) > 1) stop("Formula must contains only 1 independent variable")
if (any(class(data) == "survey.design" & !is.null(var_subgroup))){
if (is.numeric(data$variables[[var_subgroup]])) stop("var_subgroup must categorical.")
if (length(levels(data$variables[[as.character(formula[[3]])]])) != 2) stop("Independent variable must have 2 levels.")
} else if(any(class(data) == "data.frame" & !is.null(var_subgroup))){
if (is.numeric(data[[var_subgroup]])) stop("var_subgroup must categorical.")
if (length(levels(data[[as.character(formula[[3]])]])) != 2) stop("Independent variable must have 2 levels.")
}
## functions with error
possible_table <- purrr::possibly(table, NA)
possible_prop.table <- purrr::possibly(function(x){prop.table(x, 1)[2, ] * 100}, NA)
possible_pv <- purrr::possibly(function(x){summary(x)[["coefficients"]][2, ] %>% tail(1)}, NA)
possible_glm <- purrr::possibly(stats::glm, NA)
possible_svyglm <- purrr::possibly(survey::svyglm, NA)
possible_confint <- purrr::possibly(stats::confint, NA)
possible_modely <- purrr::possibly(function(x){purrr::map_dbl(x, .[["y"]], 1)}, NA)
possible_rowone <- purrr::possibly(function(x){x[2, ]}, NA)
var_cov <- setdiff(var_cov, c(as.character(formula[[3]]), var_subgroup))
family.svyglm <- gaussian()
if (family == "binomial") family.svyglm <- quasibinomial()
if (is.null(var_subgroup)){
if (!is.null(var_cov)){
formula <- as.formula(paste0(deparse(formula), " + ", paste(var_cov, collapse = "+")))
}
if (any(class(data) == "survey.design")){
model <- survey::svyglm(formula, design = data, x= T, family = family.svyglm)
if (!is.null(model$xlevels) & length(model$xlevels[[1]]) != 2) stop("Categorical independent variable must have 2 levels.")
} else{
model <- stats::glm(formula, data = data, x= TRUE, family = family)
if (!is.null(model$xlevels) & length(model$xlevels[[1]]) != 2) stop("Categorical independent variable must have 2 levels.")
}
Point.Estimate <- round(coef(model), decimal.estimate)[2]
CI <- round(confint(model)[2, ], decimal.estimate)
if(family == "binomial"){
Point.Estimate <- round(exp(coef(model)), decimal.estimate)[2]
CI <- round(exp(confint(model)[2, ]), decimal.estimate)
}
#if (length(Point.Estimate) > 1){
# stop("Formula must contain 1 independent variable only.")
#}
#event <- model$y
#prop <- round(prop.table(table(event, model$x[, 1]), 2)[2, ] * 100, decimal.percent)
pv <- round(tail(summary(model)$coefficients[2, ], 1), decimal.pvalue)
data.frame(Variable = "Overall", Count = length(model$y), Percent = 100, `Point Estimate` = Point.Estimate, Lower = CI[1], Upper = CI[2]) %>%
dplyr::mutate(`P value` = ifelse(pv >= 0.001, pv, "<0.001"), `P for interaction` = NA) -> out
if (family == "binomial"){
names(out)[4] <- "OR"
}
return(out)
} else if (length(var_subgroup) > 1 | any(grepl(var_subgroup, formula))){
stop("Please input correct subgroup variable.")
} else{
if (!is.null(var_cov)){
formula <- as.formula(paste0(deparse(formula), " + ", paste(var_cov, collapse = "+")))
}
if (any(class(data) == "survey.design")){
data$variables[[var_subgroup]] %>% table %>% names -> label_val
label_val %>% purrr::map(~possible_svyglm(formula, design = subset(data, get(var_subgroup) == .), x = TRUE, family = family.svyglm)) -> model
xlev <- survey::svyglm(formula, design = data)$xlevels
xlabel <- names(attr(model[[which(!is.na(model))[1]]]$x, "contrast"))[1]
pvs_int <- possible_svyglm(as.formula(gsub(xlabel, paste(xlabel, "*", var_subgroup, sep=""), deparse(formula))), design = data, family = family.svyglm) %>% summary %>% coefficients
pv_int <- round(pvs_int[nrow(pvs_int), ncol(pvs_int)], decimal.pvalue)
if (!is.null(xlev) & length(xlev[[1]]) != 2) stop("Categorical independent variable must have 2 levels.")
if (length(label_val) > 2){
model.int <- survey::svyglm(as.formula(gsub(xlabel, paste(xlabel, "*", var_subgroup, sep=""), deparse(formula))), design = data, family = family.svyglm)
pv_anova <- anova(model.int, method = "Wald")
pv_int <- pv_anova[[length(pv_anova)]][[7]]
}
Count <- as.vector(table(data$variables[[var_subgroup]]))
} else{
data %>% filter(!is.na(get(var_subgroup))) %>% group_split(get(var_subgroup)) %>% purrr::map(~possible_glm(formula, data = ., x= T, family = family)) -> model
data %>% filter(!is.na(get(var_subgroup))) %>% select(var_subgroup) %>% table %>% names -> label_val
xlev <- stats::glm(formula, data = data)$xlevels
xlabel <- names(attr(model[[which(!is.na(model))[1]]]$x, "contrast"))[1]
model.int <- possible_glm(as.formula(gsub(xlabel, paste(xlabel, "*", var_subgroup, sep=""), deparse(formula))), data = data, family = family)
pvs_int <- model.int %>% summary %>% coefficients
pv_int <- round(pvs_int[nrow(pvs_int), ncol(pvs_int)], decimal.pvalue)
if (!is.null(xlev) & length(xlev[[1]]) != 2) stop("Categorical independent variable must have 2 levels.")
if (length(label_val) > 2){
pv_anova <- anova(model.int, test = "Chisq")
pv_int <- round(pv_anova[nrow(pv_anova), 5], decimal.pvalue)
}
Count <- as.vector(table(data[[var_subgroup]]))
}
Estimate <- model %>% purrr::map("coefficients", .default = NA) %>% purrr::map_dbl(2, .default = NA)
CI0 <- model %>% purrr::map(possible_confint) %>% purrr::map(possible_rowone) %>% Reduce(rbind, .)
Point.Estimate <- round(Estimate, decimal.estimate)
CI <- round(CI0, decimal.estimate)
if (family == "binomial"){
Point.Estimate <- round(exp(Estimate), decimal.estimate)
CI <- round(exp(CI0), decimal.estimate)
}
model %>% purrr::map(possible_pv) %>% purrr::map_dbl(~round(., decimal.pvalue)) -> pv
data.frame(Variable = paste(" ", label_val) , Count = Count, Percent = round(Count/sum(Count) * 100, decimal.percent), "Point Estimate" = Point.Estimate, Lower = CI[, 1], Upper = CI[, 2]) %>%
dplyr::mutate(`P value` = ifelse(pv >= 0.001, pv, "<0.001"), `P for interaction` = NA) -> out
if (family == "binomial"){
names(out)[4] <- "OR"
}
return(rbind(c(var_subgroup, rep(NA, ncol(out) - 2), ifelse(pv_int >= 0.001, pv_int, "<0.001")), out))
}
}
#' @title TableSubgroupMultiGLM: Multiple sub-group analysis table for GLM.
#' @description Multiple sub-group analysis table for GLM.
#' @param formula formula with survival analysis.
#' @param var_subgroups Multiple sub-group variables for analysis, Default: NULL
#' @param var_cov Variables for additional adjust, Default: NULL
#' @param data Data or svydesign in survey package.
#' @param family family, "gaussian" or "binomial"
#' @param decimal.estimate Decimal for estimate, Default: 2
#' @param decimal.percent Decimal for percent, Default: 1
#' @param decimal.pvalue Decimal for pvalue, Default: 3
#' @param line Include new-line between sub-group variables, Default: F
#' @return Multiple sub-group analysis table.
#' @details This result is used to make forestplot.
#' @examples
#' library(survival);library(dplyr)
#' lung %>%
#' mutate(status = as.integer(status == 1),
#' sex = factor(sex),
#' kk = factor(as.integer(pat.karno >= 70)),
#' kk1 = factor(as.integer(pat.karno >= 60))) -> lung
#' TableSubgroupMultiGLM(status ~ sex, var_subgroups = c("kk", "kk1"),
#' data=lung, line = TRUE, family = "binomial")
#'
#' ## survey design
#' library(survey)
#' data.design <- svydesign(id = ~1, data = lung)
#' TableSubgroupMultiGLM(status ~ sex, var_subgroups = c("kk", "kk1"),
#' data = data.design, family = "binomial")
#' @seealso
#' \code{\link[purrr]{map}}
#' \code{\link[dplyr]{bind}}
#' @rdname TableSubgroupMultiGLM
#' @export
#' @importFrom purrr map
#' @importFrom magrittr %>%
#' @importFrom dplyr bind_rows
TableSubgroupMultiGLM <- function(formula, var_subgroups = NULL, var_cov = NULL, data, family = "binomial", decimal.estimate = 2, decimal.percent = 1, decimal.pvalue = 3, line = F){
. <- NULL
out.all <- TableSubgroupGLM(formula, var_subgroup = NULL, var_cov = var_cov, data = data, family = family, decimal.estimate = decimal.estimate, decimal.percent = decimal.percent, decimal.pvalue = decimal.pvalue)
if (is.null(var_subgroups)){
return(out.all)
} else {
out.list <- purrr::map(var_subgroups, ~TableSubgroupGLM(formula, var_subgroup = ., var_cov = var_cov, data = data, family = family, decimal.estimate = decimal.estimate, decimal.percent = decimal.percent, decimal.pvalue = decimal.pvalue))
if (line){
out.newline <- out.list %>% purrr::map(~rbind(NA, .))
return(rbind(out.all, out.newline %>% dplyr::bind_rows()))
} else{
return(rbind(out.all, out.list %>% dplyr::bind_rows()))
}
}
}
| /R/forestglm.R | no_license | OhSehunXX/jstable | R | false | false | 11,075 | r | #' @title TableSubgroupGLM: Sub-group analysis table for GLM.
#' @description Sub-group analysis table for GLM.
#' @param formula formula with survival analysis.
#' @param var_subgroup 1 sub-group variable for analysis, Default: NULL
#' @param var_cov Variables for additional adjust, Default: NULL
#' @param data Data or svydesign in survey package.
#' @param family family, "gaussian" or "binomial"
#' @param decimal.estimate Decimal for estimate, Default: 2
#' @param decimal.percent Decimal for percent, Default: 1
#' @param decimal.pvalue Decimal for pvalue, Default: 3
#' @return Sub-group analysis table.
#' @details This result is used to make forestplot.
#' @examples
#' library(survival);library(dplyr)
#' lung %>%
#' mutate(status = as.integer(status == 1),
#' sex = factor(sex),
#' kk = factor(as.integer(pat.karno >= 70))) -> lung
#' TableSubgroupGLM(status ~ sex, data = lung, family = "binomial")
#' TableSubgroupGLM(status ~ sex, var_subgroup = "kk", data = lung, family = "binomial")
#'
#' ## survey design
#' library(survey)
#' data.design <- svydesign(id = ~1, data = lung)
#' TableSubgroupGLM(status ~ sex, data = data.design, family = "binomial")
#' TableSubgroupGLM(status ~ sex, var_subgroup = "kk", data = data.design, family = "binomial")
#' @seealso
#' \code{\link[purrr]{safely}},\code{\link[purrr]{map}},\code{\link[purrr]{map2}}
#' \code{\link[stats]{glm}}
#' \code{\link[survey]{svyglm}}
#' \code{\link[stats]{confint}}
#' @rdname TableSubgroupGLM
#' @export
#' @importFrom purrr possibly map_dbl map map2
#' @importFrom dplyr group_split select filter mutate bind_cols
#' @importFrom magrittr %>%
#' @importFrom survey svyglm
#' @importFrom stats glm confint coefficients anova gaussian quasibinomial
#' @importFrom utils tail
TableSubgroupGLM <- function(formula, var_subgroup = NULL, var_cov = NULL, data, family = "binomial", decimal.estimate = 2, decimal.percent = 1, decimal.pvalue = 3){
. <- NULL
if (length(formula[[3]]) > 1) stop("Formula must contains only 1 independent variable")
if (any(class(data) == "survey.design" & !is.null(var_subgroup))){
if (is.numeric(data$variables[[var_subgroup]])) stop("var_subgroup must categorical.")
if (length(levels(data$variables[[as.character(formula[[3]])]])) != 2) stop("Independent variable must have 2 levels.")
} else if(any(class(data) == "data.frame" & !is.null(var_subgroup))){
if (is.numeric(data[[var_subgroup]])) stop("var_subgroup must categorical.")
if (length(levels(data[[as.character(formula[[3]])]])) != 2) stop("Independent variable must have 2 levels.")
}
## functions with error
possible_table <- purrr::possibly(table, NA)
possible_prop.table <- purrr::possibly(function(x){prop.table(x, 1)[2, ] * 100}, NA)
possible_pv <- purrr::possibly(function(x){summary(x)[["coefficients"]][2, ] %>% tail(1)}, NA)
possible_glm <- purrr::possibly(stats::glm, NA)
possible_svyglm <- purrr::possibly(survey::svyglm, NA)
possible_confint <- purrr::possibly(stats::confint, NA)
possible_modely <- purrr::possibly(function(x){purrr::map_dbl(x, .[["y"]], 1)}, NA)
possible_rowone <- purrr::possibly(function(x){x[2, ]}, NA)
var_cov <- setdiff(var_cov, c(as.character(formula[[3]]), var_subgroup))
family.svyglm <- gaussian()
if (family == "binomial") family.svyglm <- quasibinomial()
if (is.null(var_subgroup)){
if (!is.null(var_cov)){
formula <- as.formula(paste0(deparse(formula), " + ", paste(var_cov, collapse = "+")))
}
if (any(class(data) == "survey.design")){
model <- survey::svyglm(formula, design = data, x= T, family = family.svyglm)
if (!is.null(model$xlevels) & length(model$xlevels[[1]]) != 2) stop("Categorical independent variable must have 2 levels.")
} else{
model <- stats::glm(formula, data = data, x= TRUE, family = family)
if (!is.null(model$xlevels) & length(model$xlevels[[1]]) != 2) stop("Categorical independent variable must have 2 levels.")
}
Point.Estimate <- round(coef(model), decimal.estimate)[2]
CI <- round(confint(model)[2, ], decimal.estimate)
if(family == "binomial"){
Point.Estimate <- round(exp(coef(model)), decimal.estimate)[2]
CI <- round(exp(confint(model)[2, ]), decimal.estimate)
}
#if (length(Point.Estimate) > 1){
# stop("Formula must contain 1 independent variable only.")
#}
#event <- model$y
#prop <- round(prop.table(table(event, model$x[, 1]), 2)[2, ] * 100, decimal.percent)
pv <- round(tail(summary(model)$coefficients[2, ], 1), decimal.pvalue)
data.frame(Variable = "Overall", Count = length(model$y), Percent = 100, `Point Estimate` = Point.Estimate, Lower = CI[1], Upper = CI[2]) %>%
dplyr::mutate(`P value` = ifelse(pv >= 0.001, pv, "<0.001"), `P for interaction` = NA) -> out
if (family == "binomial"){
names(out)[4] <- "OR"
}
return(out)
} else if (length(var_subgroup) > 1 | any(grepl(var_subgroup, formula))){
stop("Please input correct subgroup variable.")
} else{
if (!is.null(var_cov)){
formula <- as.formula(paste0(deparse(formula), " + ", paste(var_cov, collapse = "+")))
}
if (any(class(data) == "survey.design")){
data$variables[[var_subgroup]] %>% table %>% names -> label_val
label_val %>% purrr::map(~possible_svyglm(formula, design = subset(data, get(var_subgroup) == .), x = TRUE, family = family.svyglm)) -> model
xlev <- survey::svyglm(formula, design = data)$xlevels
xlabel <- names(attr(model[[which(!is.na(model))[1]]]$x, "contrast"))[1]
pvs_int <- possible_svyglm(as.formula(gsub(xlabel, paste(xlabel, "*", var_subgroup, sep=""), deparse(formula))), design = data, family = family.svyglm) %>% summary %>% coefficients
pv_int <- round(pvs_int[nrow(pvs_int), ncol(pvs_int)], decimal.pvalue)
if (!is.null(xlev) & length(xlev[[1]]) != 2) stop("Categorical independent variable must have 2 levels.")
if (length(label_val) > 2){
model.int <- survey::svyglm(as.formula(gsub(xlabel, paste(xlabel, "*", var_subgroup, sep=""), deparse(formula))), design = data, family = family.svyglm)
pv_anova <- anova(model.int, method = "Wald")
pv_int <- pv_anova[[length(pv_anova)]][[7]]
}
Count <- as.vector(table(data$variables[[var_subgroup]]))
} else{
data %>% filter(!is.na(get(var_subgroup))) %>% group_split(get(var_subgroup)) %>% purrr::map(~possible_glm(formula, data = ., x= T, family = family)) -> model
data %>% filter(!is.na(get(var_subgroup))) %>% select(var_subgroup) %>% table %>% names -> label_val
xlev <- stats::glm(formula, data = data)$xlevels
xlabel <- names(attr(model[[which(!is.na(model))[1]]]$x, "contrast"))[1]
model.int <- possible_glm(as.formula(gsub(xlabel, paste(xlabel, "*", var_subgroup, sep=""), deparse(formula))), data = data, family = family)
pvs_int <- model.int %>% summary %>% coefficients
pv_int <- round(pvs_int[nrow(pvs_int), ncol(pvs_int)], decimal.pvalue)
if (!is.null(xlev) & length(xlev[[1]]) != 2) stop("Categorical independent variable must have 2 levels.")
if (length(label_val) > 2){
pv_anova <- anova(model.int, test = "Chisq")
pv_int <- round(pv_anova[nrow(pv_anova), 5], decimal.pvalue)
}
Count <- as.vector(table(data[[var_subgroup]]))
}
Estimate <- model %>% purrr::map("coefficients", .default = NA) %>% purrr::map_dbl(2, .default = NA)
CI0 <- model %>% purrr::map(possible_confint) %>% purrr::map(possible_rowone) %>% Reduce(rbind, .)
Point.Estimate <- round(Estimate, decimal.estimate)
CI <- round(CI0, decimal.estimate)
if (family == "binomial"){
Point.Estimate <- round(exp(Estimate), decimal.estimate)
CI <- round(exp(CI0), decimal.estimate)
}
model %>% purrr::map(possible_pv) %>% purrr::map_dbl(~round(., decimal.pvalue)) -> pv
data.frame(Variable = paste(" ", label_val) , Count = Count, Percent = round(Count/sum(Count) * 100, decimal.percent), "Point Estimate" = Point.Estimate, Lower = CI[, 1], Upper = CI[, 2]) %>%
dplyr::mutate(`P value` = ifelse(pv >= 0.001, pv, "<0.001"), `P for interaction` = NA) -> out
if (family == "binomial"){
names(out)[4] <- "OR"
}
return(rbind(c(var_subgroup, rep(NA, ncol(out) - 2), ifelse(pv_int >= 0.001, pv_int, "<0.001")), out))
}
}
#' @title TableSubgroupMultiGLM: Multiple sub-group analysis table for GLM.
#' @description Multiple sub-group analysis table for GLM.
#' @param formula formula with survival analysis.
#' @param var_subgroups Multiple sub-group variables for analysis, Default: NULL
#' @param var_cov Variables for additional adjust, Default: NULL
#' @param data Data or svydesign in survey package.
#' @param family family, "gaussian" or "binomial"
#' @param decimal.estimate Decimal for estimate, Default: 2
#' @param decimal.percent Decimal for percent, Default: 1
#' @param decimal.pvalue Decimal for pvalue, Default: 3
#' @param line Include new-line between sub-group variables, Default: F
#' @return Multiple sub-group analysis table.
#' @details This result is used to make forestplot.
#' @examples
#' library(survival);library(dplyr)
#' lung %>%
#' mutate(status = as.integer(status == 1),
#' sex = factor(sex),
#' kk = factor(as.integer(pat.karno >= 70)),
#' kk1 = factor(as.integer(pat.karno >= 60))) -> lung
#' TableSubgroupMultiGLM(status ~ sex, var_subgroups = c("kk", "kk1"),
#' data=lung, line = TRUE, family = "binomial")
#'
#' ## survey design
#' library(survey)
#' data.design <- svydesign(id = ~1, data = lung)
#' TableSubgroupMultiGLM(status ~ sex, var_subgroups = c("kk", "kk1"),
#' data = data.design, family = "binomial")
#' @seealso
#' \code{\link[purrr]{map}}
#' \code{\link[dplyr]{bind}}
#' @rdname TableSubgroupMultiGLM
#' @export
#' @importFrom purrr map
#' @importFrom magrittr %>%
#' @importFrom dplyr bind_rows
TableSubgroupMultiGLM <- function(formula, var_subgroups = NULL, var_cov = NULL, data, family = "binomial", decimal.estimate = 2, decimal.percent = 1, decimal.pvalue = 3, line = F){
. <- NULL
out.all <- TableSubgroupGLM(formula, var_subgroup = NULL, var_cov = var_cov, data = data, family = family, decimal.estimate = decimal.estimate, decimal.percent = decimal.percent, decimal.pvalue = decimal.pvalue)
if (is.null(var_subgroups)){
return(out.all)
} else {
out.list <- purrr::map(var_subgroups, ~TableSubgroupGLM(formula, var_subgroup = ., var_cov = var_cov, data = data, family = family, decimal.estimate = decimal.estimate, decimal.percent = decimal.percent, decimal.pvalue = decimal.pvalue))
if (line){
out.newline <- out.list %>% purrr::map(~rbind(NA, .))
return(rbind(out.all, out.newline %>% dplyr::bind_rows()))
} else{
return(rbind(out.all, out.list %>% dplyr::bind_rows()))
}
}
}
|
##' @description High level GO enrichment and GSEA functions based on clusterprofiler (analyse, simplify, plot)
##' @author Vitalii Kleshchevnikov
#####################################
##' @author Vitalii Kleshchevnikov
GO_enrich_simplify_plot_bioc = function(protein_set, reference_protein_set, identifier_type, ontology, pAdjustMethod_ = "BH", minSetSize, maxSetSize, simplify_by = "p.adjust", simplify_fun = "min", similarity_calc_method = "kappa", similarity_cutoff = 0.7, visualize_result = "enrichMap", above_corrected_pval = 1, use_bioc_annotationdbi = T, plot_title = "", xlabel = "", drop_GO_levels = NULL){
suppressPackageStartupMessages({
library(clusterProfiler)
library(Homo.sapiens)
})
ego <- enrichGO(gene = protein_set,
OrgDb = org.Hs.eg.db,
universe = reference_protein_set,
keytype = identifier_type,
ont = ontology,
pAdjustMethod = pAdjustMethod_,
minGSSize = minSetSize,
maxGSSize = maxSetSize,
pvalueCutoff = above_corrected_pval,
qvalueCutoff = 1)
if(is.null(drop_GO_levels)){
for(i in length(drop_GO_levels)){
ego = dropGO(ego, level = drop_GO_levels[i])
}
}
if(similarity_calc_method != "none"){
# simplify output from enrichGO by removing redundancy of enriched GO terms
ego2 = simplify(x = ego, cutoff = similarity_cutoff,
by = simplify_by,
select_fun = eval(parse(text = simplify_fun)),
measure = similarity_calc_method,
semData = NULL)#,
#use_bioc_annotationdbi = use_bioc_annotationdbi)
# "x[which.max(eval(parse(text = paste0("c(",paste0(x, collapse = ","),")"))))]"
}
if(similarity_calc_method == "none") ego2 = ego
# visualize results
if(visualize_result == "enrichMap") plot_res = enrichMap(ego2, layout = igraph:::layout_with_kk, vertex.label.cex = 0.8,vertex.size = 5, rescale=T)
if(visualize_result == "dotplot") plot_res = dotplot(ego2, title = plot_title, xlabel = xlabel)
return(list(enrichment_result = ego, simplified_enrichment_result = ego2, plot = plot_res))
}
#####################################
##' @author Vitalii Kleshchevnikov
GSEA_simplify_plot_bioc = function(ranked_protein_list, identifier_type, ontology, nPerm = 1000, pAdjustMethod_ = "BH", minSetSize, maxSetSize, simplify_by = "p.adjust", simplify_fun = "min", similarity_calc_method = "kappa", similarity_cutoff = 0.7, visualize_result = "enrichMap", above_corrected_pval = 1, use_bioc_annotationdbi = T, plot_title = "", xlabel = ""){
suppressPackageStartupMessages({
library(clusterProfiler)
library(Homo.sapiens)
})
ego = gseGO(geneList = ranked_protein_list, ont = ontology, org.Hs.eg.db, keytype = identifier_type, exponent = 1,
nPerm = nPerm, minGSSize = minSetSize, maxGSSize = maxSetSize, pvalueCutoff = above_corrected_pval,
pAdjustMethod = pAdjustMethod_, verbose = F, seed = FALSE, by = "fgsea")
if(similarity_calc_method != "none"){
# simplify output from enrichGO by removing redundancy of enriched GO terms
ego2 = simplify(x = ego, cutoff = similarity_cutoff,
by = simplify_by,
select_fun = eval(parse(text = simplify_fun)),
measure = similarity_calc_method,
semData = NULL,
use_data_table = T)#,
# use_bioc_annotationdbi = use_bioc_annotationdbi)
# "x[which.max(eval(parse(text = paste0("c(",paste0(x, collapse = ","),")"))))]"
}
if(similarity_calc_method == "none") ego2 = ego
# visualize results
if(visualize_result == "enrichMap") plot_res = enrichMap(ego2, layout = layout_with_kk, vertex.label.cex = 0.8,vertex.size = 5, rescale=T)
if(visualize_result == "dotplot") plot_res = dotplot(ego2, title = plot_title, colorBy = "enrichmentScore", orderBy = "p.adjust", xlabel = xlabel)
return(list(enrichment_result = ego, simplified_enrichment_result = ego2, plot = plot_res))
}
#####################################
##' @author Vitalii Kleshchevnikov
cluster_GO_enrich_simplify_plot_bioc = function(formula, protein_groups.dt, reference_protein_set, identifier_type, ontology, pAdjustMethod_ = "BH", minSetSize, maxSetSize, simplify_by = "p.adjust", simplify_fun = "min", similarity_calc_method = "kappa", similarity_cutoff = 0.7, visualize_result = "dotplot", above_corrected_pval = 1, plot_title = "", drop_GO_levels = NULL){
suppressPackageStartupMessages({
library(clusterProfiler)
library(Homo.sapiens)
})
ego = compareCluster(formula, fun = "enrichGO", data = protein_groups.dt,
OrgDb = org.Hs.eg.db,
universe = reference_protein_set,
keytype = identifier_type,
ont = ontology,
pAdjustMethod = pAdjustMethod_,
minGSSize = minSetSize,
maxGSSize = maxSetSize,
pvalueCutoff = above_corrected_pval,
qvalueCutoff = 1)
if(!is.null(drop_GO_levels)){
for(i in 1:length(drop_GO_levels)){
ego = dropGO(ego, level = drop_GO_levels[i])
}
}
if(similarity_calc_method != "none"){
# simplify output from enrichGO by removing redundancy of enriched GO terms
ego2 = simplify(x = ego, cutoff = similarity_cutoff,
by = simplify_by,
select_fun = eval(parse(text = simplify_fun)),
measure = similarity_calc_method,
semData = NULL)
# "x[which.max(eval(parse(text = paste0("c(",paste0(x, collapse = ","),")"))))]"
}
if(similarity_calc_method == "none") ego2 = ego
# visualize results
if(visualize_result == "enrichMap") plot_res = enrichMap(ego2, layout = layout_with_kk, vertex.label.cex = 0.8,vertex.size = 5, rescale=T)
if(visualize_result == "dotplot") plot_res = dotplot(ego2, title = plot_title)
return(list(enrichment_result = ego, simplified_enrichment_result = ego2, plot = plot_res))
}
##################################### | /GO_analyse_simplify_plot.R | no_license | vitkl/imex_vs_uniprot | R | false | false | 6,177 | r | ##' @description High level GO enrichment and GSEA functions based on clusterprofiler (analyse, simplify, plot)
##' @author Vitalii Kleshchevnikov
#####################################
##' @author Vitalii Kleshchevnikov
GO_enrich_simplify_plot_bioc = function(protein_set, reference_protein_set, identifier_type, ontology, pAdjustMethod_ = "BH", minSetSize, maxSetSize, simplify_by = "p.adjust", simplify_fun = "min", similarity_calc_method = "kappa", similarity_cutoff = 0.7, visualize_result = "enrichMap", above_corrected_pval = 1, use_bioc_annotationdbi = T, plot_title = "", xlabel = "", drop_GO_levels = NULL){
suppressPackageStartupMessages({
library(clusterProfiler)
library(Homo.sapiens)
})
ego <- enrichGO(gene = protein_set,
OrgDb = org.Hs.eg.db,
universe = reference_protein_set,
keytype = identifier_type,
ont = ontology,
pAdjustMethod = pAdjustMethod_,
minGSSize = minSetSize,
maxGSSize = maxSetSize,
pvalueCutoff = above_corrected_pval,
qvalueCutoff = 1)
if(is.null(drop_GO_levels)){
for(i in length(drop_GO_levels)){
ego = dropGO(ego, level = drop_GO_levels[i])
}
}
if(similarity_calc_method != "none"){
# simplify output from enrichGO by removing redundancy of enriched GO terms
ego2 = simplify(x = ego, cutoff = similarity_cutoff,
by = simplify_by,
select_fun = eval(parse(text = simplify_fun)),
measure = similarity_calc_method,
semData = NULL)#,
#use_bioc_annotationdbi = use_bioc_annotationdbi)
# "x[which.max(eval(parse(text = paste0("c(",paste0(x, collapse = ","),")"))))]"
}
if(similarity_calc_method == "none") ego2 = ego
# visualize results
if(visualize_result == "enrichMap") plot_res = enrichMap(ego2, layout = igraph:::layout_with_kk, vertex.label.cex = 0.8,vertex.size = 5, rescale=T)
if(visualize_result == "dotplot") plot_res = dotplot(ego2, title = plot_title, xlabel = xlabel)
return(list(enrichment_result = ego, simplified_enrichment_result = ego2, plot = plot_res))
}
#####################################
##' @author Vitalii Kleshchevnikov
GSEA_simplify_plot_bioc = function(ranked_protein_list, identifier_type, ontology, nPerm = 1000, pAdjustMethod_ = "BH", minSetSize, maxSetSize, simplify_by = "p.adjust", simplify_fun = "min", similarity_calc_method = "kappa", similarity_cutoff = 0.7, visualize_result = "enrichMap", above_corrected_pval = 1, use_bioc_annotationdbi = T, plot_title = "", xlabel = ""){
suppressPackageStartupMessages({
library(clusterProfiler)
library(Homo.sapiens)
})
ego = gseGO(geneList = ranked_protein_list, ont = ontology, org.Hs.eg.db, keytype = identifier_type, exponent = 1,
nPerm = nPerm, minGSSize = minSetSize, maxGSSize = maxSetSize, pvalueCutoff = above_corrected_pval,
pAdjustMethod = pAdjustMethod_, verbose = F, seed = FALSE, by = "fgsea")
if(similarity_calc_method != "none"){
# simplify output from enrichGO by removing redundancy of enriched GO terms
ego2 = simplify(x = ego, cutoff = similarity_cutoff,
by = simplify_by,
select_fun = eval(parse(text = simplify_fun)),
measure = similarity_calc_method,
semData = NULL,
use_data_table = T)#,
# use_bioc_annotationdbi = use_bioc_annotationdbi)
# "x[which.max(eval(parse(text = paste0("c(",paste0(x, collapse = ","),")"))))]"
}
if(similarity_calc_method == "none") ego2 = ego
# visualize results
if(visualize_result == "enrichMap") plot_res = enrichMap(ego2, layout = layout_with_kk, vertex.label.cex = 0.8,vertex.size = 5, rescale=T)
if(visualize_result == "dotplot") plot_res = dotplot(ego2, title = plot_title, colorBy = "enrichmentScore", orderBy = "p.adjust", xlabel = xlabel)
return(list(enrichment_result = ego, simplified_enrichment_result = ego2, plot = plot_res))
}
#####################################
##' @author Vitalii Kleshchevnikov
cluster_GO_enrich_simplify_plot_bioc = function(formula, protein_groups.dt, reference_protein_set, identifier_type, ontology, pAdjustMethod_ = "BH", minSetSize, maxSetSize, simplify_by = "p.adjust", simplify_fun = "min", similarity_calc_method = "kappa", similarity_cutoff = 0.7, visualize_result = "dotplot", above_corrected_pval = 1, plot_title = "", drop_GO_levels = NULL){
suppressPackageStartupMessages({
library(clusterProfiler)
library(Homo.sapiens)
})
ego = compareCluster(formula, fun = "enrichGO", data = protein_groups.dt,
OrgDb = org.Hs.eg.db,
universe = reference_protein_set,
keytype = identifier_type,
ont = ontology,
pAdjustMethod = pAdjustMethod_,
minGSSize = minSetSize,
maxGSSize = maxSetSize,
pvalueCutoff = above_corrected_pval,
qvalueCutoff = 1)
if(!is.null(drop_GO_levels)){
for(i in 1:length(drop_GO_levels)){
ego = dropGO(ego, level = drop_GO_levels[i])
}
}
if(similarity_calc_method != "none"){
# simplify output from enrichGO by removing redundancy of enriched GO terms
ego2 = simplify(x = ego, cutoff = similarity_cutoff,
by = simplify_by,
select_fun = eval(parse(text = simplify_fun)),
measure = similarity_calc_method,
semData = NULL)
# "x[which.max(eval(parse(text = paste0("c(",paste0(x, collapse = ","),")"))))]"
}
if(similarity_calc_method == "none") ego2 = ego
# visualize results
if(visualize_result == "enrichMap") plot_res = enrichMap(ego2, layout = layout_with_kk, vertex.label.cex = 0.8,vertex.size = 5, rescale=T)
if(visualize_result == "dotplot") plot_res = dotplot(ego2, title = plot_title)
return(list(enrichment_result = ego, simplified_enrichment_result = ego2, plot = plot_res))
}
##################################### |
#Clustering on mtcars
head(mtcars)
#kmeans - select only few column to group them
df = mtcars[, c('mpg', 'cyl', 'wt')]
head(df)
km3 = kmeans(df, centers=3)
km3
km3$centers #average of each cluster
km3$size #how many into each cluster
#Plotting Clusters
library(cluster)
library(fpc)
plotcluster(df, km3$cluster)
#Plot2: PCA
clusplot(df, km3$cluster, color=TRUE, shade=TRUE, labels=2, lines=0)
#2nd Set
#kmeans - select only few column to group them
df = mtcars[, c('mpg', 'cyl', 'wt')]
km2 <- kmeans(df, centers=2)
km2
km2$centers #average of each cluster
km2$size #how many into each cluster
cbind(df, mtcars$am, km2$cluster)
df = mtcars[, c('mpg', 'cyl', 'wt')]
km4 <- kmeans(df, centers=4)
km4
km4$centers #average of each cluster
km4$size #how many into each cluster
cbind(df, mtcars$gear, km4$cluster)
table(mtcars$gear)
table(mtcars$cyl)
table(mtcars$carb)
summary(mtcars)
| /96-mtcars/16b-mtcars-cluster1.R | no_license | DUanalytics/rAnalytics | R | false | false | 903 | r | #Clustering on mtcars
head(mtcars)
#kmeans - select only few column to group them
df = mtcars[, c('mpg', 'cyl', 'wt')]
head(df)
km3 = kmeans(df, centers=3)
km3
km3$centers #average of each cluster
km3$size #how many into each cluster
#Plotting Clusters
library(cluster)
library(fpc)
plotcluster(df, km3$cluster)
#Plot2: PCA
clusplot(df, km3$cluster, color=TRUE, shade=TRUE, labels=2, lines=0)
#2nd Set
#kmeans - select only few column to group them
df = mtcars[, c('mpg', 'cyl', 'wt')]
km2 <- kmeans(df, centers=2)
km2
km2$centers #average of each cluster
km2$size #how many into each cluster
cbind(df, mtcars$am, km2$cluster)
df = mtcars[, c('mpg', 'cyl', 'wt')]
km4 <- kmeans(df, centers=4)
km4
km4$centers #average of each cluster
km4$size #how many into each cluster
cbind(df, mtcars$gear, km4$cluster)
table(mtcars$gear)
table(mtcars$cyl)
table(mtcars$carb)
summary(mtcars)
|
Boptbd.default <-
function(trt.N,blk.N,alpha,beta,nrep,brep,itr.cvrgval,Optcrit="",...)
{
trt.N=as.numeric(trt.N)
blk.N=as.numeric(blk.N)
alpha=as.numeric(alpha)
beta=as.numeric(beta)
brep=as.numeric(brep)
nrep=as.numeric(nrep)
itr.cvrgval=as.numeric(itr.cvrgval)
#"===================================================================================================="
if(is.na(alpha)|alpha<=0){
tkmessageBox(title="Error",message=paste("Please insert correct value of alpha, it should be greater than 0, click OK to reset.",sep=""),icon = "error");
stop("Wrong value of 'alpha', it should be greater than 0")
}#end of if
if(is.na(beta)|beta<=0){
tkmessageBox(title="Error",message=paste("Please insert correct value of beta, it should be greater than 0, click OK to reset.",sep=""),icon = "error");
stop("Wrong value of 'beta', it should be between 0")
}#end of if
if(is.na(trt.N)|is.na(blk.N)|trt.N!=round(trt.N)|blk.N!=round(blk.N)) {
tkmessageBox(title="Error",message=paste("Please insert correct format of the number of treatments and arrays. The number of treatments and arrays should be an integer, click OK to reset the values.",sep=""),icon = "error");
stop("Wrong format of 'trt.N' and/or 'blk.N', both should be an integer")
}#end of if
if(trt.N<3|blk.N<3){
tkmessageBox(title="Error",message=paste("The number of blocks and treatments should be greater than or equal to 3, click Ok to reset.",sep=""),icon = "error");
stop("Very small value of number of treatments and/or blocks, minimum value of the two is 3")
}#end of if
if(trt.N-blk.N>1){
tkmessageBox(title="Error",message=paste("The number of arrays should be greater than or equal to the number of treatments minus one, click Ok to reset.",sep=""),icon = "error");
stop("The number of treatments are larger than the number of arrays minus one (trt.N>blk.N-1)")
}#end of if
if(trt.N>10|blk.N>10){
tkmessageBox(title="Information",message=paste("This might take some minutes, please be patient...",sep=""))
}#end of if
if(is.na(itr.cvrgval)|itr.cvrgval<2|itr.cvrgval!=round(itr.cvrgval)){
tkmessageBox(title="Error",message=paste("The number of iteration for the exchange procedure should be a positive integer greater than or equal to two, click OK to reset.",sep=""),icon = "error");
stop("Wrong value of 'itr.cvrgval', it should be greater than or equal to two (only positive integer values)")
}#end of if
if(is.na(nrep)|nrep<2|nrep!=round(nrep)){
tkmessageBox(title="Error",message=paste("The number of replications should be a positive integer greater than or equal to two, click OK to reset.",sep=""),icon = "error");
stop("Wrong value of 'nrep', it should be greater than or equal to two (only positive integer values)")
}#end of if
#"===================================================================================================="
if(!is.element(Optcrit,c("A","D"))){stop("The optimality criterion 'Optcrit' is not correctly specified")}
#if(!is.element(Alg,c("trtE","arrayE"))){stop("The algorithm 'Alg' is not correctly specified")}
if(itr.cvrgval>blk.N) itr.cvrgval<-blk.N
if(Optcrit=="A") {
optbd_mae<-Baoptbd(trt.N,blk.N,alpha,beta,nrep,brep,itr.cvrgval)} else if(
Optcrit=="D") {
optbd_mae<-Bdoptbd(trt.N,blk.N,alpha,beta,nrep,brep,itr.cvrgval)} else{
stop("The optimality criterion is not specified")}
#optbd_mae$Alg="Array exchange"} #end of if
optbd_mae$call<-match.call()
optbd_mae$Optcrit<-Optcrit
#optbd_mae$Cmat<-cmatbd.mae(optbd_mae$v,optbd_mae$b,optbd_mae$theta,optbd_mae$OptdesF)
trtin <- contrasts(as.factor(optbd_mae$OptdesF), contrasts = FALSE)[as.factor(optbd_mae$OptdesF), ]
vec1 <- rep(1, optbd_mae$b * 2)
vec_trtr <- t(trtin) %*% vec1
optbd_mae$equireplicate<-all(vec_trtr==vec_trtr[1])
optbd_mae$vtrtrep<-t(vec_trtr)
#"======================================================================================"
titleoptbd<-list(c(" --------------------------------------- ",paste("Title: Bayesian ",Optcrit,"-optimal block design Date:", format(Sys.time(), "%a %b %d %Y %H:%M:%S"),sep=""),
" --------------------------------------- "))
write.table(titleoptbd, file =file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
parcomb<-list(c(" Parametric combination:", "Number of treatments:", "Number of blocks:",
"Alpha value:","Beta value:", "Number of replications:","Number of MCMC selections:","Number of exchange iteration:", "Optimality criterion used:"," ","Design obtained:"),
c(" ",optbd_mae$v,optbd_mae$b,optbd_mae$alpha,optbd_mae$beta,optbd_mae$nrep,optbd_mae$brep,optbd_mae$itr.cvrgval,optbd_mae$Alg,paste(Optcrit,"-optimality criterion",sep="")," "," "))
write.table(parcomb, file = file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
optde<-list("",rbind(paste0("Ary",1:optbd_mae$b),optbd_mae$OptdesF))
write.table(optde, file = file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
write.table(list(c("",paste(Optcrit,"-score value:",sep=""), "Equireplicate:",""),c("",optbd_mae$Optcrtsv,optbd_mae$equireplicate,"")), file = file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
write.table(list(c("Treatment:", "Treatment replication:"),rbind(1:optbd_mae$v,optbd_mae$vtrtrep)), file = file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
optbd_mae$file_loc<-file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = ""))
optbd_mae$file_loc2<-paste("Summary of obtained Bayesian ",Optcrit,"-optimal block design is also saved at:",sep="")
#"======================================================================================"
class(optbd_mae)<-"Boptbd"
optbd_mae
}
| /R/Boptbd.default.R | no_license | cran/Boptbd | R | false | false | 6,020 | r | Boptbd.default <-
function(trt.N,blk.N,alpha,beta,nrep,brep,itr.cvrgval,Optcrit="",...)
{
trt.N=as.numeric(trt.N)
blk.N=as.numeric(blk.N)
alpha=as.numeric(alpha)
beta=as.numeric(beta)
brep=as.numeric(brep)
nrep=as.numeric(nrep)
itr.cvrgval=as.numeric(itr.cvrgval)
#"===================================================================================================="
if(is.na(alpha)|alpha<=0){
tkmessageBox(title="Error",message=paste("Please insert correct value of alpha, it should be greater than 0, click OK to reset.",sep=""),icon = "error");
stop("Wrong value of 'alpha', it should be greater than 0")
}#end of if
if(is.na(beta)|beta<=0){
tkmessageBox(title="Error",message=paste("Please insert correct value of beta, it should be greater than 0, click OK to reset.",sep=""),icon = "error");
stop("Wrong value of 'beta', it should be between 0")
}#end of if
if(is.na(trt.N)|is.na(blk.N)|trt.N!=round(trt.N)|blk.N!=round(blk.N)) {
tkmessageBox(title="Error",message=paste("Please insert correct format of the number of treatments and arrays. The number of treatments and arrays should be an integer, click OK to reset the values.",sep=""),icon = "error");
stop("Wrong format of 'trt.N' and/or 'blk.N', both should be an integer")
}#end of if
if(trt.N<3|blk.N<3){
tkmessageBox(title="Error",message=paste("The number of blocks and treatments should be greater than or equal to 3, click Ok to reset.",sep=""),icon = "error");
stop("Very small value of number of treatments and/or blocks, minimum value of the two is 3")
}#end of if
if(trt.N-blk.N>1){
tkmessageBox(title="Error",message=paste("The number of arrays should be greater than or equal to the number of treatments minus one, click Ok to reset.",sep=""),icon = "error");
stop("The number of treatments are larger than the number of arrays minus one (trt.N>blk.N-1)")
}#end of if
if(trt.N>10|blk.N>10){
tkmessageBox(title="Information",message=paste("This might take some minutes, please be patient...",sep=""))
}#end of if
if(is.na(itr.cvrgval)|itr.cvrgval<2|itr.cvrgval!=round(itr.cvrgval)){
tkmessageBox(title="Error",message=paste("The number of iteration for the exchange procedure should be a positive integer greater than or equal to two, click OK to reset.",sep=""),icon = "error");
stop("Wrong value of 'itr.cvrgval', it should be greater than or equal to two (only positive integer values)")
}#end of if
if(is.na(nrep)|nrep<2|nrep!=round(nrep)){
tkmessageBox(title="Error",message=paste("The number of replications should be a positive integer greater than or equal to two, click OK to reset.",sep=""),icon = "error");
stop("Wrong value of 'nrep', it should be greater than or equal to two (only positive integer values)")
}#end of if
#"===================================================================================================="
if(!is.element(Optcrit,c("A","D"))){stop("The optimality criterion 'Optcrit' is not correctly specified")}
#if(!is.element(Alg,c("trtE","arrayE"))){stop("The algorithm 'Alg' is not correctly specified")}
if(itr.cvrgval>blk.N) itr.cvrgval<-blk.N
if(Optcrit=="A") {
optbd_mae<-Baoptbd(trt.N,blk.N,alpha,beta,nrep,brep,itr.cvrgval)} else if(
Optcrit=="D") {
optbd_mae<-Bdoptbd(trt.N,blk.N,alpha,beta,nrep,brep,itr.cvrgval)} else{
stop("The optimality criterion is not specified")}
#optbd_mae$Alg="Array exchange"} #end of if
optbd_mae$call<-match.call()
optbd_mae$Optcrit<-Optcrit
#optbd_mae$Cmat<-cmatbd.mae(optbd_mae$v,optbd_mae$b,optbd_mae$theta,optbd_mae$OptdesF)
trtin <- contrasts(as.factor(optbd_mae$OptdesF), contrasts = FALSE)[as.factor(optbd_mae$OptdesF), ]
vec1 <- rep(1, optbd_mae$b * 2)
vec_trtr <- t(trtin) %*% vec1
optbd_mae$equireplicate<-all(vec_trtr==vec_trtr[1])
optbd_mae$vtrtrep<-t(vec_trtr)
#"======================================================================================"
titleoptbd<-list(c(" --------------------------------------- ",paste("Title: Bayesian ",Optcrit,"-optimal block design Date:", format(Sys.time(), "%a %b %d %Y %H:%M:%S"),sep=""),
" --------------------------------------- "))
write.table(titleoptbd, file =file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
parcomb<-list(c(" Parametric combination:", "Number of treatments:", "Number of blocks:",
"Alpha value:","Beta value:", "Number of replications:","Number of MCMC selections:","Number of exchange iteration:", "Optimality criterion used:"," ","Design obtained:"),
c(" ",optbd_mae$v,optbd_mae$b,optbd_mae$alpha,optbd_mae$beta,optbd_mae$nrep,optbd_mae$brep,optbd_mae$itr.cvrgval,optbd_mae$Alg,paste(Optcrit,"-optimality criterion",sep="")," "," "))
write.table(parcomb, file = file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
optde<-list("",rbind(paste0("Ary",1:optbd_mae$b),optbd_mae$OptdesF))
write.table(optde, file = file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
write.table(list(c("",paste(Optcrit,"-score value:",sep=""), "Equireplicate:",""),c("",optbd_mae$Optcrtsv,optbd_mae$equireplicate,"")), file = file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
write.table(list(c("Treatment:", "Treatment replication:"),rbind(1:optbd_mae$v,optbd_mae$vtrtrep)), file = file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = "")),append=T ,sep = ",", row.names=FALSE, col.names=FALSE)
optbd_mae$file_loc<-file.path(tempdir(), paste(Optcrit,"optbd_summary.csv",sep = ""))
optbd_mae$file_loc2<-paste("Summary of obtained Bayesian ",Optcrit,"-optimal block design is also saved at:",sep="")
#"======================================================================================"
class(optbd_mae)<-"Boptbd"
optbd_mae
}
|
library(dplyr)
library("data.table")
filename <- "exdata_data_household_power_consumption.zip"
# Loading the data
# Checking if the file exists
if (!file.exists("household_power_consumption.txt")) {
unzip(filename)
}
data_header <- names(fread("household_power_consumption.txt",nrow=0))
print(data_header)
power <- fread("household_power_consumption.txt",
select = c("Date","Time",
"Global_active_power",
"Global_reactive_power",
"Voltage",
"Sub_metering_1",
"Sub_metering_2",
"Sub_metering_3")
)
powerDT <- subset(power, power$Date=="1/2/2007" | power$Date =="2/2/2007")
datetime <- strptime(paste(powerDT$Date, powerDT$Time, sep=" "), "%d/%m/%Y %H:%M:%S")
png("plot4.png", width=480, height=480)
par(mfrow=c(2,2))
# subplot 1
plot(datetime, powerDT$Global_active_power, type="l", xlab="", ylab="Global Active Power")
# subplot 2
plot(datetime,powerDT$Voltage, type="l", xlab="datetime", ylab="Voltage")
# subplot 3
plot(datetime, powerDT$Sub_metering_1, type="l", xlab="", ylab="Energy sub metering")
lines(datetime, powerDT$Sub_metering_2, col="red")
lines(datetime, powerDT$Sub_metering_3,col="blue")
legend("topright",
col=c("black","red","blue"),
c("Sub_metering_1 ","Sub_metering_2 ", "Sub_metering_3 "),
lty=c(1,1),
bty="n",
cex=0.9
)
# subplot 4
plot(datetime, powerDT[,Global_reactive_power], type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off() | /plot4.R | no_license | bearsly/ExData_Plotting1 | R | false | false | 1,622 | r | library(dplyr)
library("data.table")
filename <- "exdata_data_household_power_consumption.zip"
# Loading the data
# Checking if the file exists
if (!file.exists("household_power_consumption.txt")) {
unzip(filename)
}
data_header <- names(fread("household_power_consumption.txt",nrow=0))
print(data_header)
power <- fread("household_power_consumption.txt",
select = c("Date","Time",
"Global_active_power",
"Global_reactive_power",
"Voltage",
"Sub_metering_1",
"Sub_metering_2",
"Sub_metering_3")
)
powerDT <- subset(power, power$Date=="1/2/2007" | power$Date =="2/2/2007")
datetime <- strptime(paste(powerDT$Date, powerDT$Time, sep=" "), "%d/%m/%Y %H:%M:%S")
png("plot4.png", width=480, height=480)
par(mfrow=c(2,2))
# subplot 1
plot(datetime, powerDT$Global_active_power, type="l", xlab="", ylab="Global Active Power")
# subplot 2
plot(datetime,powerDT$Voltage, type="l", xlab="datetime", ylab="Voltage")
# subplot 3
plot(datetime, powerDT$Sub_metering_1, type="l", xlab="", ylab="Energy sub metering")
lines(datetime, powerDT$Sub_metering_2, col="red")
lines(datetime, powerDT$Sub_metering_3,col="blue")
legend("topright",
col=c("black","red","blue"),
c("Sub_metering_1 ","Sub_metering_2 ", "Sub_metering_3 "),
lty=c(1,1),
bty="n",
cex=0.9
)
# subplot 4
plot(datetime, powerDT[,Global_reactive_power], type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off() |
# Constructor -------------------------------------------------------------
#' Apply function to position coordinates
#'
#' The function xy stat applies a function to the x- and y-coordinates of a
#' layers positions by group. The \code{stat_centroid()} and
#' \code{stat_midpoint()} functions are convenience wrappers for calculating
#' centroids and midpoints. \code{stat_funxy()} by default leaves the data
#' as-is, but can be supplied functions and arguments.
#'
#' @inheritParams ggplot2::stat_identity
#' @param funx,funy A \code{function} to call on the layer's \code{x} and
#' \code{y} positions respectively.
#' @param argx,argy A named \code{list} containing arguments to the \code{funx},
#' and \code{funy} function calls.
#' @param crop_other A \code{logical} of length one; whether the other data
#' should be fitted to the length of \code{x} and \code{y} (default:
#' \code{TRUE}). Useful to set to \code{FALSE} when \code{funx} or \code{funy}
#' calculate summaries of length one that need to be recycled.
#'
#' @details This statistic only makes a minimal attempt at ensuring that the
#' results from calling both functions are of equal length. Results of length
#' 1 are recycled to match the longest length result.
#'
#' @return A \code{StatFunxy} ggproto object, that can be added to a plot.
#' @export
#'
#' @examples
#' p <- ggplot(iris, aes(Sepal.Width, Sepal.Length, colour = Species))
#'
#' # Labelling group midpoints
#' p + geom_point() +
#' stat_midpoint(aes(label = Species, group = Species),
#' geom = "text", colour = "black")
#'
#' # Drawing segments to centroids
#' p + geom_point() +
#' stat_centroid(aes(xend = Sepal.Width, yend = Sepal.Length),
#' geom = "segment", crop_other = FALSE)
#'
#' # Drawing intervals
#' ggplot(iris, aes(Sepal.Width, Sepal.Length, colour = Species)) +
#' geom_point() +
#' stat_funxy(geom = "path",
#' funx = median, funy = quantile,
#' argy = list(probs = c(0.1, 0.9)))
stat_funxy <-
function(mapping = NULL, data = NULL, geom = "point",
position = "identity", ..., funx = force, funy = force,
argx = list(), argy = list(), crop_other = TRUE,
show.legend = NA, inherit.aes = TRUE) {
if (!is.function(funx)) {
stop("The `funx` argument must be a function.", call. = FALSE)
}
funx <- force(funx)
if (!is.function(funy)) {
stop("The `funy` argument must be a function.", call. = FALSE)
}
funy <- force(funy)
if (!is.list(argx) | !is.list(argy)) {
stop("The `argx` and `argy` arguments must be lists.", call. = FALSE)
} else {
if (length(argx) > 0) {
if (is.null(names(argx))) {
stop("The `argx` list must have named elements.", call. = FALSE)
}
}
if (length(argy) > 0) {
if (is.null(names(argy))) {
stop("The `argy` list must have named elements.", call. = FALSE)
}
}
}
layer(
data = data, mapping = mapping, stat = StatFunxy, geom = geom,
position = position, show.legend = show.legend, inherit.aes = inherit.aes,
params = list(
funx = funx, funy = funy,
argx = argx, argy = argy,
crop_other = crop_other,
...
)
)
}
#' @rdname stat_funxy
#' @export
stat_centroid <- function(...,
funx = mean, funy = mean,
argx = list(na.rm = TRUE), argy = list(na.rm = TRUE)) {
stat_funxy(..., funx = funx, funy = funy, argx = argx, argy = argy)
}
#' @rdname stat_funxy
#' @export
stat_midpoint <- function(...,
argx = list(na.rm = TRUE),
argy = list(na.rm = TRUE)) {
fun <- function(x, na.rm = TRUE) {
sum(range(x, na.rm = na.rm), na.rm = na.rm)/2
}
stat_funxy(..., funx = fun, funy = fun, argx = argx, argy = argy)
}
# ggproto -----------------------------------------------------------------
#' @usage NULL
#' @format NULL
#' @export
#' @rdname ggh4x_extensions
StatFunxy <- ggproto(
"StatFunxy", Stat,
required_aes = c("x", "y"),
compute_group = function(data, scales,
funx, funy,
argx, argy,
crop_other = TRUE) {
# Make list for cheaper operations
data <- as.list(data)
# Apply functions
x <- do.call(funx, c(unname(data["x"]), argx))
y <- do.call(funy, c(unname(data["y"]), argy))
# Ensure rest of data is of correct length
other <- setdiff(names(data), c("x", "y"))
size <- seq_len(max(length(x), length(y)))
if (isTRUE(crop_other)) {
other <- lapply(data[other], `[`, i = size)
} else {
other <- data[other]
}
# Combine data
data <- c(other, list(x = x, y = y))
data <- do.call(vec_recycle_common, data)
.int$new_data_frame(data)
}
)
| /R/stat_funxy.R | permissive | dimbage/ggh4x | R | false | false | 4,886 | r | # Constructor -------------------------------------------------------------
#' Apply function to position coordinates
#'
#' The function xy stat applies a function to the x- and y-coordinates of a
#' layers positions by group. The \code{stat_centroid()} and
#' \code{stat_midpoint()} functions are convenience wrappers for calculating
#' centroids and midpoints. \code{stat_funxy()} by default leaves the data
#' as-is, but can be supplied functions and arguments.
#'
#' @inheritParams ggplot2::stat_identity
#' @param funx,funy A \code{function} to call on the layer's \code{x} and
#' \code{y} positions respectively.
#' @param argx,argy A named \code{list} containing arguments to the \code{funx},
#' and \code{funy} function calls.
#' @param crop_other A \code{logical} of length one; whether the other data
#' should be fitted to the length of \code{x} and \code{y} (default:
#' \code{TRUE}). Useful to set to \code{FALSE} when \code{funx} or \code{funy}
#' calculate summaries of length one that need to be recycled.
#'
#' @details This statistic only makes a minimal attempt at ensuring that the
#' results from calling both functions are of equal length. Results of length
#' 1 are recycled to match the longest length result.
#'
#' @return A \code{StatFunxy} ggproto object, that can be added to a plot.
#' @export
#'
#' @examples
#' p <- ggplot(iris, aes(Sepal.Width, Sepal.Length, colour = Species))
#'
#' # Labelling group midpoints
#' p + geom_point() +
#' stat_midpoint(aes(label = Species, group = Species),
#' geom = "text", colour = "black")
#'
#' # Drawing segments to centroids
#' p + geom_point() +
#' stat_centroid(aes(xend = Sepal.Width, yend = Sepal.Length),
#' geom = "segment", crop_other = FALSE)
#'
#' # Drawing intervals
#' ggplot(iris, aes(Sepal.Width, Sepal.Length, colour = Species)) +
#' geom_point() +
#' stat_funxy(geom = "path",
#' funx = median, funy = quantile,
#' argy = list(probs = c(0.1, 0.9)))
stat_funxy <-
function(mapping = NULL, data = NULL, geom = "point",
position = "identity", ..., funx = force, funy = force,
argx = list(), argy = list(), crop_other = TRUE,
show.legend = NA, inherit.aes = TRUE) {
if (!is.function(funx)) {
stop("The `funx` argument must be a function.", call. = FALSE)
}
funx <- force(funx)
if (!is.function(funy)) {
stop("The `funy` argument must be a function.", call. = FALSE)
}
funy <- force(funy)
if (!is.list(argx) | !is.list(argy)) {
stop("The `argx` and `argy` arguments must be lists.", call. = FALSE)
} else {
if (length(argx) > 0) {
if (is.null(names(argx))) {
stop("The `argx` list must have named elements.", call. = FALSE)
}
}
if (length(argy) > 0) {
if (is.null(names(argy))) {
stop("The `argy` list must have named elements.", call. = FALSE)
}
}
}
layer(
data = data, mapping = mapping, stat = StatFunxy, geom = geom,
position = position, show.legend = show.legend, inherit.aes = inherit.aes,
params = list(
funx = funx, funy = funy,
argx = argx, argy = argy,
crop_other = crop_other,
...
)
)
}
#' @rdname stat_funxy
#' @export
stat_centroid <- function(...,
funx = mean, funy = mean,
argx = list(na.rm = TRUE), argy = list(na.rm = TRUE)) {
stat_funxy(..., funx = funx, funy = funy, argx = argx, argy = argy)
}
#' @rdname stat_funxy
#' @export
stat_midpoint <- function(...,
argx = list(na.rm = TRUE),
argy = list(na.rm = TRUE)) {
fun <- function(x, na.rm = TRUE) {
sum(range(x, na.rm = na.rm), na.rm = na.rm)/2
}
stat_funxy(..., funx = fun, funy = fun, argx = argx, argy = argy)
}
# ggproto -----------------------------------------------------------------
#' @usage NULL
#' @format NULL
#' @export
#' @rdname ggh4x_extensions
StatFunxy <- ggproto(
"StatFunxy", Stat,
required_aes = c("x", "y"),
compute_group = function(data, scales,
funx, funy,
argx, argy,
crop_other = TRUE) {
# Make list for cheaper operations
data <- as.list(data)
# Apply functions
x <- do.call(funx, c(unname(data["x"]), argx))
y <- do.call(funy, c(unname(data["y"]), argy))
# Ensure rest of data is of correct length
other <- setdiff(names(data), c("x", "y"))
size <- seq_len(max(length(x), length(y)))
if (isTRUE(crop_other)) {
other <- lapply(data[other], `[`, i = size)
} else {
other <- data[other]
}
# Combine data
data <- c(other, list(x = x, y = y))
data <- do.call(vec_recycle_common, data)
.int$new_data_frame(data)
}
)
|
library(ape)
testtree <- read.tree("13617_0.txt")
unrooted_tr <- unroot(testtree)
write.tree(unrooted_tr, file="13617_0_unrooted.txt") | /codeml_files/newick_trees_processed/13617_0/rinput.R | no_license | DaniBoo/cyanobacteria_project | R | false | false | 137 | r | library(ape)
testtree <- read.tree("13617_0.txt")
unrooted_tr <- unroot(testtree)
write.tree(unrooted_tr, file="13617_0_unrooted.txt") |
\name{ng_set_size<-}
\alias{ng_set_size<-}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Change sizes of data points in an active navGraph session
}
\description{
Specify new sizes for each point for an active navGraph session.
}
\usage{
ng_set_size(obj, dataName) <- value
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{obj}{
A navgraph handler of an active navGraph session.
}
\item{dataName}{
String of the data name. If not specified and only one data set is beeing used in the navGraph session, it will default to this data. Otherwise, if multiple data sets are being used in a navGraph session, the function will list the name of these data sets and ask you to specify one.
}
\item{value}{
Replacement vector or single value specifing the sizes (>0) of points.
}
}
%\references{%% ~put references to the literature/web site here ~}
\author{
Adrian Waddell and Wayne Oldford
}
%\note{%% ~~further notes~~}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
\code{\link{navGraph}}, \code{\link{ng_get_size}}, \code{\link{ng_set_color<-}}, \code{\link{ng_get_color}}
}
\examples{
## Define a NG_data object
ng.iris <- ng_data(name = "IrisData", data = iris[,1:4])
## start a navGraph session
nav <- navGraph(ng.iris)
## set point sizes
ng_set_size(nav,'IrisData') <- sample(1:7, replace=TRUE, 150)
}
| /man/ng_set_sizesetter.Rd | no_license | cran/RnavGraph | R | false | false | 1,407 | rd | \name{ng_set_size<-}
\alias{ng_set_size<-}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Change sizes of data points in an active navGraph session
}
\description{
Specify new sizes for each point for an active navGraph session.
}
\usage{
ng_set_size(obj, dataName) <- value
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{obj}{
A navgraph handler of an active navGraph session.
}
\item{dataName}{
String of the data name. If not specified and only one data set is beeing used in the navGraph session, it will default to this data. Otherwise, if multiple data sets are being used in a navGraph session, the function will list the name of these data sets and ask you to specify one.
}
\item{value}{
Replacement vector or single value specifing the sizes (>0) of points.
}
}
%\references{%% ~put references to the literature/web site here ~}
\author{
Adrian Waddell and Wayne Oldford
}
%\note{%% ~~further notes~~}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
\code{\link{navGraph}}, \code{\link{ng_get_size}}, \code{\link{ng_set_color<-}}, \code{\link{ng_get_color}}
}
\examples{
## Define a NG_data object
ng.iris <- ng_data(name = "IrisData", data = iris[,1:4])
## start a navGraph session
nav <- navGraph(ng.iris)
## set point sizes
ng_set_size(nav,'IrisData') <- sample(1:7, replace=TRUE, 150)
}
|
### Exercise 3 ###
library(jsonlite)
library(dplyr)
load("nelly_tracks.Rdata")
# Use the `load` function to load in the nelly_tracks file. That should make the data.frame
# `top_nelly` available to you
# Use the `flatten` function to flatten the data.frame -- note what differs!
flatten(top.nelly)
# Use your `dplyr` functions to get the number of the songs that appear on each albumt
number.songs <- filter(flatten(top.nelly), album.album_type == 'album') %>% select(track_number)
# Bonus: perform both of the steps above in one line (one statement)
| /exercise-3/exercise.R | permissive | gnixuix/m10-apis | R | false | false | 557 | r | ### Exercise 3 ###
library(jsonlite)
library(dplyr)
load("nelly_tracks.Rdata")
# Use the `load` function to load in the nelly_tracks file. That should make the data.frame
# `top_nelly` available to you
# Use the `flatten` function to flatten the data.frame -- note what differs!
flatten(top.nelly)
# Use your `dplyr` functions to get the number of the songs that appear on each albumt
number.songs <- filter(flatten(top.nelly), album.album_type == 'album') %>% select(track_number)
# Bonus: perform both of the steps above in one line (one statement)
|
% Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/names_list.r
\name{names_list}
\alias{names_list}
\title{Get a random vector of species names.}
\usage{
names_list(rank = "genus", size = 10)
}
\arguments{
\item{rank}{Taxonomic rank, one of species, genus (default), family, order.}
\item{size}{Number of names to get. Maximum depends on the rank.}
}
\value{
Vector of taxonomic names.
}
\description{
Family and order names come from the APG plant names list. Genus and species names
come from Theplantlist.org.
}
\examples{
names_list()
names_list('species')
names_list('genus')
names_list('family')
names_list('order')
names_list('order', '2')
names_list('order', '15')
# You can get a lot of genus or species names if you want
nrow(theplantlist)
names_list('genus', 500)
}
\author{
Scott Chamberlain \email{myrmecocystus@gmail.com}
}
| /man/names_list.Rd | permissive | MadeleineMcGreer/taxize | R | false | false | 877 | rd | % Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/names_list.r
\name{names_list}
\alias{names_list}
\title{Get a random vector of species names.}
\usage{
names_list(rank = "genus", size = 10)
}
\arguments{
\item{rank}{Taxonomic rank, one of species, genus (default), family, order.}
\item{size}{Number of names to get. Maximum depends on the rank.}
}
\value{
Vector of taxonomic names.
}
\description{
Family and order names come from the APG plant names list. Genus and species names
come from Theplantlist.org.
}
\examples{
names_list()
names_list('species')
names_list('genus')
names_list('family')
names_list('order')
names_list('order', '2')
names_list('order', '15')
# You can get a lot of genus or species names if you want
nrow(theplantlist)
names_list('genus', 500)
}
\author{
Scott Chamberlain \email{myrmecocystus@gmail.com}
}
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/s3_methods.R
\name{vcov.kclass}
\alias{vcov.kclass}
\title{S3 vcov method for class 'kclass'}
\usage{
\method{vcov}{kclass}(object, vcov. = NULL, errors = NULL)
}
\arguments{
\item{object}{a model object of class 'kclass'.}
}
\value{
a k * k matrix of variances and covariances, where k is the
number of coefficients in the model.
}
\description{
Calculates the variance-covariance matrix for a kclass model.
}
| /man/vcov.kclass.Rd | no_license | potterzot/rkclass | R | false | false | 500 | rd | % Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/s3_methods.R
\name{vcov.kclass}
\alias{vcov.kclass}
\title{S3 vcov method for class 'kclass'}
\usage{
\method{vcov}{kclass}(object, vcov. = NULL, errors = NULL)
}
\arguments{
\item{object}{a model object of class 'kclass'.}
}
\value{
a k * k matrix of variances and covariances, where k is the
number of coefficients in the model.
}
\description{
Calculates the variance-covariance matrix for a kclass model.
}
|
library(jsonlite)
# формируем json
string<- toJSON(list(as.list(dd)[1:3],
content.items.unitNds = list(studentid = 15, name = "Kevin")) )
#убираем "1" в строке
string_correct<- substr(string, start =6, stop = nchar(string)-1)
# убираем квадратные скобки (если требуется)
#string <-gsub('\"', "'", string_correct, fixed=TRUE)
string <-gsub("\\[|\\]", "", string_correct)
toJSON(list(list(as.list(dd)[1:3],
content.items.unitNds = list(studentid = 15, name = "Kevin"))[[1]]
))
unlist(as.list(dd)[1:3])
rat_i<-read_fst(paste0( i, ".fst"))
colnames(rat_i)[11:15]
list(status , code ) <- c (10,12)
#составляем словарь из тэгов чека и значений. Но можно сделать и data.frame ключи-значения через cbind
h <- hash( keys=colnames(rat_i), values=1:139 )
toJSON(as.list(h), auto_unbox = TRUE)
keys(h)
dd <- hash()
.set( dd, keys=colnames(rat_i), values=1:139 )
library (data.table)
dd<- as.data.frame(as.list(dd))
dd_t<-transpose(as.data.frame(as.list(dd)) )
colnames(dd_t)<- rownames (dd)
rownames(dd_t)<-colnames(dd)
dd_t<-as.data.frame(dd_t, rownames=FALSE)
dd_t$tags<-rownames(dd_t)
dd_t[order( dd_t[,1] ),]
toJSON(as.list(dd_t[,2]))
toJSON(as.list(dd_t))
| /synthetic_json.R | no_license | meniko117/USN | R | false | false | 1,361 | r | library(jsonlite)
# формируем json
string<- toJSON(list(as.list(dd)[1:3],
content.items.unitNds = list(studentid = 15, name = "Kevin")) )
#убираем "1" в строке
string_correct<- substr(string, start =6, stop = nchar(string)-1)
# убираем квадратные скобки (если требуется)
#string <-gsub('\"', "'", string_correct, fixed=TRUE)
string <-gsub("\\[|\\]", "", string_correct)
toJSON(list(list(as.list(dd)[1:3],
content.items.unitNds = list(studentid = 15, name = "Kevin"))[[1]]
))
unlist(as.list(dd)[1:3])
rat_i<-read_fst(paste0( i, ".fst"))
colnames(rat_i)[11:15]
list(status , code ) <- c (10,12)
#составляем словарь из тэгов чека и значений. Но можно сделать и data.frame ключи-значения через cbind
h <- hash( keys=colnames(rat_i), values=1:139 )
toJSON(as.list(h), auto_unbox = TRUE)
keys(h)
dd <- hash()
.set( dd, keys=colnames(rat_i), values=1:139 )
library (data.table)
dd<- as.data.frame(as.list(dd))
dd_t<-transpose(as.data.frame(as.list(dd)) )
colnames(dd_t)<- rownames (dd)
rownames(dd_t)<-colnames(dd)
dd_t<-as.data.frame(dd_t, rownames=FALSE)
dd_t$tags<-rownames(dd_t)
dd_t[order( dd_t[,1] ),]
toJSON(as.list(dd_t[,2]))
toJSON(as.list(dd_t))
|
library(shiny)
library(reshape2)
predictCost <- function(heuristic, p1, p2) {
data <- read.csv(file = 'data.csv', header = TRUE, sep = ',')
checks <- data[data$SET == 'coloring', c(3, 4, 11, 13, 15)]
names(checks) <- c('P1', 'P2', 'DOM', 'DEG', 'K')
checks <- melt(data = checks, id.vars = c('P1', 'P2'), variable.name = 'HEURISTIC', value.name = 'COST')
checks = checks[checks$HEURISTIC == heuristic, ]
model <- lm(COST ~ P1 + P2, data = checks)
newData <- data.frame(P1 = c(p1), P2 = c(p2))
return(predict(model, newData))
}
shinyServer(
function(input, output) {
output$heuristic <- renderPrint({input$heuristic})
output$p1 <- renderPrint({input$p1})
output$p2 <- renderPrint({input$p2})
output$cost <- renderPrint(predictCost(input$heuristic, input$p1, input$p2))
}
) | /server.R | no_license | jcobayliss/DataScience09 | R | false | false | 822 | r | library(shiny)
library(reshape2)
predictCost <- function(heuristic, p1, p2) {
data <- read.csv(file = 'data.csv', header = TRUE, sep = ',')
checks <- data[data$SET == 'coloring', c(3, 4, 11, 13, 15)]
names(checks) <- c('P1', 'P2', 'DOM', 'DEG', 'K')
checks <- melt(data = checks, id.vars = c('P1', 'P2'), variable.name = 'HEURISTIC', value.name = 'COST')
checks = checks[checks$HEURISTIC == heuristic, ]
model <- lm(COST ~ P1 + P2, data = checks)
newData <- data.frame(P1 = c(p1), P2 = c(p2))
return(predict(model, newData))
}
shinyServer(
function(input, output) {
output$heuristic <- renderPrint({input$heuristic})
output$p1 <- renderPrint({input$p1})
output$p2 <- renderPrint({input$p2})
output$cost <- renderPrint(predictCost(input$heuristic, input$p1, input$p2))
}
) |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/simulate_fludrug_fit.R
\name{simulate_fludrug_fit}
\alias{simulate_fludrug_fit}
\title{Fitting a simple viral infection model with 2 types of drug mechanisms to influenza data}
\usage{
simulate_fludrug_fit(
U = 1e+07,
I = 0,
V = 1,
dI = 1,
dV = 2,
b = 0.002,
blow = 0,
bhigh = 100,
p = 0.002,
plow = 0,
phigh = 100,
g = 0,
glow = 0,
ghigh = 100,
k = 0,
fitmodel = 1,
iter = 1
)
}
\arguments{
\item{U}{: initial number of uninfected target cells : numeric}
\item{I}{: initial number of infected target cells : numeric}
\item{V}{: initial number of infectious virions : numeric}
\item{dI}{: rate at which infected cells die : numeric}
\item{dV}{: rate at which infectious virus is cleared : numeric}
\item{b}{: rate at which virus infects cells : numeric}
\item{blow}{: lower bound for infection rate : numeric}
\item{bhigh}{: upper bound for infection rate : numeric}
\item{p}{: rate at which infected cells produce virus : numeric}
\item{plow}{: lower bound for virus production rate : numeric}
\item{phigh}{: upper bound for virus production rate : numeric}
\item{g}{: unit conversion factor : numeric}
\item{glow}{: lower bound for unit conversion factor : numeric}
\item{ghigh}{: upper bound for unit conversion factor : numeric}
\item{k}{: drug efficacy (between 0-1) : numeric}
\item{fitmodel}{: fitting model 1 or 2 : numeric}
\item{iter}{: max number of steps to be taken by optimizer : numeric}
}
\value{
The function returns a list containing the best fit timeseries,
the best fit parameters, the data and the AICc for the model.
}
\description{
This function fits the simulate_virusandtx_ode model,
which is a compartment model
using a set of ordinary differential equations.
The model describes a simple viral infection system in the presence of drug treatment.
The user provides initial conditions and parameter values for the system.
The function simulates the ODE using an ODE solver from the deSolve package.
}
\details{
A simple compartmental ODE models describing an acute viral infection with drug treatment
mechanism/model 1 assumes that drug treatment reduces rate of new cell infection.
mechanism/model 2 assumes that drug treatment reduces rate of new virus production.
}
\section{Warning}{
This function does not perform any error checking. So if
you try to do something nonsensical (e.g. specify negative parameter or starting values),
the code will likely abort with an error message.
}
\examples{
# To run the code with default parameters just call the function:
result <- simulate_fludrug_fit()
# To apply different settings, provide them to the simulator function, like such:
result <- simulate_fludrug_fit(k = 0.5, iter = 5, fitmodel = 2)
}
\seealso{
See the Shiny app documentation corresponding to this
function for more details on this model.
}
\author{
Andreas Handel
}
| /man/simulate_fludrug_fit.Rd | no_license | cran/DSAIRM | R | false | true | 3,035 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/simulate_fludrug_fit.R
\name{simulate_fludrug_fit}
\alias{simulate_fludrug_fit}
\title{Fitting a simple viral infection model with 2 types of drug mechanisms to influenza data}
\usage{
simulate_fludrug_fit(
U = 1e+07,
I = 0,
V = 1,
dI = 1,
dV = 2,
b = 0.002,
blow = 0,
bhigh = 100,
p = 0.002,
plow = 0,
phigh = 100,
g = 0,
glow = 0,
ghigh = 100,
k = 0,
fitmodel = 1,
iter = 1
)
}
\arguments{
\item{U}{: initial number of uninfected target cells : numeric}
\item{I}{: initial number of infected target cells : numeric}
\item{V}{: initial number of infectious virions : numeric}
\item{dI}{: rate at which infected cells die : numeric}
\item{dV}{: rate at which infectious virus is cleared : numeric}
\item{b}{: rate at which virus infects cells : numeric}
\item{blow}{: lower bound for infection rate : numeric}
\item{bhigh}{: upper bound for infection rate : numeric}
\item{p}{: rate at which infected cells produce virus : numeric}
\item{plow}{: lower bound for virus production rate : numeric}
\item{phigh}{: upper bound for virus production rate : numeric}
\item{g}{: unit conversion factor : numeric}
\item{glow}{: lower bound for unit conversion factor : numeric}
\item{ghigh}{: upper bound for unit conversion factor : numeric}
\item{k}{: drug efficacy (between 0-1) : numeric}
\item{fitmodel}{: fitting model 1 or 2 : numeric}
\item{iter}{: max number of steps to be taken by optimizer : numeric}
}
\value{
The function returns a list containing the best fit timeseries,
the best fit parameters, the data and the AICc for the model.
}
\description{
This function fits the simulate_virusandtx_ode model,
which is a compartment model
using a set of ordinary differential equations.
The model describes a simple viral infection system in the presence of drug treatment.
The user provides initial conditions and parameter values for the system.
The function simulates the ODE using an ODE solver from the deSolve package.
}
\details{
A simple compartmental ODE models describing an acute viral infection with drug treatment
mechanism/model 1 assumes that drug treatment reduces rate of new cell infection.
mechanism/model 2 assumes that drug treatment reduces rate of new virus production.
}
\section{Warning}{
This function does not perform any error checking. So if
you try to do something nonsensical (e.g. specify negative parameter or starting values),
the code will likely abort with an error message.
}
\examples{
# To run the code with default parameters just call the function:
result <- simulate_fludrug_fit()
# To apply different settings, provide them to the simulator function, like such:
result <- simulate_fludrug_fit(k = 0.5, iter = 5, fitmodel = 2)
}
\seealso{
See the Shiny app documentation corresponding to this
function for more details on this model.
}
\author{
Andreas Handel
}
|
library(readr)
library(sp)
library(raster)
library(gstat)
library(rgdal)
library(RNetCDF)
library(ncdf4)
library(stringr)
library(rgeos)
library(leaflet)
library(htmlwidgets)
library(dplyr)
e <- extent(50,60,20, 28) # UAE extent
plot(e)
# make a spatial polygon from the extent
p <- as(e, "SpatialPolygons")
plot(p)
proj4string(p) <- CRS("+proj=longlat +datum=WGS84")
# crs(p) <- "proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"
# save shp file for the rectangular DOMAIN
setwd("Z:/_SHARED_FOLDERS/Air Quality/Phase 2/admin_GIS/prova_shapes")
shapefile(p, "rectangular_domain.shp", overwrite=TRUE)
# reload and plot domain
dir <- "Z:/_SHARED_FOLDERS/Air Quality/Phase 2/admin_GIS/prova_shapes"
shp_rect <- readOGR(dsn = dir, layer = "rectangular_domain")
# ----- Transform to EPSG 4326 - WGS84 (required)
shp_rect <- spTransform(shp_rect, CRS("+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"))
shp_rect@data$name <- 1:nrow(shp_rect)
plot(shp_rect)
#########################
### shapefile UAE #######
#########################
dir <- "Z:/_SHARED_FOLDERS/Air Quality/Phase 2/HISTORICAL_dust/UAE_boundary"
### shapefile for UAE
shp_UAE <- readOGR(dsn = dir, layer = "uae_emirates")
# ----- Transform to EPSG 4326 - WGS84 (required)
shp_UAE <- spTransform(shp_UAE, CRS("+init=epsg:4326"))
# names(shp)
shp_UAE@data$name <- 1:nrow(shp_UAE)
plot(shp_rect)
plot(shp_UAE, add=TRUE, lwd=1)
###################################
## shapefile Arabian Peninsual ####
###################################
dir <- "Z:/_SHARED_FOLDERS/Air Quality/Phase 2/Dust_Event_UAE_2015/WRFChem_domain"
shp_AP <- readOGR(dsn = dir, layer = "ADMIN_domain_d01_WRFChem")
# ----- Transform to EPSG 4326 - WGS84 (required)
shp_AP <- spTransform(shp_AP, CRS("+init=epsg:4326"))
# names(shp)
shp_AP@data$name <- 1:nrow(shp_AP)
plot(shp_rect)
plot(shp_AP, add=TRUE, lwd=1)
# d01_shp_WW <- crop(shp_WW, e)
# plot(d01_shp_WW)
# setwd("Z:/_SHARED_FOLDERS/Air Quality/Phase 2/Dust_Event_UAE_2015/WRFChem_domain")
# shapefile(d01_shp_WW, "ADMIN_domain_MIKE.shp.shp", overwrite=TRUE)
##################################################
# make a point for a location in the UAE #########
##################################################
require(sf)
coordinates_ABU_DHABI <- read.table(text="
longitude latitude
54.646 24.43195",
header=TRUE)
# new coordiantes over AP
coordinates_ABU_DHABI <- read.table(text="
longitude latitude
47.63 21.08",
header=TRUE)
coordinates_ABU_DHABI <- read.table(text="
longitude latitude
47.63 21.08",
"54.646 24.43195",
header=TRUE)
coord_ABU_DHABI_point <- st_as_sf(x = coordinates_ABU_DHABI,
coords = c("longitude", "latitude"),
crs = "+proj=longlat +datum=WGS84")
# simple plot
plot(coord_ABU_DHABI_point)
# convert to sp object if needed
coord_ABU_DHABI_point <- as(coord_ABU_DHABI_point, "Spatial")
# shp_buff <- gBuffer(shp_UAE, width=40, byid=TRUE, quadsegs=10)
shp_buff <- gBuffer(coord_ABU_DHABI_point, width=5)
shp_buff <- spTransform(shp_buff, CRS("+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"))
plot(shp_buff)
plot(shp_UAE, add=TRUE, lwd=1)
plot(shp_AP, add=TRUE, lwd=1)
# save shp file for the crcular buffer
setwd("Z:/_SHARED_FOLDERS/Air Quality/Phase 2/admin_GIS/prova_shapes")
shapefile(shp_buff, "circular_buffer.shp", overwrite=TRUE)
dir <- "Z:/_SHARED_FOLDERS/Air Quality/Phase 2/admin_GIS/prova_shapes"
# reload and plot domain
shp_buff <- readOGR(dsn = dir, layer = "circular_buffer")
#######################################
# load .tif file for the DUST MASK ####
#######################################
setwd("Z:/_SHARED_FOLDERS/Air Quality/Phase 2/DUST SEVIRI/masks")
patt <- ".tif"
filenames <- list.files(pattern = patt)
filenames <- filenames[18]
filenames
# [1] "201802200615_MASK.tif"
DUST_mask <- raster(filenames)
# mask and crop raster over the UAE
# DUST_mask <- crop(DUST_mask, extent(shp_UAE))
# DUST_mask <- mask(DUST_mask, shp_UAE)
# DUST_mask <- crop(DUST_mask, extent(shp_AP))
# DUST_mask <- mask(DUST_mask, shp_AP)
plot(DUST_mask)
# get points from the raster (lat, lon, points)
values <- rasterToPoints(DUST_mask)
colnames(values) <- c("x", "y", "values")
values <- as.data.frame(values)
head(values)
crs <- projection(shp_buff) ### get projections from shp file
# make a spatial object with the points from the raster
values <- SpatialPointsDataFrame(values[,1:2], values,
proj4string=CRS(crs))
# get NUMBER of POINTS that fall into the circular buffer zone
# pts_in_buffer <- sp::over(values, shp_buff, fun = NULL)
# pts_in_buffer <- over(values, shp_buff[, "ID"])
library(spatialEco)
pts_in_buffer <- point.in.poly(values, shp_buff)
pts_in_buffer <- as.data.frame(pts_in_buffer)
pts_in_buffer <- pts_in_buffer %>%
filter(values > 0)
#### Sum of points in the circular buffer (each point correspond to an area of ~ 1km)
data_points <- pts_in_buffer %>%
dplyr::summarize(sum = sum(values))
| /R_scripts/DUST_operational/buffer_shp_file.R | no_license | karafede/DUST_SEVIRI | R | false | false | 5,404 | r |
library(readr)
library(sp)
library(raster)
library(gstat)
library(rgdal)
library(RNetCDF)
library(ncdf4)
library(stringr)
library(rgeos)
library(leaflet)
library(htmlwidgets)
library(dplyr)
e <- extent(50,60,20, 28) # UAE extent
plot(e)
# make a spatial polygon from the extent
p <- as(e, "SpatialPolygons")
plot(p)
proj4string(p) <- CRS("+proj=longlat +datum=WGS84")
# crs(p) <- "proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"
# save shp file for the rectangular DOMAIN
setwd("Z:/_SHARED_FOLDERS/Air Quality/Phase 2/admin_GIS/prova_shapes")
shapefile(p, "rectangular_domain.shp", overwrite=TRUE)
# reload and plot domain
dir <- "Z:/_SHARED_FOLDERS/Air Quality/Phase 2/admin_GIS/prova_shapes"
shp_rect <- readOGR(dsn = dir, layer = "rectangular_domain")
# ----- Transform to EPSG 4326 - WGS84 (required)
shp_rect <- spTransform(shp_rect, CRS("+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"))
shp_rect@data$name <- 1:nrow(shp_rect)
plot(shp_rect)
#########################
### shapefile UAE #######
#########################
dir <- "Z:/_SHARED_FOLDERS/Air Quality/Phase 2/HISTORICAL_dust/UAE_boundary"
### shapefile for UAE
shp_UAE <- readOGR(dsn = dir, layer = "uae_emirates")
# ----- Transform to EPSG 4326 - WGS84 (required)
shp_UAE <- spTransform(shp_UAE, CRS("+init=epsg:4326"))
# names(shp)
shp_UAE@data$name <- 1:nrow(shp_UAE)
plot(shp_rect)
plot(shp_UAE, add=TRUE, lwd=1)
###################################
## shapefile Arabian Peninsual ####
###################################
dir <- "Z:/_SHARED_FOLDERS/Air Quality/Phase 2/Dust_Event_UAE_2015/WRFChem_domain"
shp_AP <- readOGR(dsn = dir, layer = "ADMIN_domain_d01_WRFChem")
# ----- Transform to EPSG 4326 - WGS84 (required)
shp_AP <- spTransform(shp_AP, CRS("+init=epsg:4326"))
# names(shp)
shp_AP@data$name <- 1:nrow(shp_AP)
plot(shp_rect)
plot(shp_AP, add=TRUE, lwd=1)
# d01_shp_WW <- crop(shp_WW, e)
# plot(d01_shp_WW)
# setwd("Z:/_SHARED_FOLDERS/Air Quality/Phase 2/Dust_Event_UAE_2015/WRFChem_domain")
# shapefile(d01_shp_WW, "ADMIN_domain_MIKE.shp.shp", overwrite=TRUE)
##################################################
# make a point for a location in the UAE #########
##################################################
require(sf)
coordinates_ABU_DHABI <- read.table(text="
longitude latitude
54.646 24.43195",
header=TRUE)
# new coordiantes over AP
coordinates_ABU_DHABI <- read.table(text="
longitude latitude
47.63 21.08",
header=TRUE)
coordinates_ABU_DHABI <- read.table(text="
longitude latitude
47.63 21.08",
"54.646 24.43195",
header=TRUE)
coord_ABU_DHABI_point <- st_as_sf(x = coordinates_ABU_DHABI,
coords = c("longitude", "latitude"),
crs = "+proj=longlat +datum=WGS84")
# simple plot
plot(coord_ABU_DHABI_point)
# convert to sp object if needed
coord_ABU_DHABI_point <- as(coord_ABU_DHABI_point, "Spatial")
# shp_buff <- gBuffer(shp_UAE, width=40, byid=TRUE, quadsegs=10)
shp_buff <- gBuffer(coord_ABU_DHABI_point, width=5)
shp_buff <- spTransform(shp_buff, CRS("+proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0"))
plot(shp_buff)
plot(shp_UAE, add=TRUE, lwd=1)
plot(shp_AP, add=TRUE, lwd=1)
# save shp file for the crcular buffer
setwd("Z:/_SHARED_FOLDERS/Air Quality/Phase 2/admin_GIS/prova_shapes")
shapefile(shp_buff, "circular_buffer.shp", overwrite=TRUE)
dir <- "Z:/_SHARED_FOLDERS/Air Quality/Phase 2/admin_GIS/prova_shapes"
# reload and plot domain
shp_buff <- readOGR(dsn = dir, layer = "circular_buffer")
#######################################
# load .tif file for the DUST MASK ####
#######################################
setwd("Z:/_SHARED_FOLDERS/Air Quality/Phase 2/DUST SEVIRI/masks")
patt <- ".tif"
filenames <- list.files(pattern = patt)
filenames <- filenames[18]
filenames
# [1] "201802200615_MASK.tif"
DUST_mask <- raster(filenames)
# mask and crop raster over the UAE
# DUST_mask <- crop(DUST_mask, extent(shp_UAE))
# DUST_mask <- mask(DUST_mask, shp_UAE)
# DUST_mask <- crop(DUST_mask, extent(shp_AP))
# DUST_mask <- mask(DUST_mask, shp_AP)
plot(DUST_mask)
# get points from the raster (lat, lon, points)
values <- rasterToPoints(DUST_mask)
colnames(values) <- c("x", "y", "values")
values <- as.data.frame(values)
head(values)
crs <- projection(shp_buff) ### get projections from shp file
# make a spatial object with the points from the raster
values <- SpatialPointsDataFrame(values[,1:2], values,
proj4string=CRS(crs))
# get NUMBER of POINTS that fall into the circular buffer zone
# pts_in_buffer <- sp::over(values, shp_buff, fun = NULL)
# pts_in_buffer <- over(values, shp_buff[, "ID"])
library(spatialEco)
pts_in_buffer <- point.in.poly(values, shp_buff)
pts_in_buffer <- as.data.frame(pts_in_buffer)
pts_in_buffer <- pts_in_buffer %>%
filter(values > 0)
#### Sum of points in the circular buffer (each point correspond to an area of ~ 1km)
data_points <- pts_in_buffer %>%
dplyr::summarize(sum = sum(values))
|
#1.
TB<-c(50,51,52,53,54)
BB<-c(40,46,44,55,49)
plot(TB,BB)
scatter.smooth(TB,BB)
#bagaimana model liniernya?
lm
#membuat regresi linier model
reg<-lm(BB~TB)
reg
#2.
predict(reg, data.frame(TB= 55))
#3.
x<- c(0,1,2,3,4)
y<- c(1,2.25,3.75,4.25,5.65)
library(polynom)
f<-poly.calc(x, y)
f
#4.
#INI SOAL DI SLIDE INTERPOLASI- BAGIAN POLINOM LAGRANGE
n=5
x<- c(0,1,2,3,4)
y<- c(1,2.25,3.75,4.25,5.65)
#tentukan polinom pada titik x=3,5 atau f(3.5) atau p(3.5)
px = 2.75
#L=lagrange
L = 0.0
for(i in 1:n){
pembilang = 1.0
penyebut = 1.0
for(j in 1:n){
if(i != j){
pembilang = pembilang*(px - x[j])
penyebut = penyebut * (x[i] -x[j])
}
j <- j+1
}
L = L + (pembilang/penyebut)*y[i]
i <- i+1
}
print(paste("answer is " ,L))
#5.
plot(x,y)
curve(f, add=TRUE)
#13
h <- 0.1
x <- seq(0.1, by=h)
f <- function(x){
return(x^2)
}
f0<- f(x[1])
fi <- sapply(x[2:10], f)
fn <- f(x[length(x)])
trap <- function(f0,fi,fn,h){
L< - h *(f0+2*sum(fi)-1+fn)/2
return(L)
}
trap(f0,fi,fn,h) | /17523196.R | no_license | agyaudiaisk/17523196-R-UAS-2 | R | false | false | 1,087 | r | #1.
TB<-c(50,51,52,53,54)
BB<-c(40,46,44,55,49)
plot(TB,BB)
scatter.smooth(TB,BB)
#bagaimana model liniernya?
lm
#membuat regresi linier model
reg<-lm(BB~TB)
reg
#2.
predict(reg, data.frame(TB= 55))
#3.
x<- c(0,1,2,3,4)
y<- c(1,2.25,3.75,4.25,5.65)
library(polynom)
f<-poly.calc(x, y)
f
#4.
#INI SOAL DI SLIDE INTERPOLASI- BAGIAN POLINOM LAGRANGE
n=5
x<- c(0,1,2,3,4)
y<- c(1,2.25,3.75,4.25,5.65)
#tentukan polinom pada titik x=3,5 atau f(3.5) atau p(3.5)
px = 2.75
#L=lagrange
L = 0.0
for(i in 1:n){
pembilang = 1.0
penyebut = 1.0
for(j in 1:n){
if(i != j){
pembilang = pembilang*(px - x[j])
penyebut = penyebut * (x[i] -x[j])
}
j <- j+1
}
L = L + (pembilang/penyebut)*y[i]
i <- i+1
}
print(paste("answer is " ,L))
#5.
plot(x,y)
curve(f, add=TRUE)
#13
h <- 0.1
x <- seq(0.1, by=h)
f <- function(x){
return(x^2)
}
f0<- f(x[1])
fi <- sapply(x[2:10], f)
fn <- f(x[length(x)])
trap <- function(f0,fi,fn,h){
L< - h *(f0+2*sum(fi)-1+fn)/2
return(L)
}
trap(f0,fi,fn,h) |
# Load R's "USPersonalExpenditure" dataset using the "data()" function
# This will produce a data frame called `USPersonalExpenditure`
data("USPersonalExpenditure")
# The variable USPersonalExpenditure is now accessible to you. Unfortunately,
# it's not a data frame (it's actually what is called a matrix)
# Test this using the `is.data.frame()` function
is.data.frame(USPersonalExpenditure)
# Luckily, you can simply pass the USPersonalExpenditure variable as an argument
# to the `data.frame()` function to convert it a data farm. Do this, storing the
# result in a new variable
USPersonalExpenditure <- data.frame(USPersonalExpenditure, stringsAsFactors = FALSE)
# What are the column names of your dataframe?
View(USPersonalExpenditure)
# Why are they so strange? Think about whether you could use a number like 1940
# with dollar notation!
USPersonalExpenditure$X1940
# What are the row names of your dataframe?
View(USPersonalExpenditure)
# Create a column "category" that is equal to your rownames
USPersonalExpenditure$category <- rownames(USPersonalExpenditure)
# How much money was spent on personal care in 1940?
sum(USPersonalExpenditure["Personal Care", 1:5])
# How much money was spent on Food and Tobacco in 1960?
sum(USPersonalExpenditure["Food and Tobacco", 1:5])
# What was the highest expenditure category in 1960?
# Hint: use the `max()` function to find the largest, then compare that to the column
highest.1960 <- USPersonalExpenditure$category[USPersonalExpenditure$X1960 == max(USPersonalExpenditure$X1960)]
# Define a function `DetectHighest` that takes in a year as a parameter, and
# returns the highest spending category of that year
DetectHighest <- function(year){
col <- paste0("X", year)
USPersonalExpenditure$category[USPersonalExpenditure[, col] == max(USPersonalExpenditure[, col])]
}
# Using your function, determine the highest spending category of each year
DetectHighest(1960)
| /exercise-3/exercise.R | permissive | akshat-a/module9-dataframes | R | false | false | 1,933 | r | # Load R's "USPersonalExpenditure" dataset using the "data()" function
# This will produce a data frame called `USPersonalExpenditure`
data("USPersonalExpenditure")
# The variable USPersonalExpenditure is now accessible to you. Unfortunately,
# it's not a data frame (it's actually what is called a matrix)
# Test this using the `is.data.frame()` function
is.data.frame(USPersonalExpenditure)
# Luckily, you can simply pass the USPersonalExpenditure variable as an argument
# to the `data.frame()` function to convert it a data farm. Do this, storing the
# result in a new variable
USPersonalExpenditure <- data.frame(USPersonalExpenditure, stringsAsFactors = FALSE)
# What are the column names of your dataframe?
View(USPersonalExpenditure)
# Why are they so strange? Think about whether you could use a number like 1940
# with dollar notation!
USPersonalExpenditure$X1940
# What are the row names of your dataframe?
View(USPersonalExpenditure)
# Create a column "category" that is equal to your rownames
USPersonalExpenditure$category <- rownames(USPersonalExpenditure)
# How much money was spent on personal care in 1940?
sum(USPersonalExpenditure["Personal Care", 1:5])
# How much money was spent on Food and Tobacco in 1960?
sum(USPersonalExpenditure["Food and Tobacco", 1:5])
# What was the highest expenditure category in 1960?
# Hint: use the `max()` function to find the largest, then compare that to the column
highest.1960 <- USPersonalExpenditure$category[USPersonalExpenditure$X1960 == max(USPersonalExpenditure$X1960)]
# Define a function `DetectHighest` that takes in a year as a parameter, and
# returns the highest spending category of that year
DetectHighest <- function(year){
col <- paste0("X", year)
USPersonalExpenditure$category[USPersonalExpenditure[, col] == max(USPersonalExpenditure[, col])]
}
# Using your function, determine the highest spending category of each year
DetectHighest(1960)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/edytuj_skale.R
\name{edytuj_skale}
\alias{edytuj_skale}
\title{Zapisuje elementy skali do bazy danych}
\usage{
edytuj_skale(P, idSkali, elementy, nadpisz = FALSE)
}
\arguments{
\item{P}{połączenie z bazą danych uzyskane z \code{DBI::dbConnect(RPostgres::Postgres())}}
\item{idSkali}{identyfikator skali, ktora ma zostac zapisana (typowo uzyskany z funkcji "stworz_skale()")}
\item{elementy}{ramka danych opisujaca elementy skali - patrz http://zpd.ibe.edu.pl/doku.php?id=r_zpd_skale}
\item{nadpisz}{czy nadpisac skale, jesli jest juz zdefiniowana w bazie danych}
}
\value{
[data.frame] zapisane elementy skali
}
\description{
Patrz http://zpd.ibe.edu.pl/doku.php?id=r_zpd_skale
}
| /man/edytuj_skale.Rd | permissive | zozlak/ZPDzapis | R | false | true | 764 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/edytuj_skale.R
\name{edytuj_skale}
\alias{edytuj_skale}
\title{Zapisuje elementy skali do bazy danych}
\usage{
edytuj_skale(P, idSkali, elementy, nadpisz = FALSE)
}
\arguments{
\item{P}{połączenie z bazą danych uzyskane z \code{DBI::dbConnect(RPostgres::Postgres())}}
\item{idSkali}{identyfikator skali, ktora ma zostac zapisana (typowo uzyskany z funkcji "stworz_skale()")}
\item{elementy}{ramka danych opisujaca elementy skali - patrz http://zpd.ibe.edu.pl/doku.php?id=r_zpd_skale}
\item{nadpisz}{czy nadpisac skale, jesli jest juz zdefiniowana w bazie danych}
}
\value{
[data.frame] zapisane elementy skali
}
\description{
Patrz http://zpd.ibe.edu.pl/doku.php?id=r_zpd_skale
}
|
# Read Canadian computer science graduate data from 2015
c_comsci_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/computer_science/canadian_graduates_computer_science_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(c_comsci_grads_2015$VALUE,
names.arg=c_comsci_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of Canadian Computer Science Graduates (2015)",
border="black")
# Read Canadian computer science graduate data from 2016
c_comsci_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/computer_science/canadian_graduates_computer_science_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(c_comsci_grads_2016$VALUE,
names.arg=c_comsci_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of Canadian Computer Science Graduates (2016)",
border="black")
# Read Canadian computer science graduate data from 2017
c_comsci_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/computer_science/canadian_graduates_computer_science_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(c_comsci_grads_2017$VALUE,
names.arg=c_comsci_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of Canadian Computer Science Graduates (2017)",
border="black")
# Create a box plot with three columns of data for Canadian computer science grads
par(mfrow=c(1,3))
boxplot(c_comsci_grads_2015$VALUE, main="Number of Canadian Computer Science Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(c_comsci_grads_2016$VALUE, main="Number of Canadian Computer Science Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(c_comsci_grads_2017$VALUE, main="Number of Canadian Computer Science Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(c_comsci_grads_2015$VALUE)
summary(c_comsci_grads_2016$VALUE)
summary(c_comsci_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read Canadian engineering graduate data from 2015
c_eng_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/engineering/canadian_graduates_engineering_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(c_eng_grads_2015$VALUE,
names.arg=c_eng_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of Canadian Engineering Graduates (2015)",
border="black")
# Read Canadian engineering graduate data from 2016
c_eng_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/engineering/canadian_graduates_engineering_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(c_eng_grads_2016$VALUE,
names.arg=c_eng_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of Canadian Engineering Graduates (2016)",
border="black")
# Read Canadian engineering graduate data from 2017
c_eng_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/engineering/canadian_graduates_engineering_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(c_eng_grads_2017$VALUE,
names.arg=c_eng_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of Canadian Engineering Graduates (2017)",
border="black")
# Create a box plot with three columns of data for Canadian engineering grads
par(mfrow=c(1,3))
boxplot(c_eng_grads_2015$VALUE, main="Number of Canadian Engineering Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(c_eng_grads_2016$VALUE, main="Number of Canadian Engineering Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(c_eng_grads_2017$VALUE, main="Number of Canadian Engineering Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(c_eng_grads_2015$VALUE)
summary(c_eng_grads_2016$VALUE)
summary(c_eng_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read Canadian agriculture graduate data from 2015
c_agr_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/_agriculture/canadian_graduates_agriculture_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(c_agr_grads_2015$VALUE,
names.arg=c_agr_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of Canadian Agriculture Graduates (2015)",
border="black")
# Read Canadian agriculture graduate data from 2016
c_agr_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/_agriculture/canadian_graduates_agriculture_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(c_agr_grads_2016$VALUE,
names.arg=c_agr_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of Agriculture Graduates (2016)",
border="black")
# Read Canadian agriculture graduate data from 2017
c_agr_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/_agriculture/canadian_graduates_agriculture_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(c_agr_grads_2017$VALUE,
names.arg=c_agr_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of Agriculture Graduates (2017)",
border="black")
# Create a box plot with three columns of data for Canadian Agriculture grads
par(mfrow=c(1,3))
boxplot(c_agr_grads_2015$VALUE, main="Number of Canadian Agriculture Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(c_agr_grads_2016$VALUE, main="Number of Canadian Agriculture Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(c_agr_grads_2017$VALUE, main="Number of Canadian Agriculture Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(c_agr_grads_2015$VALUE)
summary(c_agr_grads_2016$VALUE)
summary(c_agr_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read International computer science graduate data from 2015
i_comsci_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/computer_science/international_graduates_computer_science_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(i_comsci_grads_2015$VALUE,
names.arg=i_comsci_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of International Computer Science Graduates (2015)",
border="black")
# Read International computer science data from 2016
i_comsci_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/computer_science/international_graduates_computer_science_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(i_comsci_grads_2016$VALUE,
names.arg=i_comsci_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of International Computer Science Graduates (2016)",
border="black")
# Read International computer science data from 2017
i_comsci_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/computer_science/international_graduates_computer_science_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(i_comsci_grads_2017$VALUE,
names.arg=i_comsci_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of International Computer Science Graduates (2017)",
border="black")
# Create a box plot with three columns of data for International computer science grads
par(mfrow=c(1,3))
boxplot(i_comsci_grads_2015$VALUE, main="Number of International Computer Science Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(i_comsci_grads_2016$VALUE, main="Number of International Computer Science Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(i_comsci_grads_2017$VALUE, main="Number of International Computer Science Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(i_comsci_grads_2015$VALUE)
summary(i_comsci_grads_2016$VALUE)
summary(i_comsci_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read International engineering graduate data from 2015
i_eng_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/engineering/international_graduates_engineering_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(i_eng_grads_2015$VALUE,
names.arg=i_eng_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of International Engineering Graduates (2015)",
border="black")
# Read International engineering data from 2016
i_eng_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/engineering/international_graduates_engineering_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(i_eng_grads_2016$VALUE,
names.arg=i_eng_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of International Engineering Graduates (2016)",
border="black")
# Read International engineering data from 2017
i_eng_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/engineering/international_graduates_engineering_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(i_eng_grads_2017$VALUE,
names.arg=i_eng_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of International Engineering Graduates (2017)",
border="black")
# Create a box plot with three columns of data for International engineering grads
par(mfrow=c(1,3))
boxplot(i_eng_grads_2015$VALUE, main="Number of International Engineering Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(i_eng_grads_2016$VALUE, main="Number of International Engineering Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(i_eng_grads_2017$VALUE, main="Number of International Engineering Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(i_eng_grads_2015$VALUE)
summary(i_eng_grads_2016$VALUE)
summary(i_eng_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read International agriculture graduate data from 2015
i_agr_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/_agriculture/international_graduates_agriculture_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(i_agr_grads_2015$VALUE,
names.arg=i_agr_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of International Agriculture Graduates (2015)",
border="black")
# Read International agriculture data from 2016
i_agr_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/_agriculture/international_graduates_agriculture_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(i_agr_grads_2016$VALUE,
names.arg=i_agr_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of International Agriculture Graduates (2016)",
border="black")
# Read International agriculture data from 2017
i_agr_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/_agriculture/international_graduates_agriculture_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(i_agr_grads_2017$VALUE,
names.arg=i_agr_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of International Agriculture Graduates (2017)",
border="black")
# Create a box plot with three columns of data for International Agriculture grads
par(mfrow=c(1,3))
boxplot(i_agr_grads_2015$VALUE, main="Number of International Agriculture Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(i_agr_grads_2016$VALUE, main="Number of International Agriculture Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(i_agr_grads_2017$VALUE, main="Number of International Agriculture Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(i_agr_grads_2015$VALUE)
summary(i_agr_grads_2016$VALUE)
summary(i_agr_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read Canadian university enrollment data from the years 2015-2017
enrollments <- read.csv("~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/enrolments_in_canadian universities_and_colleges_by_field_of_study_20152016_and_20162017.csv")
par(mgp=c(5,1,0))
par(mar=c(6,6,2,2))
yearly.enrollments <- rbind(enrollments$�.2015.2016_University, enrollments$�.2016.2017_University)
# Create a bar graph to show distribution of enrollments in Canadian Universities (2015-2017)
barplot(yearly.enrollments,
names.arg=c("[1]","[2]","[3]","[4]","[5]","[6]","[7]","[8]","[9]","[10]","[11]","[12]","[13]"),
xlab="Field of Study",
ylab="Number of Enrollments",
col=c("aquamarine3", "hotpink3"),
main="Enrollments in Canadian Universities by Field of Study",
border="black",
legend.text = c("2015/2016", "2016/2017"),
args.legend = list(cex=0.75, x = "topright"),
beside=TRUE,
las=1)
# Read total number of grad data for Canadian agriculture students from the years 2015-2017
c_total_agr_grads <- read.csv("~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/_agriculture/canadian_graduates_agriculture_20152017.csv")
# Create a linear regression model to predict avergae number of Canadian agriculture graduates in a year
linear_model_i_agr = lm(c_total_agr_grads$VALUE~c_total_agr_grads$�..REF_DATE)
summary(linear_model_i_agr)
| /Data Analysis/grad_data_analysis.R | no_license | harman-khehara/Data-Analysis-of-Postsecondary-Graduates | R | false | false | 18,236 | r | # Read Canadian computer science graduate data from 2015
c_comsci_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/computer_science/canadian_graduates_computer_science_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(c_comsci_grads_2015$VALUE,
names.arg=c_comsci_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of Canadian Computer Science Graduates (2015)",
border="black")
# Read Canadian computer science graduate data from 2016
c_comsci_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/computer_science/canadian_graduates_computer_science_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(c_comsci_grads_2016$VALUE,
names.arg=c_comsci_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of Canadian Computer Science Graduates (2016)",
border="black")
# Read Canadian computer science graduate data from 2017
c_comsci_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/computer_science/canadian_graduates_computer_science_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(c_comsci_grads_2017$VALUE,
names.arg=c_comsci_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of Canadian Computer Science Graduates (2017)",
border="black")
# Create a box plot with three columns of data for Canadian computer science grads
par(mfrow=c(1,3))
boxplot(c_comsci_grads_2015$VALUE, main="Number of Canadian Computer Science Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(c_comsci_grads_2016$VALUE, main="Number of Canadian Computer Science Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(c_comsci_grads_2017$VALUE, main="Number of Canadian Computer Science Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(c_comsci_grads_2015$VALUE)
summary(c_comsci_grads_2016$VALUE)
summary(c_comsci_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read Canadian engineering graduate data from 2015
c_eng_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/engineering/canadian_graduates_engineering_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(c_eng_grads_2015$VALUE,
names.arg=c_eng_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of Canadian Engineering Graduates (2015)",
border="black")
# Read Canadian engineering graduate data from 2016
c_eng_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/engineering/canadian_graduates_engineering_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(c_eng_grads_2016$VALUE,
names.arg=c_eng_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of Canadian Engineering Graduates (2016)",
border="black")
# Read Canadian engineering graduate data from 2017
c_eng_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/engineering/canadian_graduates_engineering_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(c_eng_grads_2017$VALUE,
names.arg=c_eng_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of Canadian Engineering Graduates (2017)",
border="black")
# Create a box plot with three columns of data for Canadian engineering grads
par(mfrow=c(1,3))
boxplot(c_eng_grads_2015$VALUE, main="Number of Canadian Engineering Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(c_eng_grads_2016$VALUE, main="Number of Canadian Engineering Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(c_eng_grads_2017$VALUE, main="Number of Canadian Engineering Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(c_eng_grads_2015$VALUE)
summary(c_eng_grads_2016$VALUE)
summary(c_eng_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read Canadian agriculture graduate data from 2015
c_agr_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/_agriculture/canadian_graduates_agriculture_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(c_agr_grads_2015$VALUE,
names.arg=c_agr_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of Canadian Agriculture Graduates (2015)",
border="black")
# Read Canadian agriculture graduate data from 2016
c_agr_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/_agriculture/canadian_graduates_agriculture_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(c_agr_grads_2016$VALUE,
names.arg=c_agr_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of Agriculture Graduates (2016)",
border="black")
# Read Canadian agriculture graduate data from 2017
c_agr_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/_agriculture/canadian_graduates_agriculture_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(c_agr_grads_2017$VALUE,
names.arg=c_agr_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of Agriculture Graduates (2017)",
border="black")
# Create a box plot with three columns of data for Canadian Agriculture grads
par(mfrow=c(1,3))
boxplot(c_agr_grads_2015$VALUE, main="Number of Canadian Agriculture Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(c_agr_grads_2016$VALUE, main="Number of Canadian Agriculture Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(c_agr_grads_2017$VALUE, main="Number of Canadian Agriculture Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(c_agr_grads_2015$VALUE)
summary(c_agr_grads_2016$VALUE)
summary(c_agr_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read International computer science graduate data from 2015
i_comsci_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/computer_science/international_graduates_computer_science_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(i_comsci_grads_2015$VALUE,
names.arg=i_comsci_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of International Computer Science Graduates (2015)",
border="black")
# Read International computer science data from 2016
i_comsci_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/computer_science/international_graduates_computer_science_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(i_comsci_grads_2016$VALUE,
names.arg=i_comsci_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of International Computer Science Graduates (2016)",
border="black")
# Read International computer science data from 2017
i_comsci_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/computer_science/international_graduates_computer_science_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(i_comsci_grads_2017$VALUE,
names.arg=i_comsci_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of International Computer Science Graduates (2017)",
border="black")
# Create a box plot with three columns of data for International computer science grads
par(mfrow=c(1,3))
boxplot(i_comsci_grads_2015$VALUE, main="Number of International Computer Science Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(i_comsci_grads_2016$VALUE, main="Number of International Computer Science Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(i_comsci_grads_2017$VALUE, main="Number of International Computer Science Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(i_comsci_grads_2015$VALUE)
summary(i_comsci_grads_2016$VALUE)
summary(i_comsci_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read International engineering graduate data from 2015
i_eng_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/engineering/international_graduates_engineering_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(i_eng_grads_2015$VALUE,
names.arg=i_eng_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of International Engineering Graduates (2015)",
border="black")
# Read International engineering data from 2016
i_eng_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/engineering/international_graduates_engineering_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(i_eng_grads_2016$VALUE,
names.arg=i_eng_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of International Engineering Graduates (2016)",
border="black")
# Read International engineering data from 2017
i_eng_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/engineering/international_graduates_engineering_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(i_eng_grads_2017$VALUE,
names.arg=i_eng_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of International Engineering Graduates (2017)",
border="black")
# Create a box plot with three columns of data for International engineering grads
par(mfrow=c(1,3))
boxplot(i_eng_grads_2015$VALUE, main="Number of International Engineering Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(i_eng_grads_2016$VALUE, main="Number of International Engineering Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(i_eng_grads_2017$VALUE, main="Number of International Engineering Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(i_eng_grads_2015$VALUE)
summary(i_eng_grads_2016$VALUE)
summary(i_eng_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read International agriculture graduate data from 2015
i_agr_grads_2015 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/_agriculture/international_graduates_agriculture_2015.csv")
par(mfrow=c(1,3))
# Create a bar graph to the show distribution of graduates from the year 2015
barplot(i_agr_grads_2015$VALUE,
names.arg=i_agr_grads_2015$GEO,
xlab="Province",
ylab="Number of Graduates",
col="dodgerblue",
main="Geographic Location of International Agriculture Graduates (2015)",
border="black")
# Read International agriculture data from 2016
i_agr_grads_2016 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/_agriculture/international_graduates_agriculture_2016.csv")
# Create a bar graph to the show distribution of graduates from the year 2016
barplot(i_agr_grads_2016$VALUE,
names.arg=i_agr_grads_2016$GEO,
xlab="Province",
ylab="Number of Graduates",
col="darkorange1",
main="Geographic Location of International Agriculture Graduates (2016)",
border="black")
# Read International agriculture data from 2017
i_agr_grads_2017 <- read.csv(file="~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/international_grad_data_by_year/_agriculture/international_graduates_agriculture_2017.csv")
# Create a bar graph to the show distribution of graduates from the year 2017
barplot(i_agr_grads_2017$VALUE,
names.arg=i_agr_grads_2017$GEO,
xlab="Province",
ylab="Number of Graduates",
col="azure4",
main="Geographic Location of International Agriculture Graduates (2017)",
border="black")
# Create a box plot with three columns of data for International Agriculture grads
par(mfrow=c(1,3))
boxplot(i_agr_grads_2015$VALUE, main="Number of International Agriculture Graduates (2015)",
xlab ="Number of Graduates", col="dodgerblue", horizontal=TRUE)
boxplot(i_agr_grads_2016$VALUE, main="Number of International Agriculture Graduates (2016)",
xlab ="Number of Graduates", col="darkorange1", horizontal=TRUE)
boxplot(i_agr_grads_2017$VALUE, main="Number of International Agriculture Graduates (2017)",
xlab ="Number of Graduates", col="azure4", horizontal=TRUE)
# Display boxplot values
summary(i_agr_grads_2015$VALUE)
summary(i_agr_grads_2016$VALUE)
summary(i_agr_grads_2017$VALUE)
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# -----------------------------------------------------------------------------------------
# Read Canadian university enrollment data from the years 2015-2017
enrollments <- read.csv("~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/enrolments_in_canadian universities_and_colleges_by_field_of_study_20152016_and_20162017.csv")
par(mgp=c(5,1,0))
par(mar=c(6,6,2,2))
yearly.enrollments <- rbind(enrollments$�.2015.2016_University, enrollments$�.2016.2017_University)
# Create a bar graph to show distribution of enrollments in Canadian Universities (2015-2017)
barplot(yearly.enrollments,
names.arg=c("[1]","[2]","[3]","[4]","[5]","[6]","[7]","[8]","[9]","[10]","[11]","[12]","[13]"),
xlab="Field of Study",
ylab="Number of Enrollments",
col=c("aquamarine3", "hotpink3"),
main="Enrollments in Canadian Universities by Field of Study",
border="black",
legend.text = c("2015/2016", "2016/2017"),
args.legend = list(cex=0.75, x = "topright"),
beside=TRUE,
las=1)
# Read total number of grad data for Canadian agriculture students from the years 2015-2017
c_total_agr_grads <- read.csv("~/Education Research Project/Data-Analysis-of-Postsecondary-Graduates/canadian_grad_data_by_year/_agriculture/canadian_graduates_agriculture_20152017.csv")
# Create a linear regression model to predict avergae number of Canadian agriculture graduates in a year
linear_model_i_agr = lm(c_total_agr_grads$VALUE~c_total_agr_grads$�..REF_DATE)
summary(linear_model_i_agr)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/iam_operations.R
\name{iam_update_access_key}
\alias{iam_update_access_key}
\title{Changes the status of the specified access key from Active to Inactive,
or vice versa}
\usage{
iam_update_access_key(UserName, AccessKeyId, Status)
}
\arguments{
\item{UserName}{The name of the user whose key you want to update.
This parameter allows (through its \href{http://wikipedia.org/wiki/regex}{regex pattern}) a string of characters
consisting of upper and lowercase alphanumeric characters with no
spaces. You can also include any of the following characters: \\_+=,.@-}
\item{AccessKeyId}{[required] The access key ID of the secret access key you want to update.
This parameter allows (through its \href{http://wikipedia.org/wiki/regex}{regex pattern}) a string of characters that
can consist of any upper or lowercased letter or digit.}
\item{Status}{[required] The status you want to assign to the secret access key. \code{Active} means
that the key can be used for API calls to AWS, while \code{Inactive} means
that the key cannot be used.}
}
\description{
Changes the status of the specified access key from Active to Inactive,
or vice versa. This operation can be used to disable a user\'s key as
part of a key rotation workflow.
}
\details{
If the \code{UserName} is not specified, the user name is determined
implicitly based on the AWS access key ID used to sign the request. This
operation works for access keys under the AWS account. Consequently, you
can use this operation to manage AWS account root user credentials even
if the AWS account has no associated users.
For information about rotating keys, see \href{https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html}{Managing Keys and Certificates}
in the \emph{IAM User Guide}.
}
\section{Request syntax}{
\preformatted{svc$update_access_key(
UserName = "string",
AccessKeyId = "string",
Status = "Active"|"Inactive"
)
}
}
\examples{
# The following command deactivates the specified access key (access key
# ID and secret access key) for the IAM user named Bob.
\donttest{svc$update_access_key(
AccessKeyId = "AKIAIOSFODNN7EXAMPLE",
Status = "Inactive",
UserName = "Bob"
)}
}
\keyword{internal}
| /cran/paws.security.identity/man/iam_update_access_key.Rd | permissive | ryanb8/paws | R | false | true | 2,271 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/iam_operations.R
\name{iam_update_access_key}
\alias{iam_update_access_key}
\title{Changes the status of the specified access key from Active to Inactive,
or vice versa}
\usage{
iam_update_access_key(UserName, AccessKeyId, Status)
}
\arguments{
\item{UserName}{The name of the user whose key you want to update.
This parameter allows (through its \href{http://wikipedia.org/wiki/regex}{regex pattern}) a string of characters
consisting of upper and lowercase alphanumeric characters with no
spaces. You can also include any of the following characters: \\_+=,.@-}
\item{AccessKeyId}{[required] The access key ID of the secret access key you want to update.
This parameter allows (through its \href{http://wikipedia.org/wiki/regex}{regex pattern}) a string of characters that
can consist of any upper or lowercased letter or digit.}
\item{Status}{[required] The status you want to assign to the secret access key. \code{Active} means
that the key can be used for API calls to AWS, while \code{Inactive} means
that the key cannot be used.}
}
\description{
Changes the status of the specified access key from Active to Inactive,
or vice versa. This operation can be used to disable a user\'s key as
part of a key rotation workflow.
}
\details{
If the \code{UserName} is not specified, the user name is determined
implicitly based on the AWS access key ID used to sign the request. This
operation works for access keys under the AWS account. Consequently, you
can use this operation to manage AWS account root user credentials even
if the AWS account has no associated users.
For information about rotating keys, see \href{https://docs.aws.amazon.com/IAM/latest/UserGuide/ManagingCredentials.html}{Managing Keys and Certificates}
in the \emph{IAM User Guide}.
}
\section{Request syntax}{
\preformatted{svc$update_access_key(
UserName = "string",
AccessKeyId = "string",
Status = "Active"|"Inactive"
)
}
}
\examples{
# The following command deactivates the specified access key (access key
# ID and secret access key) for the IAM user named Bob.
\donttest{svc$update_access_key(
AccessKeyId = "AKIAIOSFODNN7EXAMPLE",
Status = "Inactive",
UserName = "Bob"
)}
}
\keyword{internal}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.