blob_id stringlengths 40 40 | directory_id stringlengths 40 40 | path stringlengths 2 327 | content_id stringlengths 40 40 | detected_licenses listlengths 0 91 | license_type stringclasses 2 values | repo_name stringlengths 5 134 | snapshot_id stringlengths 40 40 | revision_id stringlengths 40 40 | branch_name stringclasses 46 values | visit_date timestamp[us]date 2016-08-02 22:44:29 2023-09-06 08:39:28 | revision_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | committer_date timestamp[us]date 1977-08-08 00:00:00 2023-09-05 12:13:49 | github_id int64 19.4k 671M ⌀ | star_events_count int64 0 40k | fork_events_count int64 0 32.4k | gha_license_id stringclasses 14 values | gha_event_created_at timestamp[us]date 2012-06-21 16:39:19 2023-09-14 21:52:42 ⌀ | gha_created_at timestamp[us]date 2008-05-25 01:21:32 2023-06-28 13:19:12 ⌀ | gha_language stringclasses 60 values | src_encoding stringclasses 24 values | language stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 7 9.18M | extension stringclasses 20 values | filename stringlengths 1 141 | content stringlengths 7 9.18M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d9856b2cbcaa57062c0934839fd3ea5cff5c9f82 | 726532f1b9cfbce776736221e7ad9515259a7756 | /R/btn_importRecords.R | 9f375181a2f917c5cfda71641789c0135d25c4a5 | [
"MIT"
] | permissive | nutterb/ldsmls | 9b4015754071c60c89b063cf57d572d2aff9b10b | 98ae4991aca4c6bbe7876258326a4df69978d1a9 | refs/heads/master | 2021-01-19T04:18:43.613148 | 2016-07-24T11:56:20 | 2016-07-24T11:56:20 | 46,683,485 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,122 | r | btn_importRecords.R | #' @name btn_importRecords
#' @title Import Membership Records
#'
#' @description Inserts or updates membership records from a CSV file into
#' the Membership table.
#'
#' @param Import The data from the file being imported (Membership())
#' @param Current The currently stored data (RV$Membership)
#' @param conn An open DBI database connection to the database holding the Membership table
#'
#' @export
btn_importRecords <- function(Import, Current, conn)
{
if (nrow(Current))
{
New <- Import[!Import$ID %in% Current$MemberID, ]
Exist <- Import[Import$ID %in% Current$MemberID, ]
}
else
{
New <- Import
Exist <- Current
}
if (nrow(New))
{
sql <-
New %>%
mutate(sql = sprintf("('%s', '%s', '%s', '%s', %s, 1)",  # quote the date literal for SQL
ID,
format(Birth_Date,
format = "%Y-%m-%d"),
Sex,
gsub("'", "''", Full_Name),
ifelse(is.na(Maiden_Name),
"NULL",
paste0("'", gsub("'", "''", Maiden_Name), "'")))) %>%
`$`("sql")
dbSendQuery(
conn = conn,
statement =
paste0("INSERT INTO Membership ",
"(MemberID, BirthDate, Sex, FullName, MaidenName, Enabled) ",
"VALUES ",
paste(sql, collapse = ", "))
)
# dbSendPreparedQuery(
# conn = ch,
# statement = paste0("INSERT INTO Membership ",
# "(MemberID, BirthDate, Sex, FullName, MaidenName, Enabled) ",
# "VALUES ",
# "(?, ?, ?, ?, ?, 1);"),
# bind.table = dplyr::select(New, ID, Birth_Date, Sex, Full_Name, Maiden_Name)
# )
}
#
# if (nrow(Exist))
# {
# dbSendPreparedQuery(
# conn = ch,
# statement = paste0("UPDATE Membership ",
# "SET BirthDate = ?, Sex = ?, FullName = ?, ",
# " MaidenName = ? WHERE MemberID = ?"),
# bind.table = Import[, c("Birth_Date", "Sex", "Full_Name", "Maiden_Name", "ID")]
# )
# }
} |
98ac3eec9950266a927d5ea5f100b3ea4c84fa53 | 0ae69401a429092c5a35afe32878e49791e2d782 | /trinker-lexicon-4c5e22b/R/hash_emoticons.R | a1a0211eb4378cee006569d544495b77451ff684 | [] | no_license | pratyushaj/abusive-language-online | 8e9156d6296726f726f51bead5b429af7257176c | 4fc4afb1d524c8125e34f12b4abb09f81dacd50d | refs/heads/master | 2020-05-09T20:37:29.914920 | 2019-06-10T19:06:30 | 2019-06-10T19:06:30 | 181,413,619 | 3 | 0 | null | 2019-06-05T17:13:22 | 2019-04-15T04:45:06 | Jupyter Notebook | UTF-8 | R | false | false | 661 | r | hash_emoticons.R | #' Emoticons
#'
#' A \pkg{data.table} key containing common emoticons (adapted from
#' Wikipedia's Page semi-protected 'List of emoticons').
#'
#' @details
#' \itemize{
#' \item x. The graphic representation of the emoticon
#' \item y. The meaning of the emoticon
#' }
#'
#' @section License: https://creativecommons.org/licenses/by-sa/3.0/legalcode
#' @docType data
#' @keywords datasets
#' @name hash_emoticons
#' @usage data(hash_emoticons)
#' @format A data.table with 144 rows and 2 variables
#' @references https://en.wikipedia.org/wiki/List_of_emoticons
#' @examples
#' \dontrun{
#' library(data.table)
#' hash_emoticons[c(':-(', '0:)')]
#' }
NULL
|
74054ff7d39cd8f5ec4cea938e2a1bce52728fb0 | 6812e3cab597d4598b569a9e434ed36ca56df25b | /week_05/day3/app.R | 34ccae6fa84bc5e05c9ab99887b4fdfff1972ce4 | [] | no_license | sltoboso88/codeclan_homework_SandraTobon | 4cd2da95248937c72d7c658be331c1928ca33499 | 8718d10ee058d999c488bcfe53160dcf831108ff | refs/heads/master | 2021-02-10T21:39:41.655752 | 2020-05-21T08:46:40 | 2020-05-21T08:46:40 | 244,421,806 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,261 | r | app.R | library(shiny)
library(tidyverse)
library(CodeClanData)
library(shinythemes)
library(shinyWidgets)
source("global.R")
all_teams <- unique(olympics_overall_medals$team)
chosen_season <- unique(olympics_overall_medals$season)
chosen_medal <- unique(olympics_overall_medals$medal)
ui <- fluidPage(
theme = shinytheme("flatly"),
titlePanel(tags$h1("Olympic Games")),
tabsetPanel(
tabPanel(
"Medal by Country",
plotOutput("medal_plot"),
fluidRow(
column(4,
radioButtons("season",
tags$i("Summer or Winter Olympics?"),
choices = chosen_season)
),
column(4,
selectInput("team",
tags$i("Choose Team"),
choices = all_teams)
),
column(4,
tags$a("Olympic Web", href = "https://www.Olympic.org/")
)
)
),
tabPanel(
"Medal by 5 Countries",
plotOutput("medal_plot_m"),
fluidRow(
column(4,
                 radioButtons("season2",
                              "Summer or Winter Olympics?",
                              choices = chosen_season)
          ),
          column(4,
                 radioButtons("medal",
                              "Choose Medal",
                              choices = chosen_medal)
          ),
          column(4,
                 multiInput("team2",
                            tags$i("Choose Team"),
                            choices = all_teams)
          )
        )
      )
  )
)
server <- function(input, output) {
  output$medal_plot <- renderPlot({
    medal_plot(chosen_team = input$team,
               chosen_season = input$season)
  })
  output$medal_plot_m <- renderPlot({
    # the second tab needs its own input IDs: Shiny does not allow the same
    # inputId ("season", "team") to be reused across tabs
    medals_olympics(chosen_season = input$season2,
                    chosen_medal = input$medal,
                    chosen_team = input$team2)
})
}
shinyApp(ui = ui, server = server) |
ebf268225e43d823622fe4a22d94ebaa30fb8d1a | cbb9e94ce4d0b6ff444be6129560b5b4d0133d0a | /tests/testthat/test_teams.R | a3ab686e7d5a1d7f48c7b127e2d893764d5f8092 | [] | no_license | pedmiston/totems-data | de39b8a38d9aefcee8ef34868323d5cf054814eb | 5ed46fe78cefafcead59297508a2631e9ea0d27b | refs/heads/master | 2021-07-16T03:17:06.326910 | 2019-02-05T19:53:05 | 2019-02-05T19:53:05 | 104,396,767 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 847 | r | test_teams.R | context("Teams")
test_that("Correctly parse datetime out of ID_Group", {
id_group <- "G_1/25/2017 1:32:16 PM"
expected <- lubridate::ymd_hms("2017-01-25 13:32:16", tz = "America/Chicago")
result <- parse_date_time_from_id_group(id_group)
expect_equal(result, expected)
})
test_that("Parsing date time from ID_Group is vectorized", {
id_group <- "G_1/25/2017 1:32:16 PM"
expected <- lubridate::ymd_hms("2017-01-25 13:32:16", tz = "America/Chicago")
result <- parse_date_time_from_id_group(rep(id_group, 2))
expect_equal(result, rep(expected, 2))
})
test_that("Parse date time from TOT ID_Group with appended value", {
id_group <- "G_1/25/2017 1:32:16 PM-100"
expected <- lubridate::ymd_hms("2017-01-25 13:32:16", tz = "America/Chicago")
result <- parse_date_time_from_id_group(id_group)
expect_equal(result, expected)
})
|
5ae62e070554fe0dbc369cdd84c90195fa7b3961 | c2da8766166c023bd2c1cf191bfd8e27eeaeb7ee | /benchmark.syntheticGRN/sim.grnboost2/regNet.comb.kd9/linksFromNetAct/regdb10/functions.R | f33db938f64e13b22f3bfe15c5d6f3f2af2d11d7 | [
"MIT"
] | permissive | lusystemsbio/NetActAnalysis | 281e6ce4e4a0897c2e44ef70f7f2556037f2d717 | 2c1dcf1cc9530c4fd504fe78f5daafbca2ab6b0a | refs/heads/main | 2023-04-16T20:11:52.294588 | 2022-11-23T17:29:39 | 2022-11-23T17:29:39 | 478,671,839 | 6 | 0 | null | null | null | null | UTF-8 | R | false | false | 467 | r | functions.R |
# Canonicalize an edge list: order each source/target pair alphabetically so
# the same undirected edge always appears as the same (source, target) row.
format_source_target_nodes <- function(inputNet.df){
  tmp.df <- as.data.frame(matrix(nrow = nrow(inputNet.df), ncol = ncol(inputNet.df)))
  for(i in 1:nrow(inputNet.df)){
    myR <- inputNet.df[i, ]
if(as.character(myR[1]) > as.character(myR[2]) ) {
tmp.df[i, ] <- c(as.character(myR[2]), as.character(myR[1]), as.numeric(myR[3]))
}else tmp.df[i, ] <- myR
}
colnames(tmp.df) <- colnames(inputNet.df)
return(tmp.df)
}
|
88af1370e287486f4c1c63da3f0b0437dff60995 | 8c1333fb9fbaac299285dfdad34236ffdac6f839 | /equity-valuation/ch5/solution-02d.R | e542245566413512dd3e2e1b0ad2ba0b5798758f | [
"MIT"
] | permissive | cassiopagnoncelli/datacamp-courses | 86b4c2a6d19918fc7c6bbf12c51966ad6aa40b07 | d05b74a1e42b119efbbf74da3dfcf71569c8ec85 | refs/heads/master | 2021-07-15T03:24:50.629181 | 2020-06-07T04:44:58 | 2020-06-07T04:44:58 | 138,947,757 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 225 | r | solution-02d.R | # Calculate agggregate equity value
equity_value_fcfe <- pv_proj_period + pv_terminal
equity_value_fcfe
# Calculate equity value per share
equity_value_fcfe_per_share <- equity_value_fcfe / shout
equity_value_fcfe_per_share
|
0b66452b9ad5cb4a3c0fcabe9e7395d33dff47bc | 48b3b7371ed4c0aa4ae7beff6448428034c37f14 | /man/MCMC.step.Rd | f75c199d2963c9b6f30405c24a24b6969ee90903 | [] | no_license | andreamrau/GBNcausal | 2d6f2cd522d5ad62aa18cba0d1d36a5fd93d20d1 | 0c117a6c5e09da808dc72dd435b9fd2221d19439 | refs/heads/master | 2020-06-26T04:52:25.525026 | 2015-08-25T10:22:35 | 2015-08-25T10:22:35 | 41,356,538 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,409 | rd | MCMC.step.Rd | \name{MCMC.step}
\alias{MCMC.step}
\title{
Main step of the MCMC.GBN algorithm
}
\description{
This function is used by the MCMC.GBN algorithm at every step: it uses the newMallowsProposal function and accepts or rejects the proposed GBN.
}
\usage{
MCMC.step(GBN, data, alpha = 0.05, alpha2 = 0.05, lambda, listblocks = list(), str = matrix(1, length(GBN@resSigma), length(GBN@resSigma)))
}
\arguments{
\item{GBN}{
GBN - An object of class GBN. If this is a step of the MCMC.GBN function, this argument is the maximum likelihood estimation of the previous iteration.
}
\item{data}{
data - can be obtained by the function dataFormat or dataCreate.
}
\item{alpha}{
double - First Mallows temperature.
}
\item{alpha2}{
double - Second Mallows temperature. If listblocks is empty, this argument won't be used.
}
\item{lambda}{
logarithmic - Coefficient of the penalty Ridge.
}
\item{listblocks}{
list - A list of nodes of the form (c("N1","N2"),c("N3","N4","N5")) where "N1","N2","N3","N4" and "N5" are elements of rownames and colnames of firstGBN elements and data elements.
}
\item{str}{
matrix - To improve the efficiency of the algorithm, a structure can be add. Colnames and rownames are not needed.
}
}
\value{
\item{newGBN }{If accept equals 1, new GBN is the maximum likelihood estimation of the previous GBN based on a new order of the nodes and the data. If accept equals 0, newGBN is the GBN of the previous step. }
\item{accept }{If 1, the new GBN is the new estimation. If 0, the new GBN is the old GBN. }
}
\examples{
# Data creation
seed = 1990
n = 3000
p <- 10
m<-rep(0,10)
sigma<-rep(0.1,10)
W <- 1*upper.tri(matrix(0,p,p))
data <- dataCreate(nbData = 2*p, p = 10,KO = list(1,9), nbKO = c(p,p), W = W , m = m,sigma = sigma, seed = seed)$data
# Initial Value
W1=1*upper.tri(matrix(0,p,p))
m1=rep(0,p)
s1=rep(10e-4,p)
colnames(W1)=names(m1)=names(s1)=rownames(W1)=paste("N",1:p,sep="")
firstGBN = new("GBNetwork",WeightMatrix=W1,resMean=m1,resSigma=s1)
firstGBN = GBNmle(firstGBN,data,lambda= 0,sigmapre=s1)$GBN
# Algorithm
MCMC.step(firstGBN, data, alpha=0.05, lambda = 0)
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ ~kwd1 }
\keyword{ ~kwd2 }% __ONLY ONE__ keyword per line
|
73b19427b7fdac4188ab327d4557338e109e7ca8 | 646b232105aa2faff55cfde95b54b10cc0bdc742 | /plot2.R | 2183d85c805be160677a6a83ddf05d34bb3f67bd | [] | no_license | shashisiva/ExData_Plotting1 | 3817fcf3b76802bbff878cfbb6079218b8e1fb9a | c7adf49cc285b458ff341c9b93679ddb257453ec | refs/heads/master | 2021-09-03T09:06:40.421003 | 2018-01-07T22:51:54 | 2018-01-07T22:51:54 | 116,513,902 | 0 | 0 | null | 2018-01-06T20:37:27 | 2018-01-06T20:37:27 | null | UTF-8 | R | false | false | 743 | r | plot2.R | library(lubridate)
setwd("C:\\Users\\shash\\Documents\\Data Science\\Course 4\\Week 1")
classes <- c(rep("character",2), rep("numeric",7)) #def col classes
powerdata <- read.table("household_power_consumption.txt",
                        header = TRUE, sep = ';', na.strings = "?",
                        colClasses = classes)
powerdataSub <- subset(powerdata, Date == "1/2/2007" | Date == "2/2/2007")
png(filename = "plot2.png", width = 480, height = 480)
powerdataSub$datetimeMerged <- dmy_hms(apply(powerdataSub[,1:2], 1, paste, collapse=" "))
x <- powerdataSub$datetimeMerged
y <- powerdataSub$Global_active_power
plot(x,y,
ylab = "Global Active Power (kilowatts)", xlab = "", lwd=1,
type = "l"
)
dev.off()
|
022b4ed2ea55839db95045c2996e66dfd7a9346d | 307198a162f66f69243a701364b45c352f3237db | /r/fit_elo_model/0b_download_club_elo.r | 6888179256d94b4d7d24a9d7d80ff366f629bbaa | [] | no_license | xiaodaigh/football_league_predictions | 9bc4440f7521b7324acccf8188fbb54228662498 | 84addd2893c5d71cb62487a22c2bff57d9391ec7 | refs/heads/master | 2020-03-22T14:18:36.129117 | 2018-10-29T09:21:00 | 2018-10-29T09:21:00 | 140,168,802 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 504 | r | 0b_download_club_elo.r | library(data.table)
library(magrittr)
library(stringr)
# download Elo data from clubelo.com --------------------------------------
# took about 1.5 mins
system.time(elos <- lapply(setdiff(unique(epl_fixtures$HomeTeam),"Middlesboro"), function(HomeTeam) {
if(HomeTeam == "Nott'm Forest") {
HomeTeam = "Forest"
}
read.csv(paste0("http://api.clubelo.com/",
str_replace_all(HomeTeam," ","")))
}) %>% rbindlist)
elos[is.na(Elo),unique(Club)]
fst::write_fst(elos,"data/elos.fst") |
4221ba1e2b59ab9a605e9f3825d1e68a19eeab92 | ac88aeb1f6bbeff37ba5d27d5cb0ef3beb0ea5cf | /R/detections.R | 4a1c1a1ec662b6e4d49e4c3bc11806a3bc71ed76 | [] | no_license | effie-pav/detex | cf680ef7fdb02415bbed5b20d55277fddafedbcf | 5544d78c78dbe5ab7815865bab0bae3d90484e47 | refs/heads/master | 2020-03-28T10:51:31.544199 | 2018-09-10T13:12:00 | 2018-09-10T13:12:00 | 147,299,291 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,043 | r | detections.R | #'
#'@title Anomaly detection/ Change detection in time series
#'@description The script imports time series and uses different functions to perform anomaly and/or temporal change detection with different approaches. These include use of thresholds, counting flagged anomalies in rolling windows or discrete periods, calculating cumulative normalized values and using the Kolmogorov-Smirnov test to compare the distribution of values in different temporal windows. Plots are provided for comparisons.
#'@param input a list or data frame with an element \code{x} holding the hourly numeric series
#'@param lwin length of the rolling window (in hours) used for the counts, sums and KS comparisons
#'@param k window offset parameter passed on to \code{ksr}
#'@param ntot threshold passed to \code{flag} to delineate anomalies
#'@examples
#'# detections(input, lwin = 168, k = 24, ntot = 3)  # hypothetical values
#'@author Effie Pavlidou
#'@export
#'
detections<-function(input, lwin, k, ntot){
#example: input from single pixel
input<-as.numeric(input$x)
#set timestamp series
calender<-seq(ISOdate(2011,1,1, hour=0), ISOdate(2011, 12, 31, hour=23), by="hour")
input.x<-xts(x=input, order.by=calender)
#########################################################################
#use a threshold to delineate anomalies in the series
#this approach gives info on presence/absence of anomaly, not on intensity
flags<-flag(input, ntot)
#count anomalies in given periods
predate<-"2011-11-10 01:00:00 GMT"
postdate<-"2011-11-12 19:00:00 GMT"
counts<-period_counts_simple(predate, postdate, flags)
#count and plot nrs of anomalies in 7-day moving window
count7<-rollapply(flags, lwin, sum, fill = NA, na.rm=TRUE, partial = FALSE, align = "right")
plot.ts(count7)
##########################################################################
#calculate cumulative values in a 7-day moving window
#provides indication of anomaly intensity
sum7<-rollapply(input, lwin, sum, fill = NA, na.rm=TRUE, partial = FALSE, align = "right")
plot.ts(sum7)
###########################################################################
#use a two-sample KS test to see if the empirical distribution function
#in a temporal window is the same as in the previous window
#difference indicates possible change/anomaly
dstat<-ks(input, lwin)
dstatr<-ksr(input, lwin, k)
###########################################################################
}
|
fadfe45f913aaed756dfb6ea3fdb2d1ad87e8fe6 | bfcc34df9896ef071e319bddec633b59e50ff282 | /inst/tests/test-gettaubootstrap.r | 1909dd0eb9e7834a68f1807e0eaffa1a11133c0f | [
"MIT"
] | permissive | HopkinsIDD/spatialtau | 84a9c85ba36e8e1d30e357b94f7b1c40786a98b0 | a023f2ac9ec71278f928fc6fa54d2f65bb3259f0 | refs/heads/main | 2023-03-17T02:53:09.015020 | 2023-02-07T21:54:48 | 2023-02-07T21:54:48 | 598,820,157 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,056 | r | test-gettaubootstrap.r | context("get.tau.bootstrap")
test_that("get.tau.bootstrap runs and returs 1 when it should", {
x<-cbind(rep(c(1,2),50), x=runif(100,0,100), y=runif(100,0,100))
colnames(x) <-c("type","x","y")
test <- function(a,b) {return(1)}
#should return a matrix of all ones
res <- get.tau.bootstrap(x, test, seq(10,100,10), seq(0,90,10), 10)
expect_that(sum(res!=1),equals(0))
expect_that(nrow(res),equals(10))
})
test_that("performs correctly for test case 1 (equilateral triangle)", {
x <- rbind(c(1,0,0), c(1,1,0),c(2,.5,sqrt(.75)))
colnames(x) <-c("type","x","y")
test <- function(a,b) {
if (a[1] != 1) return(3)
if (b[1] == 2) return(1)
return(2)
}
res <- get.tau.bootstrap(x, test, 1.5, 0.1, 500)
res2 <- get.tau.typed.bootstrap(x, 1,2, 1.5, 0.1, 500)
#should have 95% CI of 0,1 and mean/median of 0.5
expect_that(as.numeric(quantile(res[,1], probs=c(.025,.975), na.rm=T)),
equals(c(1,1)))
expect_that(as.numeric(quantile(res2[,1], probs=c(.025,.975), na.rm=T)),
equals(c(1,1)))
})
test_that("performs correctly for test case 2 (points on a line)", {
x<-rbind(c(1,0,0), c(2,1,0), c(2,-1,0), c(3,2,0),
c(2,-2,0), c(3,3,0),c(3,-3,0))
colnames(x) <-c("type","x","y")
test <- function(a,b) {
if (a[1] != 1) return(3)
if (b[1] == 2) return(1)
return(2)
}
#the medians for the null distribution should be 2,1,0
res <- get.tau.bootstrap(x, test, c(1.5,2.5,3.5), c(0,1.5,2.5), 1500)
res2 <- get.tau.typed.bootstrap(x, 1, 2, c(1.5,2.5,3.5), c(0,1.5,2.5), 1500)
expect_that(median(res[,1], na.rm=T), equals(2))
expect_that(median(res[,2], na.rm=T), equals(1))
expect_that(median(res[,3], na.rm=T), equals(0))
expect_that(median(res2[,1], na.rm=T), equals(2))
expect_that(median(res2[,2], na.rm=T), equals(1))
expect_that(median(res2[,3], na.rm=T), equals(0))
#FIRST RANGE
#max would be only 1 type 2 used and in range = 1/(1/6) = 6...should occur
#more than 2.5% of time
#min would be 1, occuring just over .01% of the time
expect_that(as.numeric(quantile(res[,1], probs=c(.001,.975), na.rm=T)),
equals(c(1,6)))
expect_that(as.numeric(quantile(res2[,1], probs=c(.001,.975), na.rm=T)),
equals(c(1,6)))
#SECOND RANGE
#max would be 6, should occur less than 1% of the time
#min should be 0, should occur 2.5% of the time
expect_that(as.numeric(quantile(res[,2], probs=c(.025), na.rm=T)),
equals(0))
expect_that(as.numeric(quantile(res2[,2], probs=c(.025), na.rm=T)),
equals(0))
expect_that(as.numeric(quantile(res[,2], probs=c(.99), na.rm=T))<6,
is_true())
expect_that(as.numeric(quantile(res2[,2], probs=c(.99), na.rm=T))<6,
is_true())
#THIRD RANGE
#SHOULD BE DETERMINISTICALLY 0
expect_that(as.numeric(quantile(res[,3], probs=c(.025,.975), na.rm=T)),
equals(c(0,0)))
expect_that(as.numeric(quantile(res2[,3], probs=c(.025,.975), na.rm=T)),
equals(c(0,0)))
})
test_that("get.tau.ci returns bootstrap cis when same seed", {
x<-cbind(rep(c(1,2),50), x=runif(100,0,100), y=runif(100,0,100))
colnames(x) <-c("type","x","y")
test <- function(a,b) {
if (a[1] != 1) return(3)
if (b[1] == 2) return(1)
return(2)
}
set.seed(787)
res <- get.tau.bootstrap(x, test, seq(15,45,15), seq(0,30,15), 20)
set.seed(787)
ci1 <- get.tau.ci(x, test, seq(15,45,15), seq(0,30,15), 20)
expect_that(as.numeric(ci1[,1]),
equals(as.numeric(quantile(res[,1],
probs=c(.025,.975),
na.rm=T))))
expect_that(as.numeric(ci1[,2]),
equals(as.numeric(quantile(res[,2],
probs=c(.025,.975),
na.rm=T))))
expect_that(as.numeric(ci1[,3]),
equals(as.numeric(quantile(res[,3],
probs=c(.025,.975),
na.rm=T))))
})
test_that("fails nicely if x and y column names are not provided", {
x<-cbind(rep(c(1,2),500), a=runif(1000,0,100), b=runif(1000,0,100))
test <- function(a,b) {
if (a[1] != 2) return(3)
if (b[1] == 3) return(1)
return(2)
}
expect_that(get.tau.bootstrap(x,test,seq(10,50,10), seq(0,40,10),100),
throws_error("unique x and y columns must be defined"))
expect_that(get.tau.ci(x,test,seq(10,50,10), seq(0,40,10),100),
throws_error("unique x and y columns must be defined"))
})
##################DEPRECATED TESTS...TAKE TO LONG...NOW USING SMALLER CANONICAL
##################TESTS THAT HAVE VALUES THAT CAN BE WORKED OUT BY HAND
## test_that("CIs calculated from get.tau.bootstrap include the true value", {
## set.seed(777)
## x<-cbind(rep(c(1,2),250), x=runif(500,0,100), y=runif(500,0,100))
## colnames(x) <-c("type","x","y")
## test <- function(a,b) {
## if (a[1] != 1) return(3)
## if (b[1] == 1) return(1)
## return(2)
## }
## res <- get.tau.ci(x, test, seq(10,100,10), seq(0,90,10), 200)
## #print(res)
## expect_that(sum(!(res[1,]<res[2,])),equals(0))
## expect_that(sum(!(res[1,]<1)),equals(0))
## expect_that(sum(!(res[2,]>1)),equals(0))
## #repeat for typed data
## res <- get.tau.typed.bootstrap(x, typeA=1, typeB=1,
## seq(10,100,10), seq(0,90,10), 200)
## ci <- matrix(nrow=2, ncol=ncol(res))
## for (i in 1:ncol(ci)) {
## ci[,i] <- quantile(res[,i], probs=c(0.025, 0.975))
## }
## res <- ci
## expect_that(sum(!(res[1,]<res[2,])),equals(0))
## expect_that(sum(!(res[1,]<1)),equals(0))
## expect_that(sum(!(res[2,]>1)),equals(0))
## })
|
79e6fbeaa4e74b544dc1b0a38d4d82994831d795 | 4316a6f5f4123d75869a5a7f917d9171a32cecfb | /man/meme_list.Rd | e36b7349f2c168734bb690115fc3edd93c810244 | [] | no_license | gergness/memer | e94b5faa9b9af5a2fd991f7b3aa73a55e4528f47 | fcfdf82f49a9953b240cb052107a520f4d4bb42d | refs/heads/master | 2020-05-15T12:36:07.771580 | 2019-04-19T14:35:48 | 2019-04-19T14:35:48 | 182,271,356 | 1 | 0 | null | 2019-04-19T13:47:30 | 2019-04-19T13:47:29 | null | UTF-8 | R | false | true | 237 | rd | meme_list.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get-meme.R
\name{meme_list}
\alias{meme_list}
\title{List available memes}
\usage{
meme_list()
}
\description{
List available memes
}
\examples{
meme_list()
}
|
7174cfb5916d83b8d3564929d533bc30f3717ed4 | 762c4b827eff789c0813974bab26c92d85b93dbf | /workingRscript.R | f86c2eb916cc7cceb7d27455908cc1f555255c2e | [] | no_license | testbeddev1/rworkspace | eb65c4728f29f8319761e78c4ad045d3a47ef008 | 1e5aefc05194eb5ebaee43a103145fb7ac3c10ed | refs/heads/master | 2020-04-10T14:48:50.144041 | 2018-12-10T00:36:54 | 2018-12-10T00:36:54 | 161,088,169 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 21,781 | r | workingRscript.R | #Goal: Investigators use open ended terms to describe their clinical trials on clinicaltrials.gov.
#Here, we prepare two versions of investigator-originated semantic trial
#naming for consistency: naming conventions in 'key words' and in clinical
#'conditions' for HIV- and AIDS-like trials. We also connect four study data
#sets to an H2O.ai analysis cluster to attempt to predict investigator-originated
#semantic trial naming. High prediction (true positive) means investigators use
#semantic trial names in a non-random way, because true-positive key words and
#conditions can be learned from other trial feature item responses on
#clinicaltrials.gov. Low prediction (false positive) means the trial features
#look more like the features of trials with different investigator-originated
#semantic trial terms.
#1a. Install db package
#it may work better to instal RpostgreSQL from R studio under Packages//Install... not sure why...
install.packages("RPostgreSQL")
#1b.Call db package
library(RPostgreSQL)
#1c. Specify DB driver in DB package
drv<-dbDriver("PostgreSQL")
#1d. Declare DB connection features; note user name and password
#(if you change 'con' take note, each select statement must start with the 'con'nection name)
con<-dbConnect(drv,dbname="aact",host="aact-db.ctti-clinicaltrials.org",port=5432,user="nwstudy1",password="Helloworld1!")
#Step 2: Call data from database as data frame to get a grasp of what is available
#2a. Collect study features where the key words used by investigators are 'hiv like';
#note the wild card operator in the where clause
hiv_with_keywords<-dbGetQuery(con,"select
keywords.nct_id,
keywords.downcase_name,
studies.study_type,
studies.baseline_population,
studies.brief_title,
studies.official_title,
studies.phase,
studies.enrollment,
studies.number_of_arms,
studies.number_of_groups,
studies.is_fda_regulated_drug,
studies.is_fda_regulated_device
from
ctgov.keywords left join ctgov.studies
on
ctgov.keywords.nct_id = ctgov.studies.nct_id
where
ctgov.keywords.downcase_name like 'hiv%'")
View(hiv_with_keywords)
#2b. Collect study features where the key words used by investigators are aids like;
#note the wild card operator in the where clause
aids_with_keywords<-dbGetQuery(con,"select
keywords.nct_id,
keywords.downcase_name,
studies.study_type,
studies.baseline_population,
studies.brief_title,
studies.official_title,
studies.phase,
studies.enrollment,
studies.number_of_arms,
studies.number_of_groups,
studies.is_fda_regulated_drug,
studies.is_fda_regulated_device
from
ctgov.keywords left join ctgov.studies
on
ctgov.keywords.nct_id = ctgov.studies.nct_id
where
ctgov.keywords.downcase_name like 'aids%'")
View(aids_with_keywords)
#2c. Collect study features where the disease conditions for intervention is HIV like;
#note the wild card operator in the where clause
hiv_with_condition<-dbGetQuery(con,"select
conditions.nct_id,
conditions.downcase_name,
studies.study_type,
studies.baseline_population,
studies.brief_title,
studies.official_title,
studies.phase,
studies.enrollment,
studies.number_of_arms,
studies.number_of_groups,
studies.is_fda_regulated_drug,
studies.is_fda_regulated_device
from
ctgov.conditions left join ctgov.studies
on
ctgov.conditions.nct_id = ctgov.studies.nct_id
where
ctgov.conditions.downcase_name like 'hiv%'")
View(hiv_with_condition)
#2d. Collect study features where the disease conditions for intervention is aids like;
#note the wild card operator in the where clause
aids_with_condition<-dbGetQuery(con,"select
conditions.nct_id,
conditions.downcase_name,
studies.study_type,
studies.baseline_population,
studies.brief_title,
studies.official_title,
studies.phase,
studies.enrollment,
studies.number_of_arms,
studies.number_of_groups,
studies.is_fda_regulated_drug,
studies.is_fda_regulated_device
from
ctgov.conditions left join ctgov.studies
on
ctgov.conditions.nct_id = ctgov.studies.nct_id
where
ctgov.conditions.downcase_name like 'aids%'")
View(aids_with_condition)
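#The four Step-2 queries above differ only in the join table ("keywords" vs.
#"conditions") and the search prefix ("hiv" vs. "aids"). A small helper can
#assemble the SQL string consistently; build_trial_query is a hypothetical
#name and the column list is abbreviated, so treat this as a sketch rather
#than a drop-in replacement:

```r
# Sketch: assemble the Step-2 query for a given join table ("keywords" or
# "conditions") and a downcase_name prefix ("hiv" or "aids"). The helper
# only builds the SQL string; pass the result to dbGetQuery(con, ...).
build_trial_query <- function(join_table, prefix) {
  sprintf(
    "select %s.nct_id, %s.downcase_name,
            studies.study_type, studies.phase, studies.enrollment
     from ctgov.%s left join ctgov.studies
       on ctgov.%s.nct_id = ctgov.studies.nct_id
     where ctgov.%s.downcase_name like '%s%%'",
    join_table, join_table, join_table, join_table, join_table, prefix)
}

sql_hiv_kw <- build_trial_query("keywords", "hiv")
# hiv_like <- dbGetQuery(con, sql_hiv_kw)
```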
#Step 3: Call trial id tagged key words and disease conditions for our four study groups under step 2
#3a. Collect Key words and conditions from trial ids where trials are 'hiv like';
#note the wild card operator in the where clause
hiv_for_melt<-dbGetQuery(con,"select
distinct downcase_name,
count(nct_id),
'conditions' AS studytype
from
ctgov.conditions
where downcase_name like 'hiv%'
group by downcase_name
union
select
distinct downcase_name,
count(nct_id),
'keywords' AS studytype
from
ctgov.keywords
where downcase_name like 'hiv%'
group by downcase_name
")
View(hiv_for_melt)
#3b. Collect Key words and conditions from trial ids where trials are 'aids like';
#note the wild card operator in the where clause
aids_for_melt<-dbGetQuery(con,"select
distinct downcase_name,
count(nct_id),
'conditions' AS studytype
from
ctgov.conditions
where downcase_name like 'aids%'
group by downcase_name
union
select
distinct downcase_name,
count(nct_id),
'keywords' AS studytype
from
ctgov.keywords
where downcase_name like 'aids%'
group by downcase_name
order by count desc
")
#3c. Remove not hiv terms from HIV like trial list (the aids version looked clean to me...)
# if you want to see the before cleaning version #View(hiv_for_melt)#
library(dplyr)
library(tidyr)
hiv_for_melt<-filter(hiv_for_melt,downcase_name !="hives")
hiv_for_melt<-filter(hiv_for_melt,downcase_name !="hivep2")
hiv_for_melt<-filter(hiv_for_melt,downcase_name !="hiveac")
hiv_for_melt<-filter(hiv_for_melt,downcase_name !="hive")
hiv_for_melt<-filter(hiv_for_melt,downcase_name !="hivst")
hiv_for_melt<-filter(hiv_for_melt,downcase_name !="hivac")
View(hiv_for_melt)
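#The six filter() calls above can be collapsed into one vectorized exclusion;
#the dplyr equivalent is filter(hiv_for_melt, !downcase_name %in% bad_terms).
#A base-R sketch on a hypothetical toy data frame:

```r
# Sketch: drop all known false-positive "hiv..." strings in one pass
# instead of one filter() call per term (toy data frame for illustration).
bad_terms <- c("hives", "hivep2", "hiveac", "hive", "hivst", "hivac")
toy <- data.frame(downcase_name = c("hiv infection", "hives", "hiv-1", "hivst"),
                  count = c(10, 2, 5, 1),
                  stringsAsFactors = FALSE)
toy_clean <- toy[!toy$downcase_name %in% bad_terms, ]
```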
#3d. Melt 3a and 3b into a matrix-like data frame (here "melt" is really done with spread)
# below, spread does not mean key word/condition pairs but paired sums of like terms across item response classes
# a stacked bar may work better, but I like seeing mutual terms across response types where available
# one use is noticing how 'hiv infections' and 'hiv infection' are heavily declared but are different strings:
# an ontology opportunity...
HIV_Trials_spread<-spread(hiv_for_melt,studytype,count,fill=NA,convert=FALSE,drop=TRUE,sep=NULL)
AIDS_Trials_spread<-spread(aids_for_melt,studytype,count,fill=NA,convert=FALSE,drop=TRUE,sep=NULL)
View(HIV_Trials_spread)
View(AIDS_Trials_spread)
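#What spread() does here, shown on a hypothetical toy version of hiv_for_melt
#using base R's reshape() so the example is self-contained:

```r
# Sketch: the long-to-wide pivot that spread() performs above, on toy data.
long <- data.frame(downcase_name = c("hiv infection", "hiv infection", "hiv-1"),
                   studytype = c("conditions", "keywords", "conditions"),
                   count = c(40, 25, 7),
                   stringsAsFactors = FALSE)
wide <- reshape(long, idvar = "downcase_name", timevar = "studytype",
                direction = "wide")
# columns become count.conditions / count.keywords; a term used only as a
# condition (here "hiv-1") gets NA in the keywords column
```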
#3e. Plot
# this plot is really a view of a plot: it populates the Viewer pane in RStudio, not Plots
# single observations will not be scatter plotted; only records where both keyword and condition share the same string appear
library(plotly)
AIDS_Trial_Terms<-plot_ly(data=AIDS_Trials_spread,x=~conditions,y=~keywords,text=~downcase_name)
print(AIDS_Trial_Terms)
HIV_Trial_Terms<-plot_ly(data=HIV_Trials_spread,x=~conditions,y=~keywords,text=~downcase_name)
print(HIV_Trial_Terms)
#Step 4. Call and join study data set features, key words and conditions for four studies.
#This method is a touch clunky due to the underlying database features.
#Here we use an inner join to fit trial id-key word or conditions pairs to a set of hiv and
#aids like + trial features. We also clean our feature sets to make sure they do not contain
#bad hiv and aids names. The inner join does double duty here,
#cleaning our trial features retrospectively if they only have dirty hiv
#and aids terms but enrolling the features if they have clean terms.
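#The "double duty" of the inner join described above can be seen on
#hypothetical toy tables with base R's merge(), which defaults to an inner
#join (dplyr::inner_join behaves the same for this purpose):

```r
# Sketch: an inner join keeps only nct_ids present in both tables, so a
# trial whose only matching term was removed during cleaning drops out.
trials <- data.frame(nct_id = c("NCT001", "NCT002", "NCT003"),
                     phase  = c("Phase 2", "Phase 3", "Phase 1"),
                     stringsAsFactors = FALSE)
clean_terms <- data.frame(nct_id = c("NCT001", "NCT003"),
                          conditions_hiv = c("hiv infection", "hiv-1"),
                          stringsAsFactors = FALSE)
joined <- merge(trials, clean_terms, by = "nct_id")  # inner join
# NCT002 (whose only terms were dirty, e.g. "hives") is excluded
```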
#4a. Make hiv study group
hiv_group<-dbGetQuery(con,"
select
keywords.nct_id AS nct_id,
studies.study_type,
studies.phase,
studies.enrollment,
studies.number_of_arms,
studies.number_of_groups,
studies.is_fda_regulated_drug,
studies.is_fda_regulated_device
from ctgov.keywords
left join
ctgov.studies
on
ctgov.keywords.nct_id = ctgov.studies.nct_id
where
ctgov.keywords.downcase_name like 'hiv%'
union
select
conditions.nct_id AS nct_id,
studies.study_type,
studies.phase,
studies.enrollment,
studies.number_of_arms,
studies.number_of_groups,
studies.is_fda_regulated_drug,
studies.is_fda_regulated_device
from ctgov.conditions
left join
ctgov.studies
on
ctgov.conditions.nct_id = ctgov.studies.nct_id
where
ctgov.conditions.downcase_name like 'hiv%'
")
#4b.make aids study group
aids_group<-dbGetQuery(con,"
select
keywords.nct_id AS nct_id,
studies.study_type,
studies.phase,
studies.enrollment,
studies.number_of_arms,
studies.number_of_groups,
studies.is_fda_regulated_drug,
studies.is_fda_regulated_device
from ctgov.keywords
left join
ctgov.studies
on
ctgov.keywords.nct_id = ctgov.studies.nct_id
where
ctgov.keywords.downcase_name like 'aids%'
union
select
conditions.nct_id AS nct_id,
studies.study_type,
studies.phase,
studies.enrollment,
studies.number_of_arms,
studies.number_of_groups,
studies.is_fda_regulated_drug,
studies.is_fda_regulated_device
from ctgov.conditions
left join
ctgov.studies
on
ctgov.conditions.nct_id = ctgov.studies.nct_id
where
ctgov.conditions.downcase_name like 'aids%'
")
#4c. Make keywords and conditions features
keywords_aids<-dbGetQuery(con,"select keywords.nct_id,keywords.downcase_name AS keywords_aids
from ctgov.keywords
where keywords.downcase_name like 'aids%'")
conditions_aids<-dbGetQuery(con,"select conditions.nct_id,conditions.downcase_name AS conditions_aids
from ctgov.conditions
where conditions.downcase_name like 'aids%'")
keywords_hiv<-dbGetQuery(con,"select keywords.nct_id,keywords.downcase_name AS keywords_hiv
from ctgov.keywords
where keywords.downcase_name like 'hiv%'")
conditions_hiv<-dbGetQuery(con,"select conditions.nct_id,conditions.downcase_name AS conditions_hiv
from ctgov.conditions
where conditions.downcase_name like 'hiv%'")
#4d. clean features
#remove bad captures from conditions_hiv
conditions_hiv<-filter(conditions_hiv, conditions_hiv !="hives")
conditions_hiv<-filter(conditions_hiv, conditions_hiv !="hivep2")
conditions_aids<-filter(conditions_aids,conditions_aids !="hives")
conditions_aids<-filter(conditions_aids,conditions_aids !="hivep2")
keywords_hiv<-filter(keywords_hiv,keywords_hiv !="hives")
keywords_hiv<-filter(keywords_hiv,keywords_hiv != "hivep2")
keywords_hiv<-filter(keywords_hiv,keywords_hiv !="hiveac")
keywords_hiv<-filter(keywords_hiv,keywords_hiv !="hive")
keywords_hiv<-filter(keywords_hiv,keywords_hiv !="hivst")
keywords_aids<-filter(keywords_aids,keywords_aids !="hives")
keywords_aids<-filter(keywords_aids,keywords_aids != "hivep2")
keywords_aids<-filter(keywords_aids,keywords_aids !="hiveac")
keywords_aids<-filter(keywords_aids,keywords_aids !="hive")
keywords_aids<-filter(keywords_aids,keywords_aids !="hivst")
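The repeated `filter()` calls above can be collapsed into one vectorized test with `%in%`. A base-R sketch follows (`bad_terms` re-states the same false captures), run on a toy frame so it stands alone:

```r
bad_terms <- c("hives", "hivep2", "hiveac", "hive", "hivst")
# keep only rows whose term column is not one of the bad captures
drop_bad <- function(df, col) df[!df[[col]] %in% bad_terms, , drop = FALSE]
toy <- data.frame(nct_id       = c("NCT1", "NCT2", "NCT3"),
                  keywords_hiv = c("hiv", "hives", "hiv vaccine"))
drop_bad(toy, "keywords_hiv")
# in the script this would be, e.g.: keywords_hiv <- drop_bad(keywords_hiv, "keywords_hiv")
```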
#4e. join features
study_1_hiv_conditions<-inner_join(hiv_group,conditions_hiv)
study_2_hiv_keywords<-inner_join(hiv_group,keywords_hiv)
study_3_aids_conditions<-inner_join(aids_group,conditions_aids)
study_4_aids_keywords<-inner_join(aids_group,keywords_aids)
#Step 5: Install, launch and connect to local analysis cluster
#This is the hard part: we are going to download some Java software and connect RStudio to it. The Java software provides a
#virtual cluster that will process our study locally, on this very machine! There are two ways to do this: the easy way and the hard way.
#5.easy way.1 install and load the library like this:
#if ("package:h2o" %in% search()) { detach("package:h2o", unload=TRUE) }
#if ("h2o" %in% rownames(installed.packages())) { remove.packages("h2o") }
#pkgs <- c("RCurl","jsonlite")
#for (pkg in pkgs) {
# if (! (pkg %in% rownames(installed.packages()))) { install.packages(pkg) }
#}
#install.packages("h2o", type="source", repos="http://h2o-release.s3.amazonaws.com/h2o/rel-wright/8/R")
library(h2o)
h2o.init(ip="localhost",port=54321)
#if you get a Java version error (H2O works with 64-bit versions of Java only), update Java and restart your R console
#if you get an error saying you cannot connect to H2O, do "the hard way" below
#5.hard way.1 download, unzip and launch the java executable file
# a. download h2o from: http://h2o-release.s3.amazonaws.com/h2o/rel-xia/2/index.html #use the 'download and run' version#
# b. unzip it on your desktop - or anywhere else where you will not lose it...
# c. launch the java executable file called 'h2o'; it should be just inside the unzipped folder
# d. check that the cluster is running by going to 'http://localhost:54321/flow/index.html' in your browser
# e. load the h2o library in R that did not work before by calling library(h2o)
#don't do this if easy way worked#library(h2o) #note the o in h2o is O as in Off, not zero#
#don't do this if easy way worked#h2o.init(ip="localhost",port= 54321)
#So that's it: instead of connecting the R library to an instance it launched natively, you connected to a cluster you launched by hand.
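If the connection fails only because the cluster is still starting up, a small retry wrapper can help. `connect` is any zero-argument function — for example `function() h2o.init(ip = "localhost", port = 54321)` — and a fake connector is used below so the sketch runs without a cluster:

```r
connect_with_retry <- function(connect, tries = 3, wait = 1) {
  for (i in seq_len(tries)) {
    out <- tryCatch(connect(), error = function(e) NULL)
    if (!is.null(out)) return(out)
    Sys.sleep(wait)
  }
  stop("could not connect after ", tries, " tries")
}

# fake connector that fails twice and then succeeds
attempts <- 0
fake_connect <- function() {
  attempts <<- attempts + 1
  if (attempts < 3) stop("not up yet")
  "connected"
}
connect_with_retry(fake_connect, tries = 5, wait = 0)
```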
#Step 6: Send study data sets to analysis cluster
#you cannot pass R data frames to the cluster directly in this workflow; you have to write out a csv and then import it into the cluster like so
#6a. write files out to the R working directory; the default on Windows is My Documents under C:/Users/<name>
# you can also search for the file name in Explorer if you don't know your R default directory
#if you link to git you may write these to the git folder and not My Documents!
# if you write to git, point to git; if you write to My Documents, point to My Documents
write.csv(study_1_hiv_conditions,file="C:/Users/nick/Documents/study1.csv")
write.csv(study_2_hiv_keywords,file="C:/Users/nick/Documents/study2.csv")
write.csv(study_3_aids_conditions,file="C:/Users/nick/Documents/study3.csv")
write.csv(study_4_aids_keywords,file="C:/Users/nick/Documents/study4.csv")
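The hard-coded `C:/Users/nick/...` paths above will break on any other machine; a small path helper keeps every write and read pointed at one configurable folder. `tempdir()` is used here so the snippet runs anywhere — substitute your own project directory:

```r
out_dir <- tempdir()  # substitute your project folder here
path_for <- function(name) file.path(out_dir, paste0(name, ".csv"))

toy <- data.frame(nct_id = "NCT00000001", enrollment = 100)
write.csv(toy, file = path_for("study1"), row.names = FALSE)
read.csv(path_for("study1"))
# with the cluster: study1_h2o <- h2o.importFile(path = path_for("study1"))
```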
#6b. Read the written-out local files into the cluster
study1_h2o<-h2o.importFile(path="C:/Users/nick/Documents/study1.csv")
summary(study1_h2o)
study2_h2o<-h2o.importFile(path="C:/Users/nick/Documents/study2.csv")
summary(study2_h2o)
study3_h2o<-h2o.importFile(path="C:/Users/nick/Documents/study3.csv")
summary(study3_h2o)
study4_h2o<-h2o.importFile(path="C:/Users/nick/Documents/study4.csv")
summary(study4_h2o)
#6c. Name response, predictor and splits
response_s1<-"conditions_hiv"
response_s2<-"keywords_hiv"
response_s3<-"conditions_aids"
response_s4<-"keywords_aids"
predictor_s1<-setdiff(names(study1_h2o),c(response_s1))
predictor_s2<-setdiff(names(study2_h2o),c(response_s2))
predictor_s3<-setdiff(names(study3_h2o),c(response_s3))
predictor_s4<-setdiff(names(study4_h2o),c(response_s4))
splits_s1<-h2o.splitFrame(data = study1_h2o,
ratios = c(0.6,0.2),
destination_frames = c("s1_train.hex", "s1_valid.hex", "s1_test.hex"),
seed = 1234)
s1_train<-splits_s1 [[1]]
s1_valid<-splits_s1[[2]]
s1_test<-splits_s1[[3]]
splits_s2<-h2o.splitFrame(data = study2_h2o,
ratios = c(0.6,0.2),
destination_frames = c("s2_train.hex", "s2_valid.hex", "s2_test.hex"),
seed = 1234)
s2_train<-splits_s2 [[1]]
s2_valid<-splits_s2[[2]]
s2_test<-splits_s2[[3]]
splits_s3<-h2o.splitFrame(data = study3_h2o,
ratios = c(0.6,0.2),
destination_frames = c("s3_train.hex", "s3_valid.hex", "s3_test.hex"),
seed = 1234)
s3_train<-splits_s3 [[1]]
s3_valid<-splits_s3[[2]]
s3_test<-splits_s3[[3]]
splits_s4<-h2o.splitFrame(data = study4_h2o,
ratios = c(0.6,0.2),
destination_frames = c("s4_train.hex", "s4_valid.hex", "s4_test.hex"),
seed = 1234)
s4_train<-splits_s4 [[1]]
s4_valid<-splits_s4[[2]]
s4_test<-splits_s4[[3]]
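The four split blocks above repeat one recipe, so a loop over a named list keeps them in sync. Because `h2o.splitFrame` needs a live cluster, the splitter is passed in as a function here — a toy splitter below; in the script it would wrap `h2o.splitFrame(..., ratios = c(0.6, 0.2))`:

```r
make_splits <- function(frames, split_fn) {
  lapply(frames, function(fr) {
    s <- split_fn(fr)
    names(s) <- c("train", "valid", "test")
    s
  })
}

# toy splitter standing in for h2o.splitFrame: thirds by row position
toy_split <- function(fr) split(fr, cut(seq_len(nrow(fr)), 3, labels = FALSE))
frames <- list(study1 = data.frame(x = 1:9), study2 = data.frame(x = 10:18))
splits <- make_splits(frames, toy_split)
names(splits$study1)
```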
#7. Run study models
#7a.study model 1
gbm_s1<-h2o.gbm(x=predictor_s1,
y=response_s1,
training_frame=s1_train)
#7b.study model 2
gbm_s2<-h2o.gbm(x=predictor_s2,
y=response_s2,
training_frame=s2_train)
#7c.study model 3
gbm_s3<-h2o.gbm(x=predictor_s3,
y=response_s3,
training_frame=s3_train)
#7d.study model 4
gbm_s4<-h2o.gbm(x=predictor_s4,
y=response_s4,
training_frame=s4_train)
#8 results (option 8b works better than 8a)
#8a results in R - this can be done but it spams the console a bit
gbm_s1 #for model 1
gbm_s2 #for model 2
gbm_s3 #for model 3
gbm_s4 #for model 4
#8b results in Flow (this works best, I think)
#navigate your browser to the Flow portal at: http://localhost:54321/flow/index.html
#click Model in the menu, then click "List All Models"; then click inspect, or click a model name
#you will get a sub menu with 10 or so output options
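However you browse the output, a held-out confusion matrix is the key number for these classification GBMs. On the cluster it would come from something like `h2o.performance(gbm_s1, newdata = s1_test)`; the underlying arithmetic, on toy labels, looks like this:

```r
truth <- c("hiv", "hiv infection", "hiv", "hiv infection")
pred  <- c("hiv", "hiv",           "hiv", "hiv infection")
cm <- table(truth, pred)            # rows = truth, cols = prediction
accuracy <- sum(diag(cm)) / sum(cm) # correct predictions / all predictions
accuracy                            # 0.75: three of four toy predictions match
```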
cce39a5c455a46337df8c8119e2b50cd6701637c | c0befdac32dd86f06994c71eb80cab99cb3e5c6a | /R/norpaccatches.R | 027a358d96ef97d5b549f7309412760fd001375f | [] | no_license | aaronmberger-nwfsc/hakedataUSA | 8180602ae01a47f85ad0a6166341db687e5c2fcb | f6ee60568885f670a559502e1728b00f5d90ed5b | refs/heads/master | 2023-02-11T17:09:25.867759 | 2021-01-07T05:37:34 | 2021-01-07T05:52:37 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 10,852 | r | norpaccatches.R | #' Workup NORPAC Catches
#'
#' Summarize and plot NORPAC catches for the at-sea hake fishery.
#' Fleet types are assigned to create summaries by fleet and year.
#'
#' @details *Tribal catches*
#' @template cdq_code
#'
#' @param ncatch An R object with NORPAC catches. If \code{NULL},
#' which is the default, then the \code{.Rdat} file will be read
#' from the disk.
#' @param nyears The number of years for plotting.
#' @template species
#'
#' @import ggplot2 grDevices
#' @export
#' @author Kelli Faye Johnson
#'
#' @return Figures and csv files are saved to the disk regarding
#' catch rates and depth (fishing and bottom depths).
#' In the `Figures` directory, the following png or csv files are saved:
#' * fishDepthsUS.png
#' * fishDepthsUSallyears.png
#' * fishFISHING_DEPTH_FATHOMS_US.png
#' * fishBOTTOM_DEPTH_FATHOMS_US.png
#' * fishBottomDepthByVesselUS.png
#' * fishCatchRatesUSByYear.png
#' * fishDepthByYearUS.png
#' * fishCatchRatesUS.png
#' * fishCatchRatesUSnolog.png
#' * NORPAC_DomesticAtSea_bdepthfathom_range.csv
#' * NORPAC_DomesticAtSea_bdepthfathom_deeper1500f.csv
#' * us-cp-startdate.csv
#' * us-ms-startdate.csv
#'
#' In the `Catches` directory, the following csv files are saved:
#' * depth-us-atsea-bottom.csv
#' * depth-us-atsea-fishing.csv
#'
norpaccatches <- function(ncatch = NULL, nyears = 5,
species = 206) {
#### Internal functions
#' @param x A named list of values by year, more than likely
#' made by tapply or split.
#' @param country The country the data is coming from, which
#' is only relevant if you are saving the data to the disk
#' for naming purposes.
#' @param type A character value that will be used for the name
#' of the file.
#' The first portion should be the sector such as "atsea".
#' The second portion should be "fishing" if the file is not
#' bottom depth, which what the Canadians export.
#' @param dir A directory where you want to save the data. If left
#' \code{NULL} then no data will be saved and just a data frame will
#' be returned.
exportdepth <- function(x, country = c("US", "CAN"),
type = c("sector-bottom"), dir = NULL) {
country <- match.arg(country, several.ok = FALSE)
# Consult US or Canadian counterparts before changing this
# code! The function is duplicated across repositories
# to access confidential data. The same quantiles must
# be exported by each country.
aa <- sapply(lapply(x, boxplot.stats), "[[", "stats")
aa[1, ] <- sapply(x, quantile, probs = c(0.025), na.rm = TRUE)
aa[5, ] <- sapply(x, quantile, probs = c(0.975), na.rm = TRUE)
rownames(aa) <- c("lower95", "lowerhinge", "median",
"upperhinge", "upper95")
aa <- t(aa)
aa <- data.frame("year" = as.numeric(rownames(aa)), aa)
if (!is.null(dir)) {
utils::write.csv(aa, file = file.path(dir,
paste0("depth-", tolower(country),"-", type, ".csv")),
row.names = FALSE)
}
return(aa)
}
#### Setup the environment
args <- list(width = 6.5, height = 4.5,
pointsize = 10, units = "in", res = 600)
oldop <- options()$warn
options(warn = -1)
on.exit(options(warn = oldop), add = TRUE)
mydir <- hakedatawd()
dir.create(file.path(mydir, "Figures", "CONFIDENTIAL"),
showWarnings = FALSE, recursive = TRUE)
if (!file.exists(file.path(mydir, "Figures"))) {
stop("The folder 'CONFIDENTIAL' doesn't exist in 'Figures' in ", mydir)
}
if (is.null(ncatch)) {
load(file.path(mydir, "extractedData", "NORPACdomesticCatch.Rdat"))
}
outncatch <- processNorpacCatch(ncatch,
outfname = file.path(mydir, "Catches"))
hcatch <- outncatch[outncatch$SPECIES == species, ]
# Unique vessel (either catcher boat if not NA or mothership)
hcatch[, "vcolumn"] <- ifelse(is.na(hcatch[, "CATCHER_BOAT_ADFG"]),
hcatch[, "VESSEL"],
hcatch[, "CATCHER_BOAT_ADFG"])
keeptheseyears <- utils::tail(1:max(hcatch$year, na.rm = TRUE), nyears)
#### Figures: depths
hcatch <- get_confidential(hcatch, yvar = "vcolumn", xvar = c("year"))
stopifnot("Yearly summaries are not confidential" =
all(stats::aggregate(ngroups ~ year, data = hcatch, unique)[, "ngroups"] > 2)
)
gg <- plot_boxplot(
data = stats::reshape(
data = hcatch[hcatch[, "year"] %in% keeptheseyears &
hcatch[, "ngroups"] > 2, ],
direction = "long",
varying = c("FISHING_DEPTH_FATHOMS", "BOTTOM_DEPTH_FATHOMS"),
v.names = "fathoms", timevar = "type",
times = c("(a) Fishing depth", "(b) Bottom depth")),
xvar = c("year", "type"), showmedian = TRUE,
yvar = "fathoms", ylab = "Depth (fathoms)", mlab = "",
incolor = "year", scales = "free",
file = file.path(mydir, "Figures", "fishDepthsUS.png"),
width = args[["width"]], height = args[["height"]],
units = args[["units"]], dpi = args[["res"]])
gg <- plot_boxplot(
data = stats::reshape(hcatch,
direction = "long",
varying = c("FISHING_DEPTH_M", "BOTTOM_DEPTH_M"),
v.names = "meters", timevar = "type",
times = c("(a) Fishing depth", "(b) Bottom depth")),
xvar = c("year", "type"), showmedian = TRUE,
yvar = "meters", ylab = "Depth (m)", mlab = "",
incolor = "year", scales = "free",
file = file.path(mydir, "Figures", "fishDepthsUSallyears.png"),
width = args[["width"]], height = args[["height"]],
units = args[["units"]], dpi = args[["res"]])
cols <- c("year",
grep(value = TRUE, "lower|median|upper", colnames(gg[["data"]])))
utils::write.csv(
x = gg[["data"]][grepl("Bottom", gg[["data"]][, "type"]), cols],
file = file.path(mydir, "Catches", "depth-us-atsea-bottom.csv"),
row.names = FALSE)
utils::write.csv(
x = gg[["data"]][grepl("Fishing", gg[["data"]][, "type"]), cols],
file = file.path(mydir, "Catches", "depth-us-atsea-fishing.csv"),
row.names = FALSE)
hcatch[, "months"] <- droplevels(factor(hcatch$month, levels = 1:12,
labels = rep(paste(seq(1, 11, by = 2), seq(2, 12, by = 2), sep = "-"),
each = 2)))
hcatch <- get_confidential(hcatch, yvar = "vcolumn", xvar = c("year", "months"))
gg <- plot_boxplot(hcatch[hcatch[, "ngroups"] > 2, ],
xvar = c("months", "year"), showmedian = TRUE,
yvar = "FISHING_DEPTH_FATHOMS", ylab = "Fishing depth (fathoms)", mlab = "",
file = file.path(mydir, "Figures", "fishFISHING_DEPTH_FATHOMS_US.png"),
width = args[["width"]], height = args[["height"]],
units = args[["units"]], dpi = args[["res"]])
gg <- plot_boxplot(hcatch[hcatch[, "ngroups"] > 2, ],
xvar = c("months", "year"), showmedian = TRUE,
yvar = "BOTTOM_DEPTH_FATHOMS", ylab = "Bottom depth (fathoms)", mlab = "",
file = file.path(mydir, "Figures", "fishBOTTOM_DEPTH_FATHOMS_US.png"),
width = args[["width"]], height = args[["height"]],
units = args[["units"]], dpi = args[["res"]])
gg <- plot_boxplot(hcatch[!is.na(hcatch$VESSEL) &
hcatch$year %in% keeptheseyears, ],
xvar = c("VESSEL", "year"), showmedian = TRUE,
yvar = "BOTTOM_DEPTH_FATHOMS", ylab = "Bottom depth (fathoms)", mlab = "",
file = file.path(mydir, "Figures", "CONFIDENTIAL", "fishBottomDepthByVesselUS.png"),
width = args[["width"]], height = args[["height"]],
units = args[["units"]], dpi = args[["res"]])
hcatch <- get_confidential(hcatch, yvar = "vcolumn", xvar = c("year", "vesseltype"))
gg <- plot_boxplot(
data = stats::reshape(
data = hcatch[hcatch[, "year"] %in% keeptheseyears &
hcatch[, "ngroups"] > 2, ],
direction = "long",
varying = c("FISHING_DEPTH_FATHOMS", "BOTTOM_DEPTH_FATHOMS"),
v.names = "fathoms", timevar = "type",
times = c("(a) Fishing depth", "(b) Bottom depth")),
xvar = c("vesseltype", "type", "year"), xlab = "At-Sea sector",
showmedian = TRUE,
yvar = "fathoms", ylab = "Depth (fathoms)", mlab = "",
scales = "free", nrow = 2,
file = file.path(mydir, "Figures", "fishDepthByYearUS.png"),
width = 7, height = args[["height"]],
units = args[["units"]], dpi = args[["res"]])
#### Figure: catch rate
hcatch <- get_confidential(hcatch, yvar = "vcolumn", xvar = c("year", "month"))
gg <- plot_boxplot(data = hcatch,
xvar = c("month", "year"),
yvar = "crate",
ylab = "Unstandardized catch-per-hour (mt/hour)",
mlab = "U.S. at-sea unstandardized yearly catch-rates.",
    file = file.path(mydir, "Figures", "CONFIDENTIAL", "fishCatchRatesUSByYear.png"),
width = args[["width"]], height = args[["height"]],
units = args[["units"]], dpi = args[["res"]])
gg <- mapply(plot_boxplot,
data = list(
hcatch[hcatch[, "year"] %in% keeptheseyears, ],
hcatch[hcatch[, "year"] %in% keeptheseyears & hcatch[, "ngroups"] > 2, ],
hcatch[hcatch[, "year"] %in% keeptheseyears & hcatch[, "ngroups"] > 2, ]),
file = list(
      file.path(mydir, "Figures", "CONFIDENTIAL", "fishCatchRatesUS.png"),
file.path(mydir, "Figures", "fishCatchRatesUS.png"),
file.path(mydir, "Figures", "fishCatchRatesUSnolog.png")),
yscale = list("log10", "log10", "identity"),
mlab = list(
"U.S. at-sea unstandardized yearly catch-rates (confidential)",
"U.S. at-sea unstandardized yearly catch-rates",
"U.S. at-sea unstandardized yearly catch-rates"
),
MoreArgs = list(
xvar = c("Month"), showmedian = TRUE,
incolor = "year",
yvar = "crate",
ylab = "Unstandardized catch-per-hour (mt/hour)",
legend.position = c(0.1, 0.23),
width = args[["width"]], height = args[["height"]],
units = args[["units"]], dpi = args[["res"]]
))
#### Summaries
# Summarize catches by depth
utils::write.table(t(do.call("rbind",
tapply(hcatch[, "BOTTOM_DEPTH_FATHOMS"], hcatch[, "year"],
range, na.rm = TRUE, simplify = TRUE))),
file = file.path(mydir, "Catches", "NORPAC_DomesticAtSea_bdepthfathom_range.csv"),
sep = ",", row.names = FALSE, col.names = TRUE, quote = FALSE)
# Number of tows deeper than 1500 fathoms
temp <- get_confidential(hcatch[hcatch$BOTTOM_DEPTH_FATHOMS > 1500, ],
yvar = "vcolumn",
xvar = c("month", "year"))
  utils::write.table(x = setNames(aggregate(BOTTOM_DEPTH_FATHOMS ~ year + month,
    drop = TRUE,
    data = temp[temp[, "ngroups"] > 2, ],
    FUN = length), c("year", "month", "ntowsdeeper1500f")),
    file = file.path(mydir, "Catches", "NORPAC_DomesticAtSea_bdepthfathom_deeper1500f.csv"),
    sep = ",", row.names = FALSE, quote = FALSE)
utils::write.table(
aggregate(Date ~ year, data = hcatch[hcatch$VESSEL_TYPE == 1, ], min),
file = file.path(mydir, "Catches", "us-cp-startdate.csv"),
sep = ",", quote = FALSE, row.names = FALSE)
utils::write.table(
aggregate(Date ~ year, data = hcatch[hcatch$VESSEL_TYPE == 2, ], min),
file = file.path(mydir, "Catches", "us-ms-startdate.csv"),
sep = ",", quote = FALSE, row.names = FALSE)
}
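The internal `exportdepth()` above condenses each year's depths into five landmarks (2.5%, lower hinge, median, upper hinge, 97.5%). Stripped of the file I/O, the core computation looks like the sketch below — with the hinges approximated by quartiles and simulated depths rather than NORPAC data:

```r
set.seed(1)
depths_by_year <- split(rnorm(200, mean = 100, sd = 15),
                        rep(2019:2020, each = 100))
summ <- t(sapply(depths_by_year, quantile,
                 probs = c(0.025, 0.25, 0.5, 0.75, 0.975), na.rm = TRUE))
colnames(summ) <- c("lower95", "lowerhinge", "median", "upperhinge", "upper95")
data.frame(year = as.numeric(rownames(summ)), summ)
```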
2d6e409cd4fa3d1749a95ebfdb3858c344276b17 | a5f68f65fb63f11ff368b037f4248cc2059899d5 | /opencga-client/src/main/R/man/OpencgaStudy-Opencga-method.Rd | c1ddcd79c014055e71136b189714fa92a0b94c17 | [
"Apache-2.0"
] | permissive | mh11/opencga | bbd8b411e26d993e833883ecd9064a29a4ba239b | 0d8c29b22dd304ac63d6db5785fa450177128831 | refs/heads/develop | 2020-04-06T04:36:56.064829 | 2018-05-31T14:48:01 | 2018-05-31T14:48:01 | 55,725,982 | 0 | 0 | null | 2016-04-07T20:30:49 | 2016-04-07T20:30:48 | null | UTF-8 | R | false | true | 474 | rd | OpencgaStudy-Opencga-method.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/OpencgaClasses.R
\docType{methods}
\name{OpencgaStudy,Opencga-method}
\alias{OpencgaStudy}
\alias{OpencgaStudy,Opencga-method}
\title{A method to query Opencga studies}
\usage{
\S4method{OpencgaStudy}{Opencga}(object, id = NULL, action, params = NULL,
...)
}
\description{
This method allows the user to create, update and explore
study data and metadata
}
\details{
A method to query Studies
}
85eb0004cef12128276b8c48c1de632e75fa3b2d | 2bdd10bdb7aa2130cceccfbfbdd918a62a096950 | /Regresion Lineal Multiple/Regresion_Lineal_Multiple.R | d6ab2a9ad379ec9a0fb633a44415d2b08a68fccf | [] | no_license | billoBJ/regression | 83e50d94b5ee36fe1927bca356e36717c8222911 | b4c20df5793c2a1e27503c9fa0622c0243addddd | refs/heads/master | 2022-12-07T20:03:29.809849 | 2020-08-24T14:47:48 | 2020-08-24T14:47:48 | 289,954,632 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,526 | r | Regresion_Lineal_Multiple.R | # Importar el dataset
dataset = read.csv('50_Startups.csv')
# Encode the categorical variables
dataset$State = factor(dataset$State,
levels = c("New York", "California", "Florida"),
labels = c(1, 2, 3))
# Split the data into a training set and a test set
library(caTools)
set.seed(123)
#Profit is the column to predict
split = sample.split(dataset$Profit, SplitRatio = 0.8)
training_set = subset(dataset, split == TRUE)
testing_set = subset(dataset, split == FALSE)
# Fit the Multiple Linear Regression model
#with the Training Set
regression = lm(formula = Profit ~ .,
data = training_set)
# Predict the results on the testing set
y_pred = predict(regression, newdata = testing_set)
# Build an optimal model with Backward Elimination
#minimum threshold for the p-value:
#coefficients whose p-value is larger
#will be removed from the prediction model formula
SL = 0.05 #significance level
regression = lm(formula = Profit ~ R.D.Spend + Administration + Marketing.Spend + State,
data = dataset)
summary(regression)
regression = lm(formula = Profit ~ R.D.Spend + Administration + Marketing.Spend,
data = dataset)
summary(regression)
regression = lm(formula = Profit ~ R.D.Spend + Marketing.Spend,
data = dataset)
summary(regression)
regression = lm(formula = Profit ~ R.D.Spend,
data = dataset)
summary(regression)
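The manual elimination above can be automated with a small loop that refits after dropping the coefficient with the largest p-value. This is a hedged sketch, not part of the original script: it works on numeric predictors (factor terms like State expand into rows such as `State2` that do not map one-to-one onto model terms), and the worked example uses the built-in `mtcars` data:

```r
backward_eliminate <- function(formula, data, sl = 0.05) {
  fit <- lm(formula, data = data)
  repeat {
    cf <- summary(fit)$coefficients
    p  <- cf[-1, "Pr(>|t|)", drop = FALSE]   # drop the intercept, keep row names
    if (nrow(p) == 0 || max(p) <= sl) return(fit)
    worst <- rownames(p)[which.max(p)]       # predictor with the largest p-value
    fit <- update(fit, as.formula(paste(". ~ . -", worst)))
  }
}
fit <- backward_eliminate(mpg ~ wt + hp + drat + qsec, data = mtcars)
coef(fit)
```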
d4d79c65ad1f0d82eaf279291fda76d4d39b59f7 | 7c95033415669a0812a5c275547113eabd024db0 | /R/bfa.boot2.ls.R | 5ccdc48ce07179f48bce6ba30967c1e22922fecd | [] | no_license | cran/bifurcatingr | 293d6a7e39e3fd5bbdb6713436f04dd4051e14bd | 90a6596c19ed6f47c158d7587f2d12986d000287 | refs/heads/master | 2023-07-11T02:05:05.328474 | 2023-06-22T01:10:02 | 2023-06-22T01:10:02 | 340,015,192 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,569 | r | bfa.boot2.ls.R | #' Double Bootstrap of Least Squares Estimators of BAR(p) Models
#'
#' This function performs double bootstrapping of the least squares estimators
#' of the autoregressive coefficients in a bifurcating autoregressive (BAR)
#' model of any order \code{p} as described in Elbayoumi and Mostafa (2020).
#'
#' @param z a numeric vector containing the tree data
#' @param p an integer determining the order of bifurcating autoregressive model
#' to be fit to the data
#' @param burn number of tree generations to discard before starting the
#' bootstrap sample (replicate)
#' @param B1 number of bootstrap samples (replicates) used in first round of
#' bootstrapping
#' @param B2 number of bootstrap samples (replicates) used in second round of
#' bootstrapping
#' @return \item{boot.est}{a matrix containing the first-stage bootstrapped
#' least squares estimates of the autoregressive coefficients} \item{boot2}{a
#' matrix containing the second-stage bootstrapped least squares estimates of
#' the autoregressive coefficients}
#' @references Elbayoumi, T. M. & Mostafa, S. A. (2020). On the estimation bias
#' in bifurcating autoregressive models. \emph{Stat}, 1-16.
#' @export
#' @examples
#' z <- bfa.tree.gen(31, 1, 1, 1, 0.5, 0.5, 0, 10, c(0.7))
#' bfa.boot2.ls(z, p=1, B1=99, B2=9)
bfa.boot2.ls <- function(z, p, burn = 5, B1, B2){
#step 1
boot1 <- bfa.boot1.ls(z, p, burn = burn, B1, boot.est=TRUE, boot.data=TRUE)
#step 2
boot2 <- apply(boot1$boot.data, 1, bfa.boot1.ls, p, burn = burn, B2, boot.est=TRUE)
  return(list(boot.est = boot1$boot.est, boot2 = boot2))
}
ab3f0243d39ff92480895fb89fa53ceaff241742 | 39284ad1738b0a55800d55f556083c4ba53a92fa | /Lsn18.R | 6f931aa2a4154a2079eb79c9193391ddd7a874e6 | [] | no_license | nick3703/MA376 | cf10e82b32104836e99d94d0858e7e89e85179cb | 55e9d21bae9154bb4507d377e0f9ab0ab1341417 | refs/heads/master | 2023-01-29T00:29:55.796978 | 2023-01-09T19:56:02 | 2023-01-09T19:56:02 | 203,806,968 | 3 | 3 | null | null | null | null | UTF-8 | R | false | false | 2,852 | r | Lsn18.R | ## ----setup, include=FALSE--------------------------------------------------------------------------------------------------------------------
knitr::opts_chunk$set(echo = TRUE,fig.width=5,fig.height=3)
library(tidyverse)
## --------------------------------------------------------------------------------------------------------------------------------------------
house.dat<-read.table("http://www.isi-stats.com/isi2/data/housing.txt",header=T)
house.dat<- house.dat %>% mutate(price=price.1000)%>% select(-price.1000)
sqft.lm<-lm(price~sqft,data=house.dat)
summary(sqft.lm)
anova(sqft.lm)
## --------------------------------------------------------------------------------------------------------------------------------------------
fit.house <- house.dat %>% mutate(resids=sqft.lm$residuals,preds=sqft.lm$fitted.values)
fit.house %>% ggplot(aes(x=preds,y=resids,color=lake))+geom_point()
## --------------------------------------------------------------------------------------------------------------------------------------------
fit.house %>% group_by(lake)%>%summarize(mean=mean(resids))
## --------------------------------------------------------------------------------------------------------------------------------------------
fit.house %>% group_by(lake)%>%summarize(mean=mean(sqft))
## --------------------------------------------------------------------------------------------------------------------------------------------
house.dat$lake <- factor(house.dat$lake) # needed in R >= 4.0, where read.table no longer auto-creates factors
contrasts(house.dat$lake)=contr.sum
lake.lm<-lm(price~lake,data=house.dat)
coef(lake.lm)
## --------------------------------------------------------------------------------------------------------------------------------------------
house.dat.mod<-house.dat%>%mutate(price=ifelse(lake=="lakefront",price-197.2,price+197.2))
mod.lm<-lm(price~sqft,data=house.dat.mod)
coef(mod.lm)
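The hard-coded 197.2 above appears to be the lakefront effect printed by `coef(lake.lm)` earlier; pulling it from the fitted model keeps the adjustment in sync with the data. A self-contained sketch on toy data (in the lesson itself the offset would be `coef(lake.lm)["lake1"]`):

```r
set.seed(42)
toy <- data.frame(lake = factor(rep(c("lakefront", "notlakefront"), each = 25)))
toy$price <- ifelse(toy$lake == "lakefront", 500, 300) + rnorm(50, sd = 10)
contrasts(toy$lake) <- contr.sum(2)   # same effect coding as the lesson
lake.lm <- lm(price ~ lake, data = toy)
offset <- coef(lake.lm)["lake1"]      # estimated lakefront effect
adj <- ifelse(toy$lake == "lakefront", toy$price - offset, toy$price + offset)
tapply(adj, toy$lake, mean)           # the two adjusted group means coincide
```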
## --------------------------------------------------------------------------------------------------------------------------------------------
house.dat.mod<-house.dat%>%mutate(lake.adj.price=lake.lm$residuals)
mod.lm<-lm(lake.adj.price~sqft,data=house.dat.mod)
coef(mod.lm)
## --------------------------------------------------------------------------------------------------------------------------------------------
library(car)
full.lm<-lm(price~lake+sqft,house.dat)
avPlots(full.lm)
## --------------------------------------------------------------------------------------------------------------------------------------------
summary(full.lm)
## --------------------------------------------------------------------------------------------------------------------------------------------
Anova(full.lm,type=2)
## --------------------------------------------------------------------------------------------------------------------------------------------
par(mfrow=c(1,2))
plot(full.lm,which=c(1:2))
2b9a471f1a7e08938e4a602748d35c55ffb977fd | 697e3ac9cbe9010ed9b50f356a2cddf5ed8cc8a0 | /tests/testthat/tests_tsvreq_methods.R | 56118589d0ac15df3c6fb52eeb1eec13531674a7 | [] | no_license | reumandc/tsvr | 4c2b2b0c9bbbb191ae55058648da87589bc25e01 | f8f7a72d4f8ba40e881e78a1a2fb53791d227d21 | refs/heads/master | 2021-06-01T14:27:34.757125 | 2021-01-08T17:09:11 | 2021-01-08T17:09:11 | 132,662,500 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 2,505 | r | tests_tsvreq_methods.R | context("tsvreq_methods")
test_that("test the set methods",{
h<-list(com=2,comnull=1,vr=2)
expect_error(set_ts(h,3),"Error in set_ts: set_ts not defined for this class")
expect_error(set_tsvr(h,12),"Error in set_tsvr: set_tsvr not defined for this class")
expect_error(set_wts(h,3),"Error in set_wts: set_wts not defined for this class")
h<-tsvreq(ts=c(1,2,3),com=c(2,2,2),comnull=c(2,1,2),tsvr=c(1,2,1),wts=c(1,1,1))
expect_error(set_ts(h,3),"Error in set_ts: tsvreq slots should not be changed individually")
expect_error(set_com(h,3),"Error in set_com: tsvreq slots should not be changed individually")
expect_error(set_comnull(h,2),"Error in set_comnull: tsvreq slots should not be changed individually")
expect_error(set_tsvr(h,12),"Error in set_tsvr: tsvreq slots should not be changed individually")
expect_error(set_wts(h,12),"Error in set_wts: tsvreq slots should not be changed individually")
})
test_that("test the get methods",{
h<-list(com=2,comnull=1,vr=2)
expect_error(get_ts(h),"Error in get_ts: get_ts not defined for this class")
expect_error(get_tsvr(h),"Error in get_tsvr: get_tsvr not defined for this class")
expect_error(get_wts(h),"Error in get_wts: get_wts not defined for this class")
h<-tsvreq(ts=c(1,2,3),com=c(2,2,2),comnull=c(2,1,2),tsvr=c(1,2,1),wts=c(1,1,1))
expect_equal(get_ts(h),c(1,2,3))
expect_equal(get_com(h),c(2,2,2))
expect_equal(get_comnull(h),c(2,1,2))
expect_equal(get_tsvr(h),c(1,2,1))
expect_equal(get_wts(h),c(1,1,1))
})
test_that("test the summary method",{
inp<-tsvreq(ts=c(1,2,3),com=c(2,2,2),comnull=c(2,1,2),tsvr=c(1,2,1),wts=c(1,1,1))
out<-summary(inp)
expect_equal(names(out),c("class","ts_start","ts_end","ts_length","com_length","comnull_length","tsvr_length","wts_length"))
expect_equal(out$class,"tsvreq")
expect_equal(out$ts_start,1)
expect_equal(out$ts_end,3)
expect_equal(out$ts_length,3)
expect_equal(out$com_length,3)
expect_equal(out$comnull_length,3)
expect_equal(out$tsvr_length,3)
expect_equal(out$wts_length,3)
})
test_that("test the print method",{
inp<-tsvreq(ts=1:10,com=rep(2,10),comnull=rep(3,10),tsvr=rep(2/3,10),wts=rep(5,10))
expect_known_output(inp,"../vals/print_tsvreq_testval_01.txt",print=TRUE,update=FALSE)
})
test_that("test the plot method",{
inp<-tsvreq(ts=1:10,com=c(10:1)*rep(2,10),comnull=rep(2,10),tsvr=10:1,wts=2*c(1:10))
Test_plot_tsvreq<-function(){plot(inp)}
expect_doppelganger(title="Test-plot-tsvreq",fig=Test_plot_tsvreq)
})
f0839c7cce04046cf51d5fff7e15434e956ba821 | 6b3059139c9f74ba211da97c7243cb3f20feef3c | /ChangeDetectionEx-2.R | 4087b59c6ec4b4d39ae58fed5e3ff4e8ea0dccc2 | [] | no_license | PFalkowski/ChangeDetection | 142ab6f858ebd9968a723e1fc73996000b6793ae | 3992ef2c8a837edd497abb94f45b0339ca032010 | refs/heads/master | 2020-09-10T18:13:08.761740 | 2016-09-27T00:07:57 | 2016-09-27T00:07:57 | 66,205,358 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,485 | r | ChangeDetectionEx-2.R | # configure R environment
source("../OneDrive/Repos/Change Detection/Helper.R")
#source("../Helper.R")
Packages = c("lme4", "ggplot2", "lattice", "rio", "lmtest", "rms")
WorkingDirectory = "../OneDrive/Repos/Change Detection/Data"
SetupEnvironment(workingDirectory = WorkingDirectory, requiredPackages = Packages)
# read data
data2 <- read.csv("CD_ex2_data_140t_25Ps.xlsx - CD_ex2_RAW.csv", header = TRUE)
aggregate(Corr ~ ChangeType, data2, mean)
describeBy(data2,data2$ChangeType)
# add variables
ScaleMin = 0
ScaleMax = 1
data2$PositionRadians <-data2$TargetPos / 8
data2$ChangeOccured <- ifelse(data2$ChangeOccured > 0, c("yes"), c("no"))
data2$ChangeType <- ifelse(data2$Condition > 2, c("Mirror"), c("Category"))
data2$MirrorChange <- ifelse(data2$Condition > 2, 1, 0)
# Get outliers by response bias
LowerBoundStrategy = .2
UpperBoundStrategy = 1 - LowerBoundStrategy
ResponseByID = aggregate(Response ~ ID, data2, mean)
strategizersIDs = ResponseByID[ResponseByID$Response < LowerBoundStrategy | ResponseByID$Response > UpperBoundStrategy, ]
aggregate(Response ~ ConditionRecoded * ID, data2, mean)
# Remove Outliers
data2 = data2[!(is.element(data2$ID, strategizersIDs$ID)),]
# ANOVA
summary(aov(Corr ~ ChangeType * ChangeOccured * PAS + Error(ID), data2))
# GLMM
mf1 = glmer(Corr ~ PAS * ChangeType + (PAS|ID) ,
data2,
family = binomial,
glmerControl(optimizer="bobyqa", optCtrl = list(maxfun = 10000000)))
summary(mf1)
|
d9294fcf424398ad506a06dab85c326939f1bb2b | 54b4976030ae6a42e10282c8f41609ef266721c9 | /man/plot_2x2.ecd.Rd | d5a1078560e57732328892dd884275c12b9654ca | [] | no_license | cran/ecd | b1be437b407e20c34d65bcf7dbee467a9556b4c1 | 18f3650d6dff442ee46ed7fed108f35c4a4199b9 | refs/heads/master | 2022-05-18T20:24:56.375378 | 2022-05-09T20:10:02 | 2022-05-09T20:10:02 | 48,670,406 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 739 | rd | plot_2x2.ecd.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ecd-plot-2x2-generic.R
\name{plot_2x2.ecd}
\alias{plot_2x2.ecd}
\alias{plot_2x2}
\alias{plot_2x2,ecd-method}
\title{Standard 2x2 plot for sample data}
\usage{
plot_2x2.ecd(object, ts, EPS = FALSE, eps_file = NA)
plot_2x2(object, ts, EPS = FALSE, eps_file = NA)
\S4method{plot_2x2}{ecd}(object, ts, EPS = FALSE, eps_file = NA)
}
\arguments{
\item{object}{An object of ecd class.}
\item{ts}{The xts object for the timeseries.}
\item{EPS}{Logical, indicating whether to save the plot to EPS, default = FALSE}
\item{eps_file}{File name for eps output}
}
\description{
Standard 2x2 plot for sample data
}
\examples{
\dontrun{
plot_2x2(d, ts)
}
}
\keyword{plot}
|
868bc9cf52bb406c10c26a81c3a822c55529d638 | 9ade0e6cefd3d83ecfcc0eba92bbff6be2c6c91d | /R/write_text.R | 3ae610d0cdab040ba3ee11695f6fc511afa7107b | [] | no_license | cran/org | c5db09ec29b9f8975c924fde8662d3ad41fbd360 | d8317bfcd567c275be76a9f590d2566181df68bd | refs/heads/master | 2022-11-28T11:13:35.865795 | 2022-11-22T05:20:02 | 2022-11-22T05:20:02 | 174,553,462 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 734 | r | write_text.R | convert_newline_linux_to_windows <- function(txt) {
needs_converting <- length(grep("\\n", txt)) > 0 & length(grep("\\r\\n", txt)) == 0
if (needs_converting) {
txt <- gsub("\\n", "\r\n", txt)
}
return(txt)
}
#' Write text to a file
#' @param txt Text to be written
#' @param file File, passed through to `base::cat`
#' @param header Optional header that is inserted at the top of the text file
#' @return No return value.
#' @export
write_text <- function(txt, file = "", header = "**THIS FILE IS CONSTANTLY OVERWRITTEN -- DO NOT MANUALLY EDIT**\r\n\r\n") {
header <- convert_newline_linux_to_windows(header)
txt <- convert_newline_linux_to_windows(txt)
retval <- paste0(header, txt)
cat(retval, file = file)
}
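The conversion rule above (add carriage returns only when the text contains LF newlines but no CRLF pairs yet) can be exercised on its own. `to_windows` below is an illustrative stand-in for `convert_newline_linux_to_windows`, not part of the package:

```r
# stand-in mirroring convert_newline_linux_to_windows(): convert only when the
# text has LF newlines but no CRLF newlines yet
to_windows <- function(txt) {
  if (length(grep("\n", txt, fixed = TRUE)) > 0 &&
      length(grep("\r\n", txt, fixed = TRUE)) == 0) {
    txt <- gsub("\n", "\r\n", txt, fixed = TRUE)
  }
  txt
}
to_windows("a\nb")    # "a\r\nb"  (LF-only: converted)
to_windows("a\r\nb")  # "a\r\nb"  (already CRLF: unchanged)
```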
|
b28b8e25e4840066edb4845bd3e6859ee37cb13e | 3d557b698ecea160be63191450cac35defd10f58 | /R/overlayPlot.R | 1eec6a69dda1b825277312a841b760f8645e7166 | [] | no_license | bomeara/utilitree | cb0601f105429ad292b27ae3cfeca29a6b348ad0 | 47c908ee5b3b1b285c255a22a29e51de5722f5f8 | refs/heads/master | 2016-08-08T16:23:59.109616 | 2014-07-29T20:46:39 | 2014-07-29T20:46:39 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,735 | r | overlayPlot.R | overlayPlot <-
function(phylolist, alpha=0.1) {
	# inputs: a list of phylo objects; requires the ape and phylobase packages (phyloXXYY, branching.times). Currently assumes all root-to-tip lengths are the same; rescale xxyy dynamically if this is not so.
initialxxyy<-phyloXXYY(as(phylolist[[1]],"phylo4"))
print(initialxxyy)
tiporder<-initialxxyy$torder
print(tiporder)
plot(x=c(initialxxyy$segs$h1x,initialxxyy$segs$v0x,1.3),y=c(initialxxyy$segs$v0y,initialxxyy$segs$h1y,0.5),type="n",bty="n", xaxt="n",xlab="time from present",yaxt="n",ylab="")
axis(side=1,at=c(0,0.25,0.5,0.75,1),labels=round(seq(from=max(branching.times(phylolist[[1]])),to=0,length.out=5),digits=1))
text(x=rep(1.0,Ntip(phylolist[[1]])),y=seq(from=0.0,to=1.0,length.out=Ntip(phylolist[[1]])),labels=phylolist[[1]]$tip.label[tiporder],pos=4,cex=0.8)
xvalues<-c()
yvalues<-c()
for (treeindex in 1:length(phylolist)) {
xxyy<-phyloXXYY(as(phylolist[[treeindex]],"phylo4"),tip.order=tiporder)
x0 = xxyy$segs$v0x #modified from treePlot.R in phylobase
y0 = xxyy$segs$v0y
x1 = xxyy$segs$h1x
y1 = xxyy$segs$h1y
xvalues<-rbind(xvalues,xxyy$segs$h1x)
yvalues<-xxyy$segs$h0y
for (lineindex in 1:length(x0)) {
lines(x=c(x0[lineindex],x1[lineindex]),y=c(y0[lineindex],y1[lineindex]),col=rgb(0,0,0,alpha,maxColorValue = 1))
}
}
for (nodeindex in 1:dim(xvalues)[2]) {
print(paste("nodeindex is ",nodeindex))
quantiles<-quantile(xvalues[,nodeindex],probs=c(0.025,0.5,0.975),na.rm=TRUE)
print(quantiles)
print(yvalues[nodeindex])
#print(initialxxyy$segs$v0y[nodeindex])
if (!is.na(yvalues[nodeindex])) {
#symbols(x=quantiles[2],y= yvalues[nodeindex],circles=0.01,inches=FALSE,fg="red",bg="red",add=TRUE) #this part isn't quite working yet
}
}
}
|
843f79b87ccfbd2fcfe47a7e429c7e80cd5508b2 | 3b4aa2880d2922e99adf3eaa343e2557f44e6c79 | /gsample2.R | ebf190fe7a47136896f8320d742da77207e5ec17 | [] | no_license | DBomber60/shiny.objects | b1d408a76fb50619c972b7d43febf7a08e8b4447 | cce35590fa38a3cce2582c46b6080e926dcc85b5 | refs/heads/master | 2023-05-15T04:20:53.305809 | 2021-06-02T00:41:36 | 2021-06-02T00:41:36 | 335,093,126 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,345 | r | gsample2.R | library(tidyverse)
library(gfoRmula)
library(rstanarm)
# generate some data to validate the gfoRmula function
# TODO: can we compute the true Y^{11} for this example??
set.seed(1)
n = 500
X1 = rnorm(n) # let's assume X is CD4
A1 = rbinom(n, 1, prob = plogis(-1 - X1 )) # higher probability of treatment for low CD4
X2 = rnorm(n, mean = X1 + A1, sd = 1)
A2 = rbinom(n, 1, prob = plogis(-1 - X2 )) # higher probability of treatment for low CD4
Y = rnorm(n, mean = X2 + A2, sd = 1) # final outcome
consdat = data.frame(A1 = A1, A2 = A2, X1 = X1, X2 = X2, Y = Y)
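The TODO above asks for the true Y^{11}. Under this data-generating process it has a closed form: with treatment set by intervention, E[Y^{a1,a2}] = E[X1] + a1 + a2 = a1 + a2, so the true "always vs never" contrast is 2. A Monte Carlo sketch confirming this (`true_mean` is an illustrative helper, not part of the original script):

```r
# simulate the interventional distribution implied by the DGP above:
# X1 ~ N(0,1); X2 ~ N(X1 + a1, 1); Y ~ N(X2 + a2, 1) under do(A1 = a1, A2 = a2)
true_mean <- function(a1, a2, nsim = 1e5) {
  X1 <- rnorm(nsim)
  X2 <- rnorm(nsim, mean = X1 + a1, sd = 1)
  mean(rnorm(nsim, mean = X2 + a2, sd = 1))
}
true_mean(1, 1) - true_mean(0, 0)  # approx. 2 = the true mean difference
```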
dat = consdat %>% pivot_longer(!Y, names_to = c(".value", "set"), names_pattern = "(.)(.)")
dat$id = rep(1:nrow(consdat), each = 2) # to-do: derive this automatically (using regex)
dat$set = as.numeric(dat$set) - 1
head(dat)
# use gfoRmula package
id = 'id'
time_name = 'set'
covnames = c('X', 'A')
outcome_name = 'Y'
covtypes = c('normal', 'binary')
histories = c(lagged)
histvars = list(c('X', 'A'))
covparams = list(covmodels=c(X ~ lag1_X + lag1_A,
A ~ X + lag1_A))
ymodel = Y ~ A + X
intvars = list('A', 'A')
interventions = list( list(c(static, rep(0,2))),
list(c(static, c(1,1))) )
int_descript = c('Never', 'Always')
b = gformula_continuous_eof(dat, id=id, time_name = time_name, covnames = covnames, covtypes = covtypes,
covparams = covparams, histvars = histvars, histories = histories, outcome_name = outcome_name,
ymodel = ymodel, intvars = intvars, interventions = interventions, int_descript = int_descript,
seed = 1, nsamples = 100, ref_int = 2) # 0.2771665/ 0.2697512
# 0.30787258/ 0.03070607
lower = b$result$`MD lower 95% CI`[2] # lower
upper = b$result$`MD upper 95% CI`[2]
mean = b$result$`Mean difference`[2] # -1.79
#### what is our estimate if we instead consider X2 to be a baseline covariate (not time-varying)
covparams2 = list(covmodels=c(X ~ lag1_X,
A ~ X + lag1_A))
ymodel = Y ~ A + X
intvars = list('A', 'A')
interventions = list( list(c(static, rep(0,2))),
list(c(static, c(1,1))) )
int_descript = c('Never', 'Always')
b2 = gformula_continuous_eof(dat, id=id, time_name = time_name, covnames = covnames, covtypes = covtypes,
covparams = covparams2, histvars = histvars, histories = histories, outcome_name = outcome_name,
ymodel = ymodel, intvars = intvars, interventions = interventions, int_descript = int_descript,
seed = 1, nsamples = 100, ref_int = 2) # 0.2771665/ 0.2697512
b2$result
# TODO
# 1. show sensitivity of causal effect estimate to different modeling assumptions
# 2. prospectus ---> do asthma part!
# 3. prospectus ---> do causal inference part!
# 4. WIHS - more EDA (what centers are these patients from)
# now, let's do Bayesian estimates of this?
# X1 --> A1 --> X2 --> A2 --> Y
# sample from the posterior of relevant parameters, then sample outcome
bd = dat %>% group_by(id) %>% mutate(lag_A = lag(A), lag_X = lag(X)) %>% na.omit
xmod = stan_glm(X ~ lag_A + lag_X, family = gaussian(), data = bd)
ymod = stan_glm(Y ~ A + X, family = gaussian(), data = bd)
# Y ^ 11
nrep = 30
pp.X11 = posterior_predict(xmod, newdata = data.frame(lag_A = 1, lag_X = bd$lag_X), draws = nrep)
pp.X10 = posterior_predict(xmod, newdata = data.frame(lag_A = 0, lag_X = bd$lag_X), draws = nrep)
# these will be 500 * nrep long
pp.Y11 = posterior_predict(ymod, newdata = data.frame(A = 1, X = array(t(pp.X11))), draws = nrep)
pp.Y00 = posterior_predict(ymod, newdata = data.frame(A = 0, X = array(t(pp.X10))), draws = nrep)
mean(pp.Y11 - pp.Y00)
plot.df = data.frame(Y = c( array(t(pp.Y11)), array(t(pp.Y00)) ) ,
keep = rep( c(1, rep(0,nrep-1)) , each = n) ) %>% filter(keep == 1)
plot.df$nrep = rep(1:(nrep*2), each = 500)
plot.df$int = c( rep("11", 500 * nrep), rep("00", 500 * nrep))
#Plot.
cbp1 <- c("#56B4E9", "#009E73",
"#F0E442", "#0072B2", "#D55E00", "#CC79A7")
ggplot(plot.df, aes(x = Y, group = nrep)) + geom_density(aes(color=int)) +
theme_minimal() + theme(legend.position = "none") + scale_colour_manual(values=cbp1)
plot.df %>% group_by(int) %>% summarise(m = mean(Y))
# now show posterior predictive under different assumption
|
17337ae59d16952e646b3fd9895ab3d00b69c57b | b9688d34f370073c8865844dc9478ea0f712064a | /man/Package.Rd | 73c622e74d4c11710fd804c6fdbfb8467c31e0c8 | [] | no_license | Majood00/math4753 | 48e71d5ae0d44003634ebd39290d2a4376cee763 | 89d9806a5a44979f04832414170981e82b93107b | refs/heads/master | 2022-04-18T22:29:24.183054 | 2020-04-15T00:51:47 | 2020-04-15T00:51:47 | 234,974,524 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 282 | rd | Package.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Package.R
\docType{package}
\name{Package}
\alias{Package}
\title{Functions Package}
\description{
Math 4753 lab functions as produced by Majood Haddad.
}
\author{
Majood Haddad \email{majood00@ou.edu}
}
|
7b009abbb7fa6411a18517891b58b05bdb215f65 | ef01bab1215f822fe415021c73c2b915fdd787ba | /01_data_preparation/step3_merge_datasets/revisit/step38_aggregate_for_vendor_groups_v2.R | 0316258438aae75b845a731fdf61196bf551670a | [] | no_license | nvkov/MA_Code | b076512473cf463e617ed7b24d6553a7ee733155 | 8c996d3fdbbdd1b1b84a46f84584e3b749f89ec3 | refs/heads/master | 2021-01-17T02:48:40.082306 | 2016-09-25T18:53:53 | 2016-09-25T18:53:53 | 58,817,375 | 2 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,191 | r | step38_aggregate_for_vendor_groups_v2.R | #Find vendors with several vendor_IDs
#Part 1: General steps, ideas, progress summary and questions
rm(list=ls())
#Set working directory
project_directory<- "C:/Users/Nk/Documents/Uni/MA"
data_directory<- "/Pkw/MobileDaten/generatedData/"
wd<- paste0(project_directory, data_directory)
setwd(wd)
library("data.table")
library("stringr")
library("DescTools")
#source("https://bioconductor.org/biocLite.R")
#biocLite("IRanges")
library("IRanges")
#Read files with car specifications:
load("Merged_data/df_merge_after_step37.RData")
#======Experiment with overlaps:
df.mini<- df_merge[1:20,]
seeOverlaps<-function(start , stop){
ir<- IRanges(as.numeric(as.IDate(start)), as.numeric(as.IDate(stop)))
group<- subjectHits(findOverlaps(ir, reduce(ir)) )
return(group)
}
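For intuition, what `seeOverlaps` computes can be sketched in base R: merge the [start, stop] intervals and label each input interval with the index of the merged group it falls into. `overlap_groups` is illustrative only; the IRanges version above additionally merges adjacent integer ranges via `reduce()`.

```r
# base-R sketch of interval grouping: sweep intervals in start order and open a
# new merged group whenever the next interval starts after the current group ends
overlap_groups <- function(start, stop) {
  ord <- order(start)
  group <- integer(length(start))
  g <- 0L
  cur_end <- -Inf
  for (i in ord) {
    if (start[i] > cur_end) g <- g + 1L   # gap -> new merged interval
    cur_end <- max(cur_end, stop[i])
    group[i] <- g
  }
  group
}
overlap_groups(c(1, 2, 10), c(3, 5, 12))  # -> 1 1 2 (first two intervals overlap)
```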
#================
vendor_group<- c(611815, 450931, 795790, 468982, 455508, 458936, 470362, 494168, 471189, 458767)
df_merge1<- df_merge[df_merge$vendor_ID %in% vendor_group,]
df_merge1<- df_merge1[ ,.(car_ID=(car_ID), vendor_ID=paste(vendor_ID, collapse=","),
cars_lastChange=cars_lastChange, cars_lastDate=cars_lastDate,
Anzeigenanlage=Anzeigenanlage, leasing=max(leasing),
leasing_change=max(leasing)-min(leasing), TOM=sum(TOM),
prices_firstDate=prices_firstDate, prices_lastDate=prices_lastDate,
group=seeOverlaps(Anzeigenanlage, prices_firstDate)) ,
by=.(valuePrice, Typ, Kategorie, Farbe, HU, Erstzulassung,
Emission, Kraftstoff,
Leistung, Schaltung, Klimatisierung, Hubraum,
Eigenschaften, Kilometer, consecutives) ]
df_merge$realTOM<- difftime(df_merge$prices_lastDate, df_merge$prices_firstDate, units="days")
df_merge$merge_check<- df_merge$realTOM-df_merge$TOM
View(df_merge[as.numeric(df_merge$merge_check)<=-2,])
save(df_merge, file=paste0(project_directory, data_directory, "Merged_data/df_merge_after_step38.RData" ))
A180<-df_merge[df_merge$Typ=="A180",]
save(A180, file=paste0(project_directory, data_directory, "Merged_data/df_merge_after_step38_A180.RData" ))
|
124e90570d19efe30abfa86e247929e02648a092 | d276e37a06e119d0b53605a8fac8fd46c4d6eeaf | /s3/s3-continuacion.R | 406270d8784feee46a6abe512376c35f2c33a170 | [] | no_license | andres1898/Programacion-R-intermedio | 3fb55452664e23c0ebf056167993d1b7a3fcaf14 | 68268f27963b684f6e6ea9f05d4ebb76e7a2a64b | refs/heads/master | 2020-08-11T09:29:45.245632 | 2020-01-22T17:39:40 | 2020-01-22T17:39:40 | 214,539,946 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,219 | r | s3-continuacion.R |
###########################################################
## http://www.cyclismo.org/tutorial/R/s3Classes.html#s3classesmethods
NorthAmerican <- function(eatsBreakfast=TRUE,myFavorite="cereal")
{
me <- list(
hasBreakfast = eatsBreakfast,
favoriteBreakfast = myFavorite
)
## Set the name for the class
class(me) <- append(class(me),"NorthAmerican")
return(me)
}
bubba <- NorthAmerican()
bubba
louise <- NorthAmerican(eatsBreakfast=TRUE,myFavorite="fried eggs")
louise
NordAmericain <- function(eatsBreakfast=TRUE,myFavorite="cereal")
{
## Get the environment for this
## instance of the function.
thisEnv <- environment()
hasBreakfast <- eatsBreakfast
favoriteBreakfast <- myFavorite
## Create the list used to represent an
## object for this class
me <- list(
## Define the environment where this list is defined so
## that I can refer to it later.
thisEnv = thisEnv,
## The Methods for this class normally go here but are discussed
## below. A simple placeholder is here to give you a teaser....
getEnv = function()
{
return(get("thisEnv",thisEnv))
}
)
## Define the value of the list within the current environment.
assign('this',me,envir=thisEnv)
## Set the name for the class
class(me) <- append(class(me),"NordAmericain")
return(me)
}
environment()
## example
bubba <- NordAmericain()
bubba
get("hasBreakfast",bubba$getEnv())
get("favoriteBreakfast",bubba$getEnv())
bubba <- NordAmericain(myFavorite="oatmeal")
bubba
get("favoriteBreakfast",bubba$getEnv())
louise <- bubba
assign("favoriteBreakfast","toast",louise$getEnv())
get("favoriteBreakfast",louise$getEnv())
get("favoriteBreakfast",bubba$getEnv())
setHasBreakfast <- function(elObjeto, newValue)
{
print("Calling the base setHasBreakfast function")
UseMethod("setHasBreakfast",elObjeto)
print("Note this is not executed!")
}
setHasBreakfast.default <- function(elObjeto, newValue)
{
print("You screwed up. I do not know how to handle this object.")
return(elObjeto)
}
setHasBreakfast.NorthAmerican <- function(elObjeto, newValue)
{
print("In setHasBreakfast.NorthAmerican and setting the value")
elObjeto$hasBreakfast <- newValue
return(elObjeto)
}
bubba <- NorthAmerican()
bubba$hasBreakfast
bubba <- setHasBreakfast(bubba,FALSE)
bubba$hasBreakfast
bubba <- setHasBreakfast(bubba,"No type checking sucker!")
bubba$hasBreakfast
## error handling
someNumbers <- 1:4
someNumbers
someNumbers <- setHasBreakfast(someNumbers,"what?")
someNumbers
getHasBreakfast <- function(elObjeto)
{
print("Calling the base getHasBreakfast function")
UseMethod("getHasBreakfast",elObjeto)
print("Note this is not executed!")
}
getHasBreakfast.default <- function(elObjeto)
{
print("You screwed up. I do not know how to handle this object.")
return(NULL)
}
getHasBreakfast.NorthAmerican <- function(elObjeto)
{
print("In getHasBreakfast.NorthAmerican and returning the value")
return(elObjeto$hasBreakfast)
}
bubba <- NorthAmerican()
bubba <- setHasBreakfast(bubba,"No type checking sucker!")
result <- getHasBreakfast(bubba)
result
## the rest is covered when we look at environments
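A natural next step in the same tutorial style is a custom print method for the class defined at the top of the file. The sketch below restates the constructor so it runs on its own; it is not part of the original tutorial:

```r
# constructor restated from above so this sketch is self-contained
NorthAmerican <- function(eatsBreakfast = TRUE, myFavorite = "cereal") {
  me <- list(hasBreakfast = eatsBreakfast, favoriteBreakfast = myFavorite)
  class(me) <- append(class(me), "NorthAmerican")
  me
}
# S3 method: print() now dispatches here for NorthAmerican objects
print.NorthAmerican <- function(x, ...) {
  cat("NorthAmerican: eats breakfast =", x$hasBreakfast,
      "| favorite breakfast =", x$favoriteBreakfast, "\n")
  invisible(x)
}
print(NorthAmerican())  # dispatches to print.NorthAmerican
```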
|
a287adda6f026e4b952586a22945791a58cf2ef8 | baff257c506abb2157edfc53a6b30bdfc07ad48d | /R/data for brandon graph.r | 07da65a86ad0b8a172fcdb84568f6ae2da3c6fe0 | [] | no_license | amcox/map | d40a51958c6f66bcdb64adfde44c711fcca481ad | 8e14c849ccfb984ea7cf8df89f7cf8382070ed3a | refs/heads/master | 2021-03-12T22:50:36.980378 | 2016-02-28T21:07:56 | 2016-02-28T21:07:56 | 27,775,802 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,268 | r | data for brandon graph.r | library(ggplot2)
library(gdata)
library(RColorBrewer)
library(plyr)
library(dplyr)
library(reshape2)
library(stringr)
library(scales)
update_functions <- function() {
old.wd <- getwd()
setwd("functions")
sapply(list.files(), source)
setwd(old.wd)
}
update_functions()
df <- load_wide_map_data()
df <- subset(df, map.grade %in% numeric.grades)
d.g <- load_ges()
df <- merge(df, d.g, by.x=c("subject", "winter.rit"), by.y=c("subject", "rit"))
df <- rename(df, c("ge"="winter.ge"))
df$winter.ge.gap <- apply(df, 1, function(r) {
as.numeric(r[['winter.ge']]) - (as.numeric(r[['map.grade']]) + 0.5)
})
d.means <- df %.%
group_by(map.grade) %.%
summarize(ge.gap.mean=mean(winter.ge.gap, na.rm=T),
ge.gap.median=median(winter.ge.gap, na.rm=T)
)
df.p <- df %.%
group_by(map.grade, subject) %.%
mutate(percentile=ecdf(winter.ge.gap)(winter.ge.gap))
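The `percentile` column above uses the `ecdf(x)(x)` idiom: `ecdf` returns the empirical distribution function, and evaluating it at the data itself gives each observation's percentile (the fraction of values less than or equal to it). A standalone illustration with made-up values:

```r
# ecdf(x) returns a step function F_hat; F_hat(x) yields each value's percentile
x <- c(10, 20, 20, 40)
F_hat <- ecdf(x)
F_hat(x)  # -> 0.25 0.75 0.75 1.00
```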
df.top <- subset(df.p, percentile > .45)
d.means.top <- df.top %.%
group_by(map.grade) %.%
summarize(ge.gap.mean=mean(winter.ge.gap, na.rm=T),
ge.gap.median=median(winter.ge.gap, na.rm=T)
)
d.means$group <- rep("all", nrow(d.means))
d.means.top$group <- rep("top 55%", nrow(d.means.top))
d.means.all <- rbind(d.means, d.means.top)
save_df_as_csv(d.means.all, 'map data for brandon graph') |
0320d8a213b36685f3fe7179f46a35f018488c25 | 9dd2947d0dec81ab301ab0e21003943c8b713c5d | /R/iterativeOcc.R | 1d2ef2d57f0e32fb14f90d7c979b46989d9339fe | [] | no_license | benmack/iterOneClass | 8d57746b52c292448d6f828c3e07b7de0fa93b02 | 7bd7161eeb61a4f01130b9ff6c18285716aeaa8a | refs/heads/master | 2020-12-24T16:07:30.775900 | 2014-10-26T18:53:32 | 2014-10-26T18:53:32 | 25,482,522 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,478 | r | iterativeOcc.R | #' @name iterativeOcc
#' @title Train an iterative one-class classifier.
#' @description ...
#' @param train_pos a data frame with the positive training samples.
#' @param un an object of class \code{\link{rasterTiled}} containing the image data to be classified.
#' @param iter_max maximum number of iterations.
#' @param n_train_un the number of unlabeled samples to be used for training and validation.
#' @param k the number of folds used for resampling.
#' @param indep_un the fraction of unlabeled samples used for validation.
#' @param expand ...
#' @param folder_out a folder where the results are written to.
#' @param test_set a data frame used as an independent test set. The first column must be the response variable with 1 being the positive and -1 the negative samples.
#' @param seed a seed to be set for sampling the unlabeled data and for creating the resampling folds.
#' @param ... other arguments that can be passed to \code{\link{trainOcc}}
#' @export
iterativeOcc <- function (train_pos, un,
iter_max = 10,
n_train_un = nrow(train_pos)*2,
k = 10, indep_un = 0.5,
expand=2,
folder_out=NULL,
test_set=NULL,
seed=NULL,
scale=TRUE,
...
) {
if (is.null(folder_out)) {
folder_out <- paste(tempdir(), "\\iterOneClass", sep="")
cat("\nSave results in ", folder_out, "\n\n")
dir.create(folder_out, showWarnings=FALSE)
}
colors_pu<- list(pos='#2166ac', un='#e0e0e0')
colors_pn <- list(pos='#2166ac', neg='#f4a582')
if (!is.null(test_set)) {
# index_test <- match(as.numeric(rownames(test_set)),
# un$validCells)
pred_neg_test <- vector(mode="integer", length=nrow(test_set))
}
n_un <- length(un$validCells)
nPixelsPerTile <- un$tiles[2,1]
if (!is.null(seed))
seed_global <- seed
dir.create(folder_out)
pred_neg <- vector(mode="integer", length=n_un)
validCells <- un$validCells
save(n_un, nPixelsPerTile, validCells, seed_global, .fname,
file=.fname(folder_out, 0, ".RData", "initialized") )
### ### ### ### ### ### ### ### ### ### ### ### ### ### ###
### iterate
iter = 0
STOP = FALSE
time_stopper <- proc.time()
while(!STOP) {
iter <- iter + 1
n_un_iter <- length( un$validCells )
if (!is.null(seed)) {
set.seed(seed_global*iter)
seed <- round(runif(1, 1, 1000000), 0)
}
cat(sprintf("Iteration: %d \n\tPercent unlabeled: %2.2f", iter, (n_un_iter/n_un)*100 ), "\n")
### classification
cat("\tTraining ...\n")
train_un <- sample_rasterTiled(un, size=n_train_un, seed=seed)
train_pu_x <- rbind(train_pos, train_un)
train_pu_y <- puFactor(rep(c(1, 0),
c(nrow(train_pos), nrow(train_un))))
index <- createFoldsPu( train_pu_y, k=k,
indepUn=indep_un, seed=seed )
model <- trainOcc(x=train_pu_x, y=train_pu_y, index=index, ...)
cat("\tPrediction ...\n")
pred <- predict(un, model, returnRaster = FALSE,
fnameLog = paste(paste(folder_out,
"/log_prediction_iter-",
iter, ".txt", sep="") ) )
th <- thresholdNu(model, pred, expand=expand)
pred_in_pred_neg <- which(pred_neg==0)
new_neg_in_pred_neg <- pred_in_pred_neg[pred<th]
pred_neg[ new_neg_in_pred_neg ] <- iter
if (!is.null(test_set)) {
cat("\tEvaluation ...\n")
n_un_test_iter <- sum(pred_neg_test==0)
cat(sprintf("\tPercent unlabeled (test): %2.2f",
(n_un_test_iter/nrow(test_set))*100 ), "\n")
pred_in_pred_neg_test <- which(pred_neg_test==0)
pred_test <- predict(model,
test_set[pred_in_pred_neg_test, -1])
new_neg_in_pred_neg_test <- pred_in_pred_neg_test[pred_test<th]
pred_neg_test[ new_neg_in_pred_neg_test ] <- iter
ans <- pred_test
pred_test <- rep(0+min(ans), nrow(test_set))
pred_test[pred_in_pred_neg_test] <- ans
ev <- evaluate(p = pred_test[test_set$y==1],
a = pred_test[test_set$y==-1])
}
### plot diagnostics
param_as_str <- paste(colnames(signif(model$bestTune)),
model$bestTune, sep=": ",
collapse=" | ")
### grid
ans <- plot(model, plotType="level")
pdf(.fname(folder_out, iter, ".pdf", "grid"))
trellis.par.set(caretTheme())
print(ans)
dev.off()
### histogram
pdf(.fname(folder_out, iter, ".pdf", "histogram"))
hist(model, pred, main=param_as_str)
abline(v=c(th, attr(th, "th_non_expanded")))
if (!is.null(test_set)) {
rug(pred_test[test_set$y==1], ticksize = 0.03, col=colors_pn$p)
rug(pred_test[test_set$y==-1], ticksize = -0.03, col=colors_pn$n)
}
dev.off()
### featurespace
if (ncol(train_pu_x)==2) {
pdf(.fname(folder_out, iter, ".pdf", "featurespace"))
featurespace(model,
thresholds=c(th, attr(th, "th_non_expanded")),
main=param_as_str)
dev.off()
}
### test
if (!is.null(test_set)) {
pdf(.fname(folder_out, iter, ".pdf", "eval_pn"))
plot(ev, main=paste("max. Kappa:", round(max(ev@kappa), 2)))
dev.off()
}
### change
time_stopper <- rbind(time_stopper, proc.time())
# update un
un <- rasterTiled(un$raster,
mask=un$validCells[pred>=th],
nPixelsPerTile = nPixelsPerTile)
### save results of this iteration
save(iter, model, pred, th, pred_neg, pred_neg_test, ev,
seed_global, seed, time_stopper,
file=.fname(folder_out, iter, ".RData", "results") )
rm(model, pred, th, ev, seed, n_un_iter, train_un,
train_pu_x, train_pu_y)
if (iter==iter_max)
STOP = TRUE
cat(sprintf("\t%2.2f real and %2.2f CPU time required.",
(time_stopper[iter+1, , drop=FALSE]-
time_stopper[iter, , drop=FALSE])[,"elapsed"],
(time_stopper[iter+1, , drop=FALSE]-
time_stopper[iter, , drop=FALSE])[,"sys.self"]
), "\n")
}
return(un)
} |
967eb5dad921529a55ce340f74abd40aa8990707 | 60f1edb2bc0c082b7f8ff035ddacb41edd249a45 | /plot4.R | 6112109dca9b565aa2dcb032dca20569f5bea198 | [] | no_license | marnorton/ExData_Plotting1 | 6aa8e6d12ed0fb0f49c3a77acdd7f56b669c46d1 | cb907fb762d62eb50488f3a8843389a437275036 | refs/heads/master | 2021-01-22T11:37:29.705221 | 2014-10-12T22:21:45 | 2014-10-12T22:21:45 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,714 | r | plot4.R | plot4 <- function()
{
## Read data into "data"
if (!file.exists("data.zip"))
{
download.file(url="https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2Fhousehold_power_consumption.zip",
destfile="data.zip",
method="curl")
unzip("data.zip")
}
data <- read.csv("household_power_consumption.txt",skip=66637,nrows=2880,na.strings = "?",header=F,sep=";")
## Set names of columns the same as raw data
names(data) <- names(read.csv("household_power_consumption.txt", nrows=1,sep=";"))
## Add column of date and time as a combined data type, to allow continuous plotting
data$DateTime <- as.POSIXct(paste(data$Date, data$Time, sep=" "),format="%d/%m/%Y %H:%M:%S")
  # Initialise a PNG device named plot4.png
png(filename="plot4.png", width=480, height=480)
# Set vector in which to put plots
par(mfcol=c(2,2))
# Plot 1
plot(data$DateTime,data$Global_active_power,type="l",col="black",xlab="",ylab="Global Active Power (kilowatts)",
main="")
# Plot 2
plot(data$DateTime,data$Sub_metering_1,type="l",col="black",xlab="",ylab="Energy sub metering",
main="")
lines(data$DateTime, data$Sub_metering_2, col="red")
lines(data$DateTime, data$Sub_metering_3, col="blue")
legend("topright",lwd=1,lty=1,col = c("black", "red", "blue"),legend = c("Sub_metering_1", "Sub_metering_2",
"Sub_metering_3"))
# Plot 3
plot(data$DateTime,data$Voltage,type="l",col="black",xlab="datetime",ylab="Voltage",main="")
# Plot 4
plot(data$DateTime,data$Global_reactive_power,type="l",col="black",xlab="datetime",ylab="Global_reactive_power",
main="")
## Shut down the current device
dev.off()
} |
e823fb4fce5d61abd940b7547c9df44baf41c903 | f9de6e4a47c41fa5b901f5b4e24c6322942f70af | /R/re_detect.R | 36fb9aab1adb183583dfbab504a0b4d46486da49 | [] | no_license | faccinig/RE | 1d33e6fd1e60b61985f38989804fa738491cc830 | 4ea0805c19059cc9bd87423aa0bc2a9d97094b8e | refs/heads/master | 2021-01-01T18:04:45.067782 | 2017-07-27T20:03:47 | 2017-07-27T20:03:47 | 98,238,229 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 607 | r | re_detect.R |
re_detect <- function(x, pattern, quiet = FALSE) {
if (length(pattern) > 1) {
if (!quiet & length(x) > 1 & length(x) != length(pattern)) {
message("`x` and `pattern` don´t have the same length!\n",
"using the bigger one.")
}
if (length(x) > length(pattern)) {
pattern <- rep_len(pattern,length(x))
} else {
x <- rep_len(x,length(pattern))
}
x <- map_lgl(re_detect, x, pattern)
} else {
is_na <- is.na(x)
x[is_na] <- ""
x <- grepl(pattern = pattern, x = x, perl = TRUE)
x[is_na] <- NA
}
x
}
|
8afb07cadd87d728c7e929beef871c0ad73e402d | 4f21af2575ca080418d0c6d8cf2594f257ae2a84 | /R/stepdown.update.R | 41e383a7a5ed8b6880f66e478cf945634e5e66eb | [] | no_license | cran/MAMS | 8eef968cc859aec98bf6d72d6f9548a8db4a9c58 | c4b80485048d8739c0e6a6ac533b06cfbba588f3 | refs/heads/master | 2023-04-27T11:45:09.467551 | 2023-04-18T11:30:09 | 2023-04-18T11:30:09 | 17,680,403 | 0 | 3 | null | 2017-08-22T10:14:20 | 2014-03-12T19:21:55 | R | UTF-8 | R | false | false | 15,797 | r | stepdown.update.R | stepdown.update <- function(current.mams = stepdown.mams(), nobs = NULL, zscores = NULL, selected.trts = NULL, nfuture = NULL) {
zscores <- c(current.mams$zscores, list(zscores))
if (!is.null(selected.trts)) selected.trts <- c(current.mams$selected.trts, list(selected.trts))
# checking input parameters
if (!is(current.mams, "MAMS.stepdown")) {stop("current.mams must be a 'MAMS.stepdown' object")}
if (length(nobs) != current.mams$K + 1) {stop("must provide observed cumulative sample size for each treatment")}
completed.stages <- length(zscores)
for (i in 1:completed.stages) {
if (length(zscores[[i]]) != current.mams$K) {stop("vector of statistics is wrong length")}
}
if (is.null(selected.trts)){
if (current.mams$J > completed.stages) {stop("must specify treatments selected for next stage")}
}
for (i in seq_along(selected.trts)){
    if (length(setdiff(selected.trts[[i]], 1:current.mams$K)) > 0) {stop("inappropriate treatment selection")}
}
if (is.matrix(nfuture)){
if (dim(nfuture)[1] != current.mams$J - completed.stages) {stop("must provide future sample sizes for all remaining stages")}
if (dim(nfuture)[2] != current.mams$K + 1) {stop("must provide future sample sizes for all treatment arms")}
}
# load all necessary functions
get.hyp <- function(n){ # find the nth intersection hypothesis (positions of 1s in binary n)
indlength = ceiling(log(n)/log(2)+.0000001)
ind = rep(0,indlength)
newn=n
for (h in seq(1,indlength)){
ind[h] = (newn/(2^(h-1))) %% 2
newn = newn - ind[h]*2^(h-1)
}
seq(1,indlength)[ind==1]
}
create.block <- function(control.ratios = 1:2, active.ratios = matrix(1:2, 2, 3)){ # for argument c(i,j) this gives covariance between statistics in stage i with statistics in stage j
K <- dim(active.ratios)[2]
block <- matrix(NA, K, K)
for(i in 1:K){
block[i, i] <- sqrt(active.ratios[1, i] * control.ratios[1] * (active.ratios[2, i] + control.ratios[2]) / (active.ratios[1, i] + control.ratios[1]) / active.ratios[2, i] / control.ratios[2])
}
for (i in 2:K){
for (j in 1:(i - 1)){
block[i, j] <- sqrt(active.ratios[1, i] * control.ratios[1] * active.ratios[2, j] / (active.ratios[1, i] + control.ratios[1]) / (active.ratios[2, j] + control.ratios[2]) / control.ratios[2])
block[j, i] <- sqrt(active.ratios[1, j] * control.ratios[1] * active.ratios[2, i] / (active.ratios[1, j] + control.ratios[1]) / (active.ratios[2, i] + control.ratios[2]) / control.ratios[2])
}
}
block
}
create.cov.matrix <- function(control.ratios = 1:2, active.ratios = matrix(1:2, 2, 3)){ # create variance-covariance matrix of the test statistics
J <- dim(active.ratios)[1]
K <- dim(active.ratios)[2]
cov.matrix <- matrix(NA, J * K, J * K)
for (i in 1:J){
for (j in i:J){
cov.matrix[((i - 1) * K + 1):(i * K), ((j - 1) * K + 1):(j * K)] <- create.block(control.ratios[c(i, j)], active.ratios[c(i, j), ])
cov.matrix[((j - 1) * K + 1):(j * K), ((i - 1) * K + 1):(i * K)] <- t(cov.matrix[((i - 1) * K + 1):(i * K), ((j - 1) * K + 1):(j * K)])
}
}
cov.matrix
}
create.cond.cov.matrix <- function(cov.matrix, K, completed.stages){ # find the conditional covariance of future test statistics given data so far
sigma_1_1 <- cov.matrix[((completed.stages - 1) * K + 1):(completed.stages * K), ((completed.stages - 1) * K + 1):(completed.stages * K)]
sigma_1_2 <- cov.matrix[((completed.stages - 1) * K + 1):(completed.stages * K), -(1:(completed.stages * K))]
sigma_2_1 <- t(sigma_1_2)
sigma_2_2 <- cov.matrix[-(1:(completed.stages * K)), -(1:(completed.stages * K))]
sigma_2_2 - sigma_2_1 %*% solve(sigma_1_1) %*% sigma_1_2
}
create.cond.mean <- function(cov.matrix, K, completed.stages, zscores){ # find the conditional mean of future test statistics given data so far
sigma_1_1 <- cov.matrix[((completed.stages - 1) * K + 1):(completed.stages * K), ((completed.stages - 1) * K + 1):(completed.stages * K)]
sigma_1_2 <- cov.matrix[((completed.stages - 1) * K + 1):(completed.stages * K), -(1:(completed.stages * K))]
sigma_2_1 <- t(sigma_1_2)
sigma_2_1 %*% solve(sigma_1_1) %*% zscores
}
get.path.prob <- function(surviving.subset1, surviving.subset2 = NULL, cut.off, treatments, cov.matrix, lower.boundary, upper.boundary, K, stage, z.means){ # find the probability that no test statistic crosses the upper boundary + only treatments in surviving_subsetj reach the jth stage
treatments2 <- treatments[surviving.subset1]
if (stage == 2){
lower <- c(rep(-Inf, length(treatments)), rep(-Inf, length(treatments2)))
lower[surviving.subset1] <- lower.boundary[1]
upper <- c(rep(lower.boundary[1], length(treatments)), rep(cut.off, length(treatments2)))
upper[surviving.subset1] <- upper.boundary[1]
return(pmvnorm(lower = lower, upper = upper, mean = z.means[c(treatments, K + treatments2)], sigma = cov.matrix[c(treatments, K + treatments2), c(treatments, K + treatments2)])[1])
}
treatments3 <- treatments2[surviving.subset2]
lower <- c(rep(-Inf, length(treatments)), rep(-Inf, length(treatments2)), rep(-Inf, length(treatments3)))
lower[surviving.subset1] <- lower.boundary[1]
lower[length(treatments) + surviving.subset2] <- lower.boundary[2]
upper <- c(rep(lower.boundary[1], length(treatments)), rep(lower.boundary[2], length(treatments2)), rep(cut.off, length(treatments3)))
upper[surviving.subset1] <- upper.boundary[1]
upper[length(treatments) + surviving.subset2] <- upper.boundary[2]
pmvnorm(lower = lower, upper = upper, mean = z.means[c(treatments, K + treatments2, 2 * K + treatments3)], sigma = cov.matrix[c(treatments, K + treatments2, 2 * K + treatments3), c(treatments, K + treatments2, 2 * K + treatments3)])[1]
}
rejection.paths <- function(selected.treatment, cut.off, treatments, cov.matrix, lower.boundary, upper.boundary, K, stage, z.means){ # for the "select.best" method, find the probability that "select.treatment" is selected and subsequently crosses the upper boundary
contrast <- diag(-1, K + stage - 1)
contrast[1:K, selected.treatment] <- 1
for (i in 1:(stage - 1)) contrast[K + i, K + i] <- 1
bar.mean <- contrast %*% z.means[c(1:K, 1:(stage - 1) * K + selected.treatment)]
bar.cov.matrix <- contrast %*% cov.matrix[c(1:K, 1:(stage - 1) * K + selected.treatment), c(1:K, 1:(stage - 1) * K + selected.treatment)] %*% t(contrast)
lower <- c(rep(0, length(treatments)), cut.off)
if (stage > 2) lower <- c(rep(0, length(treatments)), lower.boundary[2:(stage - 1)], cut.off)
lower[which(treatments == selected.treatment)] <- lower.boundary[1]
upper <- c(rep(Inf, length(treatments)), Inf)
if (stage > 2) upper <- c(rep(Inf, length(treatments)), upper.boundary[2:(stage - 1)], Inf)
upper[which(treatments == selected.treatment)] <- upper.boundary[1]
pmvnorm(lower = lower, upper = upper, mean = bar.mean[c(treatments, K + 1:(stage - 1))], sigma = bar.cov.matrix[c(treatments, K + 1:(stage - 1)), c(treatments, K + 1:(stage - 1))])[1]
}
excess.alpha <- function(cut.off, alpha.star, treatments, cov.matrix, lower.boundary, upper.boundary, selection.method, K, stage, z.means){# for "all.promising" rule, this gives the cumulative typeI error for 'stage' stages
# for "select.best" rule, this gives the Type I error spent at the 'stage'th stage
if (stage == 1) return(1 - alpha.star[1] - pmvnorm(lower = rep(-Inf, length(treatments)), upper = rep(cut.off, length(treatments)), mean = z.means[treatments], sigma = cov.matrix[treatments, treatments])[1])
if (selection.method == "select.best") return(sum(unlist(lapply(treatments, rejection.paths, cut.off = cut.off, treatments = treatments, cov.matrix = cov.matrix, lower.boundary = lower.boundary, upper.boundary = upper.boundary, K = K, stage = stage, z.means = z.means))) - (alpha.star[stage] - alpha.star[stage - 1])) # any of 'treatments' could be selected, so we add all these probabilities
if (stage == 2){
surviving.subsets <- c(list(numeric(0)), lapply(as.list(1:(2 ^ length(treatments) - 1)), get.hyp)) # list all possible subsets of surviving treatments after the first stage
return(1 - alpha.star[2] - sum(unlist(lapply(surviving.subsets, get.path.prob, cut.off = cut.off, treatments = treatments, cov.matrix = cov.matrix, lower.boundary = lower.boundary, upper.boundary = upper.boundary, K = K, stage = stage, z.means = z.means))))
}
surviving.subsets1 <- c(list(numeric(0)), lapply(as.list(1:(2 ^ length(treatments) - 1)), get.hyp)) # all possible subsets of surviving treatments after the first stage
surviving.subsets2 <- c(list(list(numeric(0))), lapply(surviving.subsets1[-1], function(x) c(list(numeric(0)), lapply(as.list(1:(2 ^ length(x) - 1)), get.hyp)))) # for each possible subset of survivng subsets after stage 1, list the possible subsets still surviving after stage 2
1 - alpha.star[3] - sum(unlist(Map(function(x, y) sum(unlist(lapply(y, get.path.prob, surviving.subset1 = x, cut.off = cut.off, treatments = treatments, cov.matrix = cov.matrix, lower.boundary = lower.boundary, upper.boundary = upper.boundary, K = K, stage = stage, z.means = z.means))), surviving.subsets1, surviving.subsets2)))
}
# give everything the correct name
alpha.star <- current.mams$alpha.star
l <- current.mams$l
u <- current.mams$u
selection.method <- current.mams$selection
sample.sizes <- current.mams$sample.sizes
sample.sizes[completed.stages, ] <- nobs # Update given the sample sizes actually observed
if (!all(diff(sample.sizes) >= 0)) {stop("total sample size per arm cannot decrease between stages.")}
J <- dim(sample.sizes)[1]
K <- dim(sample.sizes)[2] - 1
R <- sample.sizes[, -1] / sample.sizes[1, 1]
r0 <- sample.sizes[, 1] / sample.sizes[1, 1]
# get conditional distributions BEFORE seeing the new z scores
cond.cov.matrix <- cov.matrix <- create.cov.matrix(r0, R)
cond.mean <- rep(0, K * J)
if (completed.stages > 1){
cond.cov.matrix <- create.cond.cov.matrix(cov.matrix, K, completed.stages - 1)
cond.mean <- create.cond.mean(cov.matrix, K, completed.stages - 1, zscores = zscores[[completed.stages - 1]])
}
# adjust upper boundaries in light of observed sample sizes:
for (i in 1:(2 ^ K - 1)){
treatments <- intersect(selected.trts[[completed.stages]], get.hyp(i))
  if ((length(treatments) > 0) && (alpha.star[[i]][J] > 0) && (alpha.star[[i]][J] < 1)){
for (j in completed.stages:J){
      new.u <- NULL # reset so a failed uniroot() call is detected below
      try(new.u <- uniroot(excess.alpha, c(0, 10), alpha.star = alpha.star[[i]][completed.stages:J], treatments = treatments, cov.matrix = cond.cov.matrix, lower.boundary = l[[i]][completed.stages:J], upper.boundary = u[[i]][completed.stages:J], selection.method = selection.method, K = K, stage = j - completed.stages + 1, z.means = cond.mean)$root, silent = TRUE)
if (is.null(new.u)) {stop("upper boundary not between 0 and 10")}
u[[i]][j] <- round(new.u, 2)
}
l[[i]][J] <- u[[i]][J]
}
}
if (J > completed.stages) {
cond.cov.matrix <- create.cond.cov.matrix(cov.matrix, K, completed.stages)
cond.mean <- create.cond.mean(cov.matrix, K, completed.stages, zscores[[completed.stages]])
}
for (i in 1:(2 ^ K - 1)) { # get conditional errors
treatments <- intersect(selected.trts[[completed.stages]], get.hyp(i))
  if ((length(treatments) > 0) && (alpha.star[[i]][J] > 0) && (alpha.star[[i]][J] < 1)){
max.z <- max(zscores[[completed.stages]][treatments])
best.treatment <- treatments[which.max(zscores[[completed.stages]][treatments])]
if (max.z <= u[[i]][completed.stages]) alpha.star[[i]][completed.stages] <- 0
if (max.z > u[[i]][completed.stages]) {
alpha.star[[i]][completed.stages:J] <- 1
if (J > completed.stages) {
l[[i]][(completed.stages + 1):J] <- u[[i]][(completed.stages + 1):J] <- -Inf
}
}
else if (max.z <= l[[i]][completed.stages]){
alpha.star[[i]][completed.stages:J] <- 0
if (J > completed.stages) {
l[[i]][(completed.stages + 1):J] <- u[[i]][(completed.stages + 1):J] <- Inf
}
}
else if (selection.method == "select.best") {
for (j in (completed.stages + 1):J){
alpha.star[[i]][j] <- excess.alpha(cut.off = u[[i]][j], alpha.star = rep(0, J - completed.stages), treatments = best.treatment, cov.matrix = cond.cov.matrix, lower.boundary = l[[i]][(completed.stages + 1):J], upper.boundary = u[[i]][(completed.stages + 1):J], selection.method = selection.method, K = K, stage = j - completed.stages, z.means = cond.mean) + alpha.star[[i]][j - 1]
}
}
else {
for (j in (completed.stages + 1):J){
alpha.star[[i]][j] <- excess.alpha(cut.off = u[[i]][j], alpha.star = rep(0, J - completed.stages), treatments = treatments, cov.matrix = cond.cov.matrix, lower.boundary = l[[i]][(completed.stages + 1):J], upper.boundary = u[[i]][(completed.stages + 1):J], selection.method = selection.method, K = K, stage = j - completed.stages, z.means = cond.mean)
}
}
}
}
if (is.matrix(nfuture)){
sample.sizes[(completed.stages + 1):J, ] <- nfuture
if (!all(diff(sample.sizes) >= 0)) {stop("total sample size per arm cannot decrease between stages.")}
R <- sample.sizes[, -1] / sample.sizes[1, 1]
r0 <- sample.sizes[, 1] / sample.sizes[1, 1]
cov.matrix <- create.cov.matrix(r0, R)
cond.cov.matrix <- create.cond.cov.matrix(cov.matrix, K, completed.stages)
cond.mean <- create.cond.mean(cov.matrix, K, completed.stages, zscores = zscores[[completed.stages]])
}
if (J > completed.stages){
for (i in 1:(2 ^ K - 1)){
treatments <- intersect(selected.trts[[completed.stages + 1]], get.hyp(i))
    if ((length(treatments) > 0) && (alpha.star[[i]][J] > 0) && (alpha.star[[i]][J] < 1)){
for (j in (completed.stages + 1):J){
        new.u <- NULL # reset so a failed uniroot() call is detected below
        try(new.u <- uniroot(excess.alpha, c(0, 10), alpha.star = alpha.star[[i]][(completed.stages + 1):J], treatments = treatments, cov.matrix = cond.cov.matrix, lower.boundary = l[[i]][(completed.stages + 1):J], upper.boundary = u[[i]][(completed.stages + 1):J], selection.method = selection.method, K = K, stage = j - completed.stages, z.means = cond.mean)$root, silent = TRUE)
if (is.null(new.u)) {stop("upper boundary not between 0 and 10")}
u[[i]][j] <- round(new.u, 2)
}
l[[i]][J] <- u[[i]][J]
}
}
}
res <- NULL
res$l <- l
res$u <- u
res$sample.sizes <- sample.sizes
res$K <- K
res$J <- J
res$alpha.star <- alpha.star
res$selection <- selection.method
res$zscores <- zscores
res$selected.trts <- selected.trts
class(res) <- "MAMS.stepdown"
return(res)
}
---- file: /R/perform_split.R | repo: giuseppec/customtrees | license: none ----
#' @title Performs a single split and measures the objective
#'
#' @description
#' Uses values in split.points as candidates to make intervals.
#' Sums up the objective in each interval (produced by the split.points).
#'
#' @param split.points [\code{vector}]\cr
#' Either a single split point (for binary splits) or a vector of split points (for multiple splits)
#' @param xval feature values
#' @param y target
#' @param min.node.size minimum number of observations allowed in each child node
#' @param objective objective function evaluated on each child node
#' @param ... further arguments passed on to \code{objective}
#' @return value of the objective
#' @export
#' @example inst/examples/perform_split.R
#'
#'
perform_split = function(split.points, xval, y, min.node.size, objective, ...) {
# args = formalArgs(objective)
# deparse(body(objective))
# always increasing split points
# split.points = xval[split.points] # integer optim
split.points = sort.int(split.points)
split.points = get_closest_point(split.points, xval, min.node.size)
#cat(split.points, fill = TRUE)
# assign intervalnr. according to split points
node.number = findInterval(x = xval, split.points, rightmost.closed = TRUE) + 1
# compute size of each childnode
node.size = tabulate(node.number)
# if minimum node size is violated, return Inf
# TODO: instead of returning Inf try to avoid that this happens by fixing split points
if (min(node.size) < min.node.size)
return(Inf)
# compute objective in each interval and sum it up
y.list = split(y, node.number)
# x.list only needed if this is used in the objective
  requires.x = formals(objective)[["requires.x"]]
  if (isTRUE(requires.x))
    x.list = split(xval, node.number) else
      x.list = vector("list", length(y.list)) # list of NULLs so x.list[[i]] below stays valid
res = vapply(seq_along(y.list), FUN = function(i) {
objective(y = y.list[[i]], x = x.list[[i]], sub.number = which(node.number == i), ...)
}, FUN.VALUE = NA_real_, USE.NAMES = FALSE)
sum(res)
}
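The package points to inst/examples/perform_split.R for a real example; as a quick illustration, here is a hedged sketch. The `SS` objective, the simulated data, and the split point are assumptions (not from the package), and the call only runs with the package-internal helper `get_closest_point()` loaded.

```r
# Hypothetical sum-of-squares objective; requires.x = TRUE so xval is split and passed through.
SS = function(y, x, requires.x = TRUE, ...) sum((y - mean(y))^2)
set.seed(1)
xval = sort(runif(200))
y = ifelse(xval < 0.5, 1, 3) + rnorm(200, sd = 0.1)
# assumes get_closest_point() from the same package is available
perform_split(split.points = 0.5, xval = xval, y = y,
              min.node.size = 10, objective = SS)
```

A clean split at 0.5 should return a much smaller summed objective than a split far from the true change point.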
# not tested
perform_split.factor = function(split.points, xval, y, min.node.size, objective) {
lev = levels(xval)
xval = as.numeric(xval)
split.points = which(lev %in% split.points)
  perform_split(split.points = split.points, xval = xval, y = y, min.node.size = min.node.size, objective = objective)
}
# perform_split2 = function(split.points, x, y, min.node.size = 10, objective) {
# # always increasing split points
# split.points = sort(split.points)
# # assign intervalnr. according to split points
# node.number = findInterval(x, split.points, rightmost.closed = TRUE)
# # compute size of each childnode
# node.size = table(node.number)
# # if minimum node size is violated, return Inf
# if (any(node.size < min.node.size))
# return(Inf)
# # compute objective in each interval and sum it up
# d = data.table(x, y, node.number)
# sum(d[, .(obj = objective(x, y)), by = node.number]$obj)
# }
|
96eb28e266ca86e615a50c2ceb3994e980443f8f | 15ac69e4667a4ec86a5e52d73bf59a6ac73a8dd4 | /cachematrix.R | 72afd33fb913cf78dc32bae4a0fe64915db57222 | [] | no_license | deysantanu84/ProgrammingAssignment2 | 87a724913b430e4e21141164750579a83751d15c | d68767e96029ad0984ea9f533d45be7b9b8cd404 | refs/heads/master | 2021-01-01T20:06:57.945735 | 2017-07-30T01:22:16 | 2017-07-30T01:22:16 | 98,766,216 | 0 | 0 | null | 2017-07-30T00:39:47 | 2017-07-30T00:39:47 | null | UTF-8 | R | false | false | 1,263 | r | cachematrix.R | ## The following functions are used to create a special object
## that stores a matrix and caches its inverse.
## The first function, makeCacheMatrix creates a special "matrix",
## which is really a list containing a function to:
## 1. set the value of the matrix
## 2. get the value of the matrix
## 3. set the value of the inverse
## 4. get the value of the inverse
makeCacheMatrix <- function(x = matrix()) {
inv <- NULL
set <- function(y) {
x <<- y
inv <<- NULL
}
get <- function() x
setInverse <- function(inverse) inv <<- inverse
getInverse <- function() inv
list(set = set,
get = get,
setInverse = setInverse,
getInverse = getInverse)
}
## The second function, cacheSolve computes the inverse of the
## special "matrix" returned by makeCacheMatrix above. If the
## inverse has already been calculated (and the matrix has not
## changed), then cacheSolve should retrieve the inverse from
## the cache.
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
inv <- x$getInverse()
if (!is.null(inv)) {
message("getting cached data")
return(inv)
}
data <- x$get()
inv <- solve(data, ...)
x$setInverse(inv)
inv
}
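A short usage example (the 2x2 input matrix is hypothetical) showing the cache being hit on the second call:

```r
# wrap an invertible matrix in the caching object
cm <- makeCacheMatrix(matrix(c(2, 0, 0, 2), nrow = 2))
cacheSolve(cm) # computes and caches the inverse, diag(0.5, 2)
cacheSolve(cm) # prints "getting cached data" and returns the cached inverse
```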
---- file: /man/getMSF.Rd | repo: aymanabuelela/ogcc | license: none ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ogcc.R
\name{getMSF}
\alias{getMSF}
\title{Get minimum set of features (getMSF)}
\usage{
getMSF(model = "types", d = "RSEM")
}
\arguments{
\item{model}{is a prediction model name from \code{\link{ogcc}}. See \code{model} argument.}
\item{d}{a string indicating the RNA-Seq measurement type; either 'RSEM' or 'RPKM'. 'RSEM' by default.}
}
\value{
a character vector of the required features for a working model.
}
\description{
\code{getMSF} prints the minimum set of features required by a model from \code{ogcc}.
}
\examples{
msf <- getMSF("normal_tumor")
print(msf)
}
\author{
Ayman Abuelela; ayman.elkhodiery@kaust.edu.sa
}
\seealso{
\code{\link{ogcc}} and \code{\link{getLabels}}
}
---- file: /07-DescribeWildLifeStrikeDataSet.R | repo: HoGaSan/FinalPaper | license: MIT ----
#'
#' \code{DescribeWildLifeStrikeDataSet} re-creates the
#' inputs based on the column selection of the data
#' verification report
#'
#' @examples
#' DescribeWildLifeStrikeDataSet()
#'
DescribeWildLifeStrikeDataSet <- function() {
dataDir <- getDataDir()
startYear <- getStartYear()
endYear <- getEndYear()
for (i in startYear:endYear){
RDSFileName <- paste(i,
"_Animal_Strikes_01_Orig.rds",
sep = "")
RDSFile <- paste(dataDir,
"/",
RDSFileName,
sep = "")
RDSFileNameDescibed <- paste(i,
"_Animal_Strikes_02_Desc.rds",
sep = "")
RDSFileDescibed <- paste(dataDir,
"/",
RDSFileNameDescibed,
sep = "")
if (file.exists(RDSFile) != TRUE){
message(RDSFileName,
"is not available, ",
"please re-run the preparation scripts!")
} else {
if (file.exists(RDSFileDescibed) == TRUE){
message(RDSFileNameDescibed,
" exists, no further action is required.")
} else {
#Read the data file into a variable
variableName <- paste("AS_", i, sep="")
assign(variableName, readRDS(file = RDSFile))
#set the required column names
ColumnNames <- c("INDEX_NR",
"OPID",
"OPERATOR",
"ATYPE",
"AC_CLASS",
"AC_MASS",
"TYPE_ENG",
"REG",
"FLT",
"INCIDENT_DATE",
"INCIDENT_MONTH",
"INCIDENT_YEAR",
"TIME_OF_DAY",
"TIME",
"AIRPORT_ID",
"AIRPORT",
"STATE",
"FAAREGION",
"ENROUTE",
"RUNWAY",
"HEIGHT",
"SPEED",
"DISTANCE",
"PHASE_OF_FLT",
"SKY",
"PRECIP",
"WARNED")
#Move reduces data into a new data set
describedDataSet <- get(variableName)[, ..ColumnNames]
saveRDS(describedDataSet, file = RDSFileDescibed)
#Free up the memory
rm(list = variableName)
rm(variableName)
rm(describedDataSet)
gc()
} #end of "if (file.exists(RDSFileDescibed) == TRUE)"
} #end of "if (file.exists(RDSFile) != TRUE)"
} #end of "for (i in startYear:endYear)"
}
---- file: /ui.R | repo: lengning/soxtc_plot1 | license: none ----
library(shiny) # bootstrapPage(), textInput() and plotOutput() come from shiny
ui <- bootstrapPage(
textInput('name', 'gene name', "T"),
plotOutput('plot')
)
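ui.R is only half of a Shiny app; a minimal companion server sketch is shown below. The plotting body is a placeholder assumption — the real app presumably renders expression data for the gene named in input$name.

```r
library(shiny)
# Hypothetical server.R counterpart; the plot body is a stand-in, not the app's real logic.
server <- function(input, output) {
  output$plot <- renderPlot({
    plot(1:10, main = paste("placeholder plot for gene:", input$name))
  })
}
# shinyApp(ui, server) would then launch the app
```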
---- file: /pix2img.r | repo: abigaile-d/emotion-recognition-svm-pca | license: none ----
BASEDIR <- "C:/Users/Abigaile Dionisio/Documents/Masters/CS280/MP"
setwd(BASEDIR)
# create dirs if missing
dir.create(file.path(BASEDIR, "images"), showWarnings = FALSE)
dir.create(file.path(BASEDIR, "images", "emo_train"), showWarnings = FALSE)
dir.create(file.path(BASEDIR, "images", "emo_test"), showWarnings = FALSE)
dir.create(file.path(BASEDIR, "images", "ide_train"), showWarnings = FALSE)
dir.create(file.path(BASEDIR, "images", "ide_test"), showWarnings = FALSE)
# EMOTION
P = 48*48 #number of pixels
N = 4178/2 #total number of observations per set
IN <- "emotion_trainset"
dataset <- matrix(scan(IN, n=N*(P+1), what=""), N, (P+1), byrow=TRUE)
y <- dataset[,1]
x <- gsub("^.*:","", dataset[,1:P+1], perl=TRUE)
x <- matrix(as.numeric(x), N, P, byrow=FALSE)
for(i in 1:N){
m <- matrix(rev(x[i,]), nrow=48)
IMGFILE <- paste0(BASEDIR, "/images/emo_train/img", y[i], "_", i, ".jpg")
jpeg(IMGFILE)
par(mar=rep(0, 4))
image(m, axes=FALSE, col=grey(seq(0,1,length=256))) #recreate pixels as image
dev.off()
}
IN <- "emotion_testset"
dataset <- matrix(scan(IN, n=N*(P+1), what=""), N, (P+1), byrow=TRUE)
y <- dataset[,1]
x <- gsub("^.*:","", dataset[,1:P+1], perl=TRUE)
x <- matrix(as.numeric(x), N, P, byrow=FALSE)
for(i in 1:N){
m <- matrix(rev(x[i,]), nrow=48)
IMGFILE <- paste0(BASEDIR, "/images/emo_test/img", y[i], "_", i, ".jpg")
jpeg(IMGFILE)
par(mar=rep(0, 4))
image(m, axes=FALSE, col=grey(seq(0,1,length=256))) #recreate pixels as image
dev.off()
}
# IDENTITY
P = 48*48 #number of pixels
N = ceiling(3537/2) # total number of observations in the identity train set (1769)
IN <- "identity_trainset"
dataset <- matrix(scan(IN, n=N*(P+1), what=""), N, (P+1), byrow=TRUE)
y <- dataset[,1]
x <- gsub("^.*:","", dataset[,1:P+1], perl=TRUE)
x <- matrix(as.numeric(x), N, P, byrow=FALSE)
for(i in 1:N){
m <- matrix(rev(x[i,]), nrow=48)
IMGFILE <- paste0(BASEDIR, "/images/ide_train/img", y[i], "_", i, ".jpg")
jpeg(IMGFILE)
par(mar=rep(0, 4))
image(m, axes=FALSE, col=grey(seq(0,1,length=256))) #recreate pixels as image
dev.off()
}
N = floor(3537/2) # total number of observations in the identity test set (1768)
IN <- "identity_testset"
dataset <- matrix(scan(IN, n=N*(P+1), what=""), N, (P+1), byrow=TRUE)
y <- dataset[,1]
x <- gsub("^.*:","", dataset[,1:P+1], perl=TRUE)
x <- matrix(as.numeric(x), N, P, byrow=FALSE)
for(i in 1:N){
m <- matrix(rev(x[i,]), nrow=48)
IMGFILE <- paste0(BASEDIR, "/images/ide_test/img", y[i], "_", i, ".jpg")
jpeg(IMGFILE)
par(mar=rep(0, 4))
image(m, axes=FALSE, col=grey(seq(0,1,length=256))) #recreate pixels as image
dev.off()
}
---- file: /man/removeORFsWithStartInsideCDS.Rd | repo: Roleren/ORFik | license: MIT ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/uORF_helpers.R
\name{removeORFsWithStartInsideCDS}
\alias{removeORFsWithStartInsideCDS}
\title{Remove ORFs that have start site within the CDS}
\usage{
removeORFsWithStartInsideCDS(grl, cds)
}
\arguments{
\item{grl}{(GRangesList), the ORFs to filter}
\item{cds}{(GRangesList), the coding sequences (main ORFs on transcripts),
to filter against.}
}
\value{
(GRangesList) of filtered uORFs
}
\description{
Remove ORFs that have start site within the CDS
}
\seealso{
Other uorfs:
\code{\link{addCdsOnLeaderEnds}()},
\code{\link{filterUORFs}()},
\code{\link{removeORFsWithSameStartAsCDS}()},
\code{\link{removeORFsWithSameStopAsCDS}()},
\code{\link{removeORFsWithinCDS}()},
\code{\link{uORFSearchSpace}()}
}
\concept{uorfs}
\keyword{internal}
---- file: /Section 6/tutorial_11_Themes.R | repo: nihar-dar123/Udemy-R-Programming-A-Z | license: none ----
# using some themes to enhance the graphs
o <- ggplot(data=movies, aes(x=BudgetMillions))
h <- o + geom_histogram(binwidth = 10,aes(fill=Genre), color="Black")
h
# add a label
h + xlab("Money Axis") + ylab("Number of Movies")
# label formatting
h + xlab("Money Axis") + ylab("Number of Movies") +
theme(axis.title.x = element_text(color="DarkGreen", size=15),
axis.title.y = element_text(color="Red",size=15))
# tick mark formatting
h + xlab("Money Axis") + ylab("Number of Movies") +
theme(axis.title.x = element_text(color="DarkGreen", size=15),
axis.title.y = element_text(color="Red",size=15),
axis.text.x = element_text(size=20),
axis.text.y = element_text(size=20))
# read about it!
?theme
# legend formatting
h + xlab("Money Axis") + ylab("Number of Movies") +
theme(axis.title.x = element_text(color="DarkGreen", size=15),
axis.title.y = element_text(color="Red",size=15),
axis.text.x = element_text(size=20),
axis.text.y = element_text(size=20),
legend.title = element_text(size=30),
legend.text = element_text(size=20))
h + xlab("Money Axis") + ylab("Number of Movies") +
theme(axis.title.x = element_text(color="DarkGreen", size=15),
axis.title.y = element_text(color="Red",size=15),
axis.text.x = element_text(size=20),
axis.text.y = element_text(size=20),
legend.title = element_text(size=30),
legend.text = element_text(size=20),
legend.position = c(1,1),
legend.justification = c(1,1))
# top right corner of the legend on the top right corner of the chart
h + xlab("Money Axis") + ylab("Number of Movies") +
ggtitle("Movies Budget Distribution") +
theme(axis.title.x = element_text(color="DarkGreen", size=15),
axis.title.y = element_text(color="Red",size=15),
axis.text.x = element_text(size=12),
axis.text.y = element_text(size=12),
legend.title = element_text(size=15),
legend.text = element_text(size=10),
legend.position = c(1,1),
legend.justification = c(1,1),
plot.title = element_text(color="DarkBlue",size=15,family="Courier"))
---- file: /21 5 3.R | repo: JVanEss/R-for-Data-Science-Week-4-Chapter-21 | license: none ----
#Exercise 21.5.3.1
map_dbl(mtcars, mean)
map(flights, class)
map(iris, ~ length(unique(.)))
map(c(-10, 0, 10, 100), rnorm, n = 10)
---- file: /Aircraft Delay Prediction/3_WeatherData/Train/hly.R | repo: kunalrayscorpio/DSProjects | license: none ----
rm(list=ls())
library(sqldf)
hly01<-readRDS("hly0401.rds")
hly03<-readRDS("hly0403.rds")
hly05<-readRDS("hly0405.rds")
hly07<-readRDS("hly0407.rds")
hly09<-readRDS("hly0409.rds")
hly11<-readRDS("hly0411.rds")
hly<-rbind(hly01,hly03,hly05,hly07,hly09,hly11) # Bring all datasets together
rm(hly01,hly03,hly05,hly07,hly09,hly11) # Remove individual datasets
str(hly)
hly<-sqldf('select distinct WeatherStationID, AirportID, YearMonthDay, TimeSlot, OrigSkyCond,
OrigVis,OrigDBT,OrigDewPtTemp,OrigRelHumPerc,OrigWindSp,OrigWindDir,OrigWindGustVal,
OrigStnPres from hly')
saveRDS(hly, file = "hly.rds")
---- file: /predicting diabetes using different classification ML models.R | repo: kenkaycee/Diabetes | license: none ----
library(tidyverse); library(caret)
diabetes<- read.csv("diabetes.csv")
diabetes %>% str()
## convert all character columns (everything except Age) into factors
diabetes<- diabetes %>% mutate_if(is.character, as.factor)
diabetes %>% str()
diabetes$class %>% table() %>% prop.table()*100
## graphical representation
## derive AgeGroup first so facet_grid(. ~ AgeGroup) below can find it
table(diabetes$Age)
diabetes$AgeGroup[diabetes$Age < 18] = "Child"
diabetes$AgeGroup[diabetes$Age >= 18 & diabetes$Age <= 65] = "Adult"
diabetes$AgeGroup[diabetes$Age > 65] = "Elderly"
diabetes$AgeGroup <- factor(diabetes$AgeGroup)
diabetes %>% ggplot(aes(Obesity, fill = class)) +
  geom_bar(position = "dodge") +
  facet_grid(. ~ AgeGroup)
str(diabetes)
## create train and test data set
set.seed(100)
trainIndex<- createDataPartition(diabetes$class, p=.75, list = F)
train_diabetes<- diabetes[trainIndex,]
test_diabetes<- diabetes[-trainIndex,]
## compare frequencies of diagnosis in the train and test sets against the original diabetes dataset
train_prop<-round(prop.table(table(train_diabetes$class))*100,1)
test_prop<- round(prop.table(table(test_diabetes$class))*100,1)
original_prop<- round(prop.table(table(diabetes$class))*100,1)
freq<- data.frame(cbind(original_prop,train_prop,test_prop))
colnames(freq)<- c("Original","Training", "Testing")
freq ## the frequencies are similar
## parameter tuning
fitCtrl<- trainControl(method = "repeatedcv", number = 10, repeats = 3)
## KNN classification
set.seed(100)
knnFit<- train(class~., data = train_diabetes, method="knn", preProcess=c("center","scale"), metric="Accuracy", tuneLength=17,
trControl=fitCtrl)
knnFit
ggplot(knnFit)+
scale_x_continuous(breaks = c(1:43)) # k=5 has highest accuracy on train data
## make predictions on test dataset
knnPred<- predict(knnFit, test_diabetes)
## build confusion matrix
cmatKnn<- confusionMatrix(knnPred, test_diabetes$class, positive = "Positive")
cmatKnn # knn algorithm has 90% accuracy
## building classification tree using rpart
set.seed(100)
rpartFit<- train(class~., data = train_diabetes, method="rpart", metric="Accuracy", tuneLength=17, trControl=fitCtrl)
## view the classification tree
rpart.plot::rpart.plot(rpartFit$finalModel)
## make predicition on test data
rpartPred<- predict(rpartFit, test_diabetes)
cmatRpart<- confusionMatrix(rpartPred, test_diabetes$class, positive = "Positive")
cmatRpart ## 87% Accuracy rate
## logistic regression
set.seed(100)
logFit<-train(class~., data = train_diabetes, method="glm", family="binomial", metric="Accuracy",
tuneLength=17, trControl=fitCtrl)
# make predictions
logPred<- predict(logFit, test_diabetes)
cmatLog<- confusionMatrix(logPred, test_diabetes$class, positive = "Positive")
cmatLog ## logistic regression has an accuracy rate of 93%
## Random forest
set.seed(100)
rfFit<- train(class~., data = train_diabetes, method="rf", metric="Accuracy", tuneLength=17, trControl=fitCtrl)
rfFit %>% plot() # mtry of 5 has highest accuracy
## make predictiions on the test data set
rfPred<- predict(rfFit, test_diabetes)
cmatRf<- confusionMatrix(rfPred, test_diabetes$class, positive = "Positive")
cmatRf# randomforest has accuracy rate of 98%
gbmGrid<- expand.grid(.interaction.depth = (1:5) * 2,.n.trees = (1:10)*25, .shrinkage = c(0.01,0.05,0.1,0.5),
.n.minobsinnode=10)
set.seed(100)
gbmFit<- train(class~., data = train_diabetes, method="gbm", metric="Accuracy",
trControl=fitCtrl, tuneGrid=gbmGrid, verbose=FALSE, distribution="bernoulli",tuneLength=17)
gbmFit$finalModel
## make a predition using test data set
gbmPred<- predict(gbmFit, test_diabetes)
cmatGbm<- confusionMatrix(gbmPred, test_diabetes$class, positive = "Positive")
cmatGbm ## accuracy of 96%
## support vector machine
set.seed(100)
svmFit<- train(class~., data = train_diabetes, method="svmRadial", trControl=fitCtrl, tuneLength=17, metric="Accuracy",
preProcess=c("center", "scale"))
plot(svmFit, scales=list(log=2))
svmPred<- predict(svmFit, test_diabetes)
cmatSvm<- confusionMatrix(svmPred, test_diabetes$class, positive = "Positive")
cmatSvm ## 98% accuracy
## compare performances of the models
model_diff<- resamples(list(Knn=knnFit, LogisticReg=logFit, RpartTree=rpartFit, RandomForest=rfFit, GBM=gbmFit,
SVM= svmFit))
summary(model_diff)
dotplot(model_diff) # display the accuracy and kappa values for the different models (Radomforest, GBM and SVM have highest accuracy rates)
## create a function that compares the performance of different models
## helper function that rounds the numeric values in a list to 2 decimal places
round_num<- function(list){
  lapply(list, function(x){
    if(is.numeric(x)){
      round(x, 2) # round to 2 d.p.
    } else {
      x # return non-numeric elements unchanged
    }
  })
}
## create a function that compares results of the models
comp_summ<- function(cm, fit){
summ<- list(TN= cm$table[1,1], # True Negative
TP= cm$table[2,2], # True Positive
FN= cm$table[1,2], # False Negative
FP= cm$table[2,1], # False Positive
Acc=cm$overall["Accuracy"], # Accuracy
Sens=cm$byClass["Sensitivity"], # Sensitivity
Spec=cm$byClass["Specificity"], # Specificity
Prec=cm$byClass["Precision"], # Precision
Recall= cm$byClass["Recall"], # Recall
F1_Score=cm$byClass["F1"], # F1 score
PPV= cm$byClass["Pos Pred Value"], # Positive predictive value
NPV= cm$byClass["Neg Pred Value"] # Negative predictive value
)
round_num(summ) # rounds to 2 D.P
}
## create a dataframe that stores the performance of the models
model_performance<- data.frame(rbind(comp_summ(cmatLog,logFit),
comp_summ(cmatKnn,knnFit),
comp_summ(cmatRpart,rpartFit),
comp_summ(cmatSvm,svmFit),
comp_summ(cmatRf,rfFit),
comp_summ(cmatGbm,gbmFit)))
## create names for rows in model performanc
rownames(model_performance)<- c("LogisticReg","KNN","RpartTree","SVM","RandomForest", "GBM")
model_performance
---- file: /cleanOTUtable.R | repo: slinn-shady/BioinformaticsTools | license: none ----
#!/usr/bin/Rscript
#J. G. Harrison
#April 27, 2018
#this script takes an otu table and a taxonomy file and
#outputs an OTU table that does not have non-target taxa (e.g. no plants if you
#are doing fungi). It specifically searches the taxonomy file for the words
#chloroplast, plantae, and mitochondria
#This script takes several inputs:
#1. a taxonomy file as output from mergeTaxonomyFiles.sh
#2. An otu table
#IMPORTANT: This script has very limited utility and very likely won't work for
#you unless you tweak it a bit, or are using my pipeline, so carefully read it!
#For examples of input data see the dir Example_data/ in the git repo
#Usage: Rscript cleanOTUtable.R combinedTaxonomyfile.txt OTUtableExample.txt clean
#where clean is either "yes" or "no" and determines if you want
#taxa removed that were not assigned to a phylum
main <- function() {
inargs <- commandArgs(trailingOnly = TRUE)
print(inargs)
#input otu table, sample-well key, and taxonomy file
tax=read.delim(file=inargs[1], header=F)
otus=read.delim(file=inargs[2], header=T)
clean=inargs[3]
#now we need to pick which OTUs to keep
#I am first removing any OTUs that either database said were plants.
#then I will subset the data to just keep only those OTUs that one or the other database
#had >80% confidence in placement to phylum. In previous work I have found that <80% placement to
#phylum often match nontarget taxa on NCBI...so I remove those too.
  #note these are just operating on the 4th field because the UNITE database output has a different format
print(paste("Started with ", length(tax[,1]), " taxa", sep=""))
fungiOnly <- tax[tax[,3]!="d:Plantae",]
print(paste("Now have ", length(fungiOnly[,1]), " taxa after removing plant otus found by taxonomy db 1", sep=""))
fungiOnly <- fungiOnly[fungiOnly[,4]!="d:Plantae",]
print(paste("Now have ", length(fungiOnly[,1]), " taxa after removing plant otus found by taxonomy db 2", sep=""))
if(length(grep("Chloroplast", fungiOnly[,3]))>0){
fungiOnly= fungiOnly[-(grep("Chloroplast", fungiOnly[,3])),]
print(paste("Now have ", length(fungiOnly[,1]), " taxa after removing cp otus from db 1", sep=""))
}
if(length(grep("Chloroplast", fungiOnly[,4]))>0){
fungiOnly= fungiOnly[-(grep("Chloroplast", fungiOnly[,4])),]
print(paste("Now have ", length(fungiOnly[,1]), " taxa after removing cp otus from db 2", sep=""))
}
  #this keeps any row where, for either database, more than 10 characters
  #describe the taxonomic hypothesis.
#this gets rid of anything that was not identified to phylum by
#one or the other db.
#IMPORTANT: this may very well get rid of target taxa if you are working in
#a remote system. Use with caution.
if(clean=="yes"){
fungiOnly = fungiOnly[which(nchar(as.character(fungiOnly[,3])) > 10 | nchar(as.character(fungiOnly[,4])) > 10),]
}
######################################################################################################
#NOTE: I strongly recommend doing some spot checks of the OTUs that made it through. Find some that were
#not id'd as fungi by both databases and go to the NCBI web interface and paste them in there.
#if they show up as some other eukaryote then it may be worth scrubbing the data more
######################################################################################################
#make new OTU table so that it includes only the taxa of interest
otus2 <- otus[otus$OTU_ID %in% fungiOnly[,1],]
otu_ids <- otus2[,1]
samps <- colnames(otus)
samps <- samps[-1]
#remove empty columns (no taxa of interest in a sample)
otus2 <- otus2[,-1]
#transpose and name
otus2 <- t(otus2)
colnames(otus2) <- otu_ids
otus2 <- data.frame(samps, otus2)
otus2 <- otus2[which(rowSums(otus2[,2:length(otus2)])!=0),]
write.csv(otus2,file="cleanOTUtable_youshouldrename.csv", row.names = F)
}
main()
|
45c3cfe23796a4b8e823910bde5d86392760f69e | afff1c31b1a85b45c46d56f81f4b4ce237a7a11c | /pulmonary_fibrosis/10x_scRNA-Seq_2019/Fig5_F_S14.R | 0645e52ab34584e0d04f7c5cf8e1dc7b22e417bd | [
"MIT"
] | permissive | tgen/banovichlab | 9f67eeb7cd95a8c65fbccea2e51033de8ea08a63 | 13ee30d91aa30e6a672d56e4f834c2b7b66769a4 | refs/heads/master | 2023-07-11T18:07:48.530129 | 2023-06-26T22:32:52 | 2023-06-26T22:32:52 | 203,832,392 | 32 | 11 | MIT | 2020-03-15T19:08:30 | 2019-08-22T16:24:15 | R | UTF-8 | R | false | false | 8,105 | r | Fig5_F_S14.R | # ==============================================================================
# Author(s) : Linh T. Bui, lbui@tgen.org
# Austin J. Gutierrez, agutierrez@tgen.org
# Date: 03/11/2020
# Description: Create loess plots using pseudotime ordering (Figure 5F and Figure S14)
# ==============================================================================
# ======================================
# Environment parameters
# ======================================
set.seed(12345)
# =====================================
# Load libraries
# =====================================
library(Seurat)
library(slingshot)
library(dplyr)
library(gridExtra)
library(tidyverse)
# Set date and create out folder
getwd()
Sys.Date()
main_dir <- "/scratch/lbui/RStudio_folder/"
date <- gsub("-", "", Sys.Date())
dir.create(file.path(main_dir, date), showWarnings = FALSE)
setwd(file.path(main_dir, date))
getwd()
# =====================================
# Read in the object(s)
# =====================================
at2 <- readRDS("/Volumes/scratch/agutierrez/IPF/R/Seurat/20200310/Final_AT2.rds")
scgb3a2 <- readRDS("/Volumes/scratch/agutierrez/IPF/R/Seurat/20200310/Final_SCGB3A2.rds")
at1 <- readRDS("/Volumes/scratch/agutierrez/IPF/R/Seurat/20200306/AT2_AT1_control.rds")
# =====================================
# Make subsets
# =====================================
sub1 <- subset(at2, cells = rownames(at2@meta.data[at2@meta.data$celltype %in%
c("KRT5-/KRT17+","Transitional AT2", "AT2"), ]))
sub2 <- subset(scgb3a2, cells = rownames(scgb3a2@meta.data[scgb3a2@meta.data$celltype %in%
c("KRT5-/KRT17+","Transitional AT2", "SCGB3A2+"), ]))
# =====================================
# Convert to SingleCellExperiment
# =====================================
sub_sce1 <- as.SingleCellExperiment(sub1)
sub_sce2 <- as.SingleCellExperiment(sub2)
# =====================================
# Run Slingshot
# =====================================
sub_slingshot1 <- slingshot(sub_sce1, clusterLabels = "celltype", reducedDim = 'UMAP',
start.clus="AT2", end.clus="KRT5-/KRT17+")
sub_slingshot2 <- slingshot(sub_sce2, clusterLabels = "celltype", reducedDim = 'UMAP',
start.clus="SCGB3A2+", end.clus="KRT5-/KRT17+")
sub_slingshot3 <- readRDS("/Volumes/scratch/agutierrez/IPF/R/Seurat/20200306/AT2_AT1_control_slingshot.rds")
# =====================================
# Set the pseudotime variable
# =====================================
t1 <- sub_slingshot1$slingPseudotime_1
t2 <- sub_slingshot2$slingPseudotime_1
t3 <- sub_slingshot3$slingPseudotime_1
# =====================================
# Genes of interest
# =====================================
gene.list <- c("NR1D1", "SOX4", "SOX9", "ZMAT3", "MDK", "CDKN1A")
plot.val <- c(1.25, 2.5, .6, .825, 2.25,2)
# =====================================
# Prepare date for loess plots
# =====================================
loess_data1 <- as.data.frame(sub1@assays$SCT@data[gene.list, ])
loess_data1 <- loess_data1[, order(t1)]
temp1 <- loess_data1
temp1 <- t(temp1)
temp1 <- as.data.frame(temp1)
temp1$index <- 1:nrow(temp1)
temp1$ct <- sub1@meta.data$celltype[order(t1)]
loess_data2 <- as.data.frame(sub2@assays$SCT@data[gene.list, ])
loess_data2 <- loess_data2[, order(t2)]
temp2 <- loess_data2
temp2 <- t(temp2)
temp2 <- as.data.frame(temp2)
temp2$index <- 1:nrow(temp2)
temp2$ct = sub2@meta.data$celltype[order(t2)]
loess_data3 <- as.data.frame(at1@assays$SCT@data[gene.list, ])
loess_data3 <- loess_data3[, order(t3)]
temp3 <- loess_data3
temp3 <- t(temp3)
temp3 <- as.data.frame(temp3)
temp3$index = 1:nrow(temp3)
temp3$ct = at1@meta.data$celltype[order(t3)]
# ======================================
# FIGURE 5F
# ======================================
# Loess plot
update_geom_defaults("line", list(colour = 'blue', linetype = 0.5))
theme_set(theme_grey(base_size=6))
pdf(file = "Fig5F_loess_final_genes.pdf", width = 3, height = 1.5)
for (i in 1:length(gene.list)) {
print(paste(i, gene.list[i], sep = "_"))
p1 <- ggplot(temp3, aes_string(y = gene.list[i] , x = temp3[, (ncol(temp3) -1)])) +
geom_smooth(method=loess, level=1-1e-10) + coord_cartesian(ylim = c(0, plot.val[i])) + scale_linetype()
# geom_tile(aes(x = index, y= 0, color = ct, height = .1, fill=ct)) + guides(fill=guide_legend())
p2 <- ggplot(temp1, aes_string(y = gene.list[i], x = temp1[, (ncol(temp1) -1)])) +
geom_smooth(method = loess, level=1-1e-10) + coord_cartesian(ylim = c(0, plot.val[i]))
# geom_tile(aes(x = index, y= 0, color = ct, height = .1, fill=ct)) + guides(fill=guide_legend())
p3 <- ggplot(temp2, aes_string(y = gene.list[i], x = temp2[, (ncol(temp2) -1)])) +
geom_smooth(method = loess,level=1-1e-10) + coord_cartesian(ylim = c(0, plot.val[i]))
# geom_tile(aes(x = index, y= 0, color = ct, height = .1, fill=ct)) + guides(fill=guide_legend())
grid.arrange(p1,p2,p3, ncol=3)
}
dev.off()
# Violin plot
# NOTE: krt5_5pop (a Seurat object containing all five populations) is assumed
# to have been loaded upstream; it is not created in this script.
my_levels <- c("AT2","SCGB3A2+","Transitional AT2","AT1","KRT5-/KRT17+")
krt5_5pop@meta.data$celltype <- factor(krt5_5pop@meta.data$celltype, levels = my_levels)
pdf(file = "Fig5F_vln_final_genes.pdf")
plot.list <- list()
for (i in 1:length(gene.list)) {
print(paste(i, gene.list[i], sep = "_"))
plot.list[[i]] <- VlnPlot(krt5_5pop, gene.list[i], split.by = "Status", group.by = "celltype", pt.size = 0)
}
plot.list
dev.off()
# ======================================
# FIGURE S14
# ======================================
# Genes of interest
gene.list <- c("SFTPC","AGER","KRT17","SCGB3A2","COL1A1")
plot.val <- c(8.0, 1.0, 3.0, 8.0, 2.0)
loess_data1 <- as.data.frame(sub1@assays$SCT@data[gene.list, ])
loess_data1 <- loess_data1[, order(t1)]
temp1 <- loess_data1
temp1 <- t(temp1)
temp1 <- as.data.frame(temp1)
temp1$index <- 1:nrow(temp1)
temp1$ct <- sub1@meta.data$celltype[order(t1)]
loess_data2 <- as.data.frame(sub2@assays$SCT@data[gene.list, ])
loess_data2 <- loess_data2[, order(t2)]
temp2 <- loess_data2
temp2 <- t(temp2)
temp2 <- as.data.frame(temp2)
temp2$index <- 1:nrow(temp2)
temp2$ct = sub2@meta.data$celltype[order(t2)]
# Make the plots
# interleave two lists of plots (a1, b1, a2, b2, ...)
interleave <- function(a, b) {
  shorter <- if (length(a) < length(b)) a else b
  longer <- if (length(a) >= length(b)) a else b
  slen <- length(shorter)
  llen <- length(longer)
  lindex <- (1:llen) + slen
  names(lindex) <- 1:llen
  sindex <- 1:slen
  names(sindex) <- 1:slen
  index <- c(sindex, lindex)
  index <- index[order(names(index))]
  return(c(a, b)[index])
}
pdf(file = "20200514_FigS14_loess_original.pdf", width = 16, height = 8.5)
plot_list1 <- list()
plot_list2 <- list()
for (i in 1:length(gene.list)) {
print(paste(i, gene.list[i], sep = "_"))
plot_list1[[i]] <- ggplot(temp1, aes_string(y = gene.list[i], x = temp1[, (ncol(temp1) -1)])) +
geom_smooth(method = loess,level=1-1e-10) + coord_cartesian(ylim = c(0, plot.val[i]))
# geom_tile(aes(x = index, y= 0, color = ct, height = .1, fill=ct)) + guides(fill=guide_legend())
plot_list2[[i]] <- ggplot(temp2, aes_string(y = gene.list[i], x = temp2[, (ncol(temp2) -1)])) +
geom_smooth(method = loess,level=1-1e-10) + coord_cartesian(ylim = c(0, plot.val[i]))
# geom_tile(aes(x = index, y= 0, color = ct, height = .1, fill=ct)) + guides(fill=guide_legend())
}
plot_list <- interleave(plot_list1, plot_list2)
plot_list <- plyr::compact(plot_list)
grid.arrange(grobs=plot_list, ncol=2)
dev.off()
# Get the pseudotime bar
p1 <- ggplot(temp1, aes_string(y = "SFTPC", x = temp1[, (ncol(temp1) -1)])) + geom_smooth(method = loess) + coord_cartesian(ylim = c(0, 8)) +
geom_tile(aes(x = index, y= 0, color = ct, height = .5, fill=ct)) + guides(fill=guide_legend())
p2 <- ggplot(temp2, aes_string(y = "SFTPC", x = temp2[, (ncol(temp1) -1)])) + geom_smooth(method = loess) + coord_cartesian(ylim = c(0, 8)) +
geom_tile(aes(x = index, y= 0, color = ct, height = .5, fill=ct)) + guides(fill=guide_legend())
|
197c778d7687557d165e3a93724d5a459c35d0be | 165e4a729024161336b21edb9eb71ba90786be7e | /covid19/www/4_load_external_data/05_columbia_data.R | c6dccff090a17d904e480a8b2e01f9e87ca42f11 | [
"MIT"
] | permissive | SpencerButt/CHAD | 8de53793777986caa569fa4fe002ed16266d6ff9 | 0dab6db877ea2704581b14c768b33b80fdca95c2 | refs/heads/master | 2022-06-26T02:13:24.604454 | 2020-05-08T13:55:29 | 2020-05-08T13:55:29 | 256,320,196 | 0 | 0 | MIT | 2020-04-20T13:11:42 | 2020-04-16T20:16:09 | HTML | UTF-8 | R | false | false | 2,677 | r | 05_columbia_data.R | #' @description From Columbia University, These files contain 42 day
#' projections which they update on Sunday evenings.
#'
#' @source https://github.com/shaman-lab/COVID-19Projection/tree/master/
#'
library(tidyr) # needed for separate(); tidyr also re-exports the %>% pipe

CU40PSD<-vroom::vroom("www/4_load_external_data/data_files/bed_80contact.csv")
CU30PSD<-vroom::vroom("www/4_load_external_data/data_files/bed_80contact_1x.csv")
CU20PSD<-vroom::vroom("www/4_load_external_data/data_files/bed_80contactw.csv")
CU40PSD<-CU40PSD %>% separate(county,c("County","State"), extra = "drop", fill = "right")
CU30PSD<-CU30PSD %>% separate(county,c("County","State"), extra = "drop", fill = "right")
CU20PSD<-CU20PSD %>% separate(county,c("County","State"), extra = "drop", fill = "right")
CU40PSD$fips<-as.numeric(CU40PSD$fips)
CU30PSD$fips<-as.numeric(CU30PSD$fips)
CU20PSD$fips<-as.numeric(CU20PSD$fips)
CU40PSD<-subset(CU40PSD,
select = -c(hosp_need_2.5,
hosp_need_97.5,
ICU_need_2.5,
ICU_need_25,
ICU_need_50,
ICU_need_75,
ICU_need_97.5,
vent_need_2.5,
vent_need_25,
vent_need_50,
vent_need_75,
vent_need_97.5,
death_2.5,death_97.5))
CU30PSD<-subset(CU30PSD,
select = -c(hosp_need_2.5,
hosp_need_97.5,
ICU_need_2.5,
ICU_need_25,
ICU_need_50,
ICU_need_75,
ICU_need_97.5,
vent_need_2.5,
vent_need_25,
vent_need_50,
vent_need_75,
vent_need_97.5,
death_2.5,
death_97.5))
CU20PSD<-subset(CU20PSD,
select = -c(hosp_need_2.5,
hosp_need_97.5,
ICU_need_2.5,
ICU_need_25,
ICU_need_50,
ICU_need_75,
ICU_need_97.5,
vent_need_2.5,
vent_need_25,
vent_need_50,
vent_need_75,
vent_need_97.5,
death_2.5,
death_97.5))
|
5217349d94b38103358394787b27981acf6f04b2 | bdccca49ffd9796c31f6bbce55fb2070aa201d89 | /Analysis_backwarddiff.R | 4aef2d299fbb61876a8d0511d56567b36cf4daf9 | [] | no_license | elspethwilson/implicature-development | 0df013ce90eb779572d20749191b91ad7063b4bd | df8ef1684511a5c8be0405cc16a1c1ca2c5b4c81 | refs/heads/master | 2021-06-23T00:56:41.095773 | 2017-08-25T15:48:24 | 2017-08-25T15:48:24 | 100,102,485 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,205 | r | Analysis_backwarddiff.R | library(lme4)
require(optimx)
source(system.file("utils", "allFit.R", package = "lme4"))
# for a more theory-dependent analysis, use backward difference conding
# A) to compare each agegroup to previous - 4-years to 5-years, and 3-years to 4-years
# (taking 5-year-olds as control group)
# B) compare different types of implicature as per predictions - W > R > A > S
# C) compare critical to control
# create monoling dataset with backwarddifference coding
Monoling_backdiff <- Monoling
print(levels(Monoling_backdiff$Agegroup))
Monoling_backdiff$Agegroup = factor(Monoling_backdiff$Agegroup, levels(Monoling_backdiff$Agegroup)[c(3,2,1)])
print(levels(Monoling_backdiff$Type))
Monoling_backdiff$Type = factor(Monoling_backdiff$Type, levels(Monoling_backdiff$Type)[c(4,2,1,3)])
print(levels(Monoling_backdiff$Critical))
contrasts(Monoling_backdiff$Agegroup) = contr.sdif(3)
contrasts(Monoling_backdiff$Type) = contr.sdif(4)
contrasts(Monoling_backdiff$Critical) = contr.sdif(2)
# run fullest by item model
bdmodel_mono_item3 <- glmer(Score ~ Critical + Type + Agegroup + (1 + Critical + Agegroup | Item.no), family = "binomial", optimizer = "bobyqa", control=glmerControl(optCtrl=list(maxfun=100000)), data = Monoling_backdiff)
bdmodel_mono_item3_all <- allFit(bdmodel_mono_item3)
summary(bdmodel_mono_item3_all$bobyqa)
back_diffs <- capture.output(summary(bdmodel_mono_item3_all$bobyqa))
write(back_diffs, "back_diffs.txt")
# by ID
bdmodel_mono_id3 <- glmer(Score ~ Critical + Type + Agegroup + (1 + Critical + Type | ID), family = "binomial", optimizer = "bobyqa", control=glmerControl(optCtrl=list(maxfun=100000)), data = Monoling_backdiff)
bdmodel_mono_id3_all <- allFit(bdmodel_mono_id3)
summary(bdmodel_mono_id3_all$nlminbw)
back_diffs_id <- capture.output(summary(bdmodel_mono_id3_all$nlminbw))
write(back_diffs_id, "back_diffs_id.txt")
# Add in block as random effect
Monoling_backdiff_trial <- Monoling_trial_s
print(levels(Monoling_backdiff_trial$Agegroup))
Monoling_backdiff_trial$Agegroup = factor(Monoling_backdiff_trial$Agegroup, levels(Monoling_backdiff_trial$Agegroup)[c(3,2,1)])
print(levels(Monoling_backdiff_trial$Type))
Monoling_backdiff_trial$Type = factor(Monoling_backdiff_trial$Type, levels(Monoling_backdiff_trial$Type)[c(4,2,1,3)])
print(levels(Monoling_backdiff_trial$Critical))
contrasts(Monoling_backdiff_trial$Agegroup) = contr.sdif(3)
contrasts(Monoling_backdiff_trial$Type) = contr.sdif(4)
contrasts(Monoling_backdiff_trial$Critical) = contr.sdif(2)
print(levels(Monoling_backdiff_trial$Agegroup_s))
Monoling_backdiff_trial$Agegroup_s = factor(Monoling_backdiff_trial$Agegroup_s, levels(Monoling_backdiff_trial$Agegroup_s)[c(6,5,4,3,2,1)])
contrasts(Monoling_backdiff_trial$Agegroup_s) = contr.sdif(6)
#backward difference with block as random effect
bdmodel_mono_item3_trial <- glmer(Score ~ Critical + Type + Agegroup + (1 + Critical + Agegroup + Trial_block | Item.no), family = "binomial", optimizer = "bobyqa", control=glmerControl(optCtrl=list(maxfun=100000)), data = Monoling_backdiff_trial)
bdmodel_mono_item3_trial_all <- allFit(bdmodel_mono_item3_trial)
summary(bdmodel_mono_item3_trial_all$bobyqa)
back_diffs_trial <- capture.output(summary(bdmodel_mono_item3_trial_all$bobyqa))
write(back_diffs_trial, "back_diffs_trial.txt")
# small agegroups
bdmodel_mono_item3_trial_s <- glmer(Score ~ Critical + Type + Agegroup_s + (1 + Critical + Agegroup_s + Trial_block | Item.no), family = "binomial", optimizer = "bobyqa", control=glmerControl(optCtrl=list(maxfun=100000)), data = Monoling_backdiff_trial)
bdmodel_mono_item3_trial_s_all <- allFit(bdmodel_mono_item3_trial_s)
back_diffs_trial_s <- capture.output(summary(bdmodel_mono_item3_trial_s_all$bobyqa))
write(back_diffs_trial_s, "back_diffs_trial_s.txt")
# try interaction
# bdmodel_mono_int <- glmer(Score ~ Critical * Type * Agegroup + (1 + Critical + Agegroup | Item.no), family = "binomial", optimizer = "bobyqa", control=glmerControl(optCtrl=list(maxfun=100000)), data = Monoling_backdiff)
# bdmodel_mono_int_all <- allFit(bdmodel_mono_int)
# summary(bdmodel_mono_int_all$bobyqa)
# fails to converge |
961497a1b83975c942694114a802309e150e17f6 | 0a906cf8b1b7da2aea87de958e3662870df49727 | /grattan/inst/testfiles/anyOutside/libFuzzer_anyOutside/anyOutside_valgrind_files/1610387048-test.R | cda84345ea6bfb2744aac9d1f5ed94fcd088d22f | [] | no_license | akhikolla/updated-only-Issues | a85c887f0e1aae8a8dc358717d55b21678d04660 | 7d74489dfc7ddfec3955ae7891f15e920cad2e0c | refs/heads/master | 2023-04-13T08:22:15.699449 | 2021-04-21T16:25:35 | 2021-04-21T16:25:35 | 360,232,775 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 325 | r | 1610387048-test.R | testlist <- list(a = 16203008L, b = 512L, x = c(-5242946L, -1694498817L, -738263040L, 0L, 5855577L, 1499027801L, 1499017049L, 1499027801L, 688392448L, 1499027801L, 1499027712L, 589657L, 1499027801L, 1499027967L, -1L, -1L, -1L, NA, -1L, -1L, -1L, -1L, -1L, -1L))
result <- do.call(grattan:::anyOutside,testlist)
str(result) |
ddf4b16e14cbf8cb2fa5c55aba01f61770676b7f | 9262e777f0812773af7c841cd582a63f92d398a4 | /inst/userguide/figures/STS--Cs703_multivariate.R | 0a8e62d7d0e62c154d6b48ab91cf764178d4d78d | [
"CC0-1.0",
"LicenseRef-scancode-public-domain"
] | permissive | nwfsc-timeseries/MARSS | f0124f9ba414a28ecac1f50c4596caaab796fdd2 | a9d662e880cb6d003ddfbd32d2e1231d132c3b7e | refs/heads/master | 2023-06-07T11:50:43.479197 | 2023-06-02T19:20:17 | 2023-06-02T19:20:17 | 438,764,790 | 1 | 2 | NOASSERTION | 2023-06-02T19:17:41 | 2021-12-15T20:32:14 | R | UTF-8 | R | false | false | 384 | r | STS--Cs703_multivariate.R | ###################################################
### code chunk number 31: Cs703_multivariate
###################################################
vy <- var(y, na.rm = TRUE) / 100
mod.list.x <- list(
x0 = matrix(list("x0", 0), nrow = 2), tinitx = 1,
V0 = matrix(1e+06 * vy, 2, 2) + diag(1e-10, 2),
Q = ldiag(list(q, "qt")),
B = matrix(c(1, 0, 1, 1), 2, 2),
U = "zero"
)
|
18cace208629509c65b58e14417bd701e5cf707d | efb0f64b55db89f734fe1c76b7f0393aea0d908d | /R/toyPackage.R | de9c2edd0e8f9c05ad07ca152f28c3beeaf2c85a | [] | no_license | rebeccaferrell/toyPackage | c5f29d12f525a60f2eca2689585e8579d5161391 | d3a742d9f5e3593b16cf42896c89cbd54bbca75b | refs/heads/master | 2016-09-05T08:52:53.273403 | 2015-07-22T18:01:32 | 2015-07-22T18:01:32 | 39,464,167 | 0 | 2 | null | 2015-07-22T18:01:33 | 2015-07-21T18:58:23 | R | UTF-8 | R | false | false | 104 | r | toyPackage.R | #' A toy package
#'
#' Does nothing.
#' Nothing at all.
#'
#' @docType package
#' @name toyPackage
NULL
|
bea83343ba8eaef877eddb93eda856a97460f6ee | 28bb2ce209e8ab2e650ca4c78f46284860e63bac | /src/Ad_GABasedOpt.R | e9fbbb79fdba117a2c59832b9ecce48370f92c35 | [
"Apache-2.0"
] | permissive | Seager1989/HOpt4SMSD | 382c2660af7496aa95c5a95d86665e10bb610649 | e8d0154c9813594e245c9902f4d1e9650c0a758e | refs/heads/main | 2023-04-11T08:49:59.435654 | 2021-04-21T18:51:27 | 2021-04-21T18:51:27 | 314,405,056 | 2 | 0 | null | null | null | null | UTF-8 | R | false | false | 8,639 | r | Ad_GABasedOpt.R | library (mlr)
library (mlbench)
library(readr)
library(mlrMBO)
library(emoa)
library(DiceKriging)
library(rgenoud)
library(mxnet)
library(PMCMR)
library(PMCMRplus)
library(metaheuristicOpt)
#######################################################################################################################
## Optimizing the established models
#construct the learners list for Tube
Tube_learners=c('Tube_model_GPR','Tube_model_SVM','Tube_model_RFR','Tube_model_mxnet')
#the circulation for the tube optimization
Tube_optimum<-matrix(nrow = 4,ncol = 10)
for (i in 1:4) {
Tubef <- function(X){
names(X)<-c("T1","T2","T3","T4","T5","T6","T7","T8","T9")
X=as.data.frame(X)
X=t(X)
X=as.data.frame(X)
trans=predict(get(Tube_learners[i]),newdata=X)
re<-trans$data$response
return(re)}
## Define parameters of GA algorithms
Pm <- 0.1
Pc <- 0.8
numVar <- 9
rangeVar <- matrix(c(0,1), nrow=2)
## calculate the optimum solution using GA
Tube_optimum[i,1:9] <- GA(Tubef, optimType="MAX", numVar, numPopulation=20, maxIter=100, rangeVar, Pm, Pc)
## calculate the optimum value using trained learner
Tube_optimum[i,10] <- Tubef(Tube_optimum[i,1:9])
}
####################################################################################################################
########construct the learners list for Sbeam
SbeamCAD_learners=c('SbeamCAD_model_GPR','SbeamCAD_model_SVM','SbeamCAD_model_RFR','SbeamCAD_model_mxnet')
#Define dataset to save optimum
SbeamCAD_optimum<-matrix(nrow = 4,ncol = 8)
#the circulation for the Sbeam optimization
for (i in 1:4) {
SbeamCADf <- function(X){
names(X)<-c("AN", "T", "DR", "UL", "UR", "H", "CL")
X=as.data.frame(X)
X=t(X)
X=as.data.frame(X)
trans=predict(get(SbeamCAD_learners[i]),newdata=X)
re<-trans$data$response
return(re)}
## Define parameters of GA algorithms
Pm <- 0.1
Pc <- 0.8
numVar <- 7
rangeVar <- matrix(c(0,1), nrow=2)
## calculate the optimum solution using GA
SbeamCAD_optimum[i,1:7] <- GA(SbeamCADf, optimType="MAX", numVar, numPopulation=20, maxIter=100, rangeVar, Pm, Pc)
## calculate the optimum value using trained learner
SbeamCAD_optimum[i,8] <- SbeamCADf(SbeamCAD_optimum[i,1:7])
}
#########################################################################################################################
#construct the learners list for Ten bars
Tenbars_learners=c('Tenbars_model_GPR','Tenbars_model_SVM','Tenbars_model_RFR','Tenbars_model_mxnet')
#Define dataset to save optimum
Tenbars_optimum<-matrix(nrow = 4,ncol = 11)
#the circulation for the Tenbars optimization
for (i in 1:4) {
Tenbarsf <- function(X){
names(X)<-c("A1" , "A2" , "A3" , "A4" , "A5" , "A6" , "A7" , "A8" , "A9" , "A10")
X=as.data.frame(X)
X=t(X)
X=as.data.frame(X)
trans=predict(get(Tenbars_learners[i]),newdata=X)
dis4<-trans$data$response
if ((555.58*dis4+36.6836) < 100){
mass=9144*(22520*(X[,1]+X[,2]+X[,3]+X[,4]+X[,5]+X[,6])+360)+12931.6*(22520*(X[,7]+X[,8]+X[,9]+X[,10])+240)
} else {
mass=1e30
}
return(mass)
}
## Define parameters of GA algorithms
Pm <- 0.1
Pc <- 0.8
numVar <- 10
rangeVar <- matrix(c(0,1), nrow=2)
## calculate the optimum solution using GA
Tenbars_optimum[i,1:10] <- GA(Tenbarsf, optimType="MIN", numVar, numPopulation=20, maxIter=100, rangeVar, Pm, Pc)
## calculate the optimum value using trained learner
Tenbars_optimum[i,11] <- Tenbarsf(Tenbars_optimum[i,1:10])
}
#############################################################################################################################
#construct the learners list for Ten bars
TorsionB_learners=c('TorsionB_model_GPR','TorsionB_model_SVM','TorsionB_model_RFR','TorsionB_model_mxnet')
#Define dataset to save optimum
TorsionB_optimum<-matrix(nrow = 4,ncol = 15)
#the circulation for the TorsionB optimization
for (i in 1:4) {
TorsionBf <- function(X){
names(X)<-c("Y1","Y2","Y3","Y4","Y5","Y6","Y7","Y8","Y9","Y10","r1","r2","X1","X2")
X=as.data.frame(X)
X=t(X)
X=as.data.frame(X)
trans=predict(get(TorsionB_learners[i]),newdata=X)
X=data.matrix(X)
stress=predict(Torsiob_ANN_stress,X)
stress=354.21+stress*78018.99
if (stress<800){
re<-trans$data$response
} else {
re<-100
}
return(re)}
## Define parameters of GA algorithms
Pm <- 0.1
Pc <- 0.8
numVar <- 14
rangeVar <- matrix(c(0,1), nrow=2)
## calculate the optimum solution using GA
TorsionB_optimum[i,1:14] <- GA(TorsionBf, optimType="MIN", numVar, numPopulation=20, maxIter=100, rangeVar, Pm, Pc)
## calculate the optimum value using trained learner
TorsionB_optimum[i,15] <- TorsionBf(TorsionB_optimum[i,1:14])
}
##############################################################################################################################
#tube real optimum
Tube_optimum_R<-Tube_optimum[,dim(Tube_optimum)[2]]*Est_tube_scale+Est_tube_min
Tube_var<-matrix(data=NA,nrow=dim(Tube_optimum)[1],ncol=dim(Tube_optimum)[2])
Tube_var_max<-apply(DTube,2,max)
Tube_var_min<-apply(DTube,2,min)
Tube_var_range<-Tube_var_max-Tube_var_min
for (i in 1:dim(Tube_optimum)[1]){
Tube_var[i,]<-Tube_optimum[i,]*Tube_var_range+Tube_var_min
}
#SbeamCAD real optimum
SbeamCAD_optimum_R<-SbeamCAD_optimum[,dim(SbeamCAD_optimum)[2]]*Est_sbeam_scale+Est_sbeam_min
#construct a null matrix
SbeamCAD_var<-matrix(data = NA,nrow = dim(SbeamCAD_optimum)[1],ncol = dim(SbeamCAD_optimum)[2])
SbeamCAD_var_max<-apply(DSbeamCAD,2,max)
SbeamCAD_var_min<-apply(DSbeamCAD,2,min)
SbeamCAD_var_range<-SbeamCAD_var_max-SbeamCAD_var_min
for (i in 1:dim(SbeamCAD_optimum)[1]){
SbeamCAD_var[i,]<-SbeamCAD_optimum[i,]*SbeamCAD_var_range+SbeamCAD_var_min
}
#Ten bars real optimum
Tenbars_var<-matrix(data = NA,nrow = dim(Tenbars_optimum)[1],ncol = dim(Tenbars_optimum)[2])
Tenbars_var_max<-apply(DTenbars,2,max)
Tenbars_var_min<-apply(DTenbars,2,min)
Tenbars_var_range<-Tenbars_var_max[-dim(Tenbars_optimum)[2]]-Tenbars_var_min[-dim(Tenbars_optimum)[2]]
for (i in 1:dim(Tenbars_optimum)[1]){
Tenbars_var[i,-dim(Tenbars_optimum)[2]]<-Tenbars_optimum[i,-dim(Tenbars_optimum)[2]]*Tenbars_var_range+Tenbars_var_min[-dim(Tenbars_optimum)[2]]
}
Tenbars_var[,dim(Tenbars_optimum)[2]]<-Tenbars_optimum[,dim(Tenbars_optimum)[2]]
#Torsion Bars real optimum
TorsionB_optimum_R<-TorsionB_optimum[,dim(TorsionB_optimum)[2]]*Est_torsionb_scale+Est_torsionb_min
TorsionB_var<-matrix(data = NA,nrow = dim(TorsionB_optimum)[1],ncol = dim(TorsionB_optimum)[2])
TorsionB_var_max<-apply(DTorsionB,2,max)
TorsionB_var_min<-apply(DTorsionB,2,min)
TorsionB_var_range<-TorsionB_var_max-TorsionB_var_min
for (i in 1:dim(TorsionB_optimum)[1]){
TorsionB_var[i,]<-TorsionB_optimum[i,]*TorsionB_var_range+TorsionB_var_min
}
## present the optimum data
print(Tube_var)
print(SbeamCAD_var)
print(Tenbars_var)
print(TorsionB_var)
colnames(Tube_var)<-colnames(DTube)
colnames(SbeamCAD_var)<-colnames(DSbeamCAD)
colnames(Tenbars_var)<-colnames(DTenbars)
colnames(TorsionB_var)<-colnames(DTorsionB )
rownames(Tube_var)<-c('GPR','SVM','RFR','mxnet')
rownames(SbeamCAD_var)<-c('GPR','SVM','RFR','mxnet')
rownames(Tenbars_var)<-c('GPR','SVM','RFR','mxnet')
rownames(TorsionB_var)<-c('GPR','SVM','RFR','mxnet')
##write out the data
library(xlsx)
write.xlsx(Tube_var, file = "C:/Users/DUX1/Desktop/PHD Project/3rd Deep learning/3.1 Three other machine learning R/Hypereffectstudy/Fourexample_4MLs_optimum.xlsx",
sheetName="Tube_optimum_4MLs", col.names=TRUE, row.names=TRUE, append=FALSE, showNA=TRUE, password=NULL)
write.xlsx(SbeamCAD_var, file = "C:/Users/DUX1/Desktop/PHD Project/3rd Deep learning/3.1 Three other machine learning R/Hypereffectstudy/Fourexample_4MLs_optimum.xlsx",
sheetName="SbeamCAD_optimum_4MLs", col.names=TRUE, row.names=TRUE, append=TRUE, showNA=TRUE, password=NULL)
write.xlsx(Tenbars_var, file = "C:/Users/DUX1/Desktop/PHD Project/3rd Deep learning/3.1 Three other machine learning R/Hypereffectstudy/Fourexample_4MLs_optimum.xlsx",
sheetName="Tenbars_optimum_4MLs", col.names=TRUE, row.names=TRUE, append=TRUE, showNA=TRUE, password=NULL)
write.xlsx(TorsionB_var, file = "C:/Users/DUX1/Desktop/PHD Project/3rd Deep learning/3.1 Three other machine learning R/Hypereffectstudy/Fourexample_4MLs_optimum.xlsx",
sheetName="TorsionB_optimum_4MLs", col.names=TRUE, row.names=TRUE, append=TRUE, showNA=TRUE, password=NULL)
|
4dc6f5e9999174a66a7087cee8bd621c8e022c9e | f7128aa987adcdd60c4a65c2f0c2bec2c1920d94 | /cachematrix.R | 172daa98b3a5e479d67a3a44315f00345c1be731 | [] | no_license | bliner/ProgrammingAssignment2 | 9237c8ada8c8200aa8402b79dadd55d2db474b1b | 0d7c7cd98e55be7cce5afd833eba94afc844b76e | refs/heads/master | 2021-01-21T16:27:28.554610 | 2016-07-17T22:25:46 | 2016-07-17T22:25:46 | 63,555,078 | 0 | 0 | null | 2016-07-17T22:23:45 | 2016-07-17T22:23:44 | null | UTF-8 | R | false | false | 1,386 | r | cachematrix.R | # The first function, makeCacheMatrix creates a special "matrix", which is really a list containing a function to
# set the value of the matrix
# get the value of the matrix
# set the value of the inverse matrix
# get the value of the inverse matrix
makeCacheMatrix <- function(x = matrix()) {
MatrInv <- NULL # sets the cache empty
set <- function(y) {
x <<- y
MatrInv <<- NULL
}
get <- function() x
setinverse <- function(inverse) MatrInv <<- inverse
getinverse <- function() MatrInv
list(set=set, get=get, setinverse=setinverse, getinverse=getinverse)
}
# The following function calculates the inverse of the special "matrix" created with the above function.
# It first checks to see if the inverse has already been calculated
# If so, it gets the inverse from the cache and skips the computation.
# Otherwise, it calculates the inverse of the matrix and caches it via the setinverse function.
# This function assumes that the matrix is always invertible.
cacheSolve <- function(x, ...) {
MatrInv <- x$getinverse()
if(!is.null(MatrInv)) {
message("getting cached data.")
return(MatrInv)
}
data <- x$get()
MatrInv <- solve(data)
x$setinverse(MatrInv)
MatrInv
}
# tests:
input<-matrix(c(1,2.5,2.5,3.4),2,2)
tempMatrix<-makeCacheMatrix(input)
tempMatrix$get()
cacheSolve(tempMatrix)
cacheSolve(tempMatrix)
|
0bc9881afaca7f2f0876966d3669ccf9f64295d3 | 563de1dae5d80e271821a945a985dc5e3e96e31c | /man/catheter.Rd | b8202b0864bdda3da051f95151537b6b7c5635b1 | [] | no_license | cran/MetaAnalyser | 1d17bb2f111c73520ec611213620e9470c0937a5 | bd4ab24e1dff83c13fdb091de8d55e65869f3634 | refs/heads/master | 2021-01-20T19:08:53.428224 | 2016-10-13T00:24:55 | 2016-10-13T00:24:55 | 65,407,026 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,244 | rd | catheter.Rd | \name{catheter}
\alias{catheter}
\docType{data}
\title{
Meta-analysis of antibacterial catheter coating
}
\description{
Data on the effectiveness of silver sulfadiazine coating on venous
catheters for preventing bacterial colonisation of the catheter
and bloodstream infection. A modified version of the data provided
by the \pkg{rmeta} package, excluding four small or uninformative studies.
}
\usage{data("catheter")}
\format{
A data frame with 11 observations on the following 3 variables.
\describe{
\item{\code{name}}{Study name}
\item{\code{est}}{Log odds ratio of bacteria colonisation (treatment
compared to control)}
\item{\code{se}}{Corresponding standard error}
}
}
\details{
The Appavi, Pemberton, Logghe and Bach (a) studies are excluded.
The data here are produced from the source numerators and denominators using the
\code{meta.MH} method in \pkg{rmeta}.
}
\source{
Veenstra D et al (1998) "Efficacy of Antiseptic Impregnated
Central Venous Catheters in Preventing Nosocomial Infections: A
Meta-analysis" JAMA 281:261-267
}
\references{
The \pkg{rmeta} package (Lumley, 2012).
}
\examples{
\dontrun{
MetaAnalyser(catheter)
}
}
\keyword{datasets}
# Source: /R/geom_hpline.R from wilkelab/ungeviz
#' Draw point-like short line segments
#'
#' The geoms `geom_hpline()` and `geom_vpline()` can be used as a drop-in
#' replacement for [`geom_point()`] but draw horizontal or vertical lines
#' (point-lines, or plines) instead of points. These lines can often be useful to
#' indicate specific parameter estimates in a plot. The geoms take position
#' aesthetics as `x` and `y` like [`geom_point()`], and they use `width` or `height`
#' to set the length of the line segment. All other aesthetics (`colour`, `size`,
#' `linetype`, etc.) are inherited from [`geom_segment()`].
#' @inheritParams ggplot2::geom_point
#' @examples
#' library(ggplot2)
#' ggplot(iris, aes(Species, Sepal.Length)) +
#' geom_hpline(stat = "summary")
#'
#' ggplot(iris, aes(Species, Sepal.Length)) +
#' geom_point(position = "jitter", size = 0.5) +
#' stat_summary(aes(colour = Species), geom = "hpline", width = 0.6, size = 1.5)
#'
#' ggplot(iris, aes(Sepal.Length, Species, color = Species)) +
#' geom_point(color = "grey50", alpha = 0.3, size = 2) +
#' geom_vpline(data = sampler(5, 1, group = Species), height = 0.4) +
#' scale_color_brewer(type = "qual", palette = 2, guide = "none") +
#' facet_wrap(~.draw) +
#' theme_bw()
#' @export
geom_hpline <- function(mapping = NULL, data = NULL,
stat = "identity", position = "identity",
...,
na.rm = FALSE,
show.legend = NA,
inherit.aes = TRUE) {
layer(
data = data,
mapping = mapping,
stat = stat,
geom = GeomHpline,
position = position,
show.legend = show.legend,
inherit.aes = inherit.aes,
params = list(
na.rm = na.rm,
...
)
)
}
#' @rdname geom_hpline
#' @format NULL
#' @usage NULL
#' @export
GeomHpline <- ggproto("GeomHpline", GeomSegment,
required_aes = c("x", "y"),
non_missing_aes = c("size", "colour", "linetype", "width"),
default_aes = aes(
width = 0.5, colour = "black", size = 2, linetype = 1,
alpha = NA
),
draw_panel = function(self, data, panel_params, coord, arrow = NULL, arrow.fill = NULL,
lineend = "butt", linejoin = "round", na.rm = FALSE) {
    # dplyr::mutate() evaluates its arguments sequentially: x is first shifted to
    # the left end of the segment, so xend = x + width lands at the original
    # x + width/2
    data <- mutate(data, x = x - width/2, xend = x + width, yend = y)
ggproto_parent(GeomSegment, self)$draw_panel(
data, panel_params, coord, arrow = arrow, arrow.fill = arrow.fill,
lineend = lineend, linejoin = linejoin, na.rm = na.rm
)
}
)
#' @rdname geom_hpline
#' @export
geom_vpline <- function(mapping = NULL, data = NULL,
stat = "identity", position = "identity",
...,
na.rm = FALSE,
show.legend = NA,
inherit.aes = TRUE) {
layer(
data = data,
mapping = mapping,
stat = stat,
geom = GeomVpline,
position = position,
show.legend = show.legend,
inherit.aes = inherit.aes,
params = list(
na.rm = na.rm,
...
)
)
}
#' @rdname geom_hpline
#' @format NULL
#' @usage NULL
#' @export
GeomVpline <- ggproto("GeomVpline", GeomSegment,
required_aes = c("x", "y"),
non_missing_aes = c("size", "colour", "linetype", "height"),
default_aes = aes(
height = 0.5, colour = "black", size = 2, linetype = 1,
alpha = NA
),
draw_panel = function(self, data, panel_params, coord, arrow = NULL, arrow.fill = NULL,
lineend = "butt", linejoin = "round", na.rm = FALSE) {
    # as in GeomHpline: mutate() evaluates sequentially, so y is first shifted to
    # the bottom of the segment and yend = y + height ends at the original
    # y + height/2
    data <- mutate(data, y = y - height/2, yend = y + height, xend = x)
ggproto_parent(GeomSegment, self)$draw_panel(
data, panel_params, coord, arrow = arrow, arrow.fill = arrow.fill,
lineend = lineend, linejoin = linejoin, na.rm = na.rm
)
}
)
% Source: /man/RandomFields.Rd from cran/RandomFields
\name{RandomFields-package}
\alias{RandomFields}
\alias{RandomFields-package}
\docType{package}
\title{Simulation and Analysis of Random Fields}
\description{
The package \code{RandomFields} offers various tools for
\enumerate{
\item{\bold{model estimation (ML) and inference (tests)}}
for regionalized variables and data analysis,
\item{\bold{simulation}} of different kinds
of random fields, including
\itemize{
\item multivariate, spatial, spatio-temporal, and non-stationary
Gaussian random fields,
\item Poisson fields, binary fields, Chi2 fields, t fields and
\item max-stable fields.
}
It can also deal with non-stationarity and anisotropy of these
processes and conditional simulation (for Gaussian random fields,
currently).
% \item{\bold{model estimation}} for (geostatistical) linear (mixed) models
}
See \mysoftware
for \bold{intermediate updates}.
}
\details{
The following features are provided by the package:
\enumerate{
\item \bold{Bayesian Modelling}
\itemize{
\item See \link{Bayesian Modelling} for an introduction to
hierarchical modelling.
}
\item \bold{Coordinate systems}
\itemize{
\item Cartesian, earth and spherical coordinates are
recognized, see \link{coordinate systems} for details.
\item A list of valid models is given by
\link{spherical models}.
}
\item \bold{Data and example studies}:
Some data sets and published code are provided to illustrate the
syntax and structure of the package functions.
\itemize{
\item \code{\link{soil}} : soil physical data
\item \code{\link{weather}} : UWME weather data
\item \code{\link{papers}} : code used in the papers published by
the author(s)
}
\item \bold{Estimation of parameters (for second-order random fields)}
\itemize{
\item \command{\link{RFfit}} : general function for estimating
parameters; (for Gaussian random fields)
\item \command{\link{RFhurst}} : estimation of the Hurst parameter
\item \command{\link{RFfractaldim}} : estimation of the fractal
dimension
\item \command{\link{RFvariogram}} : calculates
the empirical variogram
\item \command{\link{RFcov}} : calculates
the empirical (auto-)covariance function
}
\item \bold{Graphics}
\itemize{
\item Fitting a covariance function manually
\command{\link{RFgui}}
\item the generic function \command{\link[graphics]{plot}}
\item global graphical parameters with \command{\link{RFpar}}
}
\item \bold{Inference (for Gaussian random fields)}
\itemize{
\item \command{\link{RFcrossvalidate}} : cross validation
\item \command{\link{RFlikelihood}} : likelihood
\item \command{\link{RFratiotest}} : likelihood ratio test
\item \command{\link[=AIC.RF_fit]{AIC}},
\command{\link[=AICc.RF_fit]{AICc}},
\command{\link[=BIC.RF_fit]{BIC}}, \command{\link[=anova.RF_fit]{anova}},
\command{\link[=logLik.RFfit]{logLik}}
}
\item \bold{Models}
\itemize{
\item For an introduction and general properties, see
\link{RMmodels}.
\item For an overview over classes of
covariance and variogram models --e.g. for
\bold{geostatistical} purposes-- see \link{RM}. More
sophisticated models
and covariance function operators are included.
% \item To apply the offered package procedures to \bold{mixed models}
% -- e.g. appearing in genetical data analysis-- see
\item \command{\link{RFformula}} reports a new style of passing a
model since version 3.3.
\item definite models are evaluated by \command{\link{RFcov}},
\command{\link{RFvariogram}} and \command{\link{RFcovmatrix}}.
For a quick impression use \code{\link{plot}(model)}.
\item non-definite models are evaluated by \command{\link{RFfctn}} and
\command{\link{RFcalc}}
\item \command{\link{RFlinearpart}} returns the linear part of a
model
\item \command{\link{RFboxcox}} deals explicitly with Box-Cox
transformations. In many cases it is performed implicitly.
}
\item \bold{Prediction (for second-order random fields)}
\itemize{
\item \command{\link{RFinterpolate}} : kriging, including imputing
}
\item \bold{Simulation}
\itemize{
\item \command{\link{RFsimulate}}: Simulation
of random fields,
including conditional simulation. For a list of all covariance
functions and variogram models see \command{\link{RM}}.
Use \command{\link{plot}} for visualisation of the result.
}
\item \bold{S3 and S4 objects}
\itemize{
\item The functions return S4 objects
based on the package \pkg{sp},
if \code{\link[=RFoptions]{spConform=TRUE}}.
This is the default.
If \code{\link[=RFoptions]{spConform=FALSE}},
simple objects as in version 2 are returned.
These simple objects are frequently provided with an S3 class.
This option makes the returning procedure much faster, but
currently does not allow for the comfortable use of
\command{\link[=plot-method]{plot}}.
\item \command{\link[graphics]{plot}},
\command{\link[base]{print}}, \command{\link[base]{summary}},
sometimes also \command{\link[utils]{str}} recognise these S3 and S4
objects
\item use \command{\link{sp2RF}} for an explicit transformation
of \pkg{sp} objects to S4 objects of \pkg{RandomFields}.
\item
Further generic functions are available for fitted models,
see \sQuote{Inference} above.
% \item \bold{Note} that, in many cases, \command{print} will return
% an invisible list. This list contains the main information of the
% S4 object in an accessible way and is in many cases the
% information obtained from \code{summary}. See examples below.
}
\item \bold{Xtended} features, especially for package programmers
\itemize{
\item might decide on a large variety of arguments of the
simulation and estimation procedures using the function
\command{\link{RFoptions}}
\item may use \sQuote{./configure
--with-tcl-config=/usr/lib/tcl8.5/tclConfig.sh
--with-tk-config=/usr/lib/tk8.5/tkConfig.sh} to configure R
}
}
}
\section{Changings}{
  A list of major changes from Version 2 to Version 3 can be found
in \link{MajorRevisions}.
  \link{Changings} lists some further changes, in particular to
  arguments and argument names.
\pkg{RandomFields} should be rather
stable when running it with \pkg{parallel}.
However \pkg{RandomFields} might crash severely if an error occurs
when running in parallel. When used with \pkg{parallel},
you might set \code{RFoptions(cores = 1)}. Note that
\code{RFoptions(cores = ...)} with more than 1 core uses another level
  of parallelism which will be in competition with \pkg{parallel}
during runtime.
}
% In the beta version, the following functionalities are currently
% not available:
% \itemize{
% \item \command{\link{ShowModels}}
% \item numerical evaluation of the covariance function in tbm2
% \item Harvard Rue's Markov fields
% }
\seealso{
See also \link{RF}, \link{RM}, \link{RP}, \link{RR}, \link{RC}, \link{R.}
}
\note{
The following packages enable further choices for the optimizer
(instead of \command{optim}) in RandomFields:
\pkg{optimx}, \pkg{soma}, \pkg{GenSA}, \pkg{minqa}, \pkg{pso},
\pkg{DEoptim}, \pkg{nloptr}, \pkg{RColorBrewer}, \pkg{colorspace}
}
\section{Update}{
Current updates are available through \mysoftware.
}
\me
\references{
\itemize{
\item
Singleton, R.C. (1979). In \emph{Programs for Digital Signal Processing}
Ed.: Digital Signal Processing Committee and IEEE Acoustics,
Speech, and Signal Processing Committe (1979)
IEEE press.
\item
Schlather, M., Malinowski, A., Menck, P.J., Oesting, M. and
Strokorb, K. (2015)
Analysis, simulation and prediction of multivariate
random fields with package \pkg{RandomFields}. \emph{
Journal of Statistical Software}, \bold{63} (8), 1-25,
url = \sQuote{http://www.jstatsoft.org/v63/i08/}
\item
see also the corresponding \href{../doc/multivariate_jss.pdf}{vignette}.
}
}
\section{Contributions}{
\itemize{
\item Contributions to version 3.0 and following:\cr
Felix Ballani (TU Bergakademie Freiberg; Poisson Polygons, 2014) \cr
Daphne Boecker (Univ. Goettingen; RFgui, 2011)\cr
Katharina Burmeister (Univ. Goettingen; testing, 2012)\cr
Sebastian Engelke (Univ. Goettingen; RFvariogram, 2011-12)\cr
Sebastian Gross (Univ. Goettingen; tilde formulae, 2011)\cr
Alexander Malinowski (Univ. Mannheim; S3, S4 classes 2011-13)\cr
Juliane Manitz (Univ. Goettingen; testing, 2012)\cr
Johannes Martini (Univ. Goettingen; RFvariogram,
2011-12)\cr
Ulrike Ober (Univ. Goettingen; help pages, testing, 2011-12)\cr
Marco Oesting (Univ. Mannheim; Brown-Resnick processes, Kriging, Trend,
2011-13)\cr
    Paulo Ribeiro (Universidade Federal do Parana; code adopted
from \pkg{geoR}, 2014)\cr
Kirstin Strokorb (Univ. Mannheim; help pages, 2011-13)\cr
\item Contributions to version 2.0 and following:\cr
Peter Menck (Univ. Goettingen; multivariate circulant embedding)\cr
R Core Team, Richard Singleton (fft.c and advice)
\item Contributions to version 1 and following:\cr
Ben Pfaff, 12167 Airport Rd, DeWitt MI 48820, USA making available
an algorithm for AVL trees (avltr*)
}
}
\section{Thanks}{
Patrick Brown : comments on Version 3\cr
Paulo Ribeiro : comments on Version 1\cr
Martin Maechler : advice for Version 1
}
\section{Financial support}{
\itemize{
\item
V3.0 has been financially supported by the German Science Foundation
(DFG) through the Research Training Group 1953 \sQuote{Statistical
Modeling of Complex Systems and Processes --- Advanced Nonparametric
Approaches} (2013-2018).
\item
V3.0 has been financially supported by Volkswagen Stiftung within
the project \sQuote{WEX-MOP} (2011-2014).
\item
Alpha versions for V3.0 have been
financially supported by the German Science Foundation (DFG) through the
Research Training Groups 1644 \sQuote{Scaling problems in Statistics}
and 1023 \sQuote{Identification in Mathematical Models} (2008-13).
\item
V1.0 has been financially supported by
the German Federal Ministry of Research and Technology
(BMFT) grant PT BEO 51-0339476C during 2000-03.
\item
V1.0 has been financially supported by the EU TMR network
ERB-FMRX-CT96-0095 on
``Computational and statistical methods for the analysis of spatial
data'' in 1999.
}
}
\keyword{spatial}
\examples{\dontshow{StartExample(reduced=FALSE)}
RFoptions(seed=0) ## *ANY* simulation will have the random seed 0; set
## RFoptions(seed=NA) to make them all random again
# simulate some data first (Gaussian random field with exponential
# covariance; 6 realisations)
model <- RMexp()
x <- seq(0, 10, 0.1)
z <- RFsimulate(model, x, x, n=6)
## select some data from the simulated data
xy <- coordinates(z)
pts <- sample(nrow(xy), min(100, nrow(xy) / 2))
dta <- matrix(nrow=nrow(xy), as.vector(z))[pts, ]
dta <- cbind(xy[pts, ], dta)
plot(z, dta)
## re-estimate the parameter (true values are 1)
estmodel <- RMexp(var=NA, scale=NA)
(fit <- RFfit(estmodel, data=dta))
## show a kriged field based on the estimated parameters
kriged <- RFinterpolate(fit, x, x, data=dta)
plot(kriged, dta)
\dontshow{FinalizeExample()}}
# Source: /06_RegressionModel/RM/learn_rm.R from enliktjioe/r_projects_datascience
library(dplyr)
library(leaps)
library(MASS)
copiers <- read.csv("copiers.csv")
hist(copiers$Sales, breaks=20)
boxplot(copiers$Sales)
hist(copiers$Sales)
library(manipulate)
expr1 <- function(mn){
mse <- mean((copiers$Sales - mn)^2)
  hist(copiers$Sales, breaks=20, main=paste0("mean = ", mn, ", MSE = ", round(mse, 2)))
abline(v=mn, lwd=2, col="darkred")
}
manipulate(expr1(mn), mn=slider(1260, 1360, step=10))
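# Aside (added, self-contained): the MSE explored with the slider above is a
# quadratic in mn whose minimum sits exactly at the sample mean -- quick check:
z <- rnorm(50, mean = 1300, sd = 40)
mse <- function(mn) mean((z - mn)^2)
stopifnot(all(mse(mean(z)) <= sapply(seq(min(z), max(z), length.out = 200), mse)))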
############################################################
## ##
############################################################
#Hands-on
library(dplyr)
retail <- read.csv("data_input/retail.csv")
str(retail)
retail.accessories <- retail %>%
filter(Sub.Category == "Accessories") %>%
dplyr::select("Ship.Mode", "Segment", "Quantity", "Sales", "Discount", "Profit")
summary(retail.accessories)
model1 <- lm(formula = Profit ~ ., data = retail.accessories)
summary(step(model1, direction = "backward"))
############################################################
## ##
############################################################
data("mtcars")
cars <- lm(mpg ~ ., mtcars)
cars.none <- lm(mpg ~ 1, mtcars)
summary(step(cars, direction = "backward"))
summary(step(cars.none, scope = list(lower = cars.none, upper = cars), direction = "forward"))
summary(step(cars.none, scope = list(upper = cars), direction = "both"))
# Source: /src/job_submit_loop.R from RCN-ECS/CnGV
######################
## Cluster For Loop ##
######################
require(tidyr)
require(ggplot2)
require(dplyr)
# Read in Parameter table
df1 = read.csv("df.csv")
chunk_size <- 1000
df1$chunks <- ggplot2::cut_interval(1:length(unique(df1$row)), length=chunk_size, labels=FALSE)
#chunks1 = data.frame("chunks" = chunks, group = unique(df1$group))
#job_df = full_join(chunks1, df1, by = "group")
for(j in 1:length(unique(df1$chunks))){
df = df1[df1$chunks == j,]
# Create and submit job for each row
for(i in 1:nrow(df)){
id = unique(df$row)[i]
filename <- id
#fileConn<-file(print(paste0(filename,".bash")))
fileConn<-file(paste0(filename,".bash"))
    # assemble a SLURM batch script for this row; it is submitted below via sbatch
    writeLines(c("#!/bin/bash",
##"#SBATCH --reservation=lotterhos",
"#SBATCH --nodes=1",
"#SBATCH --tasks-per-node=1",
paste0("#SBATCH --job-name=",filename,".txt"),
"#SBATCH --mem=500Mb",
"#SBATCH --mail-user=m.albecker@northeastern.edu",
"#SBATCH --mail-type=FAIL",
## "#SBATCH --partition=lotterhos",
"#SBATCH --partition=short",
"#SBATCH --time=03:00:00",
##paste0("#SBATCH --output=",filename,".output"),
##paste0("#SBATCH --error=",filename,".error"),
paste0("Rscript --vanilla Cov_GxE_clusterFun.R ",
df$row[i]," ",
df$n_pop[i]," ",
df$sample_size[i]," ",
df$std_dev[i]," ",
df$n_env[i]," ",
df$delta_env[i]," ",
df$delta_gen[i]," ",
df$interaction[i]," ",
df$errpop[i]," ",
df$replicate[i]," ",
df$env_scenario[i]," ",
df$seed[i])
), fileConn)
system(paste0("sbatch ",filename,".bash"))
Sys.sleep(5)
}
Sys.sleep(14400) # 4 hours
}
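# Aside (added, self-contained): ggplot2::cut_interval(..., length = chunk_size)
# above just buckets consecutive ids into blocks of chunk_size; base R does the
# same without the ggplot2 dependency:
ids <- 1:2500
chunk <- ceiling(ids / 1000)
stopifnot(chunk[[1000]] == 1, chunk[[1001]] == 2, max(chunk) == 3)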
% Source: /man/ped.mrode.Rd from luansheng/AGHmatrix
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ped.mrode-data.R
\docType{data}
\name{ped.mrode}
\alias{ped.mrode}
\title{Pedigree Data}
\format{table}
\usage{
data(ped.mrode)
}
\description{
Data from the pedigree example proposed by Mrode (2005)
}
\examples{
data(ped.mrode)
}
\references{
R. A. Mrode, R. Thompson. Linear Models for the Prediction of Animal Breeding Values. CABI, 2005.
}
\keyword{datasets}
# Source: /num_sundays.R from adamacosta/euler
library(lubridate)
days <- seq(ymd("1901-01-01"), ymd("2000-12-31"), by="days")
sum(day(days)==1 & wday(days)==1) # first-of-month days that are Sundays (wday 1 is Sunday)
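# Cross-check in base R (added), locale-independent via %u (ISO weekday; "7" = Sunday):
d <- seq(as.Date("1901-01-01"), as.Date("2000-12-31"), by = "day")
sum(format(d, "%d") == "01" & format(d, "%u") == "7") # should match the lubridate count above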
% Source: /man/calculate_days_accrued_pref.Rd from fxcebx/fundManageR (MIT license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/waterfall_functions.R
\name{calculate_days_accrued_pref}
\alias{calculate_days_accrued_pref}
\title{Calculate accrued preference for a stated period of days}
\usage{
calculate_days_accrued_pref(pct_pref = 0.1, is_actual_360 = T, days = 31,
equity_bb = 1700000, pref_accrued_bb = 0)
}
\arguments{
\item{pref_accrued_bb}{Preferred return already accrued, but unpaid, at the beginning of the period (defaults to 0)}
}
\description{
Calculate accrued preference for a stated period of days
}
\examples{
calculate_days_accrued_pref(pct_pref = .1, is_actual_360 = T, days = 31, equity_bb = 1700000.00, pref_accrued_bb = 0)
}
# Source: /CoVTRxdb/app.R from katkoler/CoVTRxdb (MIT license)
#
# CoVTRxdb
#
library(shiny)
library(dplyr)
library(reactable)
library(readr)
tbl <- read_csv("master_COVID_trials_info.csv")
# colnames(tbl) <- gsub(" ", " ", trimws(gsub("\\.", " ", colnames(tbl))))
# tbl <- tbl %>% select(TrialID, `Scientific title Original`, `Intervention Original`, `Country Curated Extracted`, `size Curated Extracted`, `drugs Curated Extracted`, `isRandomized Curated Extracted`, `isCombination Curated Extracted`)
# colnames(tbl) <- trimws(gsub("Original|Curated Extracted", "", colnames(tbl)))
# tbl <- tbl %>% mutate(drugs = ifelse(drugs == "", NA, drugs), isDrugTrial = !is.na(drugs))
# Define the user interface: title, abstract, trials table, and data sources
ui <- fluidPage(
# Application title
titlePanel("CoVTRxdb"),
fluidRow(
column(8,
# offset = 2,
h3("Abstract"),
           p("COVID-19 clinical trial initiation may not be driven by scientific need:
geographical and social biases appear to be confounders.
To better understand these biases,
we performed a global informatics-driven survey of clinical trial registrations
and built CoVTRxdb, a unified catalog of drugs being tested.
There have been 2,579 COVID-19 drug trials conducted as of September 4th 2020,
with 2.01 million people enrolled.
In each country, although trial registration is correlated with outbreak severity,
the USA, China, and Iran are conducting the most trials.
15.8% of trials test hydroxychloroquine, the most frequently tested drug.
Of 407 hydroxychloroquine trials,
81.6% are randomised and 70% are combined with other drugs.
Remarkably, Twitter activity appears to be closely associated with
hydroxychloroquine trial initiation.
Of the 929,994 tweets about the most frequently trialed COVID-19 drugs,
56.6% mentioned hydroxychloroquine.
Hydroxychloroquine trial prevalence may be a result of social media attention,
especially by world leaders."))
),
fluidRow(
column(8,
h3("Table of curated COVID-19 trials")
)
),
fluidRow(
column(12,
reactableOutput("table2", width = "100%")
)
),
fluidRow(
column(12,
h3("Data Sources"),
p("Raw COVID-19 clinical trial data is publicly available from the WHO ICTRP (",
a("https://www.who.int/clinical-trials-registry-platform"), ")")
)
)
)
# Define server logic required to draw a histogram
server <- function(input, output) {
output$table2 <- renderReactable({
reactable(tbl, filterable = T,
defaultColDef = colDef(
header = function(value) gsub(".", " ", value, fixed = TRUE),
cell = function(value) format(value, nsmall = 1),
align = "center",
minWidth = 70,
headerStyle = list(background = "#f7f7f8")
),
columns = list(
Intervention = colDef(minWidth = 140),
`Scientific title` = colDef(minWidth = 140),
drugs = colDef(minWidth = 100) # overrides the default# overrides the default
),
bordered = TRUE,
highlight = TRUE,
showPageSizeOptions = TRUE
)
})
}
# Run the application
shinyApp(ui = ui, server = server)
|
% Source: /rcmdr.temis/man/corpusCaDlg.rd from nalimilan/R.TeMiS
\name{corpusCaDlg}
\alias{corpusCaDlg}
\title{Correspondence analysis from a tm corpus}
\description{Compute a simple correspondence analysis on the document-term matrix of a tm corpus.}
\details{This dialog wraps the \code{\link{runCorpusCa}} function. The function \code{runCorpusCa}
runs a correspondence analysis (CA) on the document-term matrix.
If no variable is selected in the list (the default), a CA is run on the full document-term
matrix (possibly skipping sparse terms, see below). If one or more variables are chosen,
the CA will be based on a stacked table whose rows correspond to the levels of the variable:
each cell contains the sum of occurrences of a given term in all the documents of the level.
Documents that contain a \code{NA} are skipped for this variable, but taken into account for
the others, if any.
In all cases, variables that have not been selected are added as supplementary rows. If at least one
variable is selected, documents are also supplementary rows, while they are active otherwise.
The first slider ('sparsity') allows skipping less significant terms to use less memory, especially
with large corpora. The second slider ('dimensions to retain') allows choosing the number of
dimensions that will be printed, but has no effect on the computation of the correspondence analysis.
}
\seealso{\code{\link{runCorpusCa}}, \code{\link{ca}}, \code{\link{meta}}, \code{\link{removeSparseTerms}},
\code{\link{DocumentTermMatrix}} }
# Source: /unit2-linearRegression/climate_change.R from vimalromeo/My_Codes
setwd("D:/doc/study/TheAnalyticsEdge/unit2")
climate<-read.csv("climate_change.csv")
str(climate)
# Year is numeric, so compare against numbers (the original string comparison
# against "2006" only worked by lexicographic coincidence)
train<-subset(climate, Year<=2006)
test<-subset(climate, Year>2006)
str(train)
model1<-lm(Temp~MEI + CO2 + CH4 + N2O + CFC.11 + CFC.12 + TSI + Aerosols,data=train)
summary(model1)
cor(train[,c(3:10)])
model2<-lm(Temp~MEI + N2O + TSI + Aerosols,data=train)
summary(model2)
model3<-step(model1)
summary(model3)
pred<-predict(model3, newdata=test)
SSE<-sum((pred-test$Temp)^2)
SST<-sum((test$Temp-mean(train$Temp))^2)
R2<-1-SSE/SST # out-of-sample R^2, with the training-set mean as the baseline
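# Note (added aside, self-contained): with the training-set mean as the baseline,
# out-of-sample R^2 can go negative whenever predictions are worse than that constant:
demo_test <- c(10, 11, 12)  # hypothetical held-out values
demo_pred <- c(30, 31, 32)  # hypothetical (bad) predictions
1 - sum((demo_pred - demo_test)^2) / sum((demo_test - 5)^2)  # baseline mean 5 -> negative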
# Source: /ui.R from ExtremeNt/Random-walk-generator
shinyUI(
fluidPage(
titlePanel('Random walk generator'),
fluidRow(
column(2, wellPanel(
numericInput('stepsz1','Step size 1', value=1),
      numericInput('stepsz2','Step size 2', value=-1),
      numericInput('steppb1','Step 1 probability',min=0, max=1,value=0.5),
numericInput('nstep','Number of steps',min=1, step=1, value=100),
numericInput('seedn','Seed', min=0, step=1, value=0),
checkboxGroupInput('Displaywalks','Display',
c('Random walk'=1,'Maximum process'=2,'Reflected process'=3)
,selected = 1)
)),
column(10,
wellPanel(ggvisOutput('plot') )
)
)
)
)
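# Sketch (added; the matching server.R is not part of this snapshot, so this is
# an assumed reconstruction of what the inputs above feed, using their default
# values): a +1/-1 walk plus the two derived processes offered in the checkboxes.
set.seed(0)
steps <- sample(c(1, -1), 100, replace = TRUE, prob = c(0.5, 0.5))
walk <- cumsum(steps)     # 'Random walk'
maxproc <- cummax(walk)   # 'Maximum process'
refl <- maxproc - walk    # 'Reflected process' (reflection at the running maximum)
stopifnot(length(walk) == 100, all(refl >= 0))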
# Source: /tests/inputs.R from duncantl/CodeDepends
library(CodeDepends)
res7 = getInputs(quote(x <- names(y[a:5])[w]))
stopifnot(identical(res7@inputs, c("y", "a", "w")),
identical(res7@outputs, "x"),
identical(res7@code, quote(x <- names(y[a:5])[w])))
## regression test to ensure formulaInputs argument functions correctly
res10a = getInputs(quote(lm(x~y)))
stopifnot(identical(res10a@inputs, character()))
res10b = getInputs(quote(lm(x~y)), formulaInputs = TRUE)
stopifnot(identical(res10b@inputs, c("x", "y")))
## regression tests for passing functions directly to getInputs
## function with {} and multiple expressions
f = function(a = 5, b, c = 7) {
d = a + b + 5
df = data.frame(a = a, b = b)
    fit = lm(b~a, data = df)
fit
}
res11 = getInputs(f)
## one ScriptNodeInfo for the formals, then 5 for the body
stopifnot(length(res11) == 6,
identical(res11[[5]]@inputs, "df"),
identical(res11[[5]]@outputs, "fit"),
identical(res11[[5]]@functions, c("lm" = FALSE, "~" = FALSE)),
identical(res11[[5]]@nsevalVars, c("b", "a"))
)
## function with single expressoin (call) with no {}
fsmall = function(a = 5, b, c = 7) a+b+c
res11b = getInputs(fsmall)
stopifnot(length(res11b) == 2,
identical(res11b[[1]]@outputs, c("a", "b", "c")),
identical(res11b[[2]]@inputs, c("a", "b", "c")))
## does it know where functions that live in base packages come from (ie do they get FALSE)
## also does passing expressions directly to readScript work?
## We have to do this because it only tries to figure out function locality
## when it's given a script, not for individual expressions
## XXX change this for base package funs?
res12 = getInputs(readScript(txt = quote(x <- rnorm(10)+ Rcmd("This would never work!"))))
stopifnot(identical(res12[[1]]@functions, c("+" = FALSE, rnorm = FALSE, Rcmd = FALSE)))
## do functions called via the *apply statements show up in funs rather than inputs?
## including when specified out of order via FUN argument
res13 = getInputs(quote(y <- lapply(x, mean, na.rm=narm)))
stopifnot(identical(res13@outputs, "y"),
identical(res13@inputs, c("x", "narm")),
identical(res13@functions, c(lapply = NA, mean = NA)))
res13b = getInputs(quote(y <- lapply(FUN=mean, x, na.rm=narm)))
stopifnot(identical(res13b@outputs, "y"),
identical(res13b@inputs, c("x", "narm")),
identical(res13b@functions, c(lapply = NA, mean = NA)))
res14 = getInputs(quote(y <- apply(x,1, mean, na.rm=narm)))
stopifnot(identical(res14@outputs, "y"),
identical(res14@inputs, c("x", "narm")),
identical(res14@functions, c(apply = NA, mean = NA)))
res15 = getInputs(quote(y <- mapply(mean, x = stuff, y = things)))
stopifnot(identical(res15@outputs, "y"),
identical(res15@inputs, c( "stuff", "things")),
identical(res15@functions, c(mapply = NA, mean = NA)))
res13c = getInputs(quote(y <- sapply(x, mean, na.rm=narm)))
stopifnot(identical(res13c@outputs, "y"),
identical(res13c@inputs, c("x", "narm")),
identical(res13c@functions, c(sapply = NA, mean = NA)))
## do we catch updates correctly in all their various forms
res1 = getInputs(quote( x [ z > 0 ] <- 2 * y )) # outputs should be x and inputs x, z, y
stopifnot(identical(res1@updates, "x"),
identical(res1@outputs, character()),
identical(res1@inputs, c("x", "z", "y")))
res2 = getInputs(quote( foo(x) <- 1)) # updates and inputs are both "x"
stopifnot(identical(res2@inputs, "x"),
identical(res2@outputs, character()),
identical(res2@updates, "x"))
res3 = getInputs(quote( foo(x) <- a)) # updates is "x", inputs is x, a
stopifnot(identical(res3@inputs, c("x", "a")),
identical(res3@updates, "x"),
identical(res3@outputs, character()))
res4 = getInputs(quote( x$foo <- a))
stopifnot(identical(res4@inputs, c("x", "a")),
identical(res4@updates, "x"),
identical(res4@outputs, character()))
res5 = getInputs(quote( x[[foo]] <- a)) # outputs is "x", inputs is x, foo, a
stopifnot(identical(res5@inputs, c("x", "foo", "a")),
identical(res5@outputs, character()),
identical(res5@updates, "x"))
res6 = getInputs(quote( x[["foo"]] <- a)) # outputs is "x", inputs is x, a
stopifnot(identical(res6@inputs, c("x", "a")),
identical(res6@strings, "foo"),
identical(res6@updates, "x"),
identical(res6@outputs, character()))
res8 = getInputs(quote(x[x>0] <- 5))
stopifnot(identical(res8@inputs, "x"),
identical(res8@outputs, character()),
identical(res8@updates, "x"))
res9 = getInputs(quote(x <- lapply(1:10, function(i) x[[10-i]])))
stopifnot(identical(res9@inputs, "x"),
identical(res9@outputs, character()),
identical(res9@updates, "x"))
## pipe handling and apply/map style function invocation play nicely
## together
res15 = getInputs(quote(1:10 %>% map_int(rnorm, sd = sample(1:10))))
stopifnot(identical(res15@inputs, character()))
stopifnot(identical(res15@functions, c("%>%" = NA, map_int = NA,
rnorm = NA, sample = NA,
":" = NA)))
## test that we now remember package loads across expressions and that the filter
## handler uses that
## test that nested calls within pipes behave correctly wrt identifying nseval vs standard
## eval inputs
scr16 = readScript(txt = "library(dplyr); df %>% left_join(filter(df2, colname > 6))")
res16 = getInputs(scr16)
stopifnot(identical(res16[[2]]@inputs, c("df2", "df")))
stopifnot(identical(res16[[2]]@nsevalVars, "colname"))
## filter regression test and test differentiation heuristic
scr17 = readScript(txt = "library(dplyr); filter(df, x>5)")
res17 = getInputs(scr17)
stopifnot(identical(res17[[2]]@inputs, "df"))
stopifnot(identical(res17[[2]]@nsevalVars, "x"))
scr18 = readScript(txt = "filter(df, x>5)")
res18 = getInputs(scr18)
stopifnot(identical(res18[[1]]@inputs, c("df", "x")))
stopifnot(length(res18[[1]]@nsevalVars) == 0)
## regression test for handling of inlined NativeSymbols by default
## handler, which includes Rcpp "functions" compiled
## from R
##
## Can't figure out how to get this not to barf during R CMD check :(
## library(Rcpp)
## sourceCpp( system.file("unitTests/rcppfun.cpp", package="CodeDepends"))
## res19 = getInputs(convolve3cpp)
## stopifnot(identical(res19[[1]]@outputs, c("a", "b")),
## identical(res19[[2]]@inputs, c("a", "b")))
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/rope.R
\name{autoRope}
\alias{autoRope}
\title{autoRope}
\usage{
autoRope(assay.conc, max.conc, modifier = 1, hillslope = 1, top = 100,
bottom = 0)
}
\arguments{
\item{assay.conc}{numeric value of assay concentration (same units as max.conc)}
\item{max.conc}{numeric value of maximum detectable EC50 (same units as assay.conc)}
\item{modifier}{numeric modifier of signal}
}
\description{
Given a maximum detectable EC50, and assuming a one-site effective
concentration 50 (EC50) curve, this method estimates the minimum signal
cutoff for an active hit at the given screening concentration.
}
\examples{
autoRope(25, 100, 0.8)
}
## Set and get data location preferences
##
## Author: Nicole Deflaux <nicole.deflaux@sagebase.org>
###############################################################################
synapseDataLocationPreferences <- function(locationTypes){
if(missing(locationTypes)){
return(.getCache("synapseDataLocationPreferences"))
}
if(!all(locationTypes %in% kSupportedDataLocationTypes)){
ind <- which(!(locationTypes %in% kSupportedDataLocationTypes))
stop(paste("unsupported repository location(s):", locationTypes[ind]))
}
.setCache("synapseDataLocationPreferences", locationTypes)
}
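
## Example usage (sketch; kSupportedDataLocationTypes is defined elsewhere
## in the package):
##   synapseDataLocationPreferences(kSupportedDataLocationTypes)  ## set
##   synapseDataLocationPreferences()  ## get the current preferences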
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/binomial.R
\name{bin_variable}
\alias{bin_variable}
\title{Function bin_variable()}
\usage{
bin_variable(trials, prob)
}
\arguments{
\item{trials}{a non-negative integer; the number of trials}
\item{prob}{the probability of success on each trial}
}
\value{
an object of class "binvar": a list containing the number of trials and the probability of success
}
\description{
Takes two arguments, trials and prob, and creates an object of class "binvar", which is a binomial random variable object
}
\examples{
bin_var <- bin_variable(trials = 5, prob = 0.5)
bin_var
summary(bin_var)
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/board.R
\name{board_get}
\alias{board_get}
\title{Get Board}
\usage{
board_get(name = NULL)
}
\arguments{
\item{name}{The name of the board to use}
}
\description{
Retrieves information about a particular board.
}
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/filter.R
\name{filter_by_evalue}
\alias{filter_by_evalue}
\alias{filter}
\title{Filter metannotate data by e-value}
\usage{
filter_by_evalue(metannotate_data, evalue = 1e-10)
}
\arguments{
\item{metannotate_data}{Tibble of metannotate data}
\item{evalue}{Numeric vector of the evalue cutoff}
}
\value{
List of two:
\itemize{
\item Tibble of metannotate data, e-value filtered
\item List of three read count tables. First and second are from before and after e-value filtration.
See summarize_total_reads_all_genes(). The third is the \% change between the above two tables.
}
}
\description{
Filters the hits to the desired e-value cutoff and reports stats
}
## Test Gradient Descent Method
## Time-stamp: <liuminzhao 03/16/2013 20:07:05>
## Test for normal MLE first
## Data
n <- 500
x <- rnorm(n)
beta <- c(1, -1)
sigma <- 2
y <- beta[1] + beta[2] * x + rnorm(n, sd = sigma)
## Target function
LL <- function(beta0, beta1, sigma){
return(-sum(dnorm(y, mean = beta0 + beta1 * x, sd = sigma, log = T)))
}
cat(LL(beta[1], beta[2], sigma), '\n')
PartialLL0 <- function(beta0, beta1, sigma){
epsilon <- 0.01
ll1 <- LL(beta0 + epsilon, beta1, sigma)
ll0 <- LL(beta0 - epsilon, beta1, sigma)
return((ll1 - ll0) / 2 / epsilon)
}
PartialLL1 <- function(beta0, beta1, sigma){
epsilon <- 0.01
ll1 <- LL(beta0, beta1 + epsilon, sigma)
ll0 <- LL(beta0, beta1 - epsilon, sigma)
return((ll1 - ll0) / 2 / epsilon)
}
PartialLLs <- function(beta0, beta1, sigma){
epsilon <- 0.01
ll1 <- LL(beta0, beta1, sigma + epsilon)
ll0 <- LL(beta0, beta1, sigma - epsilon)
return((ll1 - ll0) / 2 / epsilon)
}
## Begin Gradient Descent
dif <- 1
alpha <- 0.001
beta0 <- beta1 <- sigma <- 4
ll0 <- LL(beta0, beta1, sigma)
llsave <- ll0
iter <- 0
while (dif > 10^-3 & iter < 10000) {
partial0 <- PartialLL0(beta0, beta1, sigma)
partial1 <- PartialLL1(beta0, beta1, sigma)
partials <- PartialLLs(beta0, beta1, sigma)
beta0 <- beta0 - alpha * partial0
beta1 <- beta1 - alpha * partial1
sigma <- sigma - alpha * partials
ll <- LL(beta0, beta1, sigma)
dif <- abs(ll - ll0)
ll0 <- ll
llsave <- append(llsave, ll)
iter <- iter + 1
}
cat(beta0, beta1, sigma, '\n')
print(summary(lm(y ~ x)))
l <- length(llsave)
plot(ts(llsave[(l/2): l]))
## pretty successful and quick, need small alpha
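
## Cross-check: the same MLE can be obtained with R's built-in optimiser.
## Sketch; reuses the y, x simulated at the top of this script and
## minimises the same negative log-likelihood LL() defined above.
negLL <- function(par) LL(par[1], par[2], par[3])
fit <- optim(c(4, 4, 4), negLL, method = "L-BFGS-B",
             lower = c(-Inf, -Inf, 1e-6))
cat(fit$par, '\n')  ## should be close to beta0, beta1, sigma above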
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/aws.R
\name{lam_ecr_repo_url}
\alias{lam_ecr_repo_url}
\title{Generate an ECR repo URL}
\usage{
lam_ecr_repo_url(aws_account_id, aws_region, aws_ecr_repository_name)
}
\arguments{
\item{aws_account_id}{Your 12-digit AWS account id}
\item{aws_region}{Your AWS region, e.g. \code{"ap-southeast-2"}}
\item{aws_ecr_repository_name}{Chosen name for your AWS ECR repository. It is recommended that
this be the same as your app name.}
}
\description{
Generate an ECR repo URL
}
\keyword{internal}
## ------------------------------------------------------------------------ ##
## Script R for "Constructing Entity Specific Prospective Mortality Table ##
## Adjustment to a reference" ##
## ------------------------------------------------------------------------ ##
## Script S11.R ##
## ------------------------------------------------------------------------ ##
## Description Computation of confidence intervals and ##
## relative dispersion of the periodic life expectancies ##
## ------------------------------------------------------------------------ ##
## Authors Tomas Julien, Frederic Planchet and Wassim Youssef ##
## julien.tomas@univ-lyon1.fr ##
## frederic.planchet@univ-lyon1.fr ##
## wassim.g.youssef@gmail.com ##
## ------------------------------------------------------------------------ ##
## Version 01 - 2013/11/06 ##
## ------------------------------------------------------------------------ ##
## ------------------------------------------------------------------------ ##
## Definition of the functions ##
## ------------------------------------------------------------------------ ##
## ------------------------------------------------------------------------ ##
## Simulation of deaths                                                     ##
## ------------------------------------------------------------------------ ##
.SimDxt = function(q, e, x1, x2, t1, nsim){
DxtSim <- vector("list", nsim)
for(i in 1:nsim){
DxtSim[[i]] <- matrix(rpois(length(x1) * length(t1), e[x1+1-min(x2),as.character(t1)] * q[x1+1-min(as.numeric(rownames(q))),as.character(t1)]), length(x1), length(t1))
colnames(DxtSim[[i]]) <- t1; rownames(DxtSim[[i]]) <- x1
}
return(DxtSim)
}
## ------------------------------------------------------------------------ ##
## Coefficient of variation ##
## ------------------------------------------------------------------------ ##
.GetCV = function(q, Age, t1, t2, nsim){
CvMat <- matrix(,length(Age), length(min(t1):max(t2)))
q2 <- vector("list",length(min(t1):max(t2)))
for (y in 1:length(min(t1):max(t2))){
q2[[y]] <- matrix(,length(Age),nsim)
for (k in 1:nsim){
for (x in 1:length(Age)){
q2[[y]][x,k] <- q[[k]][x,y]
}
}
}
for(y in 1:length(min(t1):max(t2))){
sq <- matrix(,length(Age),nsim)
sum_sim <- apply(q2[[y]],1,sum)
for(x in 1:length(Age)){
for(k in 1:nsim){
sq[x,k] <- (q2[[y]][x,k] - ((sum_sim/nsim)[x]))^2
}
}
CvMat[,y] <- sqrt((1/(nsim-1)) * apply(sq,1,sum)) / (sum_sim/nsim)
}
return(CvMat)
}
## ------------------------------------------------------------------------ ##
## Simulated periodic life expectancies ##
## ------------------------------------------------------------------------ ##
.GetSimExp = function(q, Age, t1, t2, nsim){
LifeExp <- vector("list", nsim)
for(k in 1:nsim){
LifeExp[[k]] <- matrix(,length(Age),length(min(t1):max(t2)))
colnames(LifeExp[[k]]) <- as.character(min(t1):max(t2))
for (j in 1:(length(min(t1):max(t2)))){
for (x in 1:length(Age)){
age.vec <- ((Age[x]):max(Age))+1-min(Age)
LifeExp[[k]][x,j] <- sum(cumprod(1-q[[k]][age.vec, j])) } } }
LifeExp2 <-vector("list",length(min(t1):max(t2)))
for (y in 1:(length(min(t1):max(t2)))){
LifeExp2[[y]] <- matrix(,length(Age),nsim)
for (k in 1:nsim){
for (x in 1:length(Age)){
LifeExp2[[y]][x,k] <- LifeExp[[k]][x,y] } } }
return(LifeExp2)
}
## ------------------------------------------------------------------------ ##
## Computation of the quantiles ##
## ------------------------------------------------------------------------ ##
.GetQtiles = function(LifeExp, Age, t1, t2, qval){
Qtile <- vector("list",length(min(t1):max(t2)))
for (y in 1:(length(min(t1):max(t2)))){
Qtile[[y]] <- matrix(,length(Age),length(qval))
colnames(Qtile[[y]]) <- as.character(qval)
for (i in 1:(length(Age))) {
Qtile[[y]][i,] <- quantile(LifeExp[[y]][i,], probs = qval/100) } }
return(Qtile)
}
## ------------------------------------------------------------------------ ##
## Computation of the relative dispersion ##
## ------------------------------------------------------------------------ ##
.GetRelDisp = function(LifeExp, Age, t1, t2, qval){
Qtile <- .GetQtiles(LifeExp, Age, t1, t2, qval)
RelDisp <- matrix(,length(Age),length(min(t1):max(t2)))
colnames(RelDisp) <- as.character(min(t1):max(t2))
for (yy in 1:length(min(t1):max(t2))){
for (xx in 1:length(Age)){
qvec <- Qtile[[yy]][xx,]
RelDisp[xx,yy] <- (qvec[3] - qvec[1]) / qvec[2]
}
}
RelDisp[RelDisp == Inf] <- NaN; RelDisp[RelDisp == -Inf] <- NaN
return(RelDisp)
}
## ------------------------------------------------------------------------ ##
## .GetFitSim function ##
## ------------------------------------------------------------------------ ##
.GetFitSim = function(DxtSim, MyData, AgeMethod, NamesMyData, NameMethod, NbSim, CompletionTable, AgeRangeOpt=NULL,BegAgeComp=NULL,P.Opt=NULL,h.Opt=NULL){
QxtFittedSim <- QxtFinalSim <- vector("list", NbSim)
for(i in 1:NbSim){
## ---------- If Method 1
if(NameMethod == "Method1"){
QxtFittedSim[[i]] <- FctMethod1(DxtSim[[i]], MyData$Ext, MyData$QxtRef, AgeMethod, MyData$AgeRef, MyData$YearCom, MyData$YearRef)
}
## ---------- If Method 2
if(NameMethod == "Method2"){
QxtFittedSim[[i]] <- FctMethod2(DxtSim[[i]], MyData$Ext, MyData$QxtRef, AgeMethod, MyData$AgeRef, MyData$YearCom, MyData$YearRef)
}
## ---------- If Method 3
if(NameMethod == "Method3"){
QxtFittedSim[[i]] <- FctMethod3(DxtSim[[i]], MyData$Ext, MyData$QxtRef, AgeMethod, MyData$AgeRef, MyData$YearCom, MyData$YearRef)
}
## ---------- If Method 4
if(NameMethod == "Method4"){
QxtFittedSim[[i]] <- FctMethod4_2ndPart(DxtSim[[i]], MyData$Ext, MyData$QxtRef, AgeMethod, MyData$AgeRef, MyData$YearCom, MyData$YearRef, P.Opt, h.Opt)
}
## ---------- Completion
if(CompletionTable == T){
QxtFinalSim[[i]] <- .CompletionDG2005(QxtFittedSim[[i]]$QxtFitted, as.numeric(rownames(QxtFittedSim[[1]]$QxtFitted)), min(MyData$YearCom):max(MyData$YearRef), AgeRangeOpt, c(BegAgeComp, 130), NameMethod)$QxtFinal
}
if(CompletionTable == F){
QxtFittedSim[[i]]$QxtFitted[QxtFittedSim[[i]]$QxtFitted > 1] <- 1
QxtFittedSim[[i]]$QxtFitted[is.na(QxtFittedSim[[i]]$QxtFitted)] <- 1
QxtFinalSim[[i]] <- QxtFittedSim[[i]]$QxtFitted
}
}
return(QxtFinalSim)
}
## ------------------------------------------------------------------------ ##
## Dispersion function ##
## ------------------------------------------------------------------------ ##
Dispersion = function(FinalMethod, MyData, NbSim, CompletionTable=T, Plot = F, Color = MyData$Param$Color){
AgeMethod=FinalMethod[[1]]$AgeRange
print("Simulate the number of deaths ...")
DxtSim <- vector("list",length(MyData)-1)
names(DxtSim) <- names(MyData)[1:(length(MyData)-1)]
for (i in 1:(length(MyData)-1)){
DxtSim[[i]] <- .SimDxt(FinalMethod[[i]]$QxtFinal, MyData[[i]]$Ext, AgeMethod, MyData[[i]]$AgeRef, MyData[[i]]$YearCom, NbSim)
}
print("Fit of the simulated data ...")
QxtFinalSim <- vector("list",length(MyData)-1)
names(QxtFinalSim) <- names(MyData)[1:(length(MyData)-1)]
for (i in 1:(length(MyData)-1)){
QxtFinalSim[[i]] <- .GetFitSim(DxtSim[[i]], MyData[[i]], AgeMethod, names(MyData)[i], FinalMethod[[1]]$NameMethod, NbSim, CompletionTable, FinalMethod[[i]]$AgeRangeOpt, FinalMethod[[i]]$BegAgeComp, FinalMethod[[i]]$P.Opt, FinalMethod[[i]]$h.Opt)
}
AgeFinal <- as.numeric(rownames(FinalMethod[[1]]$QxtFinal))
if(Plot == T){
Path <- "Results/Graphics/Dispersion"
.CreateDirectory(paste("/",Path,sep=""))
print(paste("Create graphics of the fit of the simulated data in .../",Path," ...", sep=""))
for(j in MyData[[1]]$YearCom){
if(length(MyData) == 3){
png(filename=paste(Path,"/",FinalMethod[[1]]$NameMethod,"-GraphSim-",j,".png", sep=""), width = 3800, height = 2100, res=300, pointsize= 12)
print(.SimPlot(FinalMethod, QxtFinalSim, MyData, min(AgeFinal):95, j, c(paste("Fit of the simulated data -",names(MyData)[1:(length(MyData)-1)],"- year",j)), NbSim, Color))
dev.off()
}
if(length(MyData) == 2){
png(filename=paste(Path,"/",FinalMethod[[1]]$NameMethod,"-GraphSim-",j,".png", sep=""), width = 2100, height = 2100, res=300, pointsize= 12)
print(.SimPlot(FinalMethod, QxtFinalSim, MyData, min(AgeFinal):95, j, paste("Fit of the simulated data -",names(MyData)[i],"- year",j), NbSim, Color))
dev.off()
}
}
}
print("Compute the coefficients of variation ...")
Cv <- vector("list",length(MyData)-1)
names(Cv) <- names(MyData)[1:(length(MyData)-1)]
for(i in 1:(length(MyData)-1)){
Cv[[i]] <- .GetCV(QxtFinalSim[[i]], AgeFinal, MyData[[i]]$YearCom, MyData[[i]]$YearRef, NbSim)
}
if(Plot == T){
print(paste("Create graphics of the coefficients of variation in .../",Path," ...", sep=""))
for(i in 1:(length(MyData)-1)){
png(filename=paste(Path,"/",FinalMethod[[1]]$NameMethod,"-CoefVar-",names(MyData)[i],".png", sep=""), width = 2100, height = 2100, res=300, pointsize= 12)
print(SurfacePlot(as.matrix(Cv[[i]]),expression(cv[xt]), paste("Coefficient of variation, ",FinalMethod[[1]]$NameMethod,", ",names(MyData)[i]," pop.", sep=""), c(min(AgeFinal),130,min(MyData[[i]]$YearCom),max(MyData[[i]]$YearRef)),Color))
dev.off()
}
}
print("Compute the simulated periodic life expectancies ...")
LifeExpSim <- vector("list",length(MyData)-1)
names(LifeExpSim) <- names(MyData)[1:(length(MyData)-1)]
for(i in 1:(length(MyData)-1)){
LifeExpSim[[i]] <- .GetSimExp(QxtFinalSim[[i]], AgeFinal, MyData[[i]]$YearCom, MyData[[i]]$YearRef, NbSim)
}
print("Compute the quantiles of the simulated periodic life expectancies ...")
QtilesVal <- c(2.5, 50, 97.5)
Qtile <- vector("list",length(MyData)-1)
names(Qtile) <- names(MyData)[1:(length(MyData)-1)]
for(i in 1:(length(MyData)-1)){
Qtile[[i]] <- .GetQtiles(LifeExpSim[[i]], AgeFinal, MyData[[i]]$YearCom, MyData[[i]]$YearRef, QtilesVal)
}
if(Plot == T){
print(paste("Create graphics of the quantiles of the simulated periodic life expectancies in .../",Path," ...", sep=""))
for(i in 1:(length(MyData)-1)){
png(filename=paste(Path,"/",FinalMethod[[1]]$NameMethod,"-QtileLifeExp-",names(MyData)[i],".png", sep=""), width = 8400, height = 1200, res=300, pointsize= 12)
print(.PlotExpQtle(Qtile[[i]], min(MyData[[i]]$YearCom):max(MyData[[i]]$YearRef) ,(ceiling(min(AgeFinal)/10)*10):100, paste("Quantiles of the simulated periodic life expectancies, ",FinalMethod[[1]]$NameMethod,", ",names(MyData)[i]," pop.",sep=""), Color))
dev.off()
}
}
print("Compute the relative dispersion ...")
QtilesVal <- c(5, 50, 95)
RelDisp <- vector("list",length(MyData)-1)
names(RelDisp) <- names(MyData)[1:(length(MyData)-1)]
for(i in 1:(length(MyData)-1)){
RelDisp[[i]] <- .GetRelDisp(LifeExpSim[[i]], AgeFinal, MyData[[i]]$YearCom, MyData[[i]]$YearRef, QtilesVal)
}
if(Plot == T){
print(paste("Create graphics of the relative dispersion in .../",Path," ...", sep=""))
for(i in 1:(length(MyData)-1)){
png(filename=paste(Path,"/",FinalMethod[[1]]$NameMethod,"-RelDisp-",names(MyData)[i],".png", sep=""), width = 8400, height = 1200, res=300, pointsize= 12)
print(.PlotRelDisp(RelDisp[[i]], min(MyData[[i]]$YearCom):max(MyData[[i]]$YearRef) , (ceiling(min(AgeFinal)/10)*10):100, paste("Relative dispersion, ",FinalMethod[[1]]$NameMethod,", ",names(MyData)[i]," pop.",sep=""), Color, c(670,50)))
dev.off()
}
}
return(list(Cv = Cv, LifeExpSim = LifeExpSim, Qtile = Qtile, RelDisp = RelDisp))
}
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/getResults.R
\name{getResults}
\alias{getResults}
\title{Get Results}
\usage{
getResults(output="")
}
\arguments{
\item{output}{Optional; folder to which you want to save the data from the processed images. Defaults to the same folder as the script.}
}
\description{
Get data from all the processed images.
The function goes through the finishedTasks data frame and downloads all the files in resultsUrl
}
\examples{
\dontrun{
getResults(output="")
}
}
\references{
\url{http://ocrsdk.com/documentation/apireference/getTaskStatus/}
}
#sample sizes of presence and absence by species
#list of species
specieslist <- c("NOBO",
"UPSA",
"HOLA",
"CASP",
"FISP",
"LASP",
"GRSP",
"DICK",
"EAME",
"WEME",
"BHCO")
presence.absence.count <- function (SPECIES) {
#loading Rdata gets the rest!
load(paste0("~/college/OU-postdoc/research/grassland_bird_surveys/ougrassland/ensemble_results/Current/",
SPECIES,
"/",
SPECIES,
"_rdata.RData"))
new <- latlong.predictors.SPECIES[latlong.predictors.SPECIES$presence == 1, "presence"]
length(new)
}
listofeval <- lapply (specieslist,
FUN = presence.absence.count)
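
# Label the per-species presence counts for readability
# (sketch; assumes the lapply above completed):
presence.counts <- setNames(unlist(listofeval), specieslist)
presence.counts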
# adapt working directory
setwd("Z:\\concert\\ndnSIM_paper_simulations")
num_runs<-30
num_clients<-100
num_segments<-299
# the script will create variables called the same as the folder that contains the simulation runs
# loop through all variations of the simulations
subfolds<-dir()
for(subfold in subfolds) {
print(subfold)
# collect stats
all_levels=c()
all_unsmooth=c()
all_unsmooth_sum=c()
all_avgClientLevel=c()
all_buffer=c()
# loop over all runs in that subfolder
for(run in seq(0,num_runs-1)) {
    data_folder = paste(subfold, "/output_run", toString(run), "/", sep="")
# loop over all clients
for(client in seq(0,num_clients-1)) {
      filename = paste(data_folder, "ContentDst", toString(client), ".txt", sep="")
data<-read.csv(filename)
segmentNr<-as.vector(data$SegmentNr[0:num_segments])
levels<-as.vector(data$Level[0:num_segments])
buffer<-as.vector(data$Buffer[0:num_segments])
unsmooth<-as.vector(data$Unsmooth[0:num_segments])
requested<-as.vector(data$Requested[0:num_segments])
goodput<-as.vector(data$Goodput[0:num_segments])
# sum over unsmooth seconds for this client
sum_unsmooth<-sum(unsmooth)
all_unsmooth_sum<-c(all_unsmooth_sum,sum_unsmooth)
avg_quality<-mean(levels)
all_avgClientLevel<-c(all_avgClientLevel,avg_quality)
# fill in vector
all_levels=c(all_levels,levels)
all_unsmooth=c(all_unsmooth,unsmooth)
all_buffer=c(all_buffer,buffer)
}
}
d<-data.frame("Levels"=all_levels,Stalls=all_unsmooth,Buffer=all_buffer)
assign(subfold,d)
assign(paste(subfold,".AvgClientLevel",sep=""),all_avgClientLevel)
assign(paste(subfold,".UnsmoothSum",sep=""),all_unsmooth_sum)
}
##################
# compare buffer #
##################
# compare variances for Buffer, we see that they are not equal
var.test(dash_svc_bottleneck_20mbps$Buffer,dash_svc_bottleneck_30mbps$Buffer)
var.test(dash_svc_bottleneck_30mbps$Buffer,dash_svc_bottleneck_40mbps$Buffer)
# therefore we set var.equal=FALSE in t.test (note: var.equal is ignored when
# paired=TRUE, since a paired t-test is a one-sample test on the differences)
# test: H1: 20Mbit$Buffer < 30Mbit$Buffer (alternative="less")
t.test(dash_svc_bottleneck_20mbps$Buffer,dash_svc_bottleneck_30mbps$Buffer,alternative="less",paired=TRUE,var.equal=FALSE)
# test: H1: 30Mbit$Buffer < 40Mbit$Buffer (alternative="less")
t.test(dash_svc_bottleneck_30mbps$Buffer,dash_svc_bottleneck_40mbps$Buffer,alternative="less",paired=TRUE,var.equal=FALSE)
##################
# compare Levels #
##################
# compare variances for Levels, we see that they are not equal
var.test(dash_svc_bottleneck_20mbps$Levels,dash_svc_bottleneck_30mbps$Levels)
var.test(dash_svc_bottleneck_30mbps$Levels,dash_svc_bottleneck_40mbps$Levels)
# therefore we set var.equal=FALSE in t.test
# compare means of Levels
t.test(dash_svc_bottleneck_20mbps$Levels,dash_svc_bottleneck_30mbps$Levels,alternative="less",paired=TRUE,var.equal=FALSE)
t.test(dash_svc_bottleneck_30mbps$Levels,dash_svc_bottleneck_40mbps$Levels,alternative="less",paired=TRUE,var.equal=FALSE)
#######################
# compare Sum(Stalls) #
#######################
# compare variances for Buffer, we see that they are not equal
var.test(dash_svc_bottleneck_20mbps.UnsmoothSum,dash_svc_bottleneck_30mbps.UnsmoothSum)
var.test(dash_svc_bottleneck_30mbps.UnsmoothSum,dash_svc_bottleneck_40mbps.UnsmoothSum)
# therefore we set var.equal=FALSE in t.test
t.test(dash_svc_bottleneck_20mbps.UnsmoothSum,dash_svc_bottleneck_30mbps.UnsmoothSum,alternative="greater",paired=TRUE,var.equal=FALSE)
t.test(dash_svc_bottleneck_30mbps.UnsmoothSum,dash_svc_bottleneck_40mbps.UnsmoothSum,alternative="greater",paired=TRUE,var.equal=FALSE)
par(mfrow=c(1,3))
boxplot(dash_svc_bottleneck_20mbps$Levels)
boxplot(dash_svc_bottleneck_30mbps$Levels)
boxplot(dash_svc_bottleneck_40mbps$Levels)
mean_levels<-c(
mean(dash_svc_bottleneck_20mbps$Levels),
mean(dash_svc_bottleneck_30mbps$Levels),
mean(dash_svc_bottleneck_40mbps$Levels)
)
mean_stalls<-c(
  mean(dash_svc_bottleneck_20mbps.UnsmoothSum),
  mean(dash_svc_bottleneck_30mbps.UnsmoothSum),
  mean(dash_svc_bottleneck_40mbps.UnsmoothSum)
)
max_stalls<-c(
max(dash_svc_bottleneck_20mbps.UnsmoothSum),
max(dash_svc_bottleneck_30mbps.UnsmoothSum),
max(dash_svc_bottleneck_40mbps.UnsmoothSum)
)
min_avgLevel<-c(
min(dash_svc_bottleneck_20mbps.AvgClientLevel),
min(dash_svc_bottleneck_30mbps.AvgClientLevel),
min(dash_svc_bottleneck_40mbps.AvgClientLevel)
)
mean_buffer<-c(
mean(dash_svc_bottleneck_20mbps$Buffer),
mean(dash_svc_bottleneck_30mbps$Buffer),
mean(dash_svc_bottleneck_40mbps$Buffer)
)
results<-data.frame(AvgLevels=mean_levels,AvgStalls=mean_stalls,MaxStalls=max_stalls,MinClientLevel=min_avgLevel,AvgBuffer=mean_buffer)
rownames(results)<-c("Dash 20 Mbit/s", "Dash 30 MBit/s", "Dash 40 Mbit/s")
context("sf")
## tests
## track name (and other properties) are propagated
## z_range and m_range (when available)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/regress.plot.gen.R
\name{regress.plot.gen}
\alias{regress.plot.gen}
\title{============================================================================================================
function regress.plot.gen()
this function generates a "list" containing multiple plots with a single x axis and multiple y variables.
output is a "list"
output can be recalled within the multiplot function to generate a figure
for example, f <- regress.plot.gen(df, x_name, y_list); multiplot(plotlist = f, cols = N) will give the desired plots
============================================================================================================}
\usage{
regress.plot.gen(df_name, x_name, y_list, xlabname = NULL)
}
\description{
============================================================================================================
function regress.plot.gen()
this function generates a "list" containing multiple plots with a single x axis and multiple y variables.
output is a "list"
output can be recalled within the multiplot function to generate a figure
for example, f <- regress.plot.gen(df, x_name, y_list); multiplot(plotlist = f, cols = N) will give the desired plots
============================================================================================================
}
|
affd276cfbdc3867df1781834b86ab4965420983 | e0a2289118030bf9600d5684bdf648442cc5f208 | /2X/2.12/GOSemSim/R/gene2GO.R | f07333809655f2bfdf4375e401e28e42f6f21e95 | [] | no_license | GuangchuangYu/bioc-release | af05d6d7fa9c05ab98006cd06ea8df39455e8bae | 886c0ccb1cd2d3911c3d363f7a0cd790e72329b7 | refs/heads/master | 2021-01-11T04:21:25.872653 | 2017-08-03T05:11:41 | 2017-08-03T05:11:41 | 71,201,332 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,868 | r | gene2GO.R | getDb <- function(organism) {
annoDb <- switch(organism,
anopheles = "org.Ag.eg.db",
arabidopsis = "org.At.tair.db",
bovine = "org.Bt.eg.db",
canine = "org.Cf.eg.db",
chicken = "org.Gg.eg.db",
chimp = "org.Pt.eg.db",
## coelicolor= "org.Sco.eg.db", ## this package is no longer supported.
ecolik12 = "org.EcK12.eg.db",
ecsakai = "org.EcSakai.eg.db",
fly = "org.Dm.eg.db",
human = "org.Hs.eg.db",
malaria = "org.Pf.plasmo.db",
mouse = "org.Mm.eg.db",
pig = "org.Ss.eg.db",
rat = "org.Rn.eg.db",
rhesus = "org.Mmu.eg.db",
worm = "org.Ce.eg.db",
xenopus = "org.Xl.eg.db",
yeast = "org.Sc.sgd.db",
zebrafish = "org.Dr.eg.db"
)
return(annoDb)
}
loadGOMap_internal <- function(organism){
annoDb <- getDb(organism)
## loading annotation pakcage
require(annoDb, character.only = TRUE)
gomap <- switch(organism,
anopheles = "org.Ag.egGO",
arabidopsis = "org.At.tairGO",
bovine = "org.Bt.egGO",
canine = "org.Cf.egGO",
chicken = "org.Gg.egGO",
chimp = "org.Pt.egGO",
## coelicolor = "org.Sco.egGO", ## no longer supports.
ecolik12 = "org.EcK12.egGO",
ecsakai = "org.EcSakai.egGO",
fly = "org.Dm.egGO",
human = "org.Hs.egGO",
malaria = "org.Pf.plasmoGO",
mouse = "org.Mm.egGO",
pig = "org.Ss.egGO",
rat = "org.Rn.egGO",
rhesus = "org.Mmu.egGO",
worm = "org.Ce.egGO",
xenopus = "org.Xl.egGO",
yeast = "org.Sc.sgdGO",
zebrafish = "org.Dr.egGO"
)
gomap <- eval(parse(text=gomap))
assign("gomap", gomap, envir=GOSemSimEnv)
assign("gomap.flag", organism, envir=GOSemSimEnv)
}
##' @importMethodsFrom AnnotationDbi exists
##' @importMethodsFrom AnnotationDbi get
loadGOMap <- function(organism) {
if(!exists("GOSemSimEnv")) .initial()
if (!exists("gomap", envir=GOSemSimEnv)) {
loadGOMap_internal(organism)
} else {
flag <- get("gomap.flag", envir=GOSemSimEnv)
if (flag != organism)
loadGOMap_internal(organism)
}
}
##' @importMethodsFrom AnnotationDbi get
gene2GO <- function(gene, organism, ont, dropCodes) {
gene <- as.character(gene)
loadGOMap(organism)
gomap <- get("gomap", envir=GOSemSimEnv)
go <- gomap[[gene]]
if (all(is.na(go)))
return (NA)
## go.df <- ldply(go, function(i) c(GOID=i$GOID, Evidence=i$Evidence, Ontology=i$Ontology))
## go.df <- go.df[ !go.df$Evidence %in% dropCodes, ] ## compatible to work with NA and NULL
## goid <- go.df[go.df$Ontology == ont, "GOID"]
goid <- sapply(go, function(i) i$GOID)
evidence <- sapply(go, function(i) i$Evidence)
ontology <- sapply(go, function(i) i$Ontology)
idx <- ! evidence %in% dropCodes
goid <- goid[idx] ## drop dropCodes Evidence
ontology <- ontology[idx]
goid <- goid[ontology == ont]
if (length(goid) == 0)
return (NA)
goid <- as.character(unique(goid))
return (goid)
}
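## Illustrative usage sketch (not part of the original file): the gene ID,
## organism, ontology, and evidence codes below are hypothetical examples,
## and running this requires the matching Bioconductor annotation package
## (e.g. org.Hs.eg.db for "human") to be installed.
## gos <- gene2GO("1017", organism = "human", ont = "BP",
##                dropCodes = c("IEA", "NAS", "ND"))
## 'gos' would then hold the unique BP GO IDs annotated to that Entrez gene,
## with the dropped evidence codes filtered out (or NA if none remain).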
|
c98018fa207f818dabc5c57c8eddcfbeb87816be | 7c07bda4bf4be142a34214f0f8b2526f88b807f2 | /R/clsave.R | 4742fc4bf6c68bacb68661bebdde8e26f9c530d7 | [] | no_license | zumbov2/colorizer | 2fc48193217ebaa4f69055ba1ed2a1c9788a33db | e0f63f020276261cb24f16f48d44a7a3612f7e25 | refs/heads/master | 2023-01-08T04:13:52.655688 | 2020-11-09T22:29:03 | 2020-11-09T22:29:03 | 306,308,869 | 22 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,376 | r | clsave.R | #' Save Colorized and Juxtaposed Images
#'
#' \code{clsave} saves images that have been colorized using \code{colorize} or
#' juxtaposed with \code{juxtapose}.
#'
#' @param response a response object of a \code{colorize} function call.
#' @param destfile a character string or vector with the name where the images are saved.
#'
#' @return Besides saving, the function returns the response object invisibly.
#'
#' @examples
#' \dontrun{
#' # Save colorized images
#' res <- colorize(img = "https://upload.wikimedia.org/wikipedia/commons/9/9e/Breadfruit.jpg")
#' clsave(res, destfile = "colorized_version.jpg")
#' }
#' @export
#' @importFrom dplyr filter
#' @importFrom stringr str_detect str_remove_all str_replace_all
#' @importFrom purrr walk2
clsave <- function(response, destfile = "") {
# Remove Non-Responses
response <- check_response(response)
# Save Colorized Images from URL
if (ncol(response) == 2) {
i <- c(1:nrow(response))
    if (all(destfile == "")) destfile <- rep("", nrow(response))
purrr::pwalk(list(response$response, destfile, i), save_col_wh)
}
# Save Juxtaposed Images
if (ncol(response) == 4) {
i <- c(1:nrow(response))
    if (all(destfile == "")) destfile <- rep("", nrow(response))
purrr::pwalk(list(response$jp_type, response$jp, destfile, i), save_jp_wh)
}
# Return response
return(invisible(response))
}
|
830322b52acf8a291aeee9375386f246fd7037bb | 0c670a77c2989187703bf630cda5a4c56549f32b | /R/supp_mat_Sensi_phylogentic_uncertainty_ok.R | 54dffaa4e5a168b746eb5600a56bfa0f10baac55 | [] | no_license | paternogbc/ms_global_flower_allometry | cbaed2a6fcc13d409077e0c545833be74ef02751 | 2a880afa720ac4f8f4a8905d10b0059e1d2df236 | refs/heads/master | 2022-04-30T01:18:59.266731 | 2022-03-08T14:03:26 | 2022-03-08T14:03:26 | 226,209,658 | 4 | 2 | null | null | null | null | UTF-8 | R | false | false | 6,522 | r | supp_mat_Sensi_phylogentic_uncertainty_ok.R | # Supp. Mat - Sensitivity analysis (Phylogenetic uncertainty)
### Packages:-------------------------------------------------------------------
library(sensiPhy)
library(phylolm)
library(tidyverse)
library(smatr)
library(phytools)
library(cowplot)
### Start-----------------------------------------------------------------------
rm(list = ls())
source("R/ZZZ_functions.R")
### Load data-------------------------------------------------------------------
p <- read.tree("data/processed/300_study_phylo_trees.tre")
d <- read.csv("data/processed/data_flower_biomass_partition.csv")
estimates <- read.csv("outputs/tables/supp/STable_model_stats_allometric_scalling.csv")
# Match with Phylogeny----------------------------------------------------------
rownames(d) <- d$sp
d <- match_dataphy(tot ~ 1, data = d, phy = p)
### 1. Male ~ Flower----
### phySMA----
and_sma <- tree_physma(y = "and", x = "tot", data = d$data, trees = d$phy, method = "BM")
### PGLS----
and_pgls <- tree_phylm(log10(and) ~ log10(tot), data = d$data, phy = d$phy,
n.tree = 300, model = "BM")
### 2. Female ~ Flower----
### phySMA----
gyn_sma <- tree_physma(y = "gyn", x = "tot", data = d$data, trees = d$phy,
method = "BM")
### PGLS----
gyn_pgls <- tree_phylm(log10(gyn) ~ log10(tot), data = d$data, phy = d$phy,
n.tree = 300, model = "BM")
### 3. Petals ~ Flower----
### phySMA----
pet_sma <- tree_physma(y = "pet", x = "tot", data = d$data, trees = d$phy, method = "BM")
### PGLS----
pet_pgls <- tree_phylm(log10(pet) ~ log10(tot), data = d$data, phy = d$phy,
n.tree = 300, model = "BM")
### 4. Sepals ~ Flower----
### phySMA----
sep_sma <- tree_physma(y = "sep", x = "tot", data = d$data, trees = d$phy, method = "BM")
### PGLS----
sep_pgls <- tree_phylm(log10(sep) ~ log10(tot), data = d$data, phy = d$phy,
n.tree = 300, model = "BM")
# Save sensitivity cache--------------------------------------------------------
sensi <- list("and_physma" = and_sma,
"and_pgls" = and_pgls$sensi.estimates,
"gyn_physma" = gyn_sma,
"gyn_pgls" = gyn_pgls$sensi.estimates,
"pet_physma" = pet_sma,
"pet_pgls" = pet_pgls$sensi.estimates,
"sep_physma" = sep_sma,
"sep_pgls" = sep_pgls$sensi.estimates)
saveRDS(object = sensi, file = "outputs/temp/sensi_phylogenetic_uncertainty.Rds")
# Plot Figures------------------------------------------------------------------
sensi <- readRDS("outputs/temp/sensi_phylogenetic_uncertainty.Rds")
cols <- c("#D55E00","#56B4E9","#CC79A7", "#009E73")
lims <- c(0.8, 1.3)
bre1 <- seq(0.8, 1.3, .05)
# phySMA-----
est_physma <- data.frame(
estimate = c(sensi$and_physma$estimate,
sensi$gyn_physma$estimate,
sensi$pet_physma$estimate,
sensi$sep_physma$estimate),
organ = rep(c("and", "gyn", "pet", "sep"), each = nrow(sensi$and_physma)))
g1 <-
ggplot(est_physma %>% filter(organ %in% c("and", "gyn")), aes(x = estimate, fill = organ)) +
geom_histogram(position = position_dodge(),
bins = 20, show.legend = T,
alpha = .8) +
scale_fill_manual(values = cols[1:2], name = "", labels = c("Male", "Female")) +
scale_x_continuous(breaks = seq(0.9, 1.3, 0.05)) +
labs(y = "Frequency", x = "Estimated slopes",
subtitle = "phySMA | (Male vs Female)") +
tema(base_size = 20) +
theme(legend.position = c(.88,.9),
legend.background = element_blank());g1
g2 <-
ggplot(est_physma %>% filter(organ %in% c("pet", "sep")),
aes(x = estimate, fill = organ)) +
geom_histogram(position = position_dodge(),
bins = 20, show.legend = T,
alpha = .8) +
scale_fill_manual(values = cols[3:4],
name = "", labels = c("Petals", "Sepals")) +
scale_x_continuous(breaks = seq(0.8, 1.5, 0.1)) +
labs(y = "Frequency", x = "Estimated slopes",
subtitle = "phySMA | (Petals vs Sepals)") +
tema(base_size = 20) +
theme(legend.position = c(.88,.9),
legend.background = element_blank());g2
gsma <- plot_grid(g1, g2,labels = LETTERS[1:2],
label_size = 20, label_fontface = "plain"); gsma
title <- ggdraw() +
draw_label("Sensitivity analysis - Phylogenetic uncertainty (phySMA)", size = 18)
gglobal <-
plot_grid(title, gsma, ncol = 1, rel_heights = c(0.1, 1));gglobal
ggsave("outputs/figures/supp/SFig_sensi_phylogenetic_uncertainty_SMA.png",
height = 5, width = 12.5, units = "in",
plot = gglobal)
# PGLS-----
est_pgls <- data.frame(
estimate = c(sensi$and_pgls$estimate,
sensi$gyn_pgls$estimate,
sensi$pet_pgls$estimate,
sensi$sep_pgls$estimate),
organ = rep(c("and", "gyn", "pet", "sep"), each = nrow(sensi$and_physma)))
g3 <-
ggplot(est_pgls %>% filter(organ %in% c("and", "gyn")), aes(x = estimate, fill = organ)) +
geom_histogram(position = position_dodge(),
bins = 20, show.legend = TRUE,
alpha = .8) +
scale_fill_manual(values = cols[1:2], name = "", labels = c("Male", "Female")) +
scale_x_continuous(breaks = seq(0.8, 1.3, 0.05)) +
labs(y = "Frequency", x = "Estimated slopes",
subtitle = "PGLS | (Male vs Female)") +
tema(base_size = 20) +
theme(legend.position = c(.88,.9),
legend.background = element_blank());g3
g4 <-
ggplot(est_pgls %>% filter(organ %in% c("pet", "sep")),
aes(x = estimate, fill = organ)) +
geom_histogram(position = position_dodge(),
bins = 20, show.legend = TRUE,
alpha = .8) +
scale_fill_manual(values = cols[3:4], name = "", labels = c("Petals", "Sepals")) +
scale_x_continuous(breaks = seq(0.7, 1.5, 0.1)) +
labs(y = "Frequency", x = "Estimated slopes",
subtitle = "PGLS | (Petals vs Sepals)") +
tema(base_size = 20) +
theme(legend.position = c(.88,.9),
legend.background = element_blank());g4
gpgls <- plot_grid(g3, g4,labels = LETTERS[1:2],
label_size = 20, label_fontface = "plain"); gpgls
title <- ggdraw() +
draw_label("Sensitivity analysis - Phylogenetic uncertainty (PGLS)", size = 18)
gglobal <-
plot_grid(title, gpgls, ncol = 1, rel_heights = c(0.1, 1));gglobal
ggsave("outputs/figures/supp/SFig_sensi_phylogenetic_uncertainty_PGLS.png",
height = 5, width = 12.5, units = "in",
plot = gglobal)
### END------------ |
5156c7c1271b3c772c1cf22edb1433b36b58c6bd | d1b86f5fc93fc226fcfa8a9e8736b692011f2ad8 | /probdist.R | a996d4c84f5f960b25ef5633ab75f830bb48ea58 | [] | no_license | SrijitMukherjee/RCodes | 4fa2a52b53f6f1ec433f2fd01ba1c027b54baec0 | d90962102b623ea9c3f17cca4eaf90f03ad0cd4e | refs/heads/main | 2023-08-29T12:41:51.734329 | 2021-09-18T06:52:02 | 2021-09-18T06:52:02 | 312,255,601 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,097 | r | probdist.R | library(ggfortify)
ui <- shinyUI(fluidPage(
titlePanel(title = h2("Exploring Probability Distributions", align = "center")),
sidebarLayout(
sidebarPanel(
selectInput("distribution", "Choose your Distribution", c("Binomial","Poisson","Geometric","Discrete Uniform","Hypergeometric","Negative Binomial","Normal","Exponential", "Gamma", "Uniform", "Beta")),
numericInput("parameter_one", "Choose your valid Parameter One", value = 0, min = -50, max = 50),
numericInput("parameter_two", "Choose your valid Parameter Two", value = 0, min = -50, max = 50),
numericInput("parameter_three", "Choose your valid Parameter Three", value = 0, min = -50, max = 50)
),
mainPanel(
h6("with Srijit Mukherjee", align = "center"),
h5("Play around with the valid parameters to understand how the pmf or pdf changes with change in the parameters.",align = "center"),
br(),
verbatimTextOutput("text"),
plotOutput("histogram")
)
)
)
)
server = function(input, output) {
output$text = renderText({
if(input$distribution == c("Binomial"))
{
return("Parameter One = Number of Trials, and Parameter Two = Probability of Success")
}
if(input$distribution == c("Poisson"))
{
return("Parameter One = Lambda(mean)")
}
if(input$distribution == c("Geometric"))
{
return("Parameter One = Probability of Success")
}
if(input$distribution == c("Discrete Uniform"))
{
return("Parameter One = Minimum Value, and Parameter Two = Maximum Value")
}
if(input$distribution == c("Negative Binomial"))
{
return("Parameter One = Number of Failures to stop, and Parameter Two = Probability of Success")
}
if(input$distribution == c("Hypergeometric"))
{
return("Parameter One = Number of Success States, Parameter Two = Number of Failure States, and Parameter Three = Number of Trials ")
}
if(input$distribution == c("Normal"))
{
return("Parameter One = Mean, and Parameter Two = Standard Deviation")
}
if(input$distribution == c("Exponential"))
{
return("Parameter One = Rate")
}
if(input$distribution == c("Gamma"))
{
return("Parameter One = Shape, and Parameter Two = Scale")
}
if(input$distribution == c("Uniform"))
{
return("Parameter One = Minimum Value, and Parameter Two = Maximum Value")
}
if(input$distribution == c("Beta"))
{
return("Parameter One = Shape 1, and Parameter Two = Shape 2")
}
})
output$histogram = renderPlot({
if(input$distribution == c("Binomial"))
{
x = 0:input$parameter_one
y = dbinom(x,input$parameter_one,input$parameter_two)
plot(x,y,type = "o", col = "red")
}
if(input$distribution == c("Poisson"))
{
x = 0:50
y = dpois(x,input$parameter_one)
#ggdistribution(dpois, x, lambda = input$parameter_one , fill = 'blue')
plot(x,y, type = "o", col = "red")
}
if(input$distribution == c("Geometric"))
{
x = 0:50
y = dgeom(x,input$parameter_one)
plot(x,y, type = "o", col = "red")
}
    if(input$distribution == c("Discrete Uniform"))
    {
      # only plot when both bounds are integers and min < max
      if (floor(input$parameter_one) == input$parameter_one && floor(input$parameter_two) == input$parameter_two && input$parameter_one < input$parameter_two)
      {
        x = input$parameter_one:input$parameter_two
        y = rep(1/(length(x)),length(x))
        plot(x,y, type = "o", col = "red")
      }
    }
if(input$distribution == c("Negative Binomial"))
{
x = 0:50
y = dnbinom(x,input$parameter_one,input$parameter_two)
plot(x,y, type = "o", col = "red")
}
if(input$distribution == c("Hypergeometric"))
{
x <- seq(0,input$parameter_one+input$parameter_two,by = 1)
y = dhyper(x,input$parameter_one,input$parameter_two,input$parameter_three)
plot(x,y, type = "o", col = "red")
}
if(input$distribution == c("Normal"))
{
x <- seq(-100,100,length=10000)
y <- dnorm(x,input$parameter_one,input$parameter_two)
plot(x, y, col ="blue")
}
if(input$distribution == c("Exponential"))
{
x <- seq(0,20,length=10000)
y <- dexp(x,input$parameter_one)
plot(x, y, col ="blue")
}
if(input$distribution == c("Gamma"))
{
x <- seq(0,100,length=10000)
y <- dgamma(x, input$parameter_one, scale = input$parameter_two)
plot(x, y, col ="blue")
}
if(input$distribution == c("Uniform"))
{
x <- seq(input$parameter_one,input$parameter_two,length=10000)
y <- dunif(x, input$parameter_one, input$parameter_two)
plot(x, y, col ="blue")
}
if(input$distribution == c("Beta"))
{
x <- seq(0,1,length=10000)
y <- dbeta(x, input$parameter_one, input$parameter_two)
plot(x, y, col ="blue")
}
})
}
shinyApp(ui = ui, server = server)
|
cabe23f1fff532a39e7633e0a34a5e15f83a52d7 | ae1394b6b2d9bda2e045c9f2e514223c283f96cd | /11_selection_signatures/A13_all_REVIGO_treemap_p0.05_topGOweight_v38.r | e84cb7a58f145f73cc11df1afd0ce6d312525162 | [
"MIT"
] | permissive | susefranssen/Global_genome_diversity_Ldonovani_complex | 64d0eb3732ed14da378d80c71e10f5c67295db7d | 718db6eb103829e546e1ce2692a4c5352c266f2f | refs/heads/master | 2022-04-17T13:35:41.580544 | 2020-03-27T16:55:14 | 2020-03-27T16:55:14 | 220,972,362 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 13,985 | r | A13_all_REVIGO_treemap_p0.05_topGOweight_v38.r |
setwd("~/work/Leish_donovaniComplex/MS01_globalDiversity_Ldonovani/github_scripts/Global_genome_diversity_Ldonovani_complex/11_selection_signatures/")
dir.create("REVIGO_GO_res_visualisation", showWarnings = F)
setwd("REVIGO_GO_res_visualisation")
library(treemap)# treemap package by Martijn Tennekes
library(data.table)
library(foreach)
#################
#
# Visualisation of GO results using REVIGO:
#
# revigo was used with the following parameters:
#
# http://revigo.irb.hr/
#
# all GO terms with a value for weightGO <0.05 (p-vals given to revive)
#
# for revigo standard parameters were used
# allowed similarity: medium (0.7)
# show: abs log10 pval
#
#
# input files that were used for REVIGO are:
#
# input used for REVIGO was the GO terms with pval < 0.05 (weightFish)
# "SNPeff_",comp,"/SNPeff_",comp,"_stats.genes_high_mod_topGO_classic.weight.0.05.txt"
# e.g. "SNPeff_Linf_vs_Ldon/SNPeff_Linf_vs_Ldon_stats.genes_high_mod_topGO_classic.weight.0.05.txt"
#
#################
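# A minimal sketch (assumed column names, e.g. "GO.ID" and "weightFish";
# 'comp' would name one comparison) of how such a REVIGO input list could
# be read back from the topGO result tables described above:
# res <- read.table(paste0("SNPeff_", comp, "/SNPeff_", comp,
#                          "_stats.genes_high_mod_topGO_classic.weight.0.05.txt"),
#                   header = TRUE)
# revigo.input <- res[res$weightFish < 0.05, c("GO.ID", "weightFish")]
# write.table(revigo.input, row.names = FALSE, quote = FALSE) # paste into revigo.irb.hr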
#
# --------------------------------------------------------------------------
# Here is your data from REVIGO. Scroll down for plot configuration options.
# arranging data for revigo first
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0007018","microtubule-based movement",0.287,4.2676,0.617,0.000,"microtubule-based movement"),
c("GO:0007059","chromosome segregation",0.476,1.7605,0.717,0.167,"microtubule-based movement"),
c("GO:0015698","inorganic anion transport",0.872,1.8380,0.698,0.000,"inorganic anion transport"),
c("GO:0055085","transmembrane transport",8.916,2.3261,0.700,0.400,"inorganic anion transport"),
c("GO:0051656","establishment of organelle localization",0.180,1.5177,0.709,0.257,"inorganic anion transport"),
c("GO:0006974","cellular response to DNA damage stimulus",2.360,2.0650,0.693,0.034,"cellular response to DNA damage stimulus"),
c("GO:1902531","regulation of intracellular signal transduction",0.547,1.6770,0.674,0.474,"cellular response to DNA damage stimulus"),
c("GO:0006310","DNA recombination",1.641,1.3788,0.703,0.041,"DNA recombination"),
c("GO:0006464","cellular protein modification process",7.726,3.1938,0.664,0.510,"DNA recombination"),
c("GO:0001522","pseudouridine synthesis",0.350,1.3080,0.656,0.248,"DNA recombination"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="CH.Linf.nohy_3"]
all<-revigo.data
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0006468","protein phosphorylation",4.137,4.0132,0.246,0.000,"protein phosphorylation"),
c("GO:0052106","quorum sensing involved in interaction with host",0.000,1.6198,0.021,0.021,"quorum sensing involved in interaction with host"),
c("GO:0048874","homeostasis of number of cells in a free-living population",0.029,1.4437,0.020,0.692,"quorum sensing involved in interaction with host"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="CUK.Linf_11"]
all<-rbind(all, revigo.data)
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0006468","protein phosphorylation",4.137,6.4685,0.388,0.000,"protein phosphorylation"),
c("GO:0071897","DNA biosynthetic process",0.676,1.7399,0.374,0.172,"protein phosphorylation"),
c("GO:0043087","regulation of GTPase activity",0.510,1.6289,0.447,0.000,"regulation of GTPase activity"),
c("GO:0052564","response to immune response of other organism involved in symbiotic interaction",0.010,2.4559,0.444,0.000,"response to immune response of other organism involved in symbiotic interaction"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="Ldon1star_44"]
all<-rbind(all, revigo.data)
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0006261","DNA-dependent DNA replication",0.576,2.6198,0.624,0.000,"DNA-dependent DNA replication"),
c("GO:0006486","protein glycosylation",0.317,1.5058,0.563,0.243,"DNA-dependent DNA replication"),
c("GO:0009190","cyclic nucleotide biosynthetic process",0.182,2.4559,0.441,0.170,"DNA-dependent DNA replication"),
c("GO:0016310","phosphorylation",7.764,2.1427,0.583,0.398,"DNA-dependent DNA replication"),
c("GO:0006555","methionine metabolic process",0.358,2.1367,0.443,0.268,"DNA-dependent DNA replication"),
c("GO:0046854","phosphatidylinositol phosphorylation",0.173,1.7423,0.504,0.413,"DNA-dependent DNA replication"),
c("GO:0007018","microtubule-based movement",0.287,2.0555,0.606,0.030,"microtubule-based movement"),
c("GO:0007205","protein kinase C-activating G-protein coupled receptor signaling pathway",0.018,1.7423,0.645,0.057,"protein kinase C-activating G-protein coupled receptor signaling pathway"),
c("GO:0009966","regulation of signal transduction",0.857,1.4377,0.621,0.380,"protein kinase C-activating G-protein coupled receptor signaling pathway"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="Ldon2_7"]
all<-rbind(all, revigo.data)
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0006468","protein phosphorylation",4.137,6.6198,0.515,0.000,"protein phosphorylation"),
c("GO:0006259","DNA metabolic process",5.607,2.9208,0.570,0.301,"protein phosphorylation"),
c("GO:0006261","DNA-dependent DNA replication",0.576,1.8356,0.584,0.169,"protein phosphorylation"),
c("GO:0016311","dephosphorylation",1.250,1.6440,0.566,0.467,"protein phosphorylation"),
c("GO:1902531","regulation of intracellular signal transduction",0.547,2.0809,0.569,0.039,"regulation of intracellular signal transduction"),
c("GO:0070887","cellular response to chemical stimulus",1.007,1.6946,0.586,0.433,"regulation of intracellular signal transduction"),
c("GO:0032940","secretion by cell",0.763,1.6946,0.638,0.081,"secretion by cell"),
c("GO:0007006","mitochondrial membrane organization",0.065,1.6946,0.649,0.153,"secretion by cell"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="Ldon3star_18"]
all<-rbind(all, revigo.data)
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0006468","protein phosphorylation",4.137,6.7696,0.354,0.000,"protein phosphorylation"),
c("GO:0006298","mismatch repair",0.165,1.9101,0.431,0.147,"protein phosphorylation"),
c("GO:0001522","pseudouridine synthesis",0.350,1.4237,0.330,0.474,"protein phosphorylation"),
c("GO:0015698","inorganic anion transport",0.872,1.4486,0.566,0.000,"inorganic anion transport"),
c("GO:0006325","chromatin organization",0.668,2.0044,0.528,0.040,"chromatin organization"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="Ldon4_4"]
all<-rbind(all, revigo.data)
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0007018","microtubule-based movement",0.287,4.5528,0.734,0.000,"microtubule-based movement"),
c("GO:0006643","membrane lipid metabolic process",0.382,2.0706,0.779,0.171,"microtubule-based movement"),
c("GO:0007059","chromosome segregation",0.476,2.3468,0.796,0.167,"microtubule-based movement"),
c("GO:0032259","methylation",3.103,1.7773,0.876,0.000,"methylation"),
c("GO:0048193","Golgi vesicle transport",0.297,2.0410,0.819,0.000,"Golgi vesicle transport"),
c("GO:0055085","transmembrane transport",8.916,2.9586,0.815,0.396,"Golgi vesicle transport"),
c("GO:0046903","secretion",0.810,1.5143,0.783,0.269,"Golgi vesicle transport"),
c("GO:0048874","homeostasis of number of cells in a free-living population",0.029,2.8539,0.682,0.023,"homeostasis of number of cells in a free-living population"),
c("GO:0000723","telomere maintenance",0.133,1.9136,0.554,0.545,"homeostasis of number of cells in a free-living population"),
c("GO:0052106","quorum sensing involved in interaction with host",0.000,2.3872,0.695,0.692,"homeostasis of number of cells in a free-living population"),
c("GO:0044145","modulation of development of symbiont involved in interaction with host",0.000,2.7447,0.765,0.477,"homeostasis of number of cells in a free-living population"),
c("GO:0051276","chromosome organization",1.477,1.4976,0.729,0.592,"homeostasis of number of cells in a free-living population"),
c("GO:0006302","double-strand break repair",0.211,1.9318,0.747,0.027,"double-strand break repair"),
c("GO:0034470","ncRNA processing",2.222,1.3851,0.743,0.326,"double-strand break repair"),
c("GO:0006464","cellular protein modification process",7.726,7.4318,0.741,0.510,"double-strand break repair"),
c("GO:0001522","pseudouridine synthesis",0.350,1.7100,0.725,0.205,"double-strand break repair"),
c("GO:0016311","dephosphorylation",1.250,1.8041,0.824,0.056,"dephosphorylation"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="Ldon5star_7"]
all<-rbind(all, revigo.data)
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0006928","movement of cell or subcellular component",0.973,3.5850,0.492,0.000,"movement of cell or subcellular component"),
c("GO:0009405","pathogenesis",0.095,2.3990,0.640,0.000,"pathogenesis"),
c("GO:0006468","protein phosphorylation",4.137,1.8105,0.493,0.042,"protein phosphorylation"),
c("GO:0006650","glycerophospholipid metabolic process",0.543,1.5596,0.284,0.420,"protein phosphorylation"),
c("GO:0006643","membrane lipid metabolic process",0.382,1.4372,0.365,0.651,"protein phosphorylation"),
c("GO:0046903","secretion",0.810,1.8207,0.566,0.086,"secretion"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="Linf_vs_Ldon"]
all<-rbind(all, revigo.data)
revigo.names <- c("term_ID","description","freqInDbPercent","abslog10pvalue","uniqueness","dispensability","representative");
revigo.data <- rbind(c("GO:0006298","mismatch repair",0.165,2.9208,0.373,0.000,"mismatch repair"),
c("GO:0006979","response to oxidative stress",0.575,2.5229,0.390,0.509,"mismatch repair"),
c("GO:0006928","movement of cell or subcellular component",0.973,1.5482,0.507,0.030,"movement of cell or subcellular component"),
c("GO:0055085","transmembrane transport",8.916,2.6990,0.512,0.228,"movement of cell or subcellular component"),
c("GO:0060285","cilium-dependent cell motility",0.006,1.3575,0.429,0.131,"movement of cell or subcellular component"))
revigo.data <-data.table(revigo.data); setnames(revigo.data,colnames(revigo.data),revigo.names)
revigo.data[,group:="Linf1star_43"]
all<-rbind(all, revigo.data)
all<-all[order(representative)]
write.table(all, file="A13_all_REVIGO_treemap_p0.05_topGOweight_v38.txt", row.names = F, quote = T)
#--------------------
# this table is equivalent to the table saved just above but two additional columns were added manually
# that specify the colour that should be used
dat<-data.table(read.table("A13_all_REVIGO_treemap_p0.05_topGOweight_v38_col_mod.txt", header = T))
# dat[,abslog10pvalue:=abs(log10pvalue)]
dat[,color:=paste0("#",color)]
dat[,representative:=as.character(representative)]
dat[,match:=(term_ID==term_ID.1)]
for (mm in unique(dat$group))
{
# mm<-"Linf_vs_Ldon"
print(mm)
# print(unique(dat[group==mm, color]))
# print(unique(dat[group==mm, representative]))
#
pdf( file=paste0("revigo_treemap_",mm,".pdf"), width=9, height=5 ) # width and height are in inches
treemap(
dat[group==mm],
index = c("representative","description"),
vSize = "abslog10pvalue",
type = "categorical",
vColor = "representative",
title = mm,
inflate.labels = F, # set this to TRUE for space-filling group labels - good for posters
lowerbound.cex.labels = 0, # try to draw as many labels as possible (still, some small squares may not get a label)
bg.labels = "#CCCCCCAA", # define background color of group labels
# "#CCCCCC00" is fully transparent, "#CCCCCCAA" is semi-transparent grey, NA is opaque
position.legend = "none",
palette=unique(dat[group==mm, color]),
fontsize.labels = 18,
fontsize.title = 26
)
dev.off()
}
|
e36c50bc983e47f97faa6d5671985bfd789b413f | 0cb6966998569275367c1a4465f3e6c4958fcfb6 | /R/zero.ci.R | 8ca5a0bddf19612b3649e325b89b5e5c406c1f31 | [] | no_license | cran/meboot | 2f9c1c2c0351035ff449f377232854f1acd40a88 | 42e4e93fcacce7b5dc9f6fcee7803b45589d11de | refs/heads/master | 2023-09-02T22:08:24.060860 | 2023-08-22T20:20:02 | 2023-08-22T21:30:44 | 18,176,243 | 2 | 2 | null | null | null | null | UTF-8 | R | false | false | 635 | r | zero.ci.R |
zero.ci <- function(x, confl=0.05)
{
bigj <- length(x)
yle <- x[x<=0] #le=less than or equal to
m1 <- length(yle)
#m1/bigj
ygt <- x[x>0] #gt=greater than
m2 <- length(ygt)
#m2/bigj
# Want confidence interval adjusted so it covers
# the true zero (1-confl)*100 times one realization gives
# on random interval
xsort <- sort(x)
nlo <- ceiling(confl*m1)
nup <- floor(confl*m2)
if(nlo <= 0) nlo <- 1
if(nup == bigj) nup <- 0
lolim <- xsort[nlo]
uplim <- xsort[bigj-nup]
bnlo <- length(which(x<=lolim))
bnup <- length(which(x>uplim))
list(bnlo=bnlo, bnup=bnup, lolim=lolim, uplim=uplim)
}
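## Illustrative usage sketch (hypothetical data, not part of the original file):
## x <- rnorm(999, mean = 0.2) # e.g. bootstrap replicates of a statistic
## ci <- zero.ci(x, confl = 0.05)
## ci$lolim and ci$uplim then give the limits of the adjusted interval, and
## ci$bnlo / ci$bnup count the replicates at or below / above those limits.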
|
e1e4cfe4eb9228d2a74c179fc5b14693216063e5 | b7dc4b6d032ced2c7377ea893aba3d59a53e1c65 | /R/mcmc.R | bb0bb7ebd0d2f7222760d702a2abd564dd40329f | [] | no_license | carlislerainey/strategic-mobilization | b293926e44b3c74b9cb4175a7a8c23fc60e156fe | 084df80c01ad337819ef602bd712468bef74c3df | refs/heads/master | 2021-01-25T05:21:36.573554 | 2014-10-21T10:46:56 | 2014-10-21T10:46:56 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 10,131 | r | mcmc.R |
# Make sure that working directory is set properly
# setwd("~/Dropbox/projects/strategic-mobilization/")
# Clear workspace
rm(list = ls())
# Load packages
library(Amelia) # for multiple imputation
library(R2jags) # run JAGS from R
# Load the individual-level (with missingness), district-level, and country-level data.
mis.data <- read.csv("output/mi-data.csv")
district.data <- read.csv("output/district-data.csv")
country.data <- read.csv("output/country-data.csv")
n.chains <- 10 # Each chain starts by multiply imputing the data sets and then running the JAGS model
sims.weak <- NULL # place to hold mcmc simulations
sims.flat <- NULL # place to hold mcmc simulations
sims.info <- NULL # place to hold mcmc simulations
for (chain in 1:n.chains) {
###########################################################
## Start by multiply imputing the individual-level data set
###########################################################
# Create data sets from each country to be separately multiply imputed
mis.Canada <- mis.data[mis.data$Alpha.Polity == "Canada", ]
mis.Finland <- mis.data[mis.data$Alpha.Polity == "Finland", ]
mis.GreatBritain <- mis.data[mis.data$Alpha.Polity == "Great Britain", ]
mis.Portugal2002 <- mis.data[mis.data$Alpha.Polity == "Portugal 2002", ]
mis.Portugal2005 <- mis.data[mis.data$Alpha.Polity == "Portugal 2005", ]
# Impute the datasets
allmis <- names(mis.Canada) %in% c("Religious.Attendance")
mis.Canada <- mis.Canada[!allmis] # drop variables that are entirely missing from the process
mi.Canada <- amelia(mis.Canada, m = 1,
idvars = c("X", "Alpha.Polity", "District", "PR", "Number.Seats", "Country"),
lgstc = c("District.Competitiveness"),
logs = c("ENEP") ,
ords = c("Male", "Education", "Married", "Union.Member",
"Household.Income", "Urban", "Campaign.Activities",
"Freq.Campaign", "Contacted", "Cast.Ballot",
"Vote.Matters", "Cast.Ballot.Previous", "Close.To.Party",
"Ideology", "Know1", "Know2", "Know3"))
allmis <- names(mis.Finland) %in% c("Religious.Attendance")
mis.Finland <- mis.Finland[!allmis] # drop variables that are entirely missing from the process
mi.Finland <- amelia(mis.Finland, m = 1,
idvars = c("X", "Alpha.Polity", "District", "PR", "Country"),
lgstc = c("District.Competitiveness"),
logs = c("ENEP"),
ords = c("Male", "Education", "Married", "Union.Member",
"Household.Income", "Urban", "Campaign.Activities",
"Freq.Campaign", "Contacted", "Cast.Ballot",
"Vote.Matters", "Cast.Ballot.Previous", "Close.To.Party",
"Ideology", "Know1", "Know2", "Know3"))
mi.GreatBritain <- amelia(mis.GreatBritain, m = 1,
idvars = c("X", "Alpha.Polity", "District", "PR", "Number.Seats", "Country"),
lgstc = c("District.Competitiveness"),
logs = c("ENEP"),
ords = c("Male", "Education", "Married", "Union.Member",
"Household.Income", "Urban", "Campaign.Activities",
"Freq.Campaign", "Religious.Attendance", "Contacted", "Cast.Ballot",
"Vote.Matters", "Cast.Ballot.Previous", "Close.To.Party",
"Ideology", "Know1", "Know2", "Know3"))
mi.Portugal2002 <- amelia(mis.Portugal2002, m = 1,
idvars = c("X", "Alpha.Polity", "District", "PR", "Country"),
lgstc = c("District.Competitiveness"),
logs = c("ENEP"),
ords = c("Male", "Education", "Married", "Union.Member",
"Household.Income", "Urban", "Campaign.Activities",
"Freq.Campaign", "Religious.Attendance", "Contacted", "Cast.Ballot",
"Vote.Matters", "Cast.Ballot.Previous", "Close.To.Party",
"Ideology", "Know1", "Know2", "Know3"))
allmis <- names(mis.Portugal2005) %in% c("Campaign.Activities", "Freq.Campaign")
mis.Portugal2005 <- mis.Portugal2005[!allmis] # drop variables that are entirely missing from the process
mi.Portugal2005 <- amelia(mis.Portugal2005, m = 1,
idvars = c("X", "Alpha.Polity", "District", "PR", "Country"),
lgstc = c("District.Competitiveness"),
logs = c("ENEP"),
ords = c("Male", "Education", "Married", "Union.Member",
"Household.Income", "Urban", "Religious.Attendance", "Contacted", "Cast.Ballot",
"Vote.Matters", "Cast.Ballot.Previous", "Close.To.Party",
"Ideology", "Know1", "Know2", "Know3"))
analysis.vars <- c("Contacted", "Age", "Male", "Education", "Union.Member", "Household.Income", "Urban", "Close.To.Party", "District.Competitiveness", "ENEP", "PR", "Alpha.Polity", "District")
# Stack the (individual-level) multiply imputed data sets
individual.data <- rbind(mi.Canada[[1]]$imp1[, analysis.vars],
mi.GreatBritain[[1]]$imp1[, analysis.vars],
mi.Finland[[1]]$imp1[, analysis.vars],
mi.Portugal2002[[1]]$imp1[, analysis.vars],
mi.Portugal2005[[1]]$imp1[, analysis.vars])
####################################################################
## Now use the multiply imputed data to estimate the model with JAGS
####################################################################
# set the variables as objects in the environment
n <- nrow(individual.data)
n.districts <- nrow(district.data)
n.countries <- nrow(country.data)
contacted <- individual.data$Contacted
stdz <- function(x) {
x.temp <- x - min(x)
x.temp <- x.temp/max(x.temp)
x.temp <- x.temp - median(x.temp)
return(x.temp)
}
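The stdz() helper maps any covariate onto a unit-range, median-centred scale; a minimal sanity check of that behaviour (restated here so the snippet runs on its own; the toy vector is an illustrative assumption):

```r
stdz <- function(x) {               # same rescaling as above, restated so
  x.temp <- x - min(x)              # the snippet is self-contained
  x.temp <- x.temp / max(x.temp)
  x.temp - median(x.temp)
}
z <- stdz(c(10, 20, 30, 40, 50))
stopifnot(abs(diff(range(z)) - 1) < 1e-12)  # output spans a unit range
stopifnot(abs(median(z)) < 1e-12)           # median recentred to zero
```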
age <- stdz(individual.data$Age)
male <- stdz(individual.data$Male)
education <- stdz(individual.data$Education)
union <- stdz(individual.data$Union.Member)
income <- stdz(individual.data$Household.Income)
urban <- stdz(individual.data$Urban)
close <- stdz(individual.data$Close.To.Party)
competitiveness <- district.data$District.Competitiveness
smdp <- country.data$SMDP
district <- individual.data$District
country <- district.data$Country
X <- cbind(age, male, education, union, income, urban, close)
# Set up prior parameters
d0 <- rep(0, 2)
D0 <- matrix(0, ncol = 2, nrow = 2)
diag(D0) <- .001
d1 <- rep(0, 2)
D1 <- matrix(0, ncol = 2, nrow = 2)
diag(D1) <- .001
# Set up objects for jags call
data <- list(contacted=contacted,
n=n,
n.districts=n.districts,
n.countries=n.countries,
district=district,
country=country,
age=age,
male=male,
education=education,
union=union,
income=income,
urban=urban,
close=close,
competitiveness=competitiveness,
smdp=smdp,
d0=d0,
D0=D0,
d1=d1,
D1=D1)
param <- c("alpha",
"beta.age", "beta.male", "beta.edu", "beta.union", "beta.income",
"beta.urban", "beta.close",
"gamma0", "gamma1",
"delta0", "delta1",
"sigma.alpha", "sigma.gamma0", "sigma.gamma1", "rho",
"p.track")
inits <- function() {
list ("alpha" = rnorm(n.districts),
"Gamma" = array(rnorm(2*n.countries), c(n.countries, 2)),
"beta.age" = rnorm(1, 0, 2),
"beta.male" = rnorm(1, 0, 2),
"beta.edu" = rnorm(1, 0, 2),
"beta.union" = rnorm(1, 0, 2),
"beta.income" = rnorm(1, 0, 2),
"beta.urban" = rnorm(1, 0, 2),
"beta.close" = rnorm(1, 0, 2),
"delta0" = rnorm(2),
"delta1" = rnorm(2),
"sigma.alpha" = runif(1, 0, 10),
"sigma.gamma0" = runif(1, 0, 10),
"sigma.gamma1" = runif(1, 0, 10),
"rho" = runif(1, -1, .1))
}
n.thin <- 10
n.iter <- 15000
n.burnin <- 5000
# Weakly informative variance priors
m.weak <- jags(model.file = "bugs/weak.bugs",
data = data,
inits = inits,
param = param,
n.chains = 1,
n.thin = n.thin,
n.burnin = n.burnin,
n.iter = n.iter,
DIC = FALSE)
sims.weak <- rbind(sims.weak, m.weak$BUGSoutput$sims.matrix)
# flat variance priors
m.flat <- jags(model.file = "bugs/flat.bugs",
data = data,
inits = inits,
param = param,
n.chains = 1,
n.thin = n.thin,
n.burnin = n.burnin,
n.iter = n.iter,
DIC = FALSE)
sims.flat <- rbind(sims.flat, m.flat$BUGSoutput$sims.matrix)
# informative variance priors
m.info <- jags(model.file = "bugs/info.bugs",
data = data,
inits = inits,
param = param,
n.chains = 1,
n.thin = n.thin,
n.burnin = n.burnin,
n.iter = n.iter,
DIC = FALSE)
sims.info <- rbind(sims.info, m.info$BUGSoutput$sims.matrix)
}
save(sims.weak, sims.info, sims.flat, file = "output/mcmc-sims.RData")
|
f93301841bc62f49b0752ae04a3ddc5d300621ec | f70031dbc28b916f433ef6f0106dfebbfc8c9338 | /cachematrix.R | 79dd425b0ffcf751097aa5173a012eceba6e6214 | [] | no_license | Calctron/ProgrammingAssignment2 | fc8ea107e509da6cc16c740f3e07e29bb8db7336 | 3dd3da4ac9ba1379a39f9c5f5f805dcf449ff0cb | refs/heads/master | 2022-02-11T06:41:03.712760 | 2019-07-22T20:41:52 | 2019-07-22T20:41:52 | 198,129,103 | 0 | 0 | null | 2019-07-22T02:05:38 | 2019-07-22T02:05:37 | null | UTF-8 | R | false | false | 953 | r | cachematrix.R | # These functions create a list to cache a matrix and its inverse
# and calculate the inverse if it is not already cached.
# This function creates a list that stores the matrix a user
# gives as input and its inverse.
makeCacheMatrix <- function(x = matrix()) {
i <- NULL
set <- function(y){
x <<- y
i <<- NULL
}
get <- function() x
setinverse <- function(inverse) i <<- inverse
getinverse <- function() i
list(set = set, get = get,
setinverse = setinverse,
getinverse = getinverse)
}
# This function returns the inverse of a matrix if it is cached
# or calculates the inverse of a matrix and caches it.
cacheSolve <- function(x, ...) {
i <- x$getinverse()
if(!is.null(i)) {
message("getting cached data")
return(i)
}
data <- x$get()
i <- solve(data, ...)
x$setinverse(i)
i
} |
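The two functions rely on R's `<<-` operator writing into the enclosing environment; a minimal self-contained sketch of that closure-caching pattern (the names here are illustrative, not part of the original):

```r
# Same caching idea in miniature: a closure memoises one computed value
# via `<<-`, so repeat calls return the cached result.
make_cached <- function(compute) {
  value <- NULL
  function(x) {
    if (is.null(value)) value <<- compute(x)
    value
  }
}
cached_inv <- make_cached(solve)
m <- matrix(c(2, 0, 0, 2), 2, 2)
inv1 <- cached_inv(m)                  # computed on first call
inv2 <- cached_inv(m)                  # served from the cache
stopifnot(identical(inv1, inv2))
stopifnot(all(inv1 %*% m == diag(2)))  # it really is the inverse
```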
3d0a67a4fbf0e066496a5ee163b42cab024be7b3 | 6a28ba69be875841ddc9e71ca6af5956110efcb2 | /Numerical_Methods_In_Finance_And_Economics:_A_Matlab-Based_Introduction_by_Paolo_Brandimarte/CH8/EX8.3/Page_436_StopLoss.R | b6c4024f7a76d950110d229f3a2687797588d89b | [] | permissive | FOSSEE/R_TBC_Uploads | 1ea929010b46babb1842b3efe0ed34be0deea3c0 | 8ab94daf80307aee399c246682cb79ccf6e9c282 | refs/heads/master | 2023-04-15T04:36:13.331525 | 2023-03-15T18:39:42 | 2023-03-15T18:39:42 | 212,745,783 | 0 | 3 | MIT | 2019-10-04T06:57:33 | 2019-10-04T05:57:19 | null | UTF-8 | R | false | false | 1,420 | r | Page_436_StopLoss.R | require(pracma)
require(tictoc)
StopLoss <- function(S0,K,mu,sigma,r,T,Paths) {
NRepl = dim(Paths)[1]
NSteps = dim(Paths)[2]
NSteps = NSteps - 1
# true number of steps
Cost = matrix(0,NRepl,1)
dt = T/NSteps
DiscountFactors = exp(-r*(seq(0,NSteps,1)*dt))
for (k in 1:NRepl){
CashFlows = matrix(0,NSteps+1,1)
if (Paths[k,1] >= K){
Covered = 1
CashFlows[1] = -Paths[k,1]
} else {
Covered = 0
}
for (t in 2:(NSteps+1)){
if ((Covered == 1) & (Paths[k,t] < K)){
# Sell
Covered = 0
CashFlows[t] = Paths[k,t]
} else if ((Covered == 0) & (Paths[k,t] > K)){
# Buy
Covered = 1
CashFlows[t] = -Paths[k,t]
}
}
if (Paths[k,NSteps + 1] >= K){
# Option is exercised
CashFlows[NSteps + 1] = CashFlows[NSteps + 1] + K
}
Cost[k] = - dot(DiscountFactors,CashFlows)
}
return(mean(Cost))
}
AssetPaths <- function(S0,mu,sigma,T,NSteps,NRepl) {
SPaths = matrix(0,NRepl, 1+NSteps)
SPaths[,1] = S0
dt = T/NSteps
nudt = (mu-0.5*sigma^2)*dt
sidt = sigma*sqrt(dt)
for (i in 1:NRepl){
for (j in 1:NSteps){
SPaths[i,j+1]=SPaths[i,j]*exp(nudt + sidt*rnorm(1))
}
}
return(SPaths)
}
S0 = 50
K = 50
mu = 0.1
sigma = 0.4
r = 0.05
T = 5/12
NRepl =100000
NSteps = 10
set.seed(39473)
Paths=AssetPaths(S0,mu, sigma,T,NSteps,NRepl)
tic()
StopLoss(S0,K,mu,sigma,r,T,Paths)
toc() |
9e817814790f5d552fdac0f37d6b06da1680902b | 68912c6d14de1ddbced5ece23eb7ba3531ba817b | /R/dontTouch.R | 49c80a86f315d73a6a7291ef7c3b08036a4f3ec7 | [] | no_license | Auburngrads/DODschools | 8defd7540c6378dd4ea448c47de4a543884ba472 | ab790d0f3a3ad3b45c47a742cc0db90db260ddc9 | refs/heads/master | 2022-11-21T17:13:17.340805 | 2020-07-20T13:13:12 | 2020-07-20T13:13:12 | 93,908,492 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 2,158 | r | dontTouch.R | #' Extract YAML data from metadata.yml
#'
#' @param file Path to the YAML file
#'
#' @importFrom yaml yaml.load_file
dontTouch <- function(file = NULL) {
yaml <- yaml::yaml.load_file(file)
  # Collapse each affiliationN block into one comma-separated affilN string
  for (i in 1:4) {
    aff <- yaml[[paste0("affiliation", i)]]
    yaml[[paste0("affil", i)]] <- paste(c(aff$university_name,
                                          aff$faculty_group,
                                          aff$department,
                                          aff$street_address,
                                          aff$state,
                                          aff$city,
                                          aff$country,
                                          aff$postal_code),
                                        collapse = ', ')
  }
}
|
51a262cc5bf6e2ed1c342474af68cc1b45172f1a | 5343587b76fdadeb0dbd8c5d2a4ce3e6422af6fb | /RProjects/Plots.R | 5f011ea66f904d4b4c290c9c38af0bc0a15a8983 | [
"Apache-2.0"
] | permissive | Ankusha/yarn-monitoring | 2822bf9a1c75967d678e489e6deeee13f3f41c81 | edf02d944029dc87f156b67c6e494dfd42c7e830 | refs/heads/master | 2021-01-18T10:18:56.339237 | 2014-07-15T23:10:05 | 2014-07-15T23:10:05 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,319 | r | Plots.R | source("RProjects/MRRun.R")
source("RProjects/MRTest.R")
library(Hmisc)  # errbar() used below comes from Hmisc
plotInputBytesRead<-function(mrtest)
{
res<-vector()
for(i in 1:length(mrtest))
{
res<-c(res, mean(getInputBytesOfMapTasks.mrrun(mrtest[[i]])/(1024*1024)))
}
plot(res,type="l", xlab="run",ylab="mbytes")
}
plotElapsedMapTimesStat<-function(mrtest, add=FALSE)
{
means<-vector()
mins<-vector()
maxs<-vector()
sds<-vector()
for( i in 1:length(mrtest))
{
means<-c(means,mean(getElapsedTimesOfTasks.mrrun(mrtest[[i]])))
mins<-c(mins,min(getElapsedTimesOfTasks.mrrun(mrtest[[i]])))
maxs<-c(maxs,max(getElapsedTimesOfTasks.mrrun(mrtest[[i]])))
sds<-c(sds,sd(getElapsedTimesOfTasks.mrrun(mrtest[[i]])))
}
# par(fg="black")
errbar(1:length(mrtest),means, maxs, mins, errbar.col="red", ylab="ms", xlab="runs", main="Mean, min, max elapsed times per run", add=add)
# par(fg="red")
errbar(1:length(mrtest),means, means+sds, means-sds , lwd=2, add=TRUE, ylab="ms", xlab="runs", main="Mean, min, max elapsed times per run")
#par(fg="black")
}
plotInputRecordsProcessedPerSec<-function(mrtest)
{
for( i in 1:length(mrtest))
{
bytespersec<-getInputBytesOfMapTasks.mrrun(mrtest[[i]])/getElapsedTimesOfMapTasks.mrrun(mrtest[[i]])
if (i==1)
plot(bytespersec, type="l")
else
lines(bytespersec, col=i)
}
}
plotElapsedTimesByRun<-function(mrtest)
{
res<-vector()
for( i in 1:length(mrtest))
{
res<-c(res,getElapsedTime.mrrun(mrtest[[i]]))
}
barplot(res, names.arg=1:length(mrtest),xlab="run",ylab="ms")
}
plotMeanInputRecordsProcessedPerSecondByRun<-function(mrtest)
{
res<-vector()
for( i in 1:length(mrtest))
{
res<-c(res, mean(1000*getInputRecordsOfMapTasks.mrrun(mrtest[[i]])/getElapsedTimesOfMapTasks.mrrun(mrtest[[i]])))
}
barplot(res, names.arg=1:length(mrtest), xlab="run",ylab="records")
}
plotMeanBytesProcessedPerSecondByRun<-function(mrtest)
{
res<-vector()
for( i in 1:length(mrtest))
{
res<-c(res, mean((1000/(1024*1024))*getInputBytesOfMapTasks.mrrun(mrtest[[i]])/getElapsedTimesOfMapTasks.mrrun(mrtest[[i]])))
}
barplot(res, names.arg=1:length(mrtest), xlab="run",ylab="mbytes")
}
plotMeanInputBytesPerSecByNode<-function(mrtest, minmax=TRUE)
{
result<-list()
for( i in 1:length(mrtest))
{
res<-getInputBytesReadByNodesAndTasks.mrrun(mrtest[[i]])
result<-merge_items(res, result)
}
print(result)
means<-vector()
mins<-vector()
maxs<-vector()
sds<-vector()
res<-result[sort(names(result))]
for( n in 1:length(res))
{
means<-c(means, mean(res[[n]]))
mins<-c(mins, min(res[[n]]))
maxs<-c(maxs, max(res[[n]]))
sds<-c(sds,sd(res[[n]]))
}
if ( minmax)
errbar(names(res),means, maxs, mins, ylab="mbytes", xlab="nodes", main="Mean, min, max elapsed times per node")
else
errbar(names(res),means, means+sds, means-sds, ylab="mbytes", xlab="nodes", main="Mean +- stdev elapsed times per node")
}
plotMeanInputRecordsPerSecByNode<-function(mrtest, minmax=TRUE)
{
result<-list()
for( i in 1:length(mrtest))
{
res<-getInputRecordsReadByNodesAndTasks.mrrun(mrtest[[i]])
result<-merge_items(res, result)
}
means<-vector()
mins<-vector()
maxs<-vector()
sds<-vector()
res<-result[sort(names(result))]
for( n in 1:length(res))
{
means<-c(means, mean(res[[n]]))
mins<-c(mins, min(res[[n]]))
maxs<-c(maxs, max(res[[n]]))
sds<-c(sds,sd(res[[n]]))
}
if ( minmax)
errbar(names(res),means, maxs, mins, ylab="records", xlab="nodes", main="Mean, min, max elapsed times per node")
else
errbar(names(res),means, means+sds, means-sds, ylab="records", xlab="nodes", main="Mean +- stdev elapsed times per node")
res
}
merge_items<-function(items1, items2)
{
names<-names(items1)
for(i in 1:length(items1))
{
if ( is.null(items2[[names[i]]]))
items2[[names[i]]]<-items1[[i]]
else
items2[[names[i]]]<-c(items2[[names[i]]],items1[[i]])
}
items2
}
plotMeanElapsedMapTimesByNode<-function(mrtest, taskType="MAP", minmax=TRUE)
{
result<-list()
for( i in 1:length(mrtest))
{
res<-getMapElapsedTimesByNodesAndTasks.mrrun(mrtest[[i]])
result<-merge_items(res, result)
}
means<-vector()
mins<-vector()
maxs<-vector()
sds<-vector()
res<-result[sort(names(result))]
for( n in 1:length(res))
{
means<-c(means, mean(res[[n]]))
mins<-c(mins, min(res[[n]]))
maxs<-c(maxs, max(res[[n]]))
sds<-c(sds,sd(res[[n]]))
}
if ( minmax)
errbar(names(res),means, maxs, mins, ylab="ms", xlab="nodes", main="Mean, min, max elapsed times per node")
else
errbar(names(res),means, means+sds, means-sds, ylab="ms", xlab="nodes", main="Mean +- stdev elapsed times per node")
}
plotJobElapsedTimes<-function(mrtest)
{
res<-vector()
for(i in 1:length(mrtest))
{
res<-c(res,getElapsedTime.mrrun(mrtest[[i]]))
}
barplot(res, names.arg=1:length(mrtest), width=rep(1,length(mrtest)), space=0)
}
plotJobStartTimes<-function(mrtest)
{
res<-vector()
for(i in 1:length(mrtest))
{
res<-c(res,mrtest[[i]]$job$job$startTime-mrtest[[1]]$job$job$startTime)
}
barplot(res, names.arg=1:length(mrtest), width=rep(1,length(mrtest)), space=0)
}
plotJobElapsedTimesMeansForTests<-function(names, ...)
{
resTasks<-vector()
resJobs<-vector()
tests<-list(...)
print(names(tests[[1]]))
for( t in 1:length(tests))
{
resJobs<-c(resJobs,mean(getElapsedTimesOfRuns.mrtest(tests[[t]])))
resTasks<-c(resTasks,mean(getElapsedTimesOfTasks.mrtest(tests[[t]])))
}
plot(resJobs,type="b", ylab="ms",xlab="test", ylim=c(min(resJobs,resTasks),max(resJobs,resTasks)), axes=FALSE)
lines(resTasks,col="red")
points(resTasks,col="red")
axis(side=1,at=c(1,2,3,4),labels=names)
axis(side=2)
}
plotJobElapsedTimesMaxsForTests<-function(names, ...)
{
resTasks<-vector()
resJobs<-vector()
tests<-list(...)
for( t in 1:length(tests))
{
resTasks<-c(resTasks,max(getElapsedTimesOfTasks.mrtest(tests[[t]])))
resJobs<-c(resJobs,max(getElapsedTimesOfRuns.mrtest(tests[[t]])))
}
plot(resJobs,type="b", ylab="ms",xlab="test", ylim=c(min(resJobs,resTasks),max(resJobs,resTasks)), axes=FALSE)
lines(resTasks,col="red")
points(resTasks,col="red")
axis(side=1,at=c(1,2,3,4),labels=names)
axis(side=2)
}
|
15573adbd0029d4c4a11268f75c41c9e014db358 | 2f6361c852232dbece768fcc2ceca2d617727410 | /R/01_get_cubes.R | 1f3b262450c15be762172f10a878e9d6e266310b | [] | no_license | Datawheel/datachile-r | 53b2de5027cd129b149eed851e1284f3ee8de338 | e75a5dacba18a1a34f05c6555b080d2b3718af77 | refs/heads/master | 2021-01-25T13:41:23.822015 | 2018-03-02T20:00:18 | 2018-03-02T20:00:18 | 123,605,225 | 7 | 1 | null | null | null | null | UTF-8 | R | false | false | 776 | r | 01_get_cubes.R | #' Obtain available cubes
#' @export
#' @keywords functions
get_cubes <- function() {
ua <- user_agent("httr")
url <- "https://chilecube.datawheel.us/cubes"
resp <- GET(url, ua)
if (http_type(resp) != "application/json") {
stop("API did not return json", call. = FALSE)
}
parsed <- fromJSON(content(resp, "text"), simplifyVector = TRUE)
if (http_error(resp)) {
stop(
sprintf(
"DataChile API request failed [%s]\n%s\n<%s>",
status_code(resp),
parsed$message,
parsed$documentation_url
),
call. = FALSE
)
}
parsed <- tibble(cube = parsed$cubes$name) %>% distinct()
structure(
list(
cubes = parsed,
path = url,
response = resp
),
class = "datachile_api"
)
}
|
b2127fd88f823c7dd3f43013e50ff62d6c5b3956 | 753e3ba2b9c0cf41ed6fc6fb1c6d583af7b017ed | /service/paws.cloudformation/man/paws.cloudformation-package.Rd | 5c2ed6cdab9d3fb8da6bb139242ea435e9c9cad8 | [
"Apache-2.0"
] | permissive | CR-Mercado/paws | 9b3902370f752fe84d818c1cda9f4344d9e06a48 | cabc7c3ab02a7a75fe1ac91f6fa256ce13d14983 | refs/heads/master | 2020-04-24T06:52:44.839393 | 2019-02-17T18:18:20 | 2019-02-17T18:18:20 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 1,008 | rd | paws.cloudformation-package.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/paws.cloudformation_package.R
\docType{package}
\name{paws.cloudformation-package}
\alias{paws.cloudformation}
\alias{paws.cloudformation-package}
\title{paws.cloudformation: AWS CloudFormation}
\description{
AWS CloudFormation allows you to create and manage
AWS infrastructure deployments predictably and repeatedly. You can use
AWS CloudFormation to leverage AWS products, such as Amazon Elastic
Compute Cloud, Amazon Elastic Block Store, Amazon Simple Notification
Service, Elastic Load Balancing, and Auto Scaling to build
highly-reliable, highly scalable, cost-effective applications without
creating or configuring the underlying AWS infrastructure.
}
\author{
\strong{Maintainer}: David Kretch \email{david.kretch@gmail.com}
Authors:
\itemize{
\item Adam Banker \email{adam.banker39@gmail.com}
}
Other contributors:
\itemize{
\item Amazon.com, Inc. [copyright holder]
}
}
\keyword{internal}
|
92b860ba6083438678a6f196f4ec8ac8c4a8dfe3 | 9d953f2653fe4f86be11e778eddbe1c9f287717b | /Plot5.R | 7cd8edca696bd971882306de81b3fa7907a47e58 | [] | no_license | ww44ss/EDA_Project2 | b7feb007af504dc8c21b3f1ed70a8845f6dd0277 | 33d9493b8846cdc85411f05702f37dc46ed98666 | refs/heads/master | 2021-01-01T16:00:11.057034 | 2014-09-23T14:08:39 | 2014-09-23T14:08:39 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,461 | r | Plot5.R | ## EDA Project 2
## WAS
## August 2014
## Plot5.R
## Creates a plot of total PM2.5 emissions for Question 1.
## Get packages
require(RDS)
require(ggplot2)
## GET DATA
## this simple instruction just checks to see if the data are already in memory and if not then rread them in.
## it turns out this has been a huge timesaver FYI.
if(exists("data25") && data25[1,1]=="09001") {print("data loaded")} else {
## Create file paths and read data
d <- getwd()
data25r <- readRDS(paste0(d, "/exdata-data-NEI_data/summarySCC_PM25.rds"))
dataclass <- readRDS(paste0(d, "/exdata-data-NEI_data/Source_Classification_Code.rds"))
## DATA CLEANING
## Get rid of incomplete cases
data25 <- data25r[complete.cases(data25r),]
## Create sample to test program changes wit shorter processing times
## Use this expression to create a sample of 20000 random elements of the data to reduce processing time
## data25 <- data25[c(1, sample(1:16497651,20000,replace=T)),]
#merge data sets (PM2.5 and class)
mergeddata <- merge(data25, dataclass, by = "SCC")
}
## DATA ANALYSIS
## Select "ON-ROAD" sources
onroad <- mergeddata[grepl("ON-ROAD", mergeddata$type), ]
## select Baltimore fips (24510 = Baltimore City)
e <- onroad[grepl("24510", onroad$fips), ]
## sum
aggpol <- aggregate(e$Emissions, list(year = e$year), sum)
colnames(aggpol) <- c("year", "totalPM25")
## PLOTS
## plot on screen
plot(aggpol, type="o", col=2, lty=1, ylim=c(0, 1000), lwd = 2, xlim= c(1998, 2008), ylab=expression('Total Baltimore ON-ROAD PM'[2.5]))
abline(lm(aggpol$totalPM25~aggpol$year),col="blue", lty=2, lwd=1)
title(main = expression("Total Baltimore ON-ROAD PM"[2.5]*" by year"))
a<-qplot(year, totalPM25, data=aggpol, geom=c("point", "smooth"), method="lm", ylim=c(0, NA), main = expression("Total Baltimore ON-ROAD PM"[2.5]*" by year") )
print(a)
## Store plot to .png file
png(filename=paste0(d, "/WaCoPlot5.png"))
plot(aggpol, type="o", col=2, lty=1, ylim=c(0, 1000), lwd = 2, xlim= c(1998, 2008), ylab=expression('Total Baltimore ON-ROAD PM'[2.5]))
abline(lm(aggpol$totalPM25~aggpol$year),col="blue", lty=2, lwd=1)
title(main = expression("Total Baltimore ON-ROAD PM"[2.5]*" by year"))
dev.off()
png(filename=paste0(d, "/WaCoPlot5gg.png"))
a<-qplot(year, totalPM25, data=aggpol, geom=c("point", "smooth"), method="lm", ylim=c(0, NA), main = expression("Total Baltimore ON-ROAD PM"[2.5]*" by year") )
print(a)
dev.off()
##
## End |
5777d030daf4e7a4c5bbe4dd27f43c0a52435243 | 86ec5a7a1fbacca6ffff7224fd97c2a7684a6cfa | /cnv_analysis/gCSI/plotDrugLR.R | 5dee15fec75ccf10997533a4c421d5ef1616088a | [] | no_license | bhklab/CellLineConcordance | f07cadd4a45236d926964a8f6a5cc8e38efb4a91 | acdb1ef9b7da474805d47642ac0abb4484982412 | refs/heads/master | 2021-03-27T09:23:50.147393 | 2019-07-31T14:01:06 | 2019-07-31T14:01:06 | 90,164,953 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,128 | r | plotDrugLR.R | library(gplots)
library(gtools)
library(scales)
library(reshape)
library(ggplot2)
library("RColorBrewer")
plotNoncordantCl <- function(x, title, target, binarize=FALSE,
nonmatch.threshold=0.7, ambiguous.threshold=0.8,
order=TRUE){
myPalette <- colorRampPalette(c(mismatch.col, "white"), bias=1, interpolate="linear")
x.m <- melt(as.matrix(x))
colnames(x.m) <- c("cell.line", "dataset", "concordance")
x.m$dataset <- factor(x.m$dataset, levels=target)
if(order) x.m$cell.line <- factor(x.m$cell.line, levels = rownames(x))
if(binarize) {
x.m$bin <- cut(x.m$concordance,
breaks=c(0, nonmatch.threshold,
ambiguous.threshold, 1))
sc <- scale_fill_manual(values = myPalette(3), na.value="grey50")
} else {
x.m$bin <- round(x.m$concordance,2)
sc <- scale_fill_gradientn(colours = myPalette(100), na.value="grey50")
}
p <- ggplot(x.m, aes(dataset, cell.line)) +
geom_tile(aes(fill = bin), colour = "black") +
sc
p + theme_bw() +
ggtitle(title) +
labs(x="", y="") +
theme(legend.position = "right",
axis.ticks = element_blank(),
axis.ticks.length=unit(0.85, "cm"),
axis.text.x = element_text(size = 10, angle = 90, hjust = 0, colour = "black")) +
coord_fixed(ratio=1)
}
orderRows <- function(dat, order.metric='min', ret.na=FALSE, row.idx=NULL){
print(paste0("Ordering metric: ", order.metric))
na.idx <- which(apply(dat, 1, function(x) all(is.na(x))))
if(length(na.idx) > 0){
naomit.dat <- dat[-na.idx,]
onlyna.dat <- dat[na.idx,]
} else {
naomit.dat <- dat
onlyna.dat <- NULL
}
switch(order.metric,
min={
ord.idx <- order(apply(naomit.dat, 1, min, na.rm=TRUE), decreasing = TRUE)
},
mean={
ord.idx <- order(apply(naomit.dat, 1, mean, na.rm=TRUE), decreasing = TRUE)
},
max={
ord.idx <- order(apply(naomit.dat, 1, max, na.rm=TRUE), decreasing = TRUE)
},
custom={
ord.idx <- match(row.idx, rownames(naomit.dat))
}
)
dat <- naomit.dat[ord.idx,]
if(ret.na & (length(na.idx) > 0)) dat <- rbind(dat, onlyna.dat)
dat
}
genSummMetrics <- function(){
load(paste0('nBraw', ".GNE_matchdf.Rdata"))
nb.anno.df <- orderRows(match.anno.df, order.metric='min', ret.na=TRUE)
load(paste0('nAraw', ".GNE_matchdf.Rdata"))
na.anno.df <- orderRows(match.anno.df, order.metric='min', ret.na=TRUE)
nAB.list <- list(as.numeric(na.anno.df), as.numeric(nb.anno.df))
means <- round(sapply(nAB.list, mean, na.rm=TRUE),2)
sds <- round(sapply(nAB.list, sd, na.rm=TRUE),2)
print(paste("Pearson correlation of: ", means, "+/-", sds))
threshold <- function(x, case, thresh=0.8, thresh.lo=0.6) {
if(all(is.na(x))){
res <- FALSE
} else {
switch(case,
"ge"= res <- all(x >= thresh, na.rm=TRUE),
"le" = res <- all(x <= thresh, na.rm=TRUE),
"gal" = res <- all(x < thresh & x > thresh.lo, na.rm=TRUE))
}
return(res)
}
match.a <- table(apply(na.anno.df, 1, function(x) threshold(x, 'ge', 0.8)))[2]
match.b <- table(apply(nb.anno.df, 1, function(x) threshold(x, 'ge', 0.8)))[2]
ambig.a <- table(apply(na.anno.df, 1, function(x) threshold(x, 'gal', 0.8, 0.6)))[2]
ambig.b <- table(apply(nb.anno.df, 1, function(x) threshold(x, 'gal', 0.8, 0.6)))[2]
disc.a <- table(apply(na.anno.df, 1, function(x) threshold(x, 'le', 0.6)))[2]
disc.b <- table(apply(nb.anno.df, 1, function(x) threshold(x, 'le', 0.6)))[2]
na.a <- table(apply(na.anno.df, 1, function(x) all(is.na(x))))[2]
na.b <- table(apply(nb.anno.df, 1, function(x) all(is.na(x))))[2]
summ.df<- data.frame("A" = c(match.a, ambig.a, disc.a, na.a),
"B" = c(match.b, ambig.b, disc.b, na.b))
summ.df
}
mismatch.col <- 'magenta'
match.col <- 'brown'
#### COPY NUMBER
setwd("/mnt/work1/users/bhklab/Projects/cell_line_clonality/total_cel_list/datasets/rdataFiles/cnv/output")
out.dir <- "/mnt/work1/users/bhklab/Projects/cell_line_clonality/total_cel_list/datasets/rdataFiles/cnv/output/GNE_summary_plots"
load(paste0('nBraw', ".GNE_matchdf.Rdata"))
match.anno.df <- orderRows(match.anno.df, order.metric='min', ret.na=TRUE)
names.x <- rownames(match.anno.df)
gen.summ.metrics <- FALSE
if(gen.summ.metrics) genSummMetrics()
for (hscr in c("nAraw", "nBraw")) {
#hscr <- 'nAraw'
#load(paste0(hscr, ".GNE.corr.Rdata"))
load(paste0(hscr, ".GNE_matchdf.Rdata"))
#match.anno.df <- orderRows(match.anno.df, order.metric='min', ret.na=TRUE)
match.anno.df <- orderRows(match.anno.df, order.metric='custom', ret.na=FALSE, row.idx=names.x)
pdf(file.path(out.dir, paste0(hscr, ".conc.pdf")), height = 100)
colnames(match.anno.df) <- toupper(colnames(match.anno.df))
plotNoncordantCl(match.anno.df, paste0("GNE-", hscr), colnames(match.anno.df),
binarize=TRUE, 0.65, 0.8, TRUE)
plotNoncordantCl(match.anno.df, paste0("GNE-", hscr), colnames(match.anno.df),
binarize=FALSE, 0.65, 0.8, TRUE)
dev.off()
cat(paste0("scp quever@mordor:", file.path(out.dir, paste0(hscr, ".conc.pdf .\n"))))
}
#### GENOTYPE
setwd("/mnt/work1/users/bhklab/Projects/cell_line_clonality/total_cel_list/datasets/2017_GNE/snp_fingerprint/output")
out.dir <- getwd()
load("GNE.genotypeConcordance.Rdata")
#match.anno.df <- orderRows(match.anno.df, order.metric='min', ret.na=TRUE)
match.anno.df <- orderRows(match.anno.df, order.metric='custom', ret.na=FALSE, row.idx=names.x)
pdf(file.path(out.dir, "genotype.conc.pdf"), height = 100)
colnames(match.anno.df) <- toupper(colnames(match.anno.df))
plotNoncordantCl(match.anno.df, "GNE-genotype", colnames(match.anno.df),
binarize=TRUE, 0.7, 0.8, TRUE)
plotNoncordantCl(match.anno.df, "GNE-genotype", colnames(match.anno.df),
binarize=FALSE, 0.7, 0.8, order=TRUE)
dev.off()
cat(paste0("scp quever@mordor:", file.path(out.dir, "genotype.conc.pdf"), " .\n"))
|
3b14cab99d12876938c564fddab6ce896ba7e350 | 6b93feb6f3e914fd50a2a0e125c1a8b22f533b8b | /Reporte_Grafico_ALL_Vphase.R | 478962f846c53d13aefc6aab4d6c339b7942d5ea | [] | no_license | porrasgeorge/ReporteGuana | e9c3e507c438fc427356de06e5f1ac2c91744c3b | 201b6246ba3c295772d315b1ce295d9a3068a225 | refs/heads/master | 2021-05-17T04:18:22.614953 | 2020-07-05T19:40:17 | 2020-07-05T19:40:17 | 250,619,057 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 11,792 | r | Reporte_Grafico_ALL_Vphase.R | Vphase_Report <- function(initial_dateCR, period_time, sources) {
## Constantes
# tension nomimal
tn <- 14376.02
limit087 <- 0.87 * tn
limit091 <- 0.91 * tn
limit093 <- 0.93 * tn
limit095 <- 0.95 * tn
limit105 <- 1.05 * tn
limit107 <- 1.07 * tn
limit109 <- 1.09 * tn
limit113 <- 1.13 * tn
## Conexion a SQL Server
channel <- odbcConnect("SQL_ION", uid = "R", pwd = "Con3$adm.")
#Carga de Tabla Quantity
quantity <-
sqlQuery(channel ,
"select top 1500000 ID, Name from Quantity where Name like 'Vphase%'")
odbcCloseAll()
## Filtrado de Tabla Quantity
quantity <- quantity %>% filter(grepl("^Vphase [abc]$", Name))
quantity$Name <- c('Van', 'Vbn', 'Vcn')
## Rango de fechas del reporte
## fecha inicial en local time
initial_date <- with_tz(initial_dateCR, tzone = "UTC")
final_date <- initial_date + period_time
########################################################################################
# #Carga de Tabla Quantity
# quantity <- sqlQuery(channel , "select top 1500000 ID, Name from Quantity where Name like 'Voltage%'")
# odbcCloseAll()
#
# ## Filtrado de Tabla Quantity
# quantity <- quantity %>% filter(grepl("^Voltage on Input V[123] Mean - Power Quality Monitoring$", Name))
# quantity$Name <- c('Van', 'Vbn', 'Vcn')
########################################################################################
channel <- odbcConnect("SQL_ION", uid = "R", pwd = "Con3$adm.")
  ## Load DataLog2 table
sources_ids <- paste0(sources$ID, collapse = ",")
quantity_ids <- paste0(quantity$ID, collapse = ",")
dataLog2 <-
sqlQuery(
channel ,
paste0(
"select top 500000 * from DataLog2 where ",
"SourceID in (",
sources_ids,
")",
" and QuantityID in (",
quantity_ids,
")",
" and TimestampUTC >= '",
initial_date,
"'",
" and TimestampUTC < '",
final_date,
"'"
)
)
odbcCloseAll()
  ## Column transformations
dataLog2$TimestampUTC <- as_datetime(dataLog2$TimestampUTC)
dataLog2$TimestampCR <-
with_tz(dataLog2$TimestampUTC, tzone = "America/Costa_Rica")
dataLog2$TimestampUTC <- NULL
dataLog2$ID <- NULL
  ## Join tables, drop unneeded columns, and categorize values
dataLog <-
dataLog2 %>% left_join(quantity, by = c('QuantityID' = "ID")) %>%
left_join(sources, by = c('SourceID' = "ID"))
#rm(dataLog2)
names(dataLog)[names(dataLog) == "Name"] <- "Quantity"
dataLog$SourceID <- NULL
dataLog$QuantityID <- NULL
dataLog$DisplayName <- NULL
  ## Voltage band flags relative to nominal voltage (direct logical comparisons)
  dataLog$TN087     <- dataLog$Value <= limit087
  dataLog$TN087_091 <- dataLog$Value > limit087 & dataLog$Value <= limit091
  dataLog$TN091_093 <- dataLog$Value > limit091 & dataLog$Value <= limit093
  dataLog$TN093_095 <- dataLog$Value > limit093 & dataLog$Value <= limit095
  dataLog$TN095_105 <- dataLog$Value > limit095 & dataLog$Value <= limit105
  dataLog$TN105_107 <- dataLog$Value > limit105 & dataLog$Value <= limit107
  dataLog$TN107_109 <- dataLog$Value > limit107 & dataLog$Value <= limit109
  dataLog$TN109_113 <- dataLog$Value > limit109 & dataLog$Value <= limit113
  dataLog$TN113     <- dataLog$Value > limit113
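  ## Note: the same nine voltage bands could be expressed more compactly with
  ## cut(); sketch only (kept commented out, not used by the report logic below):
  # breaks <- c(-Inf, limit087, limit091, limit093, limit095,
  #             limit105, limit107, limit109, limit113, Inf)
  # dataLog$band <- cut(dataLog$Value, breaks = breaks)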
  ## Summarize variables
datalog_byMonth <- dataLog %>%
## group_by(year, month, Meter, Quantity) %>%
group_by(Meter, Quantity) %>%
summarise(
cantidad = n(),
TN087 = sum(TN087),
TN087_091 = sum(TN087_091),
TN091_093 = sum(TN091_093),
TN093_095 = sum(TN093_095),
TN095_105 = sum(TN095_105),
TN105_107 = sum(TN105_107),
TN107_109 = sum(TN107_109),
TN109_113 = sum(TN109_113),
TN113 = sum(TN113)
)
  ## Create the histogram workbook
wb <- createWorkbook()
titleStyle <- createStyle(
fontSize = 18,
fontColour = "#FFFFFF",
halign = "center",
fgFill = "#4F81BD",
border = "TopBottom",
borderColour = "#4F81BD"
)
redStyle <- createStyle(
fontSize = 14,
fontColour = "#FFFFFF",
fgFill = "#FF1111",
halign = "center",
    textDecoration = "Bold"
)
mainTableStyle <- createStyle(
fontSize = 12,
halign = "center",
textDecoration = "Bold",
border = "TopBottomLeftRight",
borderColour = "#4F81BD"
)
mainTableStyle2 <- createStyle(
fontSize = 12,
halign = "center",
border = "TopBottomLeftRight",
borderColour = "#4F81BD"
)
  # Loop over each meter
for (meter in sources$Meter) {
## meter <- "Casa_Chameleon"
print(paste0("Vphase ", meter))
    # meter data
data <- datalog_byMonth %>% filter(Meter == meter)
data <- as.data.frame(data[, c(2:12)])
if (nrow(data) > 0) {
meterFileName = paste0(meter, ".png")
      # Create a sheet
addWorksheet(wb, meter)
head_table <- data.frame("c1" = c("Medidor", "Variable", "Cantidad de filas"),
"c2" = c(gsub("_", " ", meter),
"Tension de fase",
sum(data$cantidad)))
head_table2 <- data.frame("c1" = c("Fecha Inicial", "Fecha Final"),
"c2" = c(format(initial_date, '%d/%m/%Y', tz = "America/Costa_Rica"),
format(final_date, '%d/%m/%Y', tz = "America/Costa_Rica")))
mergeCells(wb, meter, cols = 2:3, rows = 1)
mergeCells(wb, meter, cols = 2:3, rows = 2)
mergeCells(wb, meter, cols = 2:3, rows = 3)
mergeCells(wb, meter, cols = 6:7, rows = 1)
mergeCells(wb, meter, cols = 6:7, rows = 2)
writeData(
wb,
meter,
x = head_table,
startRow = 1,
rowNames = F,
colNames = F,
withFilter = F
)
writeData(
wb,
meter,
x = head_table2,
startRow = 1,
startCol = 5,
rowNames = F,
colNames = F,
withFilter = F
)
addStyle(wb, sheet = meter, mainTableStyle, rows = 1:3, cols = 1)
addStyle(wb, sheet = meter, mainTableStyle, rows = 1:2, cols = 5)
addStyle(wb, sheet = meter, mainTableStyle2, rows = 1:3, cols = 2:3, gridExpand = T)
addStyle(wb, sheet = meter, mainTableStyle2, rows = 1:2, cols = 6:7, gridExpand = T)
if(sum(data$cantidad) < 3024){
addStyle(wb, sheet = meter, redStyle, rows = 3, cols = 1:3)
}
      ## add percentage columns
data$TN087p <- data$TN087 / data$cantidad
data$TN087_091p <- data$TN087_091 / data$cantidad
data$TN091_093p <- data$TN091_093 / data$cantidad
data$TN093_095p <- data$TN093_095 / data$cantidad
data$TN095_105p <- data$TN095_105 / data$cantidad
data$TN105_107p <- data$TN105_107 / data$cantidad
data$TN107_109p <- data$TN107_109 / data$cantidad
data$TN109_113p <- data$TN109_113 / data$cantidad
data$TN113p <- data$TN113 / data$cantidad
tabla_resumen <- as.data.frame(t(data), stringsAsFactors = F)
colnames(tabla_resumen) <-
as.character(unlist(tabla_resumen[1, ]))
tabla_resumen <- tabla_resumen[3:nrow(tabla_resumen), ]
t1 <- tabla_resumen[1:9, ]
t1[, 4:6] <- tabla_resumen[10:18, ]
colnames(t1) <-
c(
"Cantidad_Van",
"Cantidad_Vbn",
"Cantidad_Vcn",
"Porcent_Van",
"Porcent_Vbn",
"Porcent_Vcn"
)
rownames(t1) <-
c(
'<87%',
'87% <=x< 91%',
'91% <=x< 93%',
'93% <=x< 95%',
'95% <=x< 105%',
'105% <=x< 107%',
'107% <=x< 109%',
'109% <=x< 113%',
'>= 113%'
)
t1$Lim_Inferior <-
c(
0,
limit087,
limit091,
limit093,
limit095,
limit105,
limit107,
limit109,
limit113
)
t1$Lim_Superior <-
c(
limit087,
limit091,
limit093,
limit095,
limit105,
limit107,
limit109,
limit113,
100000
)
t1$Lim_Inferior <- round(t1$Lim_Inferior, 0)
t1$Lim_Superior <- round(t1$Lim_Superior, 0)
t1 <- t1[, c(7, 8, 1, 4, 2 , 5, 3, 6)]
for (i in 1:ncol(t1)) {
t1[, i] <- as.numeric(t1[, i])
}
rm(tabla_resumen)
t1 <- tibble::rownames_to_column(t1, "Variable")
class(t1$Porcent_Van) <- "percentage"
class(t1$Porcent_Vbn) <- "percentage"
class(t1$Porcent_Vcn) <- "percentage"
setColWidths(wb,
meter,
cols = c(1:10),
widths = c(20, rep(15, 9)))
writeDataTable(
wb,
meter,
x = t1,
startRow = 6,
rowNames = F,
tableStyle = "TableStyleMedium1",
withFilter = F
)
data_gather <- data[, c(1, 2, 12:20)]
colnames(data_gather) <-
c('Medicion',
'Cantidad Total',
'<87%',
'87% <=x< 91%',
'91% <=x< 93%',
'93% <=x< 95%',
'95% <=x< 105%',
'105% <=x< 107%',
'107% <=x< 109%',
'109% <=x< 113%',
'>= 113%'
)
data_gather <-
data_gather %>% gather("Variable", "Value", -c("Medicion", "Cantidad Total"))
data_gather$Variable <-
factor(
data_gather$Variable,
levels = c('<87%',
'87% <=x< 91%',
'91% <=x< 93%',
'93% <=x< 95%',
'95% <=x< 105%',
'105% <=x< 107%',
'107% <=x< 109%',
'109% <=x< 113%',
'>= 113%'
)
)
p1 <-
ggplot(data_gather,
aes(
x = Variable,
y = Value,
fill = Medicion,
label = Value
)) +
geom_col(position = "dodge", width = 0.7) +
scale_y_continuous(
labels = function(x)
paste0(100 * x, "%"),
limits = c(0, 1.2)
) +
geom_text(
aes(label = sprintf("%0.2f%%", 100 * Value)),
position = position_dodge(0.9),
angle = 90,
size = 2
) +
theme(axis.text.x = element_text(
angle = 90,
hjust = 1,
size = 8
)) +
ggtitle(gsub("_", " ", meter)) +
xlab("Variable") +
ylab("Porcentaje")
print(p1) #plot needs to be showing
insertPlot(
wb,
meter,
xy = c("A", 17),
width = 12,
height = 6,
fileType = "png",
units = "in"
)
}
}
  ## output file name
fileName <-
paste0(
"C:/Data Science/ArhivosGenerados/Coopeguanacaste/A_Tension de fase (",
with_tz(initial_date, tzone = "America/Costa_Rica"),
") - (",
with_tz(final_date, tzone = "America/Costa_Rica"),
").xlsx"
)
saveWorkbook(wb, fileName, overwrite = TRUE)
}
|
3075af99a52ef2b42919aef2c2a3643268b0637f | a6332520faffad807736705a1fb1625959a66dcc | /code/scripts/laptop/edgeR_2017_P1rnaseq.R | 2cea35f563efa11d4bbcc5fac0555194cfafa5f7 | [] | no_license | tgallagh/EE283project | d74b6c0773351ac95a5deb759b47d8f1b1feccc6 | c59f18bcf97f240878365adc79fb1bbd86df3a6a | refs/heads/master | 2021-01-23T02:35:32.433232 | 2017-03-27T20:48:06 | 2017-03-27T20:48:06 | 86,010,650 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,594 | r | edgeR_2017_P1rnaseq.R | #3/25/17
#script for my macbook
#R version 3.3.1
#EdgeR analysis of HTSEQcount output
setwd("/Users/Tara/GoogleDrive/P1_RNAseq/output/edgeR/")
#source("http://bioconductor.org/biocLite.R")
#biocLite("edgeR")
#install.packages("reshape2")
require("edgeR")
require("ggplot2")
#import data
counts <- read.table("/Users/Tara/GoogleDrive/P1_RNAseq/data/processed/htseq/P1_htseq_all.txt")
#get rid of last five lines from htseq output
counts <- counts[1:5844,]
groups <- c("control", "control", "control", "la", "la", "la",
"bdl", "bdl", "bdl")
#edgeR stores data in a simple list-based data object called a DGEList.
#This type of object is easy to use because it can be manipulated like any list in R.
#You can make this in R by specifying the counts and the groups in the function DGEList()
d <- DGEList(counts=counts,group=groups)
#First get rid of genes which did not occur frequently enough.
#We can choose this cutoff by requiring more than 1 count per million (calculated with cpm() in R) on any particular gene that we want to keep.
#In this example, we're only keeping a gene if it has a cpm above 1 for at least two samples.
cpm<-(cpm(d))
#returns counts per million from a DGEList
#divides raw counts by library size and then multiplies by 1 million
total_gene<-apply(d$counts, 2, sum) # total gene counts per sample
keep <- rowSums(cpm(d)>1) >= 2
d <- d[keep,]
dim(d)
#Recalculate library sizes of the DGEList object
d$samples$lib.size <- colSums(d$counts)
d$samples
names <- as.character(counts[,1])
#calcNormFactors" function normalizes for RNA composition
#it finds a set of scaling factors for the library sizes (i.e. number of reads in a sample)
#that minimize the log-fold changes between the samples for most genes
d<- calcNormFactors(d)
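#optional sanity check (sketch): calcNormFactors scales the TMM normalization
#factors so that their product across samples is ~1
#prod(d$samples$norm.factors)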
#plot MDS - multi-dimensional scaling plot of the RNA samples in which distances correspond to leading log-fold-changes between each pair of RNA samples.
tiff("P1_RNAseq.tiff")
plotMDS<-plotMDS(d, method="bcv", col=as.numeric(d$samples$group))
legend("bottom", as.character(unique(d$samples$group)), col=1:3, pch=20)
dev.off()
#create a design matrix using model.matrix function
fac <- paste(groups, sep=".")
fac <- factor(fac)
design <- model.matrix (~0+fac)
colnames(design) <- levels(fac)
#estimate dispersion
d <- estimateGLMCommonDisp(d, design)
d <- estimateGLMTrendedDisp(d, design)
d <- estimateGLMTagwiseDisp(d, design)
#mean variance plot
#grey = raw variance
#light blue = tagwise dispersion variance
#dark blue = common dispersion
#black line = poission variance
tiff("Meanvarplot.tiff")
meanVarPlot <- plotMeanVar( d , show.raw.vars=TRUE ,
show.tagwise.vars=TRUE ,
show.binned.common.disp.vars=FALSE ,
show.ave.raw.vars=FALSE ,
dispersion.method = "qcml" , NBline = TRUE ,
nbins = 100 ,
pch = 16 ,
xlab ="Mean Expression (Log10 Scale)" ,
ylab = "Variance (Log10 Scale)" ,
main = "Mean-Variance Plot" )
dev.off()
#BCV plot
#the BCV plot shows the common trended and genewise dispersions
#as a function of average logCPM
#plotBCV(y)
#fit genewise negative binomial general linear model
fit <- glmFit(d, design)
#the likelihood ratio stats are computed for the comparison of interest
#compare P1 in TH to P1 in TH + lactic acid
##### positive logFC for genes upreg in la compared to control
control.vs.la <- glmLRT(fit, contrast=c(0,-1,1))
#compare control to bdl
####positive logFC means higher expressin in bdl
control.vs.bdl <- glmLRT(fit, contrast=c(1,-1,0))
##compare bdl to la (bdl minus la):
####positive logFC means higher expression in bdl
bdl.vs.la <- glmLRT(fit, contrast=c(1,0,-1))
#make files with ALL genes
bdl.vs.la.tags <- topTags(bdl.vs.la, adjust="BH", n=Inf)
control.vs.bdl.tags<- topTags(control.vs.bdl, adjust="BH", n=Inf)
control.vs.la.tags <- topTags(control.vs.la, adjust="BH", n=Inf)
####FDR = 0.1 (i.e. 10 in 100 significant genes expected to be false positives) AS CUT-OFF
### Write out stats to .txt and also filter by FDR (0.1)
cutoff=0.1
bdl.vs.la.dge <- decideTestsDGE(bdl.vs.la, p=cutoff, adjust="BH")
#obtain number of upreg, downreg genes
bdl.vs.la.dge.summary<-summary(bdl.vs.la.dge)
bdl.vs.la.dge.tags<-rownames(d)[as.logical(bdl.vs.la.dge)]
tiff("smearplot_bdl_la.tiff")
plotSmear(bdl.vs.la, de.tags = bdl.vs.la.dge.tags, xlab="LogCPM (counts per million)",
main= "Upregulated and downregulated genes Bdl compared to LA", cex.main=.8)
dev.off()
control.vs.la.dge <- decideTestsDGE(control.vs.la, p=cutoff, adjust="BH")
#obtain number of upreg, downreg genes
control.vs.la.dge.summary<-summary(control.vs.la.dge)
control.vs.la.dge.tags<-rownames(d)[as.logical(control.vs.la.dge)]
tiff("smearplot_control_la.tiff")
plotSmear(control.vs.la, de.tags = control.vs.la.dge.tags, xlab="LogCPM (counts per million)",
main= "Upregulated and downregulated genes in LA compared to control", cex.main=.8)
dev.off()
control.vs.bdl.dge <- decideTestsDGE(control.vs.bdl, p=cutoff, adjust="BH")
#obtain number of upreg, downreg genes
control.vs.bdl.dge.summary<-summary(control.vs.bdl.dge)
control.vs.bdl.dge.tags<-rownames(d)[as.logical(control.vs.bdl.dge)]
tiff("smearplot_control_bdl.tiff")
plotSmear(control.vs.bdl, de.tags = control.vs.bdl.dge.tags, xlab="LogCPM (counts per million)",
main= "Upreg and downreg genes in BDL compared to control", cex.main=.8)
dev.off()
#write out summaries from comparisons
summary_all <- cbind(control.vs.bdl.dge.summary, control.vs.la.dge.summary, bdl.vs.la.dge.summary)
colnames(summary_all) <- c("control_bdl", "control_la", "bdl_la")
write.table(summary_all, "summary_DGE_all.txt", sep="\t")
#export DGE (FDR <- 0.1)
out_control_bdl<- topTags(control.vs.bdl, n=Inf, adjust.method="BH")
keep_control_bdl <- out_control_bdl$table$FDR <= 0.1
de_control_bdl<-out_control_bdl[keep_control_bdl,]
write.table(x = de_control_bdl, file = "de_control_bdl_filter.txt", sep ="\t", quote = FALSE)
out_control_la<- topTags(control.vs.la, n=Inf, adjust.method="BH")
keep_control_la <- out_control_la$table$FDR <= 0.1
de_control_la<-out_control_la[keep_control_la,]
write.table(x = de_control_la, file = "de_control_la_filter.txt", sep ="\t", quote = FALSE)
out_bdl_la<- topTags(bdl.vs.la, n=Inf, adjust.method="BH")
keep_bdl_la<- out_bdl_la$table$FDR <= 0.1
de_bdl_la<-out_bdl_la[keep_bdl_la,]
write.table(x = de_bdl_la, file = "de_bdl_la_filter.txt", sep ="\t", quote = FALSE)
|
f478ae50393fd2743e0005d13c079b3df0286151 | f9e7af7b91fd3e0619e5b6b2ac025f40392aa81a | /tests/testthat/test-ast.R | 6a2e2ad459e876235e2e354b45ab33cfc117795c | [
"MIT"
] | permissive | r-lib/lobstr | 0d34f3dae9059ed9d6e6b8c982e830077bb546e8 | d9ea7fb56632590f66bc4cb41f0d0a97875607c1 | refs/heads/main | 2023-08-23T10:43:53.526709 | 2022-06-23T13:34:01 | 2022-06-23T13:34:01 | 32,606,771 | 261 | 32 | NOASSERTION | 2022-06-22T14:13:04 | 2015-03-20T20:57:44 | R | UTF-8 | R | false | false | 785 | r | test-ast.R | test_that("quosures print same as expressions", {
expect_equal(ast_tree(quo(x)), ast_tree(expr(x)))
})
test_that("can print complex expression", {
skip_on_os("windows")
x <- expr(function(x) if (x > 1) f(y$x, "x", g()))
expect_snapshot({
ast(!!x)
})
})
test_that("can print complex expression without unicode", {
old <- options(lobstr.fancy.tree = FALSE)
on.exit(options(old))
x <- expr(function(x) if (x > 1) f(y$x, "x", g()))
expect_snapshot({
ast(!!x)
})
})
test_that("can print scalar expressions nicely", {
old <- options(lobstr.fancy.tree = FALSE)
on.exit(options(old))
x <- expr(list(
logical = c(FALSE, TRUE, NA),
integer = 1L,
double = 1,
character = "a",
complex = 1i
))
expect_snapshot({
ast(!!x)
})
})
|
5caab158c3645f0e2bdc70f1d1d40a92d08835e1 | 0dbd60b634c090f2153f21f945fb306495a67df6 | /R/ROMS_COBALT/make_force_ltl.R | 4cea6a18fd3ae26deafaf3ae61876edd8f4e5ae3 | [] | no_license | wechuli/large-pr | afa8ec8535dd3917f4f05476aa54ddac7a5e9741 | 5dea2a26fb71e9f996fd0b3ab6b069a31a44a43f | refs/heads/master | 2022-12-11T14:00:18.003421 | 2020-09-14T14:17:54 | 2020-09-14T14:17:54 | 295,407,336 | 0 | 0 | null | 2020-09-14T14:17:55 | 2020-09-14T12:20:52 | TypeScript | UTF-8 | R | false | false | 10,510 | r | make_force_ltl.R | #'@title creates ltl forcing files from ROMS_COBALT data
#'
#'Creates ltl forcing files similarly structured to temp/salt
#'files made by hydroconstruct. Assumes structure
#'of files made by ROMS_to_hydroconstruct script.
#'Works on one year at a time.
#'
#'Author: Hem Nalini Morzaria Luna, modified by J.Caracappa
#'
#
# roms.dir = 'C:/Users/joseph.caracappa/Documents/Atlantis/ROMS_COBALT/ROMS_COBALT output/ltl_statevars/'
# # # roms.files <- list.files(path=roms.dir, pattern="^roms_output_ltl_statevars_tohydro_.*\\.nc$", recursive = TRUE, full.names = TRUE, include.dirs = TRUE)
# roms.file = paste0(roms.dir,'roms_output_ltl_statevars_tohydro_1965.nc')
make_force_ltl = function(roms.dir,roms.file){
  source(here::here('R','make_force_ltl.R'))
source(here::here('R','flatten_ROMS.R'))
bgm.polygons = 0:29
file.year = as.numeric(sort(gsub(".*_(\\d{4}).+","\\1",roms.file)))
# general
fill.value <- 0
this.geometry <- "neus_tmerc_RM2.bgm"
this.title <- "ROMS"
#options depth dimension
depth.bins <- 1:5
d.units <- "depth layers"
#options time dimension
timestep <- 24 # 12 hour timesteps
t.units <- "seconds since 1964-01-01 00:00:00 +10"
time.unit.length <- 1 # years
time.length <- 365
seconds.timestep <- 60*60*24
time.vector <- 1:time.length
#options polygon dimension
pol.units <- "spatial polygons"
pol.bins <- 1:30
# time.array <- make_time_array(time.vector)
pol.array <- pol.bins %>%
as.array
depth.array <- depth.bins %>%
as.array
ltl.nc = ncdf4::nc_open(roms.file)
time.steps = ltl.nc$dim$time$vals
dat.ls = roms2long(roms.file,is.hflux = F)
avg.data = dat.ls[[1]]
for(i in 2:length(dat.ls)){
avg.data = cbind(avg.data,dat.ls[[i]][,4])
colnames(avg.data)[i+3] = colnames(dat.ls[[i]])[4]
}
avg.data$month = as.numeric(format(avg.data$time,format= '%m'))
avg.data$year = as.numeric(format(avg.data$time,format = '%Y'))
# avg.data$timesteps = as.numeric(avg.data$time - as.POSIXct('1964-01-01 00:00:00',tz= 'UTC'))
avg.data$timesteps = difftime(avg.data$time,as.Date('1964-01-01 00:00:00 UTC'),units = 'sec')
avg.data = avg.data %>% select(year,month,time,timesteps,box,level,ndi,nlg,nlgz,nmdz,nsm,nsmz,silg,nbact) %>% arrange(year,month,time,box,level)
avg.data = na.omit(avg.data)
time.array = sort(unique(avg.data$timesteps))
time.steps <- avg.data %>%
distinct(timesteps) %>%
pull(timesteps)
these.years <- avg.data %>%
distinct(year) %>%
pull(year)
get_array <- function(eachvariable) {
print(eachvariable)
variable.list <- list()
for(eachindex in 1:length(time.steps)) {
print(eachindex)
this.index.data <- avg.data %>%
filter(timesteps==time.steps[eachindex])
pol.list <- list()
for(eachpolygon in bgm.polygons) {
this.value <- this.index.data %>%
filter(box == eachpolygon) %>%
arrange(desc(level)) %>%
select(all_of(eachvariable)) %>%
pull(1)
if(isEmpty(this.value)){
length(this.value) <- 5
        } else {
length(this.value) <- 5
# this.value[5] <- this.value[1]
}
pol.array <- this.value %>%
as.array()
pol.list[[eachpolygon+1]] <- pol.array
}
this.index <- do.call("cbind", pol.list) %>% as.array()
variable.list[[eachindex]] <- this.index
}
return(variable.list)
}
ndi.list.array <- get_array("ndi")
nlg.list.array <- get_array("nlg")
nlgz.list.array <- get_array("nlgz")
nmdz.list.array <- get_array("nmdz")
nsm.list.array <- get_array("nsm")
nsmz.list.array <- get_array("nsmz")
silg.list.array <- get_array("silg")
nbact.list.array <- get_array("nbact")
ndi.result.array <- array(dim=c(5,30,length(ndi.list.array)))
  nlg.result.array <- array(dim=c(5,30,length(nlg.list.array)))
nlgz.result.array <- array(dim=c(5,30,length(nlgz.list.array)))
nmdz.result.array <- array(dim=c(5,30,length(nmdz.list.array)))
nsm.result.array <- array(dim=c(5,30,length(nsm.list.array)))
nsmz.result.array <- array(dim=c(5,30,length(nsmz.list.array)))
silg.result.array <- array(dim=c(5,30,length(silg.list.array)))
nbact.result.array <- array(dim=c(5,30,length(nbact.list.array)))
for(eachindex in 1:length(ndi.list.array)){
ndi.result.array[,,eachindex] <- do.call("cbind", ndi.list.array[eachindex])
nlg.result.array[,,eachindex] <- do.call("cbind", nlg.list.array[eachindex])
nlgz.result.array[,,eachindex] <- do.call("cbind", nlgz.list.array[eachindex])
nmdz.result.array[,,eachindex] <- do.call("cbind", nmdz.list.array[eachindex])
nsm.result.array[,,eachindex] <- do.call("cbind", nsm.list.array[eachindex])
nsmz.result.array[,,eachindex] <- do.call("cbind", nsmz.list.array[eachindex])
silg.result.array[,,eachindex] <- do.call("cbind", silg.list.array[eachindex])
nbact.result.array[,,eachindex] <- do.call("cbind", nbact.list.array[eachindex])
}
make_hydro <- function(nc.name, t.units, seconds.timestep, this.title, this.geometry, time.array, cdf.name) {
nc.file <- create.nc(nc.name)
dim.def.nc(nc.file, "t", unlim=TRUE)
dim.def.nc(nc.file, "b", 30)
dim.def.nc(nc.file, "z", 5)
var.def.nc(nc.file, "t", "NC_DOUBLE", "t")
var.def.nc(nc.file, 'ndi', 'NC_DOUBLE', c('z','b','t'))
var.def.nc(nc.file, 'Diatom_N', 'NC_DOUBLE', c('z','b','t'))
var.def.nc(nc.file, 'Carniv_Zoo_N', 'NC_DOUBLE', c('z','b','t'))
var.def.nc(nc.file, 'Zoo_N', 'NC_DOUBLE', c('z','b','t'))
var.def.nc(nc.file, 'PicoPhytopl_N', 'NC_DOUBLE', c('z','b','t'))
var.def.nc(nc.file, 'MicroZoo_N', 'NC_DOUBLE', c('z','b','t'))
var.def.nc(nc.file, 'Diatom_S', 'NC_DOUBLE', c('z','b','t'))
var.def.nc(nc.file, 'Pelag_Bact_N', 'NC_DOUBLE', c('z','b','t'))
#_FillValue
att.put.nc(nc.file, 'ndi', "_FillValue", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Diatom_N', "_FillValue", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Carniv_Zoo_N', "_FillValue", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Zoo_N', "_FillValue", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'PicoPhytopl_N', "_FillValue", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'MicroZoo_N', "_FillValue", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Diatom_S', "_FillValue", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Pelag_Bact_N', "_FillValue", "NC_DOUBLE", 0.2)
#missing_value
att.put.nc(nc.file, 'ndi', "missing_value", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Diatom_N', "missing_value", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Carniv_Zoo_N', "missing_value", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Zoo_N', "missing_value", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'PicoPhytopl_N', "missing_value", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'MicroZoo_N', "missing_value", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Diatom_S', "missing_value", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Pelag_Bact_N', "missing_value", "NC_DOUBLE", 0.2)
#valid_min
att.put.nc(nc.file, 'ndi', "valid_min", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Diatom_N', "valid_min", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Carniv_Zoo_N', "valid_min", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Zoo_N', "valid_min", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'PicoPhytopl_N', "valid_min", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'MicroZoo_N', "valid_min", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Diatom_S', "valid_min", "NC_DOUBLE", 0)
att.put.nc(nc.file, 'Pelag_Bact_N', "valid_min", "NC_DOUBLE", 0)
#valid_max
att.put.nc(nc.file, 'ndi', "valid_max", "NC_DOUBLE", 999)
att.put.nc(nc.file, 'Diatom_N', "valid_max", "NC_DOUBLE", 999)
att.put.nc(nc.file, 'Carniv_Zoo_N', "valid_max", "NC_DOUBLE", 999)
att.put.nc(nc.file, 'Zoo_N', "valid_max", "NC_DOUBLE", 999)
att.put.nc(nc.file, 'PicoPhytopl_N', "valid_max", "NC_DOUBLE", 999)
att.put.nc(nc.file, 'MicroZoo_N', "valid_max", "NC_DOUBLE", 999)
att.put.nc(nc.file, 'Diatom_S', "valid_max", "NC_DOUBLE", 999)
att.put.nc(nc.file, 'Pelag_Bact_N', "valid_max", "NC_DOUBLE", 999)
#units
att.put.nc(nc.file, 'ndi', "units", "NC_CHAR", "mg N m-3")
att.put.nc(nc.file, 'Diatom_N', "units", "NC_CHAR", "mg N m-3")
att.put.nc(nc.file, 'Carniv_Zoo_N', "units", "NC_CHAR", "mg N m-3")
att.put.nc(nc.file, 'Zoo_N', "units", "NC_CHAR", "mg N m-3")
att.put.nc(nc.file, 'PicoPhytopl_N', "units", "NC_CHAR", "mg N m-3")
att.put.nc(nc.file, 'MicroZoo_N', "units", "NC_CHAR", "mg N m-3")
att.put.nc(nc.file, 'Diatom_S', "units", "NC_CHAR", "mg Si m-3")
att.put.nc(nc.file, 'Pelag_Bact_N', "units", "NC_CHAR", "mg N m-3")
att.put.nc(nc.file, "t", "units", "NC_CHAR", t.units)
att.put.nc(nc.file, "t", "dt", "NC_DOUBLE", seconds.timestep)
att.put.nc(nc.file, "NC_GLOBAL", "title", "NC_CHAR", this.title)
att.put.nc(nc.file, "NC_GLOBAL", "geometry", "NC_CHAR", this.geometry)
att.put.nc(nc.file, "NC_GLOBAL", "parameters", "NC_CHAR", "")
var.put.nc(nc.file, "t", time.array)
var.put.nc(nc.file,'ndi',ndi.result.array)
var.put.nc(nc.file,'Diatom_N',nlg.result.array)
var.put.nc(nc.file,'Carniv_Zoo_N',nlgz.result.array)
var.put.nc(nc.file,'Zoo_N',nmdz.result.array)
var.put.nc(nc.file,'PicoPhytopl_N',nsm.result.array)
var.put.nc(nc.file,'MicroZoo_N',nsmz.result.array)
var.put.nc(nc.file,'Diatom_S',silg.result.array)
var.put.nc(nc.file,'Pelag_Bact_N',nbact.result.array)
close.nc(nc.file)
system(paste("ncdump ",nc.name," > ", cdf.name,sep=""), wait = TRUE)
}
make_hydro(nc.name=paste0(roms.dir,"roms_ltl_force_",file.year,".nc"), t.units, seconds.timestep, this.title, this.geometry, time.array, cdf.name = paste0(roms.dir,"test_roms_",file.year,".cdf"))
}
# make.ltl.force(
# roms.dir = 'C:/Users/joseph.caracappa/Documents/Atlantis/ROMS_COBALT/ROMS_COBALT output/ltl_statevars/',
# # roms.files <- list.files(path=roms.dir, pattern="^roms_output_ltl_statevars_tohydro_.*\\.nc$", recursive = TRUE, full.names = TRUE, include.dirs = TRUE)
# roms.file = 'C:/Users/joseph.caracappa/Documents/Atlantis/ROMS_COBALT/ROMS_COBALT output/ltl_statevars/roms_output_ltl_statevars_tohydro_1964.nc'
#
# )
|
258f395bdeb11eab6c11f345a34f18da69e46fcd | 906f05f4a01eb6e8d745209fa6f346edf0190a26 | /R Code/server.R | 5ca0829f9a80d24bcf1fea1e92250044ad556165 | [] | no_license | irecasens/govhack_team | e729b5ad70e7e43be5f93c61bc3e8d5fc25ad536 | 62526525b72cb32ed4bcdf3cc4d237a676cefbec | refs/heads/master | 2021-01-01T19:41:43.781882 | 2017-08-07T04:47:01 | 2017-08-07T04:47:01 | 98,654,219 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 1,825 | r | server.R | # server.R
library(shiny)
library(maps)
library(mapproj)
library(readxl)
library(ggplot2)
library(dplyr)
library(ggmap)
library(ggthemes)
tidy_viz = read.csv('tidy_viz.csv')
map <- get_map(location = 'Melbourne', maptype = "satellite", zoom = 15)
create_map <- function(data)
{
ggmap(map, extent = "device") + geom_density2d(data = data,
aes(x = LNG, y = LAT), size = 0.3) + stat_density2d(data = data,
aes(x = LNG, y = LAT, fill = ..level.., alpha = ..level..), size = 0.01,
bins = 16, geom = "polygon") + scale_fill_gradient(low = "green", high = "red") +
scale_alpha(range = c(0, 0.3), guide = FALSE)
}
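
# Illustrative standalone call (hypothetical filter values, shown for reference):
# create_map(dplyr::filter(tidy_viz, measure == "hourly_count", year >= 2016))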
shinyServer(
function(input, output) {
output$map <- renderPlot({
var <- switch(input$Measure_ID,
"Pedestrian Count" = "hourly_count",
"Vehicles Count" = "vehicles_count",
"Vehicle and Pedestrian Density" = "veh_ped_density")
filtered <- filter(tidy_viz, day_type == input$Day_ID
& risk >= input$Risk_ID[1] & risk <= input$Risk_ID[2]
& year >= input$Year_ID[1] & year <= input$Year_ID[2]
& time >= input$Time_ID[1] & time <= input$Time_ID[2])
if(var != "veh_ped_density"){
filtered <- filter(filtered, measure == var)
}
create_map(filtered)
})
}
) |
1c967cd2cb784766d885e192c87aadf36b678704 | 9077699975ffc6239097170bcc851790a5967dc6 | /R/MatchesIndices.r | 7ac0f19139d27274dbc2dcc8a4a121058a2d33a1 | [] | no_license | cran/HFWutils | 8eec52f548293d2057cbfdaad9bdaac39524ba20 | fa8bdcc82482007e09cfa6207ba7d5bd4189853c | refs/heads/master | 2021-01-01T15:51:28.662706 | 2008-05-17T00:00:00 | 2008-05-17T00:00:00 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 582 | r | MatchesIndices.r | MatchesIndices <- function (x,patt,fixed=FALSE)
{
indexF <- function(x) length(unlist(x))>0
M <- Matches(
Vector = x,
patt = patt,
fixed = fixed
) # Matches
out <- list()
out$index_x <- apply(M,1,indexF )
out$index_patt <- apply(M,2,indexF )
return(out)
} # MatchesIndices <- function (x,patt) |
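
# Illustrative usage (sketch; Matches() is defined elsewhere in HFWutils and is
# assumed to return a length(x)-by-length(patt) matrix of match results):
# x <- c("apple", "banana", "cherry")
# res <- MatchesIndices(x, patt = c("an", "rr"))
# res$index_x    # which elements of x matched at least one pattern
# res$index_patt # which patterns matched at least one element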
f11e316f6edd883c539f73aecedfa486b6923757 | 3cd471fd83cdb44fa6f8b938737dd4d192ac323d | /man/vote_history.Rd | f92308e74dcfeacbd438f9c7d0c01ee9ae2fa493 | [
"MIT"
] | permissive | chas-mellish/survivoR | a4a8f7567630e534cd7c840bf13740259c3114ca | 0068ac1565c3093b8d95ddb987dfec976c433051 | refs/heads/master | 2023-04-19T22:33:15.059513 | 2021-05-11T12:58:47 | 2021-05-11T12:58:47 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 4,156 | rd | vote_history.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/datasets.R
\docType{data}
\name{vote_history}
\alias{vote_history}
\title{Vote history}
\format{
This data frame contains the following columns:
\describe{
\item{\code{season_name}}{The season name}
\item{\code{season}}{The season number}
\item{\code{episode}}{Episode number}
\item{\code{day}}{Day the tribal council took place}
\item{\code{tribe_status}}{The status of the tribe e.g. original, swapped, merged, etc. See details for more}
\item{\code{castaway}}{Name of the castaway}
\item{\code{immunity}}{Type of immunity held by the castaway at the time of the vote e.g. individual,
hidden (see details for hidden immunity data)}
\item{\code{vote}}{The castaway for which the vote was cast}
\item{\code{nullified}}{Was the vote nullified by a hidden immunity idol? Logical}
\item{\code{voted_out}}{The castaway who was voted out}
\item{\code{order}}{The order in which the castaway was voted off the island}
\item{\code{vote_order}}{In the case of ties this indicates the order the votes took place}
}
}
\source{
\url{https://en.wikipedia.org/wiki/Survivor_(American_TV_series)}
}
\usage{
vote_history
}
\description{
A dataset containing details on the vote history for each season
}
\details{
This data frame contains a complete history of votes cast across all seasons of Survivor. While there are consistent
events across the seasons there are some unique events such as the 'mutiny' in Survivor: Cook Islands (season 13)
or the 'Outcasts' in Survivor: Pearl Islands (season 7). For maintaining a standard, whenever there has been a change
in tribe for the castaways it has been recorded as \code{swapped}. \code{swapped} is used as the
term since 'the tribe swap' is a typical recurring milestone in each season of Survivor. Subsequent changes are recorded with
a trailing digit e.g. \code{swapped2}. This includes absorbed tribes e.g. Stephanie was 'absorbed'
in Survivor: Palau (season 10) and when 3 tribes are
reduced to 2. These cases are still considered 'swapped' to indicate a change in tribe status.
Some events result in a castaway attending tribal but not voting. These are recorded as
\describe{
\item{\code{Win}}{The castaway won the fire challenge}
\item{\code{Lose}}{The castaway lost the fire challenge}
\item{\code{None}}{The castaway did not cast a vote. This may be due to a vote steal or some other means}
\item{\code{Immune}}{The castaway did not vote but were immune from the vote}
}
Where a castaway has \code{immunity == 'hidden'} this means that player is protected by a hidden immunity idol. It may not
necessarily mean they played the idol; the idol may have been played for them. While the nullified votes data is complete,
the \code{immunity} data does not include those who had immunity but did not receive a vote. This is a TODO.
In the case where the 'steal a vote' advantage was played, there is a second row for the castaway that stole the vote.
The castaway who had their vote stolen is recorded as \code{None}.
Many castaways have been medically evacuated, quit or left the game for some other reason. In these cases where no votes
were cast there is a skip in the \code{order} variable. Since no votes were cast there is nothing to record on this
data frame. The correct order in which castaways departed the island is recorded on \code{castaways}.
In the case of a tie, \code{voted_out} is recorded as \code{tie} to indicate no one was voted off the island in that
instance. The re-vote is recorded with \code{vote_order = 2} to indicate this is the second round of voting. In
the case of a second tie \code{voted_out} is recorded as \code{tie2}. The third step is either a draw of rocks,
fire challenge or countback (in the early days of survivor). In these cases \code{vote} is recorded as the colour of the
rock drawn, result of the fire challenge or 'countback'.
}
\examples{
# The number of times Tony voted for each castaway in Survivor: Winners at War
library(dplyr)
vote_history \%>\%
filter(
season == 40,
castaway == "Tony"
) \%>\%
count(vote)
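
# Votes nullified by a hidden immunity idol, per season
vote_history \%>\%
  filter(nullified) \%>\%
  count(season)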
}
\keyword{datasets}
|