Dataset schema (flattened from the dataset viewer):
- content: large_string, lengths 0 to 6.46M
- path: large_string, lengths 3 to 331
- license_type: large_string, 2 classes
- repo_name: large_string, lengths 5 to 125
- language: large_string, 1 class
- is_vendor: bool, 2 classes
- is_generated: bool, 2 classes
- length_bytes: int64, 4 to 6.46M
- extension: large_string, 75 classes
- text: string, lengths 0 to 6.46M
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/MSGARCH.R
\docType{package}
\name{MSGARCH-package}
\alias{MSGARCH}
\alias{MSGARCH-package}
\title{The R package MSGARCH}
\description{
The \R package \pkg{MSGARCH} implements a comprehensive set of functionalities for
Markov-switching GARCH (Haas et al., 2004a) and Mixture of GARCH (Haas et al., 2004b)
models. This includes fitting, filtering, forecasting, and simulation. Additional
functions for Value-at-Risk and Expected Shortfall are also available.\cr
The main functions of the package are coded in \code{C++} using \pkg{Rcpp}
(Eddelbuettel and Francois, 2011) and \pkg{RcppArmadillo} (Eddelbuettel and
Sanderson, 2014).\cr
\pkg{MSGARCH} focuses on the conditional variance (and higher-moment) process and
therefore has no equation for the conditional mean. Any mean dynamics (e.g., an
AR(1) component) must be filtered out of the data before the model is applied.\cr
The \pkg{MSGARCH} package implements a variety of GARCH specifications together
with several conditional distributions, which allows for a rich modeling
environment for Markov-switching GARCH models. Each single-regime process is a
one-lag process (e.g., GARCH(1,1)). During optimization, the variance in each
regime is constrained to be covariance-stationary and strictly positive (refer to
the vignette for more information).\cr
We refer to Ardia et al. (2017) \url{https://ssrn.com/abstract=2845809} for a
detailed introduction to the package and its usage.\cr
The authors acknowledge Google for financial support via the Google Summer of Code
2016 and 2017, the International Institute of Forecasters, and
Industrielle-Alliance.
}
\references{
Ardia, D., Bluteau, K., Boudt, K., Catania, L., & Trottier, D.-A. (2017).
Markov-switching GARCH models in \R: The \pkg{MSGARCH} package.
\url{https://ssrn.com/abstract=2845809}

Eddelbuettel, D., & Francois, R. (2011).
\pkg{Rcpp}: Seamless \R and \code{C++} integration.
\emph{Journal of Statistical Software}, 40, 1-18.
\url{http://www.jstatsoft.org/v40/i08/}

Eddelbuettel, D., & Sanderson, C. (2014).
\pkg{RcppArmadillo}: Accelerating \R with high-performance \code{C++} linear algebra.
\emph{Computational Statistics & Data Analysis}, 71, 1054-1063.
\url{http://dx.doi.org/10.1016/j.csda.2013.02.005}

Haas, M., Mittnik, S., & Paolella, M. S. (2004a).
A new approach to Markov-switching GARCH models.
\emph{Journal of Financial Econometrics}, 2, 493-530.
\url{http://doi.org/10.1093/jjfinec/nbh020}

Haas, M., Mittnik, S., & Paolella, M. S. (2004b).
Mixed normal conditional heteroskedasticity.
\emph{Journal of Financial Econometrics}, 2, 211-250.
\url{http://doi.org/10.1093/jjfinec/nbh009}
}
\seealso{
Useful links:
\itemize{
  \item \url{https://github.com/keblu/MSGARCH}
  \item Report bugs at \url{https://github.com/keblu/MSGARCH/issues}
}
}
\author{
\strong{Maintainer}: Keven Bluteau \email{Keven.Bluteau@unine.ch}

Authors:
\itemize{
  \item David Ardia \email{david.ardia.ch@gmail.com}
  \item Leopoldo Catania \email{leopoldo.catania@econ.au.dk}
  \item Denis-Alexandre Trottier \email{denis-alexandre.trottier.1@ulaval.ca}
}

Other contributors:
\itemize{
  \item Kris Boudt \email{kris.boudt@vub.ac.be} [contributor]
  \item Alexios Ghalanos \email{alexios@4dscape.com} [contributor]
  \item Brian Peterson \email{brian@braverock.com} [contributor]
}
}
path: /Package/man/MSGARCH-package.Rd
license_type: no_license
repo_name: mariaferrerfernandez/MSGARCH
language: R
is_vendor: false
is_generated: true
length_bytes: 3316
extension: rd
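The description above says MSGARCH has no mean equation, so mean dynamics such as an AR(1) must be filtered out before fitting. A minimal sketch of that workflow, assuming the package's `CreateSpec()`/`FitML()` entry points; the simulated series, the AR(1) pre-filter via `stats::arima()`, and the two-regime normal sGARCH specification are illustrative choices, not recommendations from the package documentation:

```r
# Sketch: pre-filter AR(1) mean dynamics, then fit a Markov-switching GARCH.
set.seed(123)
y <- arima.sim(model = list(ar = 0.3), n = 500)  # toy series with AR(1) mean dynamics

# Step 1: remove the conditional mean; MSGARCH models only the variance process.
fit_mean <- arima(y, order = c(1, 0, 0))
eps <- as.numeric(residuals(fit_mean))           # de-meaned innovations

# Step 2 (needs the MSGARCH package): fit a two-regime GARCH(1,1) by ML.
if (requireNamespace("MSGARCH", quietly = TRUE)) {
  spec <- MSGARCH::CreateSpec(
    variance.spec     = list(model = c("sGARCH", "sGARCH")),
    distribution.spec = list(distribution = c("norm", "norm"))
  )
  fit <- MSGARCH::FitML(spec = spec, data = eps)
  summary(fit)
}
```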
# Load the libraries
library(shiny)
library(igraph)
library(network)
library(sna)
library(ndtv)
library(extrafont)
library(networkD3)
library(shinythemes)
library(readr)
library(stats)
library(rworldmap)
library(animation)
library(caTools)
library(RColorBrewer)
library(classInt)
library(data.table, warn.conflicts = FALSE, quietly = TRUE)
library(dplyr, warn.conflicts = FALSE, quietly = TRUE)
library(dtplyr, warn.conflicts = FALSE, quietly = TRUE)
library(ggplot2, warn.conflicts = FALSE, quietly = TRUE)
library(tidyr, warn.conflicts = FALSE, quietly = TRUE)
library(maps, warn.conflicts = FALSE, quietly = TRUE)
library(reshape)
library(graphics)
library(reshape2)
library(plotly)

# NOTE: the data frames `counts` (one row per indicator, with FirstYear/LastYear)
# and `Indicators` (long format: CountryName, CountryCode, IndicatorName, Year,
# Value, ...) are assumed to be loaded in the global environment.

# User interface
ui <- fluidPage(
  theme = shinytheme("superhero"),
  titlePanel(title = h1("Development Analysis of Africa", align = "center")),
  sidebarPanel(
    br(),
    selectInput("Type", label = h3("Select the Category:"),
                choices = c("General", "Specific")),
    conditionalPanel(
      condition = "input.Type == 'General'",
      selectInput("Indicators", label = h3("Select Indicators:"),
                  as.list(counts$IndicatorName)),
      uiOutput("years")
    ),
    actionButton(inputId = "go", label = "RUN")
  ),
  mainPanel(uiOutput("type"))
)

# Server
server <- shinyServer(function(input, output) {

  # Year slider, bounded by the selected indicator's data coverage.
  # (The original wrapped this in conditionalPanel(condition =
  # "input.Indicators = id1"), which is broken JS: `=` instead of `==`, and
  # `id1` is an R variable invisible to the browser. renderUI already
  # re-renders when input$Indicators changes, so no condition is needed.)
  output$years <- renderUI({
    id <- counts[counts$IndicatorName == input$Indicators, ]
    sliderInput("Years", label = h3("Select the year:"), value = 2000,
                min = id$FirstYear, max = id$LastYear, step = 1)
  })

  # Main output layout
  output$type <- renderUI({
    if (input$Type == "General") {
      tabsetPanel(
        tabPanel("Map", plotOutput(outputId = "plot1")),
        tabPanel("Tabular View", tableOutput("table")),
        tabPanel("TOP 5s", plotlyOutput(outputId = "plot2")),
        tabPanel("BOTTOM 5s", plotlyOutput(outputId = "plot3")),
        tabPanel("Limitations of Selected Indicator",
                 textOutput(outputId = "Limitation"))
      )
    } else if (input$Type == "Specific") {
      tabsetPanel(
        tabPanel("Economic Policy and Debt",
                 h3("GDP per capita (current US$)"), plotOutput("plot4"),
                 h3("GDP per capita growth (annual %)"), plotOutput("plot5"),
                 h3("GNI per capita growth (annual %)"), plotOutput("plot6")),
        tabPanel("Education",
                 h3("Literacy rate, adult total (% of people ages 15 and above)"),
                 plotOutput("plot7")),
        tabPanel("Environment",
                 h3("Access to electricity (% of population)"), plotOutput("plot8"),
                 h3("Population density (people per sq. km of land area)"),
                 plotOutput("plot9")),
        tabPanel("Financial Sector",
                 h3("Inflation, consumer prices (annual %)"), plotOutput("plot10")),
        tabPanel("Health",
                 h3("Population growth (annual %)"), plotOutput("plot11"),
                 h3("Life expectancy at birth, total (years)"), plotOutput("plot12"),
                 h3("Mortality rate, under-5 (per 1,000 live births)"), plotOutput("plot13"),
                 h3("People using at least basic drinking water services (% of population)"),
                 plotOutput("plot14")),
        tabPanel("Infrastructure",
                 h3("Research and development expenditure (% of GDP)"), plotOutput("plot15"),
                 h3("Mobile cellular subscriptions"), plotOutput("plot16"),
                 h3("Individuals using the Internet (% of population)"), plotOutput("plot17")),
        tabPanel("Private Sector & Trade",
                 h3("Time required to start a business (days)"), plotOutput("plot18"),
                 h3("Time required to build a warehouse (days)"), plotOutput("plot19")),
        tabPanel("Public Sector",
                 h3("Armed forces personnel, total"), plotOutput("plot20")),
        tabPanel("Social Protection & Labor",
                 h3("Labor force, total"), plotOutput("plot21"))
      )
    } else {
      print("Not Applicable")
    }
  })

  ### Functions for the output ###

  # Metadata for the selected indicator
  data <- eventReactive(input$go, {
    counts[counts$IndicatorName == input$Indicators, ]
  })

  # Choropleth map of the selected indicator in the selected year
  map <- eventReactive(input$go, {
    indicatorName <- input$Indicators
    indicatorYear <- input$Years
    filtered <- Indicators[Indicators$IndicatorName == indicatorName &
                             Indicators$Year == indicatorYear, ]
    map.world <- merge(x = map_data(map = "world"),
                       y = filtered[, c("CountryName", "Value")],
                       by.x = "region", by.y = "CountryName", all.x = TRUE)
    map.world <- map.world[order(map.world$order), ]
    ggplot(map.world) +
      geom_map(map = map.world, aes(map_id = region, fill = Value)) +
      borders("world", colour = "black") +
      coord_map(xlim = c(-20, 55), ylim = c(-40, 40)) +
      scale_fill_gradient(low = "blue", high = "red", guide = "colourbar") +
      theme(axis.line = element_blank(), axis.text.x = element_blank(),
            axis.text.y = element_blank(), axis.ticks = element_blank(),
            axis.title.x = element_blank(), axis.title.y = element_blank(),
            panel.background = element_blank(), panel.border = element_blank(),
            panel.grid.major = element_blank(), panel.grid.minor = element_blank(),
            plot.background = element_blank(), legend.title = element_blank(),
            legend.position = "bottom", legend.key.width = unit(6, "lines"),
            legend.text = element_text(size = 10)) +
      ggtitle(paste0(indicatorName, " in ", indicatorYear))
  })

  # Tabular view
  table <- eventReactive(input$go, {
    filtered <- Indicators[Indicators$IndicatorName == input$Indicators &
                             Indicators$Year == input$Years, ]
    filtered[, c("CountryName", "Value", "Income.Group", "Region")]
  })

  # Shared helper: trend lines for the five highest/lowest countries on an
  # indicator (plus the yearly cross-country average), with a linear
  # extrapolation of Value ~ CountryName + Year out to 2030. This factors out
  # the block that was duplicated verbatim across top/bottom/plot4..plot20.
  trendPlot <- function(indicatorName, year = NULL, decreasing = FALSE,
                        title = indicatorName) {
    mData <- Indicators %>% filter(IndicatorName == indicatorName)
    if (is.null(year)) year <- max(mData$Year)   # default: latest year with data
    conSet <- mData %>% filter(Year == year) %>% select(CountryCode)
    Average <- mData %>% group_by(Year) %>% summarize(Value = mean(Value))
    Average$CountryName <- "Averages"
    cData <- left_join(conSet, mData, by = "CountryCode")
    sel <- cData %>% filter(Year == year)
    sel <- if (decreasing) arrange(sel, desc(Value)) else arrange(sel, Value)
    sel5Cont <- sel %>% select(CountryName) %>% slice(1:5)
    sel5 <- left_join(sel5Cont, cData, by = "CountryName")[, c("CountryName", "Year", "Value")]
    sel5_m <- rbind(sel5, Average)
    # Fit on observed data, then predict one row per country per future year
    pre <- lm(Value ~ ., data = sel5_m)
    maxyear <- max(sel5_m$Year)
    countries <- unique(sel5_m$CountryName)
    years <- maxyear:2030
    newdata <- data.frame(CountryName = rep(countries, times = length(years)),
                          Year = rep(years, each = length(countries)))
    newdata$Value <- predict(pre, newdata)
    sel5_pre <- rbind(sel5_m, newdata)
    ggplot(sel5_pre, aes(x = Year, y = Value, group = CountryName, colour = CountryName)) +
      geom_line(size = 1) +
      geom_vline(xintercept = maxyear, linetype = "dotted", color = "blue", size = 1.5) +
      ggtitle(title) +
      ylab("Values")
  }

  # Highest-country graph (interactive)
  top <- eventReactive(input$go, {
    ggplotly(trendPlot(input$Indicators, input$Years, decreasing = TRUE,
                       title = paste0("Highest Performing ", input$Indicators)))
  })

  # Lowest-country graph (interactive)
  bottom <- eventReactive(input$go, {
    ggplotly(trendPlot(input$Indicators, input$Years, decreasing = FALSE,
                       title = paste0("Lowest Performing ", input$Indicators)))
  })

  # Limitations of the selected indicator
  Limitation <- eventReactive(input$go, {
    id <- Indicators[Indicators$IndicatorName == input$Indicators, ]
    as.character(unique(id$Limitations.and.exceptions))
  })

  # Specific-tab plots: bottom-5 trend at the latest available year
  lowestPlot <- function(name) trendPlot(name, title = paste0("Lowest Performing ", name))

  plot4  <- eventReactive(input$go, lowestPlot("GDP per capita (current US$)"))
  plot5  <- eventReactive(input$go, lowestPlot("GDP per capita growth (annual %)"))
  plot6  <- eventReactive(input$go, lowestPlot("GNI per capita growth (annual %)"))
  plot7  <- eventReactive(input$go, lowestPlot("Literacy rate, adult total (% of people ages 15 and above)"))
  plot8  <- eventReactive(input$go, lowestPlot("Access to electricity (% of population)"))
  plot9  <- eventReactive(input$go, lowestPlot("Population density (people per sq. km of land area)"))
  plot10 <- eventReactive(input$go, lowestPlot("Inflation, consumer prices (annual %)"))
  plot11 <- eventReactive(input$go, lowestPlot("Population growth (annual %)"))
  plot12 <- eventReactive(input$go, lowestPlot("Life expectancy at birth, total (years)"))
  plot13 <- eventReactive(input$go, lowestPlot("Mortality rate, under-5 (per 1,000 live births)"))
  plot14 <- eventReactive(input$go, lowestPlot("People using at least basic drinking water services (% of population)"))
  plot15 <- eventReactive(input$go, lowestPlot("Research and development expenditure (% of GDP)"))
  plot16 <- eventReactive(input$go, lowestPlot("Mobile cellular subscriptions"))
  plot17 <- eventReactive(input$go, lowestPlot("Individuals using the Internet (% of population)"))
  plot18 <- eventReactive(input$go, trendPlot("Time required to start a business (days)"))
  plot19 <- eventReactive(input$go, trendPlot("Time required to build a warehouse (days)"))
  plot20 <- eventReactive(input$go, trendPlot("Armed forces personnel, total"))
# Part2.R, from PallaviVarandani/World-Development-Indicators-Dashboard

#Loading the Libraries
library(shiny)
library(igraph)
library(network)
library(sna)
library(ndtv)
library(extrafont)
library(networkD3)
library(shinythemes)
library(readr)
library(stats)
library(base)
library(rworldmap)
library(animation)
library(caTools)
library(RColorBrewer)
library(classInt)
library(data.table, warn.conflicts = FALSE, quietly = TRUE)
library(dplyr, warn.conflicts = FALSE, quietly = TRUE)
library(dtplyr, warn.conflicts = FALSE, quietly = TRUE)
library(ggplot2, warn.conflicts = FALSE, quietly = TRUE)
library(tidyr, warn.conflicts = FALSE, quietly = TRUE)
library(maps, warn.conflicts = FALSE, quietly = TRUE)
library(reshape)
library(graphics)
library(reshape2)
library(plotly)

#Designing the User Interface
ui <- fluidPage(
  theme = shinytheme("superhero"),
  titlePanel(title = h1("Development Analysis Of Africa", align = "center")),
  sidebarPanel(
    br(),
    h4(helpText("")),
    selectInput("Type", label = h3("Select the Category:"),
                choices = c("General", "Specific")),
    conditionalPanel(
      condition = "input.Type == 'General'",
      selectInput("Indicators", label = h3("Select Indicators:"),
                  as.list(counts$IndicatorName)),
      uiOutput("years")
    ),
    actionButton(inputId = "go", label = "RUN")
  ),
  mainPanel(
    uiOutput("type")
  )
)

#Designing the Server
server <- shinyServer(function(input, output) {

  #Year slider: the bounds follow the selected indicator's available years
  output$years <- renderUI({
    id1 <- input$Indicators
    id <- counts[counts$IndicatorName == id1, ]
    sliderInput("Years", label = h3("Select the year:"),
                value = 2000, min = id$FirstYear, max = id$LastYear, step = 1)
  })

  #Main Output
  output$type <- renderUI({
    check1 <- input$Type == "General"
    check2 <- input$Type == "Specific"
    if (check1) {
      tabsetPanel(
        tabPanel("Map", plotOutput(outputId = "plot1")),
        tabPanel("Tabular View", tableOutput("table")),
        tabPanel("TOP 5s", plotlyOutput(outputId = "plot2")),
        tabPanel("BOTTOM 5s", plotlyOutput(outputId = "plot3")),
        tabPanel("Limitations of Selected Indicator",
                 textOutput(outputId = "Limitation"))
      )
    } else if (check2) {
      tabsetPanel(
        tabPanel("Economic Policy and Debt",
                 h3("GDP per capita (current US$)"), plotOutput(outputId = "plot4"),
                 h3("GDP per capita growth (annual %)"), plotOutput(outputId = "plot5"),
                 h3("GNI per capita growth(annual %)"), plotOutput(outputId = "plot6")),
        tabPanel("Education",
                 h3("Literacy rate, adult total (% of people ages 15 and above)"), plotOutput(outputId = "plot7")),
        tabPanel("Environment",
                 h3("Access to electricity (% of population)"), plotOutput(outputId = "plot8"),
                 h3("Population density (people per sq. km of land area)"), plotOutput(outputId = "plot9")),
        tabPanel("Financial Sector",
                 h3("Inflation, consumer prices (annual %)"), plotOutput(outputId = "plot10")),
        tabPanel("Health",
                 h3("Population growth (annual %)"), plotOutput(outputId = "plot11"),
                 h3("Life expectancy at birth, total (years)"), plotOutput(outputId = "plot12"),
                 h3("Mortality rate, under-5 (per 1,000 live births)"), plotOutput(outputId = "plot13"),
                 h3("People using at least basic drinking water services (% of population)"), plotOutput(outputId = "plot14")),
        tabPanel("Infrastructure",
                 h3("Research and development expenditure (% of GDP)"), plotOutput(outputId = "plot15"),
                 h3("Mobile cellular subscriptions"), plotOutput(outputId = "plot16"),
                 h3("Individuals using the Internet (% of population)"), plotOutput(outputId = "plot17")),
        tabPanel("Private Sector & Trade",
                 h3("Time required to start a business (days)"), plotOutput(outputId = "plot18"),
                 h3("Time required to build a warehouse (days)"), plotOutput(outputId = "plot19")),
        tabPanel("Public Sector",
                 h3("Armed forces personnel, total"), plotOutput(outputId = "plot20")),
        tabPanel("Social Protection & Labor",
                 h3("Labor force, total"), plotOutput(outputId = "plot21"))
      )
    } else {
      print("Not Applicable")
    }
  })

  ####Functions for the Output###
  data <- eventReactive(input$go, {
    id <- input$Indicators
    id <- counts[counts$IndicatorName == id, ]
  })

  ##Map Plot
  map <- eventReactive(input$go, {
    indicatorName <- input$Indicators
    indicatorYear <- input$Years
    filtered <- Indicators[Indicators$IndicatorName == indicatorName &
                             Indicators$Year == indicatorYear, ]
    #filtered <- Indicators[Indicators$IndicatorName==Indicators$IndicatorName[1] & Indicators$Year==2000,]
    map.world <- merge(x = map_data(map = "world"),
                       y = filtered[, c("CountryName", "Value")],
                       by.x = "region", by.y = "CountryName", all.x = TRUE)
    map.world <- map.world[order(map.world$order), ]
    ggplot(map.world) +
      geom_map(map = map.world, aes(map_id = region, fill = Value)) +
      borders("world", colour = "black") +
      coord_map(xlim = c(-20, 55), ylim = c(-40, 40)) +
      scale_fill_gradient(low = "blue", high = "red", guide = "colourbar") +
      theme(axis.line = element_blank(), axis.text.x = element_blank(),
            axis.text.y = element_blank(), axis.ticks = element_blank(),
            axis.title.x = element_blank(), axis.title.y = element_blank(),
            panel.background = element_blank(), panel.border = element_blank(),
            panel.grid.major = element_blank(), panel.grid.minor = element_blank(),
            plot.background = element_blank(), legend.title = element_blank(),
            legend.position = "bottom", legend.key.width = unit(6, "lines"),
            legend.text = element_text(size = 10)) +
      ggtitle(paste0(indicatorName, " in ", indicatorYear))
  })

  ##Table Plot
  table <- eventReactive(input$go, {
    indicatorName <- input$Indicators
    indicatorYear <- input$Years
    filtered <- Indicators[Indicators$IndicatorName == indicatorName &
                             Indicators$Year == indicatorYear, ]
    filtered[, c("CountryName", "Value", "Income.Group", "Region")]
  })

  ##Highest Country graph
  top <- eventReactive(input$go, {
    Indicators %>% filter(IndicatorName == input$Indicators) -> mData
    mData %>% filter(Year == input$Years) %>% select(CountryCode) -> conSet
    mData %>% group_by(Year) %>% summarize(Value = mean(Value)) -> Average
    Average$CountryName <- "Averages"
    cData <- left_join(conSet, mData, by = "CountryCode")
    cData %>% filter(Year == input$Years) %>% arrange(desc(Value)) %>%
      select(CountryName) %>% slice(1:5) -> top5Cont
    top5 <- left_join(top5Cont, cData, "CountryName")
    top5 <- top5[, c("CountryName", "Year", "Value")]
    top5_m <- rbind(top5, Average)
    #Linear fit on country and year, used to extrapolate each line to 2030
    pre <- lm(Value ~ ., data = top5_m)
    maxyear <- max(top5_m$Year)
    Year <- c()
    for (i in maxyear:2030) {
      Year <- append(Year, rep(i, length(unique(top5_m$CountryName))))
    }
    x <- length(unique(Year))
    CountryName <- rep(unique(top5_m$CountryName), x)
    pretop5_m <- data.frame(CountryName, Year)
    pretop5_m$Value <- predict(pre, pretop5_m)
    top5_pre <- rbind(top5_m, pretop5_m)
    p <- ggplot(data = top5_pre,
                aes(x = Year, y = Value, group = CountryName, colour = CountryName)) +
      geom_line(size = 1) +
      geom_vline(xintercept = maxyear, linetype = "dotted", color = "blue", size = 1.5) +
      ggtitle(paste0("Highest Performing ", input$Indicators)) +
      ylab("Values")
    print(ggplotly(p))
  })

  ##Lowest Country graph
  bottom <- eventReactive(input$go, {
    Indicators %>% filter(IndicatorName == input$Indicators) -> mData
    mData %>% filter(Year == input$Years) %>% select(CountryCode) -> conSet
    mData %>% group_by(Year) %>% summarize(Value = mean(Value)) -> Average
    Average$CountryName <- "Averages"
    cData <- left_join(conSet, mData, by = "CountryCode")
    cData %>% filter(Year == input$Years) %>% arrange(Value) %>%
      select(CountryName) %>% slice(1:5) -> bot5Cont
    bot5 <- left_join(bot5Cont, cData, "CountryName")
    bot5 <- bot5[, c("CountryName", "Year", "Value")]
    bot5_m <- rbind(bot5, Average)
    pre <- lm(Value ~ ., data = bot5_m)
    maxyear <- max(bot5_m$Year)
    Year <- c()
    for (i in maxyear:2030) {
      Year <- append(Year, rep(i, length(unique(bot5_m$CountryName))))
    }
    x <- length(unique(Year))
    CountryName <- rep(unique(bot5_m$CountryName), x)
    prebot5_m <- data.frame(CountryName, Year)
    prebot5_m$Value <- predict(pre, prebot5_m)
    bot5_pre <- rbind(bot5_m, prebot5_m)
    p <- ggplot(data = bot5_pre,
                aes(x = Year, y = Value, group = CountryName, colour = CountryName)) +
      geom_line(size = 1) +
      geom_vline(xintercept = maxyear, linetype = "dotted", color = "blue", size = 1.5) +
      ggtitle(paste0("Lowest Performing ", input$Indicators)) +
      ylab("Values")
    print(ggplotly(p))
  })

  ###Limitation of Indicators
  Limitation <- eventReactive(input$go, {
    id <- input$Indicators
    id <- Indicators[Indicators$IndicatorName == id, ]
    print(as.character(unique(id$Limitations.and.exceptions)))
  })

  ###Specific indicator trends: the five lowest-ranked countries in the latest
  ###year, plus the yearly average, with a linear extrapolation to 2030 past the
  ###dotted line at the last observed year. One helper replaces the former
  ###eighteen copy-pasted blocks for plot4 through plot21.
  bottomTrendPlot <- function(indicatorName, plotTitle) {
    eventReactive(input$go, {
      Indicators %>% filter(IndicatorName == indicatorName) -> mData
      mData %>% filter(Year == max(mData$Year)) %>% select(CountryCode) -> conSet
      mData %>% group_by(Year) %>% summarize(Value = mean(Value)) -> Average
      Average$CountryName <- "Averages"
      cData <- left_join(conSet, mData, by = "CountryCode")
      cData %>% filter(Year == max(mData$Year)) %>% arrange(Value) %>%
        select(CountryName) %>% slice(1:5) -> bot5Cont
      bot5 <- left_join(bot5Cont, cData, "CountryName")
      bot5 <- bot5[, c("CountryName", "Year", "Value")]
      bot5_m <- rbind(bot5, Average)
      #Linear fit on country and year, used to extrapolate each line to 2030
      pre <- lm(Value ~ ., data = bot5_m)
      maxyear <- max(bot5_m$Year)
      Year <- c()
      for (i in maxyear:2030) {
        Year <- append(Year, rep(i, length(unique(bot5_m$CountryName))))
      }
      x <- length(unique(Year))
      CountryName <- rep(unique(bot5_m$CountryName), x)
      prebot5_m <- data.frame(CountryName, Year)
      prebot5_m$Value <- predict(pre, prebot5_m)
      bot5_pre <- rbind(bot5_m, prebot5_m)
      print(ggplot(data = bot5_pre,
                   aes(x = Year, y = Value, group = CountryName, colour = CountryName)) +
              geom_line(size = 1) +
              geom_vline(xintercept = maxyear, linetype = "dotted", color = "blue", size = 1.5) +
              ggtitle(plotTitle) +
              ylab("Values"))
    })
  }

  plot4 <- bottomTrendPlot('GDP per capita (current US$)', "Lowest Performing GDP per capita (current US$)")
  plot5 <- bottomTrendPlot('GDP per capita growth (annual %)', "Lowest Performing GDP per capita growth (annual %)")
  plot6 <- bottomTrendPlot('GNI per capita growth (annual %)', "Lowest Performing GNI per capita growth (annual %)")
  plot7 <- bottomTrendPlot('Literacy rate, adult total (% of people ages 15 and above)', "Lowest Performing Literacy rate, adult total (% of people ages 15 and above)")
  plot8 <- bottomTrendPlot('Access to electricity (% of population)', "Lowest Performing Access to electricity (% of population)")
  plot9 <- bottomTrendPlot('Population density (people per sq. km of land area)', "Lowest Performing 'Population density (people per sq. km of land area)'")
  plot10 <- bottomTrendPlot('Inflation, consumer prices (annual %)', "Lowest Performing 'Inflation, consumer prices (annual %)'")
  plot11 <- bottomTrendPlot('Population growth (annual %)', "Lowest Performing 'Population growth (annual %)'")
  plot12 <- bottomTrendPlot('Life expectancy at birth, total (years)', "Lowest Performing 'Life expectancy at birth, total (years)'")
  plot13 <- bottomTrendPlot('Mortality rate, under-5 (per 1,000 live births)', "Lowest Performing 'Mortality rate, under-5 (per 1,000 live births)'")
  plot14 <- bottomTrendPlot('People using at least basic drinking water services (% of population)', "Lowest Performing 'People using at least basic drinking water services (% of population)'")
  plot15 <- bottomTrendPlot('Research and development expenditure (% of GDP)', "Lowest Performing 'Research and development expenditure (% of GDP)'")
  plot16 <- bottomTrendPlot('Mobile cellular subscriptions', "Lowest Performing 'Mobile cellular subscriptions'")
  plot17 <- bottomTrendPlot('Individuals using the Internet (% of population)', "Lowest Performing 'Individuals using the Internet (% of population)'")
  plot18 <- bottomTrendPlot('Time required to start a business (days)', "Time required to start a business (days)")
  plot19 <- bottomTrendPlot('Time required to build a warehouse (days)', "Time required to build a warehouse (days)")
  plot20 <- bottomTrendPlot('Armed forces personnel, total', "Lowest Performing 'Armed forces personnel, total'")
  plot21 <- bottomTrendPlot('Labor force, total', "Lowest Performing 'Labor force, total'")

  #Binding the outputs
  output$plot1 <- renderPlot({ map() })
  output$table <- renderTable({ table() })
  output$plot2 <- renderPlotly({ top() })
  output$plot3 <- renderPlotly({ bottom() })
  output$Limitation <- renderPrint({ Limitation() })
  output$plot4 <- renderPlot({ plot4() })
  output$plot5 <- renderPlot({ plot5() })
  output$plot6 <- renderPlot({ plot6() })
  output$plot7 <- renderPlot({ plot7() })
  output$plot8 <- renderPlot({ plot8() })
  output$plot9 <- renderPlot({ plot9() })
  output$plot10 <- renderPlot({ plot10() })
  output$plot11 <- renderPlot({ plot11() })
  output$plot12 <- renderPlot({ plot12() })
  output$plot13 <- renderPlot({ plot13() })
  output$plot14 <- renderPlot({ plot14() })
  output$plot15 <- renderPlot({ plot15() })
  output$plot16 <- renderPlot({ plot16() })
  output$plot17 <- renderPlot({ plot17() })
  output$plot18 <- renderPlot({ plot18() })
  output$plot19 <- renderPlot({ plot19() })
  output$plot20 <- renderPlot({ plot20() })
  output$plot21 <- renderPlot({ plot21() })
})

#Calling the Application
shinyApp(ui = ui, server = server)
prebot5_m <- data.frame(CountryName,Year) prebot5_m$Value <- predict(pre,prebot5_m) bot5_pre <- rbind(bot5_m,prebot5_m) print(ggplot(data=bot5_pre, aes(x=Year, y= Value, group=CountryName, colour=CountryName)) + geom_line(size = 1) + geom_vline(xintercept = maxyear, linetype="dotted", color = "blue", size=1.5) + ggtitle(paste0("Lowest Performing 'Armed forces personnel, total'")) + ylab("Values")) }) ###Specific of Labor force, total plot21 <- eventReactive(input$go,{ Indicators %>% filter(IndicatorName == 'Labor force, total') -> mData mData %>% filter(Year == max(mData$Year)) %>% select(CountryCode) -> conSet mData %>% group_by(Year) %>% summarize(Value = mean(Value)) -> Average Average$CountryName = 'Averages' cData = left_join(conSet, mData, by = "CountryCode") cData %>% filter(Year == max(mData$Year)) %>% arrange(Value) %>% select(CountryName) %>% slice(1:5) -> bot5Cont bot5 = left_join(bot5Cont, cData, "CountryName") bot5 = bot5[,c('CountryName','Year','Value')] bot5_m = rbind(bot5,Average) pre <- lm(Value~.,data = bot5_m) maxyear = max(bot5_m$Year) Year = c() for (i in maxyear:2030) { Year =append(Year,rep(i,length(unique(bot5_m$CountryName)))) } x = length(unique(Year)) CountryName <- rep(unique(bot5_m$CountryName),x) prebot5_m <- data.frame(CountryName,Year) prebot5_m$Value <- predict(pre,prebot5_m) bot5_pre <- rbind(bot5_m,prebot5_m) print(ggplot(data=bot5_pre, aes(x=Year, y= Value, group=CountryName, colour=CountryName)) + geom_line(size = 1) + geom_vline(xintercept = maxyear, linetype="dotted", color = "blue", size=1.5) + ggtitle(paste0("Lowest Performing 'Labor force, total'")) + ylab("Values")) }) output$plot1 <- renderPlot({ map()}) output$table <- renderTable({table()}) output$plot2 <- renderPlotly({ top()}) output$plot3 <- renderPlotly({ bottom()}) output$Limitation <- renderPrint({Limitation()}) output$plot4 <- renderPlot({plot4()}) output$plot5 <- renderPlot({plot5()}) output$plot6 <- renderPlot({plot6()}) output$plot7 <- renderPlot({plot7()}) 
output$plot8 <- renderPlot({plot8()}) output$plot9 <- renderPlot({plot9()}) output$plot10 <- renderPlot({plot10()}) output$plot11 <- renderPlot({plot11()}) output$plot12 <- renderPlot({plot12()}) output$plot13 <- renderPlot({plot13()}) output$plot14 <- renderPlot({plot14()}) output$plot15 <- renderPlot({plot15()}) output$plot16 <- renderPlot({plot16()}) output$plot17 <- renderPlot({plot17()}) output$plot18 <- renderPlot({plot18()}) output$plot19 <- renderPlot({plot19()}) output$plot20 <- renderPlot({plot20()}) output$plot21 <- renderPlot({plot21()}) }) #Calling the Application shinyApp(ui = ui , server = server)
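The eighteen `eventReactive` blocks above (plot4 through plot21) all run the same pipeline — pick an indicator, find the five lowest-ranked countries in the latest year, append a yearly average, fit a linear extrapolation to 2030, and plot. A hedged sketch of how that pattern could be factored into one helper; `bot5_trend_plot` is an invented name, the function takes the indicator data frame as an argument instead of the app's global `Indicators`, and it simplifies away the `CountryCode` join used in the original blocks:

```r
library(dplyr)
library(ggplot2)

# Sketch only: factors out the pipeline repeated in plot4..plot21.
# `data` must have CountryName, Year, Value and IndicatorName columns.
bot5_trend_plot <- function(data, indicator_name, title = indicator_name) {
  mData <- data %>% filter(IndicatorName == indicator_name)
  maxyear <- max(mData$Year)

  # yearly average across all countries, kept as a pseudo-country
  Average <- mData %>%
    group_by(Year) %>%
    summarize(Value = mean(Value), .groups = "drop") %>%
    mutate(CountryName = "Averages")

  # five lowest-ranked countries in the latest year, with their full history
  bot5 <- mData %>%
    filter(Year == maxyear) %>%
    arrange(Value) %>%
    slice(1:5) %>%
    select(CountryName) %>%
    left_join(mData, by = "CountryName") %>%
    select(CountryName, Year, Value) %>%
    bind_rows(Average)

  # linear extrapolation to 2030, as in the original blocks
  fit <- lm(Value ~ ., data = bot5)
  grid <- expand.grid(CountryName = unique(bot5$CountryName),
                      Year = maxyear:2030,
                      stringsAsFactors = FALSE)
  grid$Value <- predict(fit, grid)

  ggplot(bind_rows(bot5, grid),
         aes(Year, Value, group = CountryName, colour = CountryName)) +
    geom_line(size = 1) +
    geom_vline(xintercept = maxyear, linetype = "dotted",
               color = "blue", size = 1.5) +
    ggtitle(paste0("Lowest Performing '", title, "'")) +
    ylab("Values")
}

# each reactive would then reduce to roughly one line, e.g.:
# plot21 <- eventReactive(input$go, { print(bot5_trend_plot(Indicators, "Labor force, total")) })
```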
# This file is generated by make.paws. Please do not edit here. #' @importFrom paws.common get_config new_operation new_request send_request #' @include sso_service.R NULL #' Returns the STS short-term credentials for a given role name that is #' assigned to the user #' #' @description #' Returns the STS short-term credentials for a given role name that is #' assigned to the user. #' #' @usage #' sso_get_role_credentials(roleName, accountId, accessToken) #' #' @param roleName &#91;required&#93; The friendly name of the role that is assigned to the user. #' @param accountId &#91;required&#93; The identifier for the AWS account that is assigned to the user. #' @param accessToken &#91;required&#93; The token issued by the `CreateToken` API call. For more information, #' see #' [CreateToken](https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateToken.html) #' in the *IAM Identity Center OIDC API Reference Guide*. #' #' @return #' A list with the following syntax: #' ``` #' list( #' roleCredentials = list( #' accessKeyId = "string", #' secretAccessKey = "string", #' sessionToken = "string", #' expiration = 123 #' ) #' ) #' ``` #' #' @section Request syntax: #' ``` #' svc$get_role_credentials( #' roleName = "string", #' accountId = "string", #' accessToken = "string" #' ) #' ``` #' #' @keywords internal #' #' @rdname sso_get_role_credentials #' #' @aliases sso_get_role_credentials sso_get_role_credentials <- function(roleName, accountId, accessToken) { op <- new_operation( name = "GetRoleCredentials", http_method = "GET", http_path = "/federation/credentials", paginator = list() ) input <- .sso$get_role_credentials_input(roleName = roleName, accountId = accountId, accessToken = accessToken) output <- .sso$get_role_credentials_output() config <- get_config() svc <- .sso$service(config) request <- new_request(svc, op, input, output) response <- send_request(request) return(response) } .sso$operations$get_role_credentials <- sso_get_role_credentials #' 
Lists all roles that are assigned to the user for a given AWS account #' #' @description #' Lists all roles that are assigned to the user for a given AWS account. #' #' @usage #' sso_list_account_roles(nextToken, maxResults, accessToken, accountId) #' #' @param nextToken The page token from the previous response output when you request #' subsequent pages. #' @param maxResults The number of items that clients can request per page. #' @param accessToken &#91;required&#93; The token issued by the `CreateToken` API call. For more information, #' see #' [CreateToken](https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateToken.html) #' in the *IAM Identity Center OIDC API Reference Guide*. #' @param accountId &#91;required&#93; The identifier for the AWS account that is assigned to the user. #' #' @return #' A list with the following syntax: #' ``` #' list( #' nextToken = "string", #' roleList = list( #' list( #' roleName = "string", #' accountId = "string" #' ) #' ) #' ) #' ``` #' #' @section Request syntax: #' ``` #' svc$list_account_roles( #' nextToken = "string", #' maxResults = 123, #' accessToken = "string", #' accountId = "string" #' ) #' ``` #' #' @keywords internal #' #' @rdname sso_list_account_roles #' #' @aliases sso_list_account_roles sso_list_account_roles <- function(nextToken = NULL, maxResults = NULL, accessToken, accountId) { op <- new_operation( name = "ListAccountRoles", http_method = "GET", http_path = "/assignment/roles", paginator = list(input_token = "nextToken", output_token = "nextToken", limit_key = "maxResults", result_key = "roleList") ) input <- .sso$list_account_roles_input(nextToken = nextToken, maxResults = maxResults, accessToken = accessToken, accountId = accountId) output <- .sso$list_account_roles_output() config <- get_config() svc <- .sso$service(config) request <- new_request(svc, op, input, output) response <- send_request(request) return(response) } .sso$operations$list_account_roles <- 
sso_list_account_roles #' Lists all AWS accounts assigned to the user #' #' @description #' Lists all AWS accounts assigned to the user. These AWS accounts are #' assigned by the administrator of the account. For more information, see #' [Assign User #' Access](https://docs.aws.amazon.com/singlesignon/latest/userguide/useraccess.html#assignusers) #' in the *IAM Identity Center User Guide*. This operation returns a #' paginated response. #' #' @usage #' sso_list_accounts(nextToken, maxResults, accessToken) #' #' @param nextToken (Optional) When requesting subsequent pages, this is the page token from #' the previous response output. #' @param maxResults This is the number of items clients can request per page. #' @param accessToken &#91;required&#93; The token issued by the `CreateToken` API call. For more information, #' see #' [CreateToken](https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateToken.html) #' in the *IAM Identity Center OIDC API Reference Guide*. #' #' @return #' A list with the following syntax: #' ``` #' list( #' nextToken = "string", #' accountList = list( #' list( #' accountId = "string", #' accountName = "string", #' emailAddress = "string" #' ) #' ) #' ) #' ``` #' #' @section Request syntax: #' ``` #' svc$list_accounts( #' nextToken = "string", #' maxResults = 123, #' accessToken = "string" #' ) #' ``` #' #' @keywords internal #' #' @rdname sso_list_accounts #' #' @aliases sso_list_accounts sso_list_accounts <- function(nextToken = NULL, maxResults = NULL, accessToken) { op <- new_operation( name = "ListAccounts", http_method = "GET", http_path = "/assignment/accounts", paginator = list(input_token = "nextToken", output_token = "nextToken", limit_key = "maxResults", result_key = "accountList") ) input <- .sso$list_accounts_input(nextToken = nextToken, maxResults = maxResults, accessToken = accessToken) output <- .sso$list_accounts_output() config <- get_config() svc <- .sso$service(config) request <- new_request(svc, op, 
input, output) response <- send_request(request) return(response) } .sso$operations$list_accounts <- sso_list_accounts #' Removes the locally stored SSO tokens from the client-side cache and #' sends an API call to the IAM Identity Center service to invalidate the #' corresponding server-side IAM Identity Center sign in session #' #' @description #' Removes the locally stored SSO tokens from the client-side cache and #' sends an API call to the IAM Identity Center service to invalidate the #' corresponding server-side IAM Identity Center sign in session. #' #' If a user uses IAM Identity Center to access the AWS CLI, the user’s IAM #' Identity Center sign in session is used to obtain an IAM session, as #' specified in the corresponding IAM Identity Center permission set. More #' specifically, IAM Identity Center assumes an IAM role in the target #' account on behalf of the user, and the corresponding temporary AWS #' credentials are returned to the client. #' #' After user logout, any existing IAM role sessions that were created by #' using IAM Identity Center permission sets continue based on the duration #' configured in the permission set. For more information, see [User #' authentications](https://docs.aws.amazon.com/singlesignon/latest/userguide/authconcept.html) #' in the *IAM Identity Center User Guide*. #' #' @usage #' sso_logout(accessToken) #' #' @param accessToken &#91;required&#93; The token issued by the `CreateToken` API call. For more information, #' see #' [CreateToken](https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateToken.html) #' in the *IAM Identity Center OIDC API Reference Guide*. #' #' @return #' An empty list. 
#' #' @section Request syntax: #' ``` #' svc$logout( #' accessToken = "string" #' ) #' ``` #' #' @keywords internal #' #' @rdname sso_logout #' #' @aliases sso_logout sso_logout <- function(accessToken) { op <- new_operation( name = "Logout", http_method = "POST", http_path = "/logout", paginator = list() ) input <- .sso$logout_input(accessToken = accessToken) output <- .sso$logout_output() config <- get_config() svc <- .sso$service(config) request <- new_request(svc, op, input, output) response <- send_request(request) return(response) } .sso$operations$logout <- sso_logout
/paws/R/sso_operations.R
permissive
paws-r/paws
R
false
false
8,490
r
#!/usr/bin/env Rscript

# set log
log <- file(snakemake@log[[1]], open = "wt")
sink(log, type = "message")
sink(log, append = TRUE, type = "output")

library(data.table)

guppy_summary_file <- snakemake@input[["guppy_results"]]
guppy_filtered_file <- snakemake@output[["guppy_results"]]

# dev
# guppy_summary_file <- "output/030_guppy-barcoder/barcoding_summary.txt"

guppy_summary <- fread(guppy_summary_file)

# keep the barcode number, i.e. the text before the first "_"
guppy_summary[, barcode_rear_bc := unlist(
    strsplit(barcode_rear_id, "_"))[[1]],
    by = barcode_rear_id]
guppy_summary[, barcode_front_bc := unlist(
    strsplit(barcode_front_id, "_"))[[1]],
    by = barcode_front_id]
guppy_summary[, barcode_full_bc := unlist(
    strsplit(barcode_full_arrangement, "_"))[[1]],
    by = barcode_full_arrangement]  # group by the column being split (was barcode_front_id)

guppy_filtered <- guppy_summary[barcode_arrangement != "unclassified"]

# guppy_filtered <- guppy_summary[
#     barcode_front_score >= median(barcode_front_score) |
#         barcode_rear_score >= median(barcode_rear_score)][
#             barcode_front_bc == barcode_rear_bc]

fwrite(guppy_filtered, guppy_filtered_file, sep = '\t')

# Log
sessionInfo()
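The grouped per-row `strsplit` above can be replaced by data.table's vectorized `tstrsplit`, which splits every row at once; a sketch with a small made-up table standing in for the real barcoding summary:

```r
library(data.table)

# hypothetical rows standing in for guppy's barcoding_summary.txt
guppy_summary <- data.table(
  barcode_rear_id  = c("barcode01_rear", "barcode02_rear"),
  barcode_front_id = c("barcode01_front", "barcode02_front"))

# tstrsplit splits all rows in one call; keep = 1 retains the first token,
# so no by= grouping is needed
guppy_summary[, barcode_rear_bc := tstrsplit(barcode_rear_id, "_", keep = 1)]
guppy_summary[, barcode_front_bc := tstrsplit(barcode_front_id, "_", keep = 1)]
```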
/src/filter_guppy_results.R
no_license
TomHarrop/csd-demux
R
false
false
1,114
r
##########################################################################################
# simdata_pct_threshold
##########################################################################################
simdata_pct_threshold <- function(dataset, params=NULL) {

  # dataset = read_csv("./data/nmdat_0226_2019.csv", col_names=TRUE,
  #                    col_type=cols(.default=col_character()))   # read as character

  figure = NULL
  table  = NULL
  data   = NULL

  # load mrgsolve cpp model
  #-------------------------------------
  library(dplyr)
  library("mrgsolve")

  if (1==2) {
    cppModel_file = "/home/feng.yang/handbook/model/cpp/LN001.cpp"
    cppModel = mread(model='cppModel', project=dirname(cppModel_file), file=basename(cppModel_file))

    adsl = data.frame(ID=seq(1,10,by=1), WGTBL=75)   # a data.frame
    adex = parseARMA(c("3 mg/kg IV Q2W*12 ",
                       "3 mg/kg IV QW*1 + 350 mg IV Q3W*8"))   # a data.frame or an event object
    seed = 1234
    simulation_delta = 1
    followup_period = 112
    infusion_hrs = 1

    # duplicated simulations
    nrep = 10
    nsubject = nrow(adsl)
    dataset = lapply(1:nrep, function(i) {
      print(i)
      # virtual subjects
      idata = adsl %>% sample_n(size=nsubject, replace=TRUE) %>% mutate(ID = 1:nsubject)
      # run simulation
      cbind(REP=i,
            runSim_by_dosing_regimen2(cppModel, idata, adex,
                                      simulation_delta = 0.1,  # default density of time points
                                      tgrid = NULL,            # extra time points (other than delta)
                                      infusion_hrs = 1,
                                      followup_period = 84,
                                      seed=sample(1:1000, 1)) %>% as.data.frame())
    }) %>% bind_rows()
  }

  #------------------------------
  # these key variables are needed
  #------------------------------
  key.column.lst <- c("ID","ARMA","TEST","IPRED","TAD","TIME","II","EXSEQ")
  missing.column.lst <- key.column.lst[which(!key.column.lst %in% colnames(dataset))]
  message <- paste0("missing variable(s) of ", paste0(missing.column.lst, sep=", ", collapse=""))
  validate(need(all(key.column.lst %in% colnames(dataset)), message=message))

  ##############################################################
  # calculate the percentage of patients above the
  # concentration threshold during the treatment period
  ##############################################################
  tdata = dataset %>%
    filter(TIME==24*7) %>%   # at week 24
    mutate(CONC_GT_100 = ifelse(CP>=100, 1, 0),
           CONC_GT_150 = ifelse(CP>=150, 1, 0),
           CONC_GT_200 = ifelse(CP>=200, 1, 0))

  # percentage of the population above the threshold concentration
  #-------------------------------------------------------
  #tdata$REP = 1   # number of replicates
  tdata = tdata %>%
    group_by(REP, ARMA) %>%
    summarise(N = length(unique(ID)),
              PCT_CONC_GT_100 = sum(CONC_GT_100)/N*100,
              PCT_CONC_GT_150 = sum(CONC_GT_150)/N*100,
              PCT_CONC_GT_200 = sum(CONC_GT_200)/N*100) %>%
    gather(TEST, DVOR, -REP, -ARMA, -N)

  # calculate stats
  #----------------------
  tabl = tdata %>%
    # exclude pre-dose samples
    # calculate the statistics (Mean, SE, SD)
    calc_stats(id="REP", group_by=c("ARMA", "TEST"), value="DVOR") %>%
    select(ARMA, TEST, N,   #Mean, meanMinusSE, meanPlusSE,
           Mean_SD, Mean_SE, Median_Range) %>%
    arrange(ARMA, TEST)
  tabl
  # paste0(round(sum(CONC_GT_200)/N*100, digits=2), "%")

  attr(tabl, 'title') <- "Percentage of Subjects Achieving the Threshold Concentration for Different Dosing Regimens at Week 24"
  attr(tabl, 'footnote') <- paste0("QW: weekly, Q2W: bi-weekly")
  table[["CONC_PCT_WK24"]] = tabl

  return(list(figure=figure, table=table, message=message))
}

#################################################################
# final output
#################################################################
if (ihandbook) {
  output = suppressWarnings(simdata_pct_threshold(dataset, params=NULL))
}
/script/simdata_pct_threshold.R
no_license
fyang72/handbook
R
false
false
4,329
r
library(RSelenium)

remDr <- remoteDriver(remoteServerAddr = 'localhost',
                      port = 4445,
                      browserName = 'chrome')
remDr$open()

url <- 'https://hotel.naver.com/hotels/item?hotelId=hotel:Shilla_Stay_Yeoksam&destination_kor=%EC%8B%A0%EB%9D%BC%EC%8A%A4%ED%85%8C%EC%9D%B4%20%EC%97%AD%EC%82%BC&rooms=2'
remDr$navigate(url)
Sys.sleep(3)

pageLink <- NULL
reple <- NULL
curr_PageOldNum <- 0

repeat {
  doms <- remDr$findElements(using = 'css selector', '.txt.ng-binding')  # compound class selector; the original '.txt ng-binding' matched nothing
  Sys.sleep(1)
  reple_v <- sapply(doms, function(x) {x$getElementText()})  # typo fixed: getElemtntText -> getElementText
  print(reple_v)
  reple <- append(reple, unlist(reple_v))
  cat(length(reple), '\n')
  pageLink <- remDr$findElements(using = 'css selector', '')  # selector left empty in the original; the loop also has no exit condition
}
/[R] lab10.R
no_license
haoingg/analysis
R
false
false
693
r
library(changepoint)

### Name: ncpts.max<-
### Title: Generic Function - ncpts.max<-
### Aliases: ncpts.max<-
### Keywords: methods cpt internal

### ** Examples

x = new("cpt")      # new cpt object
ncpts.max(x) <- 10  # sets the maximum number of changepoints in object x to 10
/data/genthat_extracted_code/changepoint/examples/ncpts.max-.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
272
r
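For context, the `cpt` object that `ncpts.max<-` operates on above usually comes from one of the package's detection functions rather than `new("cpt")`. A minimal sketch, assuming the changepoint package is installed (the simulated series is made up for illustration):

```r
library(changepoint)

# simulated series with a single mean shift at position 100
set.seed(1)
y <- c(rnorm(100, mean = 0), rnorm(100, mean = 3))

ans <- cpt.mean(y, method = "PELT")  # returns an S4 "cpt" object
cpts(ans)                            # estimated changepoint location(s)
```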
#' @title Plot Random Forest Proximity Scores
#' @description Create a plot of Random Forest proximity scores using
#'   multi-dimensional scaling.
#'
#' @param rf A \code{randomForest} object.
#' @param dim.x,dim.y Numeric values giving x and y dimensions to plot from
#'   multidimensional scaling of proximity scores.
#' @param legend.loc Character keyword specifying location of legend.
#'   Can be \code{"bottom", "top", "left", "right"}.
#' @param point.size Size of central points.
#' @param circle.size Size of circles around correctly classified
#'   points as argument to 'cex'. Set to NULL for no circles.
#' @param circle.border Width of circle border.
#' @param hull.alpha value giving alpha transparency level for convex hull shading.
#'   Setting to \code{NULL} produces no shading. Ignored for regression models.
#' @param plot logical determining whether or not to show plot.
#'
#' @details Produces a scatter plot of proximity scores for \code{dim.x} and
#'   \code{dim.y} dimensions from a multidimensional scale (MDS) conversion of
#'   proximity scores from a \code{randomForest} object. For classification
#'   models, a convex hull is drawn around the a-priori classes with points
#'   colored according to original (inner) and predicted (outer) class.
#'
#' @return a list with \code{prox.cmd}: the MDS scores of the selected dimensions,
#'   and \code{g} the \code{\link{ggplot}} object.
#'
#' @author Eric Archer \email{eric.archer@@noaa.gov}
#'
#' @examples
#' data(symb.metab)
#'
#' rf <- randomForest(type ~ ., symb.metab, proximity = TRUE)
#' proximityPlot(rf)
#'
#' @importFrom stats cmdscale
#' @importFrom grDevices chull rainbow
#' @importFrom ggplot2 ggplot aes aes_string geom_point labs theme element_blank geom_polygon element_rect
#' @export
#'
proximityPlot <- function(rf, dim.x = 1, dim.y = 2,
                          legend.loc = c("top", "bottom", "left", "right"),
                          point.size = 2, circle.size = 8, circle.border = 1,
                          hull.alpha = 0.3, plot = TRUE) {
  if(is.null(rf$proximity)) {
    stop("'rf' has no 'proximity' element. rerun with 'proximity = TRUE'")
  }
  prox.cmd <- cmdscale(1 - rf$proximity, k = max(c(dim.x, dim.y)))
  prox.cmd <- prox.cmd[, c(dim.x, dim.y)]
  df <- data.frame(prox.cmd, class = rf$y, predicted = rf$predicted)
  colnames(df)[1:2] <- c("x", "y")

  g <- ggplot(df, aes_string("x", "y"))

  # Add convex hulls
  if(rf$type != "regression") {
    loc.hull <- tapply(1:nrow(prox.cmd), rf$y, function(i) {
      ch <- chull(prox.cmd[i, 1], prox.cmd[i, 2])
      c(i[ch], i[ch[1]])
    })
    for(ch in 1:length(loc.hull)) {
      ch.df <- df[loc.hull[[ch]], ]
      g <- g + if(!is.null(hull.alpha)) {
        geom_polygon(
          aes_string("x", "y", color = "class", fill = "class"),
          alpha = hull.alpha, data = ch.df,
          inherit.aes = FALSE, show.legend = FALSE
        )
      } else {
        geom_polygon(
          aes_string("x", "y", color = "class"),
          fill = "transparent", data = ch.df,
          inherit.aes = FALSE, show.legend = FALSE
        )
      }
    }
  }

  g <- g +
    geom_point(aes_string(color = "class"), size = point.size) +
    labs(x = paste("Dimension", dim.x), y = paste("Dimension", dim.y)) +
    theme(
      legend.position = match.arg(legend.loc),
      legend.title = element_blank(),
      legend.key = element_rect(color = NA, fill = NA),
      panel.background = element_rect(color = "black", fill = NA),
      panel.grid.major = element_blank(),
      panel.grid.minor = element_blank()
    )

  # Add predicted circles
  if(!is.null(circle.size)) {
    g <- g + geom_point(
      aes_string(color = "predicted"),
      shape = 21, size = circle.size,
      stroke = circle.border, show.legend = FALSE
    )
  }

  if(plot) print(g)
  invisible(list(prox.cmd = prox.cmd, g = g))
}
/R/proximityPlot.R
no_license
dejavu2010/rfPermute
R
false
false
3,959
r
library(unbalhaar)

### Name: inner.prod.max.p
### Title: Unbalanced Haar wavelet which maximises the inner product
### Aliases: inner.prod.max.p
### Keywords: math

### ** Examples

inner.prod.max.p(c(rep(0, 100), rep(1, 200)), .55)
/data/genthat_extracted_code/unbalhaar/examples/inner.prod.max.p.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
238
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/helper_function.R
\name{permutation_cor}
\alias{permutation_cor}
\title{Permutations to build a differential network based on correlation analysis}
\usage{
permutation_cor(
  m,
  p,
  n_group_1,
  n_group_2,
  data_group_1,
  data_group_2,
  type_of_cor
)
}
\arguments{
\item{m}{This is the number of permutations.}

\item{p}{This is the number of biomarker candidates.}

\item{n_group_1}{This is the number of subjects in group 1.}

\item{n_group_2}{This is the number of subjects in group 2.}

\item{data_group_1}{This is a \eqn{n*p} matrix containing group 1 data.}

\item{data_group_2}{This is a \eqn{n*p} matrix containing group 2 data.}

\item{type_of_cor}{If this is NULL, the Pearson correlation coefficient will be
calculated as default. Otherwise, a character string "spearman" will calculate
the Spearman correlation coefficient.}
}
\value{
A multi-dimensional matrix that contains the permutation result.
}
\description{
A permutation test that randomly permutes the sample labels in distinct
biological groups for each biomolecule. The difference in each paired
biomolecule is considered statistically significant if it falls into the
2.5\% tails on either end of the empirical distribution curve.
}
/man/permutation_cor.Rd
permissive
ressomlab/INDEED
R
false
true
1,302
rd
context("StridedTensorMap")

library(Rcpp)

test_that("basic uses of StridedTensorMap work", {
  cppfile <- system.file(file.path('tests', 'testthat', 'cpp', 'StridedTensorMap_tests.cpp'),
                         package = 'nCompiler')
  test <- nCompiler:::QuietSourceCpp(cppfile)
  x <- array(1:(6*5*4), dim = c(6, 5, 4))
  expect_equal(STM1(x), x[, 2:3, ][2, 2, 3])
  expect_equal(STM2(x), x[, 2:3, ])
  expect_equal(STM3(x), log(x[, 2:3, ]))
  expect_equal(STM4(x), log(x[, 2:3, ]))
  expect_equal(STM5(x), {temp <- x; temp[,,] <- 0; temp[, 2:3, ] <- x[, 2:3, ]; temp})
  expect_equal(STM6(x), x[2:5, 3, 2:3])
  expect_equal(STM7(x), x[5, 1:3, 2:3])
  expect_equal(STM8(x), array(x[5, 1:3, 2]))
  expect_equal(STM9(x), x[5, , 2:3])
})
/nCompiler/tests/testthat/test-StridedTensorMap.R
permissive
nimble-dev/nCompiler
R
false
false
714
r
#### Help.Functions
library(quadprog)
library(MASS)

hspace <- function(beta, c) {
  p <- length(beta)
  sgn.b <- sign(beta)
  #a <- sgn.b==0
  #sgn.b[a] <- 1
  ord.b <- rank(abs(beta), ties.method="first")
  w <- c*(1:p-1) + (1-c)
  h.beta <- w[ord.b]*sgn.b
  return(h.beta)
}

H.creater <- function(H, beta, t, c, epsilon) {
  nrow.H <- nrow(H)
  p <- length(beta)
  ### active sets
  for(i in 1:nrow.H) {
    if(abs((sum(H[i,-1]*beta[-1])-t)) > epsilon) H[i,] <- rep(0,p)  #hspace(beta,c)
  }
  h.new <- t(hspace(beta[-1], c))
  ### vertical
  vert.check <- rep(0, nrow.H)
  for(i in 1:nrow.H) {
    if(sum(abs(H[i,-1])) != 0){vert.check[i] <- sum(h.new*H[i,-1])/(sqrt(sum(h.new^2))*sqrt(sum(H[i,-1]^2)))}
  }
  a <- (1-vert.check) < epsilon
  H[a,] <- rep(0,p)
  b <- rowSums(abs(H)) != 0
  if(all(b)==FALSE){H.top <- NULL}
  if(any(b)==TRUE){H.top <- H[b,-1]}  #matrix(,ncol=p)}
  #if(sum(H[b,])==0) nrow.H.top <- 0
  #H <- H.top
  #if(all(a)==FALSE)
  #{
  H <- rbind(H.top, h.new)
  #}
  return(H)
}

l2norm <- function(vec) {
  l2length <- sqrt(sum(vec^2))
  if(l2length > 0){vec.norm <- vec/l2length}
  if(l2length == 0){vec.norm <- vec}
  return(list("length"=l2length, "normed"=vec.norm))
}

fs.oscar <- function(X, y, beta.alt, H.new, bvec, meq=0, family, epsilon=1e-8) {  # typo fixed: 'familiy' -> 'family'
  #if(family$family=="binomial") y <- as.factor(y)
  p <- ncol(X)
  beta.alt <- beta.alt  #glm(y~X-1,family)$coef
  beta.alt.l2 <- l2norm(beta.alt[-1])$length
  delta <- Inf
  #X <- cbind(1,X)
  while(delta > epsilon) {
    eta <- X %*% beta.alt
    dev.mat <- family$mu.eta(eta)
    mu <- family$linkinv(eta)
    s.mat <- family$variance(mu)
    W <- diag(c(dev.mat^2/s.mat))
    Dmat <- t(X) %*% W %*% X
    dvec <- t(X) %*% W %*% (eta + diag(c(1/dev.mat)) %*% (y-mu))
    beta.neu <- solve.QP(Dmat, dvec, -t(H.new), -bvec, meq=0, factorized=FALSE)$solution
    delta <- l2norm(beta.neu-beta.alt)$length/beta.alt.l2
    #cat(delta,"\n")
    beta.alt <- beta.neu
    beta.alt.l2 <- l2norm(beta.alt)$length
  }
  return(beta.neu)
}

#### Active Set Estimate
oscar.as <- function(X, y, ml, t, c, epsilon=1e-8, family) {
  beta.alt <- ml  #glm(y~X-1,family)$coef
  H.new.eq.H.old <- FALSE
  H.old <- matrix(0, 1, ncol(X))
  bvec <- rep(t, 1)
  #BETA <- NULL
  while(H.new.eq.H.old == FALSE) {
    H.new <- cbind(0, H.creater(H.old, beta.alt, t, c, epsilon))  #,epsilon=epsilon)
    bvec <- c(rep(t, nrow(H.new)))
    if(family$family != "gaussian") {
      beta <- fs.oscar(X, y, beta.alt, H.new, bvec, meq=0, family, epsilon)
    }
    if(family$family == "gaussian") {
      Dmat <- t(X) %*% X
      dvec <- t(X) %*% y
      beta <- solve.QP(Dmat, dvec, -t(H.new), -bvec, meq=0, factorized=FALSE)$solution
    }
    #if(all(dim(H.old)==dim(H.new))){H.new.eq.H.old <- all(H.old==H.new)}
    if(l2norm(beta-beta.alt)$length/l2norm(beta.alt)$length < epsilon){H.new.eq.H.old <- TRUE}
    H.old <- H.new
    #BETA <- cbind(BETA, beta)
    #cat(max(abs(beta%*%t(H.new))),"\n")
    beta.alt <- beta
    #matplot(t(BETA), type="l")
  }
  return(beta)
}

glm.oscar <- function(y.org, X.org, family, t.seq, c, epsilon) {
  y <- y.org
  p <- ncol(X.org)
  X <- scale(X.org, center=TRUE, scale=TRUE)
  X <- cbind(1, X)
  control = glm.control(epsilon = 1e-8, maxit = 1000, trace = FALSE)
  ml <- glm(y~X-1, family, control=control)
  kq.hspace <- hspace(ml$coef[-1], c)
  tmax <- sum(kq.hspace*ml$coef[-1])
  t.seq <- tmax*t.seq  #(tmax*0.01, tmax*0.99, length=t.length)
  t.length <- length(t.seq)
  #if(length(t.length)==1) t.seq <- c(t.length)
  BETA <- matrix(0, t.length, p+1)
  for(i in 1:t.length) {
    BETA[i,] <- oscar.as(X, y, ml$coef, t=t.seq[i], c=c, epsilon, family)
    cat(i, "\n")
  }
  BETA.org <- BETA[,-1] %*% diag(1/apply(X.org, 2, sd))  # column-wise sd; sd() no longer works directly on matrices
  Inter.org <- BETA[,1] - BETA.org %*% colMeans(X.org)
  BETA.org <- cbind(Inter.org, BETA.org)
  return(list(
    "Beta.std"=BETA,
    "Beta"=BETA.org,
    "family"=family,
    "t.seq"=t.seq
  ))
}

predict.glm.oscar <- function(BETA, X, y) {
  X <- cbind(1, X)
  n <- nrow(X)
  t.length <- nrow(BETA$Beta)
  FIT <- matrix(0, n, t.length)
  ETA <- matrix(0, n, t.length)
  DEV <- matrix(0, n, t.length)
  SUMDEV <- rep(0, t.length)
  wt <- rep(1, n)
  for(i in 1:t.length) {
    ETA[,i] <- X %*% BETA$Beta[i,]
    FIT[,i] <- BETA$family$linkinv(ETA[,i])
    DEV[,i] <- BETA$family$dev.resids(y, FIT[,i], wt)
    SUMDEV[i] <- sum(DEV[,i])
  }
  return(list(
    "eta"=ETA,
    "fit"=FIT,
    "dev"=DEV,
    "sumdev"=SUMDEV
  ))
}

cv.glm.oscar <- function(X, y, family, splitM, c.seq, t.seq, epsilon) {
  nsplit <- ncol(splitM)
  dev.all <- Inf
  #dev <- rep(0, length(t.seq))
  ncseq <- length(c.seq)
  for(j in 1:ncseq) {
    dev <- rep(0, length(t.seq))
    for(i in 1:nsplit) {
      X.train <- X[splitM[,i]==1,]
      y.train <- y[splitM[,i]==1]
      train <- glm.oscar(y.train, X.train, family=family, t.seq=t.seq, c=c.seq[j], epsilon=1e-8)
      X.test <- X[splitM[,i]==0,]
      y.test <- y[splitM[,i]==0]
      test <- predict.glm.oscar(train, X.test, y.test)
      dev <- dev + test$sumdev
    }
    cat(min(dev), "\n")
    if(min(dev) < dev.all) {
      t.opt <- which.min(dev)
      c.opt <- c.seq[j]
      dev.all <- min(dev)
    }
  }
  train <- glm.oscar(y, X, family=family, t.seq=t.seq[t.opt], c=c.opt, epsilon=1e-8)
  return(list(
    "model"=train,
    "dev"=dev,
    "c"=c.opt,
    "t"=t.opt))
}
/catdata/vignettes/glmOSCAR_101028.r
no_license
ingted/R-Examples
R
false
false
4,830
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/coupled2_hmc_kernel.R
\name{coupled2_hmc_kernel}
\alias{coupled2_hmc_kernel}
\title{HMC kernel for two coupled chains}
\usage{
coupled2_hmc_kernel(
  level,
  state1,
  state2,
  identical,
  tuning,
  proposal_coupling
)
}
\arguments{
\item{level}{an integer that determines the probability distribution}

\item{state1}{a list with the state of the first chain}

\item{state2}{a list with the state of the second chain}

\item{identical}{a boolean variable that is TRUE if the chains are identical and FALSE otherwise}

\item{tuning}{a list of parameters needed for HMC (e.g. the time step for leapfrog integration)}

\item{proposal_coupling}{function that determines the proposal coupling}
}
\value{
a list that contains the states of the two chains, the updated value of the
flag "identical" and the cost of computations
}
\description{
generation of coupled proposals + accept/reject step
}
/man/coupled2_hmc_kernel.Rd
no_license
jeremyhengjm/UnbiasedMultilevel
R
false
true
932
rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/PlotSampleNumberVsSamplingEffort.R
\name{PlotSampleNumberVsSamplingEffort}
\alias{PlotSampleNumberVsSamplingEffort}
\title{Represent total sampling effort according to sample number.}
\usage{
PlotSampleNumberVsSamplingEffort(data, seqReplicat)
}
\arguments{
\item{data}{A FData object.}

\item{seqReplicat}{A numeric vector with replicate numbers}
}
\value{
Return a plot with the sampling effort on y and the sample number on x.
Labels are the replicate number.
}
\description{
Represent total sampling effort according to sample number.
}
/man/PlotSampleNumberVsSamplingEffort.Rd
no_license
thomasdenis973/FormatDataToGDS
R
false
true
619
rd
#' Convert an adjacency matrix (ie - from the \code{sparseiCov} function) to an
#' igraph object suitable for plotting via phyloseq's \code{plot_network}.
#'
#' @param Adj an Adjacency matrix
#' @export
adj2igraph <- function(Adj, names=1:ncol(Adj), rmEmptyNodes=FALSE) {
  g <- graph.adjacency(Adj, mode = "undirected", weighted = TRUE)
  if (rmEmptyNodes) {
    ind <- which(degree(g) < 1)
    if (length(ind) > 0) {  # guard added: names[-integer(0)] would drop all names
      g <- delete.vertices(g, ind)
      names <- names[-ind]
    }
  }
  V(g)$name <- names
  g
}
/R/plotNet.R
no_license
TankMermaid/SpiecEasi
R
false
false
503
r
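A minimal usage sketch for `adj2igraph` above, assuming the igraph package is attached; the symmetric 0/1 matrix and OTU names are made up for illustration:

```r
library(igraph)

Adj <- matrix(0, 4, 4)
Adj[1, 2] <- Adj[2, 1] <- 1
Adj[2, 3] <- Adj[3, 2] <- 1   # vertex 4 stays unconnected

g <- adj2igraph(Adj, names = c("OTU1", "OTU2", "OTU3", "OTU4"),
                rmEmptyNodes = TRUE)
V(g)$name   # the unconnected "OTU4" has been dropped
```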
/naivnibaies/baies.r
no_license
ice8scream/ml0
R
false
false
1,784
r
# Reading the data
data <- read.table("household_power_consumption.txt", header=T, sep= ";", na.strings = "?")

# Filtering just the needed dates
library(dplyr)
data <- data %>% filter(Date == "1/2/2007" | Date == "2/2/2007")

# Cleaning dates variables
library(lubridate)                                 # Loads lubridate
data$dt <- paste(data$Date, data$Time, sep = " ")  # create day-time vector
data$dt <- dmy_hms(data$dt)                        # format it as date

# Open the PNG device
png(filename = "plot3.png", width = 480, height = 480, units = "px")

# Plot 3:
plot(x=data$dt, y=data$Sub_metering_1, type="l", xlab="",
     ylab = "Energy sub metering")                           # creates the plotting with submet1
points(data$dt, data$Sub_metering_2, type = "l", col="red")  # insert submet2
points(data$dt, data$Sub_metering_3, type = "l", col="blue") # insert submet3

# insert the legend:
legend("topright", lty="solid", col=c("black", "red", "blue"),
       legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))

# Close PNG device
dev.off()
/plot3.R
no_license
sibemberg/ExData_Plotting1
R
false
false
975
r
#### Lists and Data Frames #####################################################

#### 1. Creating a List ####

# Different Data Types

# From different objects
char <- "Harry"
vec <- c("A", "B", "C")
mtr <- matrix(data = 1:9, nrow = 3)
ary <- array(data = 1:18, dim = c(3,3,2))

#### 2. Accessing Components of a List ####

# Member Reference

# Slicing

# Named Components

#### 3. Creating a Data Frame ####

employee_df <- data.frame(first = c("Michael", "Jim", "Dwight", "Pam"),
                          second = c("Scott", "Halpert", "Schrute", "Beesly"),
                          position = c("Regional Manager", "Sales Representative",
                                       "Sales Representative", "Receptionist"),
                          age = c(40, 27, 35, 26),
                          salary = c(79000, 45000, 52000, 30000))

#### 4. Adding Data ####

# New Record
new_employee <- data.frame(first = "Angela", second = "Martin",
                           position = "Senior Accountant",
                           age = 31, salary = 57000)

# New Variable
sex <- c("male", "male", "male", "female", "female")

#### 5. Data Frame Subsets ####

# Extracting Rows and Columns (Records and Variables)
# By Index
# By Name
# Logical Flags

#### 6. R Built in Data Sets ####

#### Next Time #################################################################
plot(employee_df$age, employee_df$salary)
/tutorial_6__headings_only.R
no_license
HarryDoesData/R-for-Beginners
R
false
false
1,501
r
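The "Adding Data" section of the tutorial above defines `new_employee` and `sex` but never attaches them; a self-contained sketch of the two binds those headings describe (the cut-down data frame is made up for illustration):

```r
# a small stand-in for the tutorial's employee_df
employee_df <- data.frame(first = c("Michael", "Jim"),
                          age   = c(40, 27))

# New Record: rbind() appends a row whose column names match
new_employee <- data.frame(first = "Angela", age = 31)
employee_df  <- rbind(employee_df, new_employee)

# New Variable: assigning a vector of matching length adds a column
employee_df$sex <- c("male", "male", "female")

dim(employee_df)   # 3 rows, 3 columns
```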
#!/usr/bin/env Rscript

## Create SummarizedExperiment (se) object from Salmon counts

args = commandArgs(trailingOnly=TRUE)
if (length(args) < 3) {  # three arguments are used below; the original checked for two
  stop("Usage: salmon_se.r <coldata> <counts> <tpm>", call.=FALSE)
}

coldata = args[1]
counts_fn = args[2]
tpm_fn = args[3]

tx2gene = "salmon_tx2gene.tsv"
info = file.info(tx2gene)
if (info$size == 0){
  tx2gene = NULL
}else{
  rowdata = read.csv(tx2gene, sep="\t", header = FALSE)
  colnames(rowdata) = c("tx", "gene_id", "gene_name")
  tx2gene = rowdata[,1:2]
}

counts = read.csv(counts_fn, row.names=1, sep="\t")
tpm = read.csv(tpm_fn, row.names=1, sep="\t")

if (length(intersect(rownames(counts), rowdata[["tx"]])) > length(intersect(rownames(counts), rowdata[["gene_id"]]))){
  by_what = "tx"
} else {
  by_what = "gene_id"
  rowdata = unique(rowdata[,2:3])
}

if (file.exists(coldata)){
  coldata = read.csv(coldata, sep="\t")
  coldata = coldata[match(colnames(counts), coldata[,1]),]
  coldata = cbind(files = coldata[,1], coldata)  # the original referenced an undefined 'fns' here
}else{
  message("ColData not available ", coldata)
  coldata = data.frame(files = colnames(counts), names = colnames(counts))
}

library(SummarizedExperiment)

rownames(coldata) = coldata[["names"]]

extra = setdiff(rownames(counts), as.character(rowdata[[by_what]]))
if (length(extra) > 0){
  rowdata = rbind(rowdata, data.frame(tx=extra, gene_id=extra, gene_name=extra))
}
rowdata = rowdata[match(rownames(counts), as.character(rowdata[[by_what]])),]
rownames(rowdata) = rowdata[[by_what]]

se = SummarizedExperiment(assays = list(counts = counts, abundance = tpm),
                          colData = DataFrame(coldata),
                          rowData = rowdata)
saveRDS(se, file = paste0(tools::file_path_sans_ext(counts_fn), ".rds"))
/bin/salmon_summarizedexperiment.r
permissive
sk-sahu/rnaseq
testlist <- list(a = 0L, b = 0L, x = c(-1073741825L, 1358954496L, 0L, 0L, 0L))
result <- do.call(grattan:::anyOutside, testlist)
str(result)
/grattan/inst/testfiles/anyOutside/libFuzzer_anyOutside/anyOutside_valgrind_files/1610054645-test.R
no_license
akhikolla/updated-only-Issues
#' Kernel Discriminative Component Analysis
#'
#' Performs kernel discriminative component analysis on the given data.
#'
#' Put KDCA function details here.
#'
#' @param k n x n kernel matrix. Result of the \code{\link{kmatrixGauss}} function.
#'          n is the number of samples.
#' @param chunks \code{n * 1} vector describing the chunklets:
#'          \code{-1} in the \code{i}th place means that point \code{i}
#'          doesn't belong to any chunklet;
#'          integer \code{j} in place \code{i} means that point \code{i}
#'          belongs to chunklet \code{j}.
#'          The chunklet indexes should be 1:(number of chunklets).
#' @param neglinks \code{s * s} matrix describing the negative relationship
#'          between all the \code{s} chunklets.
#'          For the element \eqn{neglinks_{ij}}:
#'          \eqn{neglinks_{ij} = 1} means chunklets \code{i} and \code{j}
#'          have negative constraint(s);
#'          \eqn{neglinks_{ij} = -1} means chunklets \code{i} and \code{j}
#'          don't have negative constraints,
#'          or we don't have information about that.
#' @param useD optional. When not given, DCA is done in the original dimension
#'          and B is full rank. When useD is given, DCA is preceded by
#'          constraints-based LDA which reduces the dimension to useD.
#'          B in this case is of rank useD.
#'
#' @return list of the KDCA results:
#' \item{B}{KDCA suggested Mahalanobis matrix}
#' \item{DCA}{KDCA suggested transformation of the data.
#'            The dimension is (original data dimension) * (useD)}
#' \item{newData}{KDCA transformed data}
#'
#' @keywords dca kdca discriminant component transformation mahalanobis metric
#'
#' @aliases kdca
#'
#' @note Put some note here.
#'
#' @author Xiao Nan <\url{http://www.road2stat.com}>
#'
#' @seealso See \code{\link{kmatrixGauss}} for the Gaussian kernel computation,
#' and \code{\link{dca}} for the linear version of DCA.
#'
#' @export kdca
#'
#' @references
#' Steven C.H. Hoi, W. Liu, M.R. Lyu and W.Y. Ma (2006).
#' Learning Distance Metrics with Contextual Constraints for Image Retrieval.
#' \emph{Proceedings IEEE Conference on Computer Vision and Pattern Recognition
#' (CVPR2006)}.
#'
#' @examples
#' kdca(NULL)

kdca <- function(k, chunks, neglinks, useD) {
  NULL
}
/R/kdca.R
permissive
mutual-ai/sdml
# Remove the two largest total sulfur dioxide outliers, then pick a model by
# k-fold cross-validated best-subset selection; regsubsets() is from the leaps package
library(leaps)

which.max(red_wine_data_raw$total_sulfur_dioxide)
red_wine_data_no_outliers <- red_wine_data_raw[-1082, ]
which.max(red_wine_data_no_outliers$total_sulfur_dioxide)
# Drop the second outlier from the already-filtered data (the original re-subset
# red_wine_data_raw, which undid the first removal)
red_wine_data_no_outliers <- red_wine_data_no_outliers[-1080, ]

k <- 5
folds <- sample(1:k, nrow(red_wine_data_no_outliers), replace = TRUE)
cv_error <- matrix(NA, k, 11, dimnames = list(NULL, paste(1:11)))
for (j in 1:k) {
  best_fit <- regsubsets(quality ~ ., data = red_wine_data_no_outliers[folds != j, ],
                         nvmax = 11)
  for (i in 1:11) {
    # predict.regsubsets() is a user-supplied helper; leaps has no predict method
    pred <- predict.regsubsets(best_fit, red_wine_data_no_outliers[folds == j, ], id = i)
    cv_error[j, i] <- mean((red_wine_data_no_outliers$quality[folds == j] - pred)^2)
  }
}
(mean_cv_errors <- apply(cv_error, 2, mean))
plot(mean_cv_errors, type = "b", main = "Best Model Selection",
     xlab = "# Parameters Considered", ylab = "MSE")
which.min(mean_cv_errors)
reg_best <- regsubsets(quality ~ ., data = red_wine_data_no_outliers, nvmax = 7)
coef(reg_best, 7)
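The script above calls `predict.regsubsets()`, which `leaps` does not export, so the session needs a helper defined elsewhere. A minimal sketch of the conventional helper (the ISLR-style definition; assumed, not part of this repo):

```r
# Hypothetical helper: predict from a regsubsets fit for a given model size `id`
predict.regsubsets <- function(object, newdata, id, ...) {
  form <- as.formula(object$call[[2]])         # formula used in the original fit
  mat <- model.matrix(form, newdata)           # design matrix for the new data
  coefi <- coef(object, id = id)               # coefficients of the id-variable model
  mat[, names(coefi), drop = FALSE] %*% coefi  # linear predictor
}
```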
/src/total_sulfur_outliers_test.R
no_license
medewitt/wine_analysis
# Sum of exponential functions for sourcing # ----------------------------------------------------------------------------- # The functions in order are: # pred.sumexp - gives sum of exponentials for given set of parameters # mle.sumexp - maximum likelihood estimation function to be minimised # optim.sumexp - determine parameters for differing numbers of exponentials # err.interv - trapezoidal error function to be minimised # optim.interv - determine intervals that give the smallest error # ----------------------------------------------------------------------------- # Sum of exponentials predicted concentrations function # pred.sumexp <- function(par, x, d = 0) { # # There are currently issues with order.sumexp messing with optimisation if it is within pred.sumexp # # Provides predictions according to model parameters # # par = sum of exponential parameters # # x = independent variable (time) # # d = order of derivative (uses dth derivative of model) # # Define objects # l <- length(par) # number of parameters # a <- l %% 2 == 1 # absorption status (odd parameter length == absorption) # n <- ceiling(l/2) # number of exponentials # p <- order.sumexp(par, n, a) # order parameters (allows for flip-flop) # # Sum of exponentials # for (i in 1:n) { # for each exponential # if (i == 1) yhat <- p[i]^d*exp(p[i]*x + p[n+i]) # first exponential defines yhat # else if (i != n | !a) yhat <- yhat + p[i]^d*exp(p[i]*x + p[n+i]) # following exponentials add to yhat # else if (a) yhat <- yhat - p[i]^d*exp(p[i]*x)*sum(exp(p[(n+1):(2*n-1)])) # for absorption curve apply final term # } # return(yhat) # predicted dependent variable (drug concentration) # } order.sumexp <- function(par, n, a) { # Sorts sum of exponential parameters to enable pred.sumexp # par = sum of exponential parameters # n = number of exponentials # a = absorption status m <- -abs(head(par, n+a)) # slope parameters (prevents exponential growth) b <- tail(par, -(n+a)) # intercept parameters m.ord <- order(m, 
decreasing = T) # slope order (terminal first, absorption last) b.ord <- m.ord # intercept order (match slopes) if (a) b.ord <- order(m[-m.ord[n+a]], decreasing = T) # if absorption curve remove extra term unname(c(m[m.ord], b[b.ord])) # ordered parameters } # Oldest function # pred.sumexp <- function(x, t, d = 0) { # l <- length(x) # a <- ifelse(l %% 2 == 0, 0, 1) # n <- ceiling(l/2) # for (i in 1:n) { # if (i == 1) y <- x[i]^d*exp(x[i]*t + x[n+i]) # else if (i != n | a == 0) y <- y + x[i]^d*exp(x[i]*t + x[n+i]) # else if (a == 1) y <- y - x[i]^d*exp(x[i]*t)*sum(exp(x[(n+1):(2*n-1)])) # } # return(y) # } # Less Old function pred.sumexp <- function(par, x, d = 0) { # Provides predictions according to model parameters # par = sum of exponential parameters # x = independent variable (time) # d = order of derivative (uses dth derivative of model) # Define objects l <- length(par) # number of parameters a <- l %% 2 == 1 # absorption status (odd parameter length == absorption) n <- ceiling(l/2) # number of exponentials m <- -abs(par[1:n]) # slope parameters (prevents exponential growth) b <- par[(n+1):l] # intercept parameters # Order parameters (allows for flip-flop) m.ord <- order(m, decreasing = T) # slope order (terminal first, absorption last) b.ord <- m.ord # intercept order (match slopes) if (a) b.ord <- order(m[-m.ord[n]], decreasing = T) # if absorption curve remove extra term p <- c(m[m.ord], b[b.ord]) # ordered parameters # Sum of exponentials for (i in 1:n) { # for each exponential if (i == 1) yhat <- p[i]^d*exp(p[i]*x + p[n+i]) # first exponential defines yhat else if (i != n | !a) yhat <- yhat + p[i]^d*exp(p[i]*x + p[n+i]) # following exponentials add to yhat else if (a) yhat <- yhat - p[i]^d*exp(p[i]*x)*sum(exp(p[(n+1):(2*n-1)])) # for absorption curve apply final term } return(yhat) # predicted dependent variable (drug concentration) } # Maximum likelihood estimation function for parameter optimisation mle.sumexp <- function(par, x, y, sigma, ga = F) { # 
Determine log likelihood of given model parameters # par = sum of exponential parameters # x = independent variable (time) # y = observed dependent variable (drug concentration) # sigma = proportional error # ga = genetic algorithm status z <- ifelse(ga, 2, -2) # adjust ofv for minimisation or maximisation yhat <- pred.sumexp(par, x) # sum of exponential model prediction loglik <- dnorm(y, yhat, abs(yhat)*sigma, log = T) # log likelihood return(z*sum(loglik)) # objective function value } # Fit sum of exponentials to curve for different numbers of exponentials # without hessian optim.sumexp <- function(data, oral = F, nexp = 3) { x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] opt.par <- list(NULL) opt.val <- list(NULL) opt.gra <- list(NULL) opt.con <- list(NULL) opt.mes <- list(NULL) lm.sub <- which(y == max(y))[1]:length(y) lmres <- unname(lm(log(y[lm.sub]) ~ x[lm.sub])$coefficients) for (i in 1:nexp) { if (i == 1 & !oral) { optres <- list( par = c(lmres[2], lmres[1]), value = mle.sumexp(unname(c(lmres[2], lmres[1])), x, y, 0.01), counts = NULL, convergence = 0, message = NULL ) } else { gares <- ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) optres <- optim( gares@solution[1, ], mle.sumexp, method = "BFGS", x = x, y = y, sigma = 0.01 ) } slope.par <- optres$par[1:(i+oral)] slope.ord <- order(slope.par, decreasing = T) par.ord <- unname(c(slope.par[slope.ord], optres$par[(i+oral+1):length(optres$par)])) opt.par[[i]] <- par.ord opt.val[[i]] <- optres$value opt.gra[[i]] <- optres$counts opt.con[[i]] <- optres$convergence opt.mes[[i]] <- ifelse(is.null(optres$message), "NULL", optres$message) } res <- list(par = opt.par, value = opt.val, counts = 
opt.gra, convergence = opt.con, message = opt.mes) res } # with hessian matrix # Fit sum of exponential parameters to data optim.sumexp.hes <- function(data, oral = F, nexp = 3) { # Determines best sum of exponential for given data # data = mean pooled data; # first column independent variable; # second column dependent variable # oral = whether the drug displays an absorption curve # nexp = maximum fitted number of exponentials # Prepare data (remove t == 0, remove negative concentrations) x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] # Set up objects opt.par <- list(NULL) opt.val <- list(NULL) opt.gra <- list(NULL) opt.con <- list(NULL) opt.mes <- list(NULL) opt.hes <- list(NULL) lm.sub <- which(y == max(y))[1]:length(y) # browser() repeat { lmres <- unname(lm(log(y[lm.sub]) ~ x[lm.sub])$coefficients) if (is.na(lmres[2])) { lmres <- c(max(y), -0.00001) break } if (lmres[2] < 0) break else lm.sub <- lm.sub[-1] } for (i in 1:nexp) { if (i == 1 & !oral) { optres <- list( par = c(lmres[2], lmres[1]), value = mle.sumexp(unname(c(lmres[2], lmres[1])), x, y, 0.01), counts = NULL, convergence = 0, message = NULL ) } else { gares <- try(ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F )) if (class(gares) == "try-error") browser() optres <- try(optim( gares@solution[1, ], mle.sumexp, method = "BFGS", hessian = T, x = x, y = y, sigma = 0.01 )) if (class(optres) == "try-error") { optres <- list( par = gares@solution[1,], value = mle.sumexp(gares@solution[1,], x, y, 0.01), counts = c("function" = 501, gradient = NA), convergence = 99, message = "zero gradient", hessian = matrix(NA, ncol = length(gares@solution[1,]), nrow = length(gares@solution[1,]) ) ) } } 
slope.par <- -abs(head(optres$par, i + oral)) int.par <- tail(optres$par, -(i + oral)) slope.ord <- order(slope.par, decreasing = T) int.ord <- slope.ord # intercept order (match slopes) if (oral) int.ord <- order(slope.par[-slope.ord[i]], decreasing = T) par.ord <- unname(c(slope.par[slope.ord], int.par[int.ord])) opt.par[[i]] <- par.ord opt.val[[i]] <- optres$value opt.gra[[i]] <- optres$counts opt.con[[i]] <- optres$convergence opt.mes[[i]] <- ifelse(is.null(optres$message), "NULL", optres$message) opt.hes[[i]] <- optres$hessian } res <- list(par = opt.par, value = opt.val, counts = opt.gra, convergence = opt.con, message = opt.mes, hessian = opt.hes) res } optim.sumexp.se <- function(data, oral = F, nexp = 3) { x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] opt.par <- list(NULL) opt.val <- list(NULL) opt.gra <- list(NULL) opt.con <- list(NULL) opt.mes <- list(NULL) opt.hes <- list(NULL) lm.sub <- which(y == max(y))[1]:length(y) lmres <- unname(lm(log(y[lm.sub]) ~ x[lm.sub])$coefficients) for (i in 1:nexp) { if (i == 1 & !oral) { optres <- list( par = c(lmres[2], lmres[1]), value = mle.sumexp(unname(c(lmres[2], lmres[1])), x, y, 0.01), counts = NULL, convergence = 0, message = NULL ) } else { if (is.na(lmres[2])) { lmres <- c(max(y), -0.00001) } gares <- ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) optres <- optim( gares@solution[1, ], mle.sumexp, method = "BFGS", hessian = T, x = x, y = y, sigma = 0.01 ) repeat { vc_mat <- suppressWarnings(try(solve(optres$hessian))) if(class(vc_mat) != "try-error") { se <- suppressWarnings(sqrt(diag(vc_mat))) if (!any(is.nan(se))) { se_percent <- se/optres$par*100 if (max(se_percent) <= 100) { break } } 
} gares <- ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) optres <- optim( gares@solution[1, ], mle.sumexp, method = "BFGS", hessian = T, x = x, y = y, sigma = 0.01 ) } } slope.par <- optres$par[1:(i+oral)] slope.ord <- order(slope.par, decreasing = T) par.ord <- unname(c(slope.par[slope.ord], optres$par[(i+oral+1):length(optres$par)])) opt.par[[i]] <- par.ord opt.val[[i]] <- optres$value opt.gra[[i]] <- optres$counts opt.con[[i]] <- optres$convergence opt.mes[[i]] <- ifelse(is.null(optres$message), "NULL", optres$message) opt.hes[[i]] <- optres$hessian } res <- list(par = opt.par, value = opt.val, counts = opt.gra, convergence = opt.con, message = opt.mes, hessian = opt.hes) res } mle.sumexp.sig <- function(par, x, y, errmod, ga = F) { # Determine log likelihood of given model parameters # par = sum of exponential parameters # x = independent variable (time) # y = observed dependent variable (drug concentration) # errmod = error model to be used c("add", "prop", "both") # ga = genetic algorithm status nerr <- ifelse(errmod == "both", 2, 1) # set number of error parameters fit.par <- head(par, -nerr) # define sum of exponentail parameters err.par <- tail(par, nerr) # define error paramters z <- ifelse(ga, 2, -2) # adjust ofv for minimisation or maximisation yhat <- pred.sumexp(fit.par, x) # sum of exponential model prediction # Define standard deviation of normal distribution if (errmod == "add") sd <- err.par else if (errmod == "prop") sd <- abs(yhat)*err.par else if (errmod == "both") { add <- err.par[1] prop <- abs(yhat)*err.par[2] sd <- sqrt(add^2 + prop^2) } else stop("Please enter valid error model; \"add\", \"prop\" or \"both\"") # Determine log likelihood loglik <- suppressWarnings(dnorm(y, 
yhat, sd, log = T)) return(z*sum(loglik)) } optim.sumexp.sig <- function(data, oral = F, nexp = 3) { # Determines best sum of exponential for given data # data = mean pooled data; # first column independent variable; # second column dependent variable # oral = whether the drug displays an absorption curve # nexp = maximum fitted number of exponentials # Prepare data (remove t == 0, remove negative concentrations) x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] # Set up objects res <- list(par = NULL, sumexp = NULL, value = NULL, error = NULL, hessian = NULL, message = NULL) lm.sub <- which(y == max(y))[1]:length(y) repeat { lm.mod <- lm(log(y[lm.sub]) ~ x[lm.sub]) lmres <- unname(lm.mod$coefficients) if (is.na(lmres[2])) { lmres <- c(max(y), -pred.lambdaz(y, x)) break } if (lmres[2] < 0) break else lm.sub <- tail(lm.sub, -1) } # Estimate candidate model parameters lm.sd <- sd(residuals(lm.mod)) lm.add <- matrix(c(0, max(y)*lm.sd), nrow = 2) lm.prop <- matrix(c(lm.sd/50, lm.sd*50), nrow = 2) cand.mod <- expand.grid(1:nexp, c("add", "prop", "both")) # candidate models for (i in 1:nrow(cand.mod)) { mod <- cand.mod[i, ] if (mod[[2]] == "both") lm.err <- cbind(lm.add, lm.prop) else lm.err <- get(paste0("lm.", mod[[2]])) gares <- try(ga("real-valued", mle.sumexp.sig, x = x, y = y, ga = T, errmod = mod[[2]], min = c(rep(lmres[2]*50, mod[[1]] + oral), rep(lmres[1]-2, mod[[1]]), lm.err[1,]), max = c(rep(lmres[2]/50, mod[[1]] + oral), rep(lmres[1]+2, mod[[1]]), lm.err[2,]), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F )) if (class(gares) == "try-error") browser() optres <- try(optim( gares@solution[1, ], mle.sumexp.sig, method = "BFGS", hessian = T, x = x, y = y, errmod = mod[[2]] )) if (class(optres) == "try-error") { optres <- list( par = gares@solution[1,], value = mle.sumexp.sig(gares@solution[1,], x, y, errmod = mod[[2]]), 
counts = c("function" = 501, gradient = NA), convergence = 99, message = "zero gradient", hessian = matrix(NA, ncol = length(gares@solution[1,]), nrow = length(gares@solution[1,]) ) ) } # Create output fit.par <- head(optres$par, -ncol(lm.err)) err.par <- tail(optres$par, ncol(lm.err)) par.ord <- order.sumexp(fit.par, mod[[1]], oral) res$par[[i]] <- optres$par res$sumexp[[i]] <- par.ord res$value[[i]] <- c(ofv = optres$value, optres$counts) res$error[[i]] <- c(signif(err.par, 5), type = paste(mod[[2]])) res$hessian[[i]] <- optres$hessian res$message[[i]] <- c(convergence = optres$convergence, message = ifelse(is.null(optres$message), "NULL", optres$message) ) } res } mle.sumexp.err <- function(par, x, y, errmod, ga = F) { # Determine log likelihood of given model parameters # par = sum of exponential parameters # x = independent variable (time) # y = observed dependent variable (drug concentration) # errmod = error model to be used c("add", "prop", "both") # ga = genetic algorithm status nerr <- ifelse(errmod == "both", 2, 1) # set number of error parameters fit.par <- head(par, -nerr) # define sum of exponentail parameters err.par <- tail(par, nerr) # define error paramters z <- ifelse(ga, 2, -2) # adjust ofv for minimisation or maximisation yhat <- pred.sumexp(fit.par, x) # sum of exponential model prediction # Define standard deviation of normal distribution if (errmod == "add") sd <- err.par else if (errmod == "prop") sd <- abs(yhat)*err.par else if (errmod == "both") { add <- err.par[1] prop <- abs(yhat)*err.par[2] sd <- sqrt(add^2 + prop^2) } else stop("Please enter valid error model; \"add\", \"prop\" or \"both\"") # Determine log likelihood loglik <- suppressWarnings(dnorm(y, yhat, sd, log = T)) return(z*sum(loglik)) } optim.sumexp.new <- function(data, oral = F, nexp = 3) { # Determines best sum of exponential for given data # data = mean pooled data; # first column independent variable; # second column dependent variable # oral = whether the drug displays 
an absorption curve # nexp = maximum fitted number of exponentials # Set up objects res <- list(par = NULL, sumexp = NULL, value = NULL, error = NULL, hessian = NULL, message = NULL) # opt.par <- list(NULL) # opt.val <- list(NULL) # opt.gra <- list(NULL) # opt.con <- list(NULL) # opt.mes <- list(NULL) # opt.hes <- list(NULL) # Prepare data (remove t == 0, remove negative concentrations) x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] lmres <- -pred.lambdaz(y, x)[1] lm.sub <- which(y == max(y))[1]:length(y) repeat { lm.mod <- lm(log(y[lm.sub]) ~ x[lm.sub]) lmres <- unname(lm.mod$coefficients) if (is.na(lmres[2])) { lmres <- c(max(y), -pred.lambdaz(y, x)) break } if (lmres[2] < 0) break else lm.sub <- tail(lm.sub, -1) } # Estimate candidate model parameters for (i in 1:nexp) { gares <- try(ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F )) if (class(gares) == "try-error") browser() optres <- try(optim( gares@solution[1, ], mle.sumexp, method = "BFGS", hessian = T, x = x, y = y, sigma = 0.01 )) if (class(optres) == "try-error") { optres <- list( par = gares@solution[1,], value = mle.sumexp(gares@solution[1,], x, y, 0.01), counts = c("function" = 501, gradient = NA), convergence = 99, message = "zero gradient", hessian = matrix(NA, ncol = length(gares@solution[1,]), nrow = length(gares@solution[1,]) ) ) } # Create output par.ord <- order.sumexp(optres$par, i, oral) res$par[[i]] <- optres$par res$sumexp[[i]] <- par.ord res$value[[i]] <- c(ofv = optres$value, optres$counts) res$error[[i]] <- c(0.01, "fixed") res$hessian[[i]] <- optres$hessian res$message[[i]] <- c(convergence = optres$convergence, message = ifelse(is.null(optres$message), "NULL", 
optres$message) ) } res } # Chi-squared difference test # Takes a list of optim results and gives the best optim result chisq.sumexp <- function(opt) { i <- 1 for (j in 2:length(opt$par)) { degf <- length(opt$par[[j]]) - length(opt$par[[i]]) x <- opt$value[[i]] - opt$value[[j]] p <- pchisq(x, degf, lower.tail = F) if (p < 0.01) { i <- i + 1 } } return(sapply(opt, function(x) x[i])) } chisq.sumexp.aic <- function(opt) { x <- unlist(opt$value) k <- unlist(lapply(opt$par, length)) aic <- x + 2*k return(sapply(opt, function(x) x[which(aic == min(aic))])) } chisq.sumexp.bic <- function(opt, nobs) { x <- unlist(opt$value) k <- unlist(lapply(opt$par, length)) bic <- x + log(nobs)*k return(sapply(opt, function(x) x[which(bic == min(bic))])) } best.sumexp.lrt <- function(opt) { values <- unlist(opt$value) ofv <- values[which(names(values) == "ofv")] i <- 1 for (j in 2:length(opt$par)) { degf <- length(opt$par[[j]]) - length(opt$par[[i]]) x <- ofv[i] - ofv[j] p <- pchisq(x, degf, lower.tail = F) if (p < 0.01) { i <- i + 1 } } return(sapply(opt, function(x) x[i])) } best.sumexp.aic <- function(opt) { values <- unlist(opt$value) ofv <- values[which(names(values) == "ofv")] k <- unlist(lapply(opt$par, length)) aic <- ofv + 2*k return(sapply(opt, function(x) x[which(aic == min(aic))])) } best.sumexp.bic <- function(opt, nobs) { values <- unlist(opt$value) ofv <- values[which(names(values) == "ofv")] k <- unlist(lapply(opt$par, length)) bic <- ofv + log(nobs)*k return(sapply(opt, function(x) x[which(bic == min(bic))])) } # Trapezoidal error function for interval optimisation # standard using times err.interv <- function(par, exp.par, tfirst, tlast, tmax = NULL, a = F) { times <- c(tfirst, par, tlast, tmax) deltat <- diff(times) if (a) { all.secd <- abs(pred.sumexp(exp.par, times, 2)) secd <- c(NULL) for (i in 1:(length(times)-1)) { secd[i] <- all.secd[which(all.secd == max(all.secd[c(i, i + 1)]))][1] } } else { secd <- pred.sumexp(exp.par, times[-length(times)], 2) } err <- 
deltat^3*secd/12 sum(err^2) } # ga using times err.interv.ga <- function(par, exp.par, tfirst, tlast, tmax = NULL, a = F, ga = F) { z <- ifelse(ga, -1, 1) times <- unique(c(tfirst, par, tlast, tmax)) times <- times[order(times)] deltat <- diff(times) if (a) { all.secd <- abs(pred.sumexp(exp.par, times, 2)) secd <- c(NULL) for (i in 1:(length(times)-1)) { secd[i] <- all.secd[which(all.secd == max(all.secd[c(i, i + 1)]))][1] } } else { secd <- pred.sumexp(exp.par, times[-length(times)], 2) } err <- deltat^3*secd/12 return(z*sum(err^2)) } # standard using dt err.interv.dt <- function(par, exp.par, tfirst, tlast, a = F) { times <- c(tfirst, cumsum(par), tlast) deltat <- diff(times) if (a) { all.secd <- abs(pred.sumexp(exp.par, times, 2)) secd <- c(NULL) for (i in 1:(length(times)-1)) { secd[i] <- all.secd[which(all.secd == max(all.secd[c(i, i + 1)]))][1] } } else { secd <- pred.sumexp(exp.par, times[-length(times)], 2) } err <- deltat^3*secd/12 return(sum(err^2)) } err.interv.dtmax <- function(par, exp.par, tfirst, tlast, a = F, tmax = NULL) { times <- c(tfirst, cumsum(par), tmax, tlast) times <- sort(times) deltat <- diff(times) if (a) { all.secd <- abs(pred.sumexp(exp.par, times, 2)) secd <- c(NULL) for (i in 1:(length(times)-1)) { secd[i] <- all.secd[which(all.secd == max(all.secd[c(i, i + 1)]))][1] } } else { secd <- pred.sumexp(exp.par, times[-length(times)], 2) } err <- deltat^3*secd/12 sumerr <- sqrt(mean(err^2)) return(sumerr) } # Interval optimising function # standard using times optim.interv <- function(times, par, tmax = NULL) { x <- times[order(times)] init.par <- x[-c(1, length(x))] if (!is.null(tmax)) init.par <- init.par[-length(init.par)] xmin <- min(x) xmax <- max(x) absorp <- ifelse((length(par) %% 2) != 0, T, F) res <- optim( init.par, err.interv, method = "L-BFGS-B", control = c(maxit = 500), lower = xmin, upper = xmax, exp.par = par, tfirst = xmin + 0.01, tlast = xmax - 0.01, tmax = tmax, a = absorp ) return(res) } # using dt instead of times 
optim.interv.dt <- function(par, times, tmax = NULL) { tfirst <- min(times) tlast <- max(times) npar <- length(times) - 2 absorp <- ifelse((length(par) %% 2) != 0, T, F) init.par <- cumsum(rep(tlast/48, npar)) res <- optim( init.par, err.interv.dt, method = "L-BFGS-B", hessian = T, lower = tlast/48, upper = tlast - npar*tlast/48, exp.par = par, tfirst = tfirst, tlast = tlast, a = absorp ) res$times <- cumsum(res$par) return(res) } optim.interv.dtmax <- function(par, times, tmax = FALSE) { tfirst <- min(times) tlast <- max(times) tbound <- tlast - tfirst absorp <- ifelse((length(par) %% 2) != 0, T, F) npar <- length(times) - (2 + tmax*absorp) init.par <- cumsum(rep(tbound/48, npar)) if (tmax & absorp) tmax.val <- tmax.sumexp(par, tlast, tlast/720) # maximum length of 721 else tmax.val <- NULL res <- try(optim( init.par, err.interv.dtmax, method = "L-BFGS-B", hessian = T, lower = tbound/48, upper = tbound/2, exp.par = par, tfirst = tfirst, tlast = tlast, a = absorp, tmax = tmax.val )) if (class(res) == "try-error") { res <- try(optim( init.par, err.interv.dtmax, method = "L-BFGS-B", hessian = T, lower = tbound/48, upper = tbound/(npar/2), exp.par = par, tfirst = tfirst, tlast = tlast, a = absorp, tmax = tmax.val )) } res$times <- sort(c(cumsum(c(tfirst, res$par)), tmax.val, tlast)) return(res) } # using ga for initial parameters optim.interv.ga100 <- function(par, times, tmax = NULL) { tfirst <- min(times) tlast <- max(times) if (is.null(tmax)) { is.tmax <- 2 } else if (tmax == 0) { is.tmax <- 2 } else { is.tmax <- 3 } npar <- length(times) - is.tmax absorp <- ifelse((length(par) %% 2) != 0, T, F) flag <- 0 repeat { gares <- ga("real-valued", err.interv.ga, exp.par = par, a = absorp, tfirst = tfirst, tlast = tlast, tmax = tmax, ga = T, min = rep(tfirst + 0.01, npar), max = rep(tlast - 0.01, npar), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) res <- optim( 
      gares@solution[order(gares@solution)], err.interv.ga,
      method = "BFGS", hessian = T,
      exp.par = par, tfirst = tfirst, tlast = tlast, tmax = tmax, a = absorp
    )
    vc_mat <- try(solve(res$hessian))
    if (!inherits(vc_mat, "try-error")) {
      se <- sqrt(diag(vc_mat))
      if (!any(is.nan(se))) {
        se_percent <- se/res$par*100
        if (max(se_percent) <= 100) {
          res$flag <- flag
          res$se <- se_percent
          break
        }
      }
    }
    flag <- flag + 1
    if (flag == 10) {
      res$flag <- flag
      res$se <- NA
      break
    }
  }
  return(res)
}

optim.interv.ga50 <- function(par, times, tmax = NULL) {
  tfirst <- min(times)
  tlast <- max(times)
  is.tmax <- ifelse(is.null(tmax), 2, 3)
  npar <- length(times) - is.tmax
  absorp <- ifelse((length(par) %% 2) != 0, T, F)
  flag <- 0
  repeat {
    gares <- ga("real-valued", err.interv.ga,
      exp.par = par, a = absorp, tfirst = tfirst, tlast = tlast,
      tmax = tmax, ga = T,
      min = rep(tfirst + 0.01, npar), max = rep(tlast - 0.01, npar),
      selection = gareal_lrSelection, crossover = gareal_spCrossover,
      mutation = gareal_raMutation,
      maxiter = 50, popSize = 250, monitor = F
    )
    res <- optim(
      gares@solution[order(gares@solution)], err.interv.ga,
      method = "BFGS", hessian = T,
      exp.par = par, tfirst = tfirst, tlast = tlast, tmax = tmax, a = absorp
    )
    vc_mat <- try(solve(res$hessian))
    if (!inherits(vc_mat, "try-error")) {
      se <- sqrt(diag(vc_mat))
      if (!any(is.nan(se))) {
        se_percent <- se/res$par*100
        if (max(se_percent) <= 50 | flag == 10) {
          res$flag <- flag
          res$se <- se_percent
          break
        }
      }
    }
    flag <- flag + 1
    if (flag > 10) {  # guard against looping forever when the hessian stays singular
      res$flag <- flag
      res$se <- NA
      break
    }
  }
  return(res)
}

pred.tlast <- function(par, tlast) {
  i <- round(tlast/12, 0)
  perc.term <- 1
  timeloop <- seq(0, i*12, by = i*12/120)
  predloop <- pred.sumexp(par, timeloop)
  tmax <- timeloop[which(predloop == max(predloop))]
  while (perc.term > 0.2 & i < 730) {
    if (exists("init", inherits = FALSE)) {  # only after the first pass through the loop
      repeat {
        i <- i + 1
        timeloop <- seq(0, i*12, by = i*12/120)
        predloop <- pred.sumexp(par, timeloop)
        clast <- tail(predloop, 1)
        if (clast < cterm | i == 730) break
      }
    }
    clast <- tail(predloop, 1)
    auclast <- auc.interv.sumexp(timeloop, par, log = T)
    lambz <-
      max(head(par, ceiling(length(par)/2)))
    aucinf <- clast/-lambz
    perc.term <- aucinf/(auclast+aucinf)
    cterm <- clast*(0.18/perc.term)
    init <- 1
  }
  return(c(i*12, 1-perc.term))
}

pred.tlast.lam <- function(par) {
  nexp <- ceiling(length(par)/2)
  lambz <- abs(max(par[1:nexp]))
  hl <- log(2)/lambz
  ceiling(hl/4)*12
}

obs.tlast.lam <- function(obs) {
  lambz <- pred.lambdaz(obs[, 2], obs[, 1])
  hl <- log(2)/lambz
  ceiling(hl/4)*12
}

pred.lambdaz <- function(dv, t) {
  if (t[1] == 0) dv[1] <- 0
  mdv <- which(dv == 0)  # note: assumes at least one zero concentration, so that dv[-mdv] keeps the non-zero points
  i <- 3
  bestR2 <- -1
  bestk <- 0
  if (length(dv[-mdv]) >= i) {
    repeat {
      mod <- suppressWarnings(lm(log(tail(dv[-mdv], i)) ~ tail(unique(t)[-mdv], i)))
      k <- -1*mod$coefficients["tail(unique(t)[-mdv], i)"]
      R2 <- suppressWarnings(as.numeric(summary(mod)["adj.r.squared"]))
      if (is.na(k)) browser()
      if (is.nan(R2)) R2 <- suppressWarnings(as.numeric(summary(mod)["r.squared"]))
      if (k > 0) {
        if (R2 > bestR2) {
          if (i > 2) bestR2 <- R2
          bestk <- k
        } else {
          break
        }
      }
      if (i == 5 & bestk == 0) {
        # mdv <- c(mdv, which(dv == max(tail(dv[-mdv], 3))))
        i <- 1
      }
      if (i == length(dv[-mdv])) {
        # last ditch effort (intended for simulation study only)
        if (bestk > 0) break
        else {
          mod <- suppressWarnings(lm(log(tail(dv, 2)) ~ tail(unique(t), 2)))
          bestk <- -1*mod$coefficients["tail(unique(t), 2)"]
          if (bestk > 0) break
          else {
            bestk <- log(2)/56
            break
          }
        }
      }
      i <- i + 1
    }
  }
  bestk
}

# -----------------------------------------------------------------------------
# Determine tmax given a set of sumexp parameters
tmax.sumexp <- function(par, tlast = 24, res = 0.1) {
  times <- seq(0, tlast, by = res)
  yhat <- pred.sumexp(par, times)
  return(times[which(yhat == max(yhat))])
}

# Determine auc given a set of intervals
auc.interv <- function(times, par, fn, log = F) {
  C <- do.call(fn, list(x = times, p = par))
  auc <- c(NULL)
  for (i in 2:length(C)) {
    h <- times[i] - times[i-1]
    dC <- C[i-1] - C[i]
    if (log & dC > 0) auc[i-1] <- dC*h/log(C[i-1]/C[i])
    else auc[i-1] <- (C[i-1] + C[i])*h/2
  }
  return(sum(auc))
}
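The core step inside pred.lambdaz is a log-linear regression over the trailing concentrations, with the terminal slope lambda_z taken as the negated regression coefficient. A minimal standalone sketch of that step; all numbers below are illustrative, not values from this study:

```r
# Terminal-slope (lambda_z) estimation by log-linear regression,
# mirroring the lm() call inside pred.lambdaz. Illustrative data only.
t  <- c(4, 6, 8, 12, 24)          # trailing sampling times (h)
k  <- 0.1                         # true elimination rate constant (1/h)
dv <- 100 * exp(-k * t)           # noise-free monoexponential concentrations

mod   <- lm(log(dv) ~ t)          # regress log-concentration on time
lambz <- -unname(coef(mod)["t"])  # lambda_z: negated slope
thalf <- log(2) / lambz           # terminal half-life (h)
```

With noise-free exponential data the regression recovers k exactly (up to floating point), which is why pred.lambdaz can use the fit's R-squared to choose how many trailing points to include.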
auc.interv.sumexp <- function(times, par, log = F) {
  C <- pred.sumexp(par, times)
  auc <- c(NULL)
  for (i in 2:length(C)) {
    h <- times[i] - times[i-1]
    dC <- C[i-1] - C[i]
    if (log & dC > 0) auc[i-1] <- dC*h/log(C[i-1]/C[i])
    else auc[i-1] <- (C[i-1] + C[i])*h/2
  }
  return(sum(auc))
}

auc.interv.lam <- function(par, t) {
  dv <- pred.sumexp(par, unique(t))
  lambz <- try(-pred.lambdaz(dv, t))  # pred.lambdaz returns a scalar; a stale "[2]" index here previously made this NA
  if (inherits(lambz, "try-error")) return(0)
  else return(tail(dv, 1)/lambz)
}

# Without for loop
# auc.interv <- function(times, par, fn, log = F) {
#   C <- do.call(fn, list(x = times, p = par))
#   h <- diff(times)
#   EC <- sum(C[-1], C[-length(C)])
#   dC <- diff(-C)
#   if (!log) auc <- EC*h/2
#   else auc[i-1] <- dC*h/log(C[i-1]/C[i])
#   return(sum(auc))
# }

# Plot random data (requires ggplot2)
plot.rdata <- function(data, t, n, interv) {
  plotdata <- data.frame(
    id = rep(1:n, each = length(t)),
    time = rep(t, times = n),
    dv = as.vector(data)
  )
  plotobj <- NULL
  plotobj <- ggplot(data = plotdata)
  plotobj <- plotobj + ggtitle("Random Concentration Time Curves")
  plotobj <- plotobj + geom_line(aes(x = time, y = dv), colour = "red")
  plotobj <- plotobj + geom_vline(xintercept = interv, colour = "green4", linetype = "dashed")
  plotobj <- plotobj + scale_y_continuous("Concentration (mg/mL)\n")
  plotobj <- plotobj + scale_x_continuous("\nTime after dose (hrs)")
  plotobj <- plotobj + facet_wrap(~id, ncol = round(sqrt(n)), scales = "free")
  return(plotobj)
}
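auc.interv and auc.interv.sumexp apply the linear-up/log-down trapezoidal rule: a linear trapezoid on rising segments and a log trapezoid on declining ones. The log trapezoid is exact when the curve declines exponentially between points, which the sketch below checks against an analytic AUC; the helper name auc_lin_log and all numbers are illustrative:

```r
# Linear-up/log-down trapezoidal AUC, mirroring the interval rule in
# auc.interv.sumexp. Illustrative standalone helper, not from the study.
auc_lin_log <- function(times, C) {
  auc <- 0
  for (i in 2:length(C)) {
    h  <- times[i] - times[i - 1]
    dC <- C[i - 1] - C[i]
    if (dC > 0) auc <- auc + dC * h / log(C[i - 1] / C[i])  # log-down
    else auc <- auc + (C[i - 1] + C[i]) * h / 2             # linear-up
  }
  auc
}

tt <- seq(0, 48, by = 2)
CC <- 100 * exp(-0.1 * tt)                      # monoexponential decline
auc_true <- (100 / 0.1) * (1 - exp(-0.1 * 48))  # analytic AUC(0-48)
```

Because the test curve is exactly exponential between samples, auc_lin_log(tt, CC) matches auc_true to floating-point precision, whereas a purely linear trapezoid would overestimate the declining segments.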
/study_functions.R
no_license
jhhughes256/optinterval
R
false
false
34,695
r
# Sum of exponential functions for sourcing # ----------------------------------------------------------------------------- # The functions in order are: # pred.sumexp - gives sum of exponentials for given set of parameters # mle.sumexp - maximum likelihood estimation function to be minimised # optim.sumexp - determine parameters for differing numbers of exponentials # err.interv - trapezoidal error function to be minimised # optim.interv - determine intervals that give the smallest error # ----------------------------------------------------------------------------- # Sum of exponentials predicted concentrations function # pred.sumexp <- function(par, x, d = 0) { # # There are currently issues with order.sumexp messing with optimisation if it is within pred.sumexp # # Provides predictions according to model parameters # # par = sum of exponential parameters # # x = independent variable (time) # # d = order of derivative (uses dth derivative of model) # # Define objects # l <- length(par) # number of parameters # a <- l %% 2 == 1 # absorption status (odd parameter length == absorption) # n <- ceiling(l/2) # number of exponentials # p <- order.sumexp(par, n, a) # order parameters (allows for flip-flop) # # Sum of exponentials # for (i in 1:n) { # for each exponential # if (i == 1) yhat <- p[i]^d*exp(p[i]*x + p[n+i]) # first exponential defines yhat # else if (i != n | !a) yhat <- yhat + p[i]^d*exp(p[i]*x + p[n+i]) # following exponentials add to yhat # else if (a) yhat <- yhat - p[i]^d*exp(p[i]*x)*sum(exp(p[(n+1):(2*n-1)])) # for absorption curve apply final term # } # return(yhat) # predicted dependent variable (drug concentration) # } order.sumexp <- function(par, n, a) { # Sorts sum of exponential parameters to enable pred.sumexp # par = sum of exponential parameters # n = number of exponentials # a = absorption status m <- -abs(head(par, n+a)) # slope parameters (prevents exponential growth) b <- tail(par, -(n+a)) # intercept parameters m.ord <- order(m, 
decreasing = T) # slope order (terminal first, absorption last) b.ord <- m.ord # intercept order (match slopes) if (a) b.ord <- order(m[-m.ord[n+a]], decreasing = T) # if absorption curve remove extra term unname(c(m[m.ord], b[b.ord])) # ordered parameters } # Oldest function # pred.sumexp <- function(x, t, d = 0) { # l <- length(x) # a <- ifelse(l %% 2 == 0, 0, 1) # n <- ceiling(l/2) # for (i in 1:n) { # if (i == 1) y <- x[i]^d*exp(x[i]*t + x[n+i]) # else if (i != n | a == 0) y <- y + x[i]^d*exp(x[i]*t + x[n+i]) # else if (a == 1) y <- y - x[i]^d*exp(x[i]*t)*sum(exp(x[(n+1):(2*n-1)])) # } # return(y) # } # Less Old function pred.sumexp <- function(par, x, d = 0) { # Provides predictions according to model parameters # par = sum of exponential parameters # x = independent variable (time) # d = order of derivative (uses dth derivative of model) # Define objects l <- length(par) # number of parameters a <- l %% 2 == 1 # absorption status (odd parameter length == absorption) n <- ceiling(l/2) # number of exponentials m <- -abs(par[1:n]) # slope parameters (prevents exponential growth) b <- par[(n+1):l] # intercept parameters # Order parameters (allows for flip-flop) m.ord <- order(m, decreasing = T) # slope order (terminal first, absorption last) b.ord <- m.ord # intercept order (match slopes) if (a) b.ord <- order(m[-m.ord[n]], decreasing = T) # if absorption curve remove extra term p <- c(m[m.ord], b[b.ord]) # ordered parameters # Sum of exponentials for (i in 1:n) { # for each exponential if (i == 1) yhat <- p[i]^d*exp(p[i]*x + p[n+i]) # first exponential defines yhat else if (i != n | !a) yhat <- yhat + p[i]^d*exp(p[i]*x + p[n+i]) # following exponentials add to yhat else if (a) yhat <- yhat - p[i]^d*exp(p[i]*x)*sum(exp(p[(n+1):(2*n-1)])) # for absorption curve apply final term } return(yhat) # predicted dependent variable (drug concentration) } # Maximum likelihood estimation function for parameter optimisation mle.sumexp <- function(par, x, y, sigma, ga = F) { # 
Determine log likelihood of given model parameters # par = sum of exponential parameters # x = independent variable (time) # y = observed dependent variable (drug concentration) # sigma = proportional error # ga = genetic algorithm status z <- ifelse(ga, 2, -2) # adjust ofv for minimisation or maximisation yhat <- pred.sumexp(par, x) # sum of exponential model prediction loglik <- dnorm(y, yhat, abs(yhat)*sigma, log = T) # log likelihood return(z*sum(loglik)) # objective function value } # Fit sum of exponentials to curve for different numbers of exponentials # without hessian optim.sumexp <- function(data, oral = F, nexp = 3) { x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] opt.par <- list(NULL) opt.val <- list(NULL) opt.gra <- list(NULL) opt.con <- list(NULL) opt.mes <- list(NULL) lm.sub <- which(y == max(y))[1]:length(y) lmres <- unname(lm(log(y[lm.sub]) ~ x[lm.sub])$coefficients) for (i in 1:nexp) { if (i == 1 & !oral) { optres <- list( par = c(lmres[2], lmres[1]), value = mle.sumexp(unname(c(lmres[2], lmres[1])), x, y, 0.01), counts = NULL, convergence = 0, message = NULL ) } else { gares <- ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) optres <- optim( gares@solution[1, ], mle.sumexp, method = "BFGS", x = x, y = y, sigma = 0.01 ) } slope.par <- optres$par[1:(i+oral)] slope.ord <- order(slope.par, decreasing = T) par.ord <- unname(c(slope.par[slope.ord], optres$par[(i+oral+1):length(optres$par)])) opt.par[[i]] <- par.ord opt.val[[i]] <- optres$value opt.gra[[i]] <- optres$counts opt.con[[i]] <- optres$convergence opt.mes[[i]] <- ifelse(is.null(optres$message), "NULL", optres$message) } res <- list(par = opt.par, value = opt.val, counts = 
opt.gra, convergence = opt.con, message = opt.mes) res } # with hessian matrix # Fit sum of exponential parameters to data optim.sumexp.hes <- function(data, oral = F, nexp = 3) { # Determines best sum of exponential for given data # data = mean pooled data; # first column independent variable; # second column dependent variable # oral = whether the drug displays an absorption curve # nexp = maximum fitted number of exponentials # Prepare data (remove t == 0, remove negative concentrations) x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] # Set up objects opt.par <- list(NULL) opt.val <- list(NULL) opt.gra <- list(NULL) opt.con <- list(NULL) opt.mes <- list(NULL) opt.hes <- list(NULL) lm.sub <- which(y == max(y))[1]:length(y) # browser() repeat { lmres <- unname(lm(log(y[lm.sub]) ~ x[lm.sub])$coefficients) if (is.na(lmres[2])) { lmres <- c(max(y), -0.00001) break } if (lmres[2] < 0) break else lm.sub <- lm.sub[-1] } for (i in 1:nexp) { if (i == 1 & !oral) { optres <- list( par = c(lmres[2], lmres[1]), value = mle.sumexp(unname(c(lmres[2], lmres[1])), x, y, 0.01), counts = NULL, convergence = 0, message = NULL ) } else { gares <- try(ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F )) if (class(gares) == "try-error") browser() optres <- try(optim( gares@solution[1, ], mle.sumexp, method = "BFGS", hessian = T, x = x, y = y, sigma = 0.01 )) if (class(optres) == "try-error") { optres <- list( par = gares@solution[1,], value = mle.sumexp(gares@solution[1,], x, y, 0.01), counts = c("function" = 501, gradient = NA), convergence = 99, message = "zero gradient", hessian = matrix(NA, ncol = length(gares@solution[1,]), nrow = length(gares@solution[1,]) ) ) } } 
slope.par <- -abs(head(optres$par, i + oral)) int.par <- tail(optres$par, -(i + oral)) slope.ord <- order(slope.par, decreasing = T) int.ord <- slope.ord # intercept order (match slopes) if (oral) int.ord <- order(slope.par[-slope.ord[i]], decreasing = T) par.ord <- unname(c(slope.par[slope.ord], int.par[int.ord])) opt.par[[i]] <- par.ord opt.val[[i]] <- optres$value opt.gra[[i]] <- optres$counts opt.con[[i]] <- optres$convergence opt.mes[[i]] <- ifelse(is.null(optres$message), "NULL", optres$message) opt.hes[[i]] <- optres$hessian } res <- list(par = opt.par, value = opt.val, counts = opt.gra, convergence = opt.con, message = opt.mes, hessian = opt.hes) res } optim.sumexp.se <- function(data, oral = F, nexp = 3) { x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] opt.par <- list(NULL) opt.val <- list(NULL) opt.gra <- list(NULL) opt.con <- list(NULL) opt.mes <- list(NULL) opt.hes <- list(NULL) lm.sub <- which(y == max(y))[1]:length(y) lmres <- unname(lm(log(y[lm.sub]) ~ x[lm.sub])$coefficients) for (i in 1:nexp) { if (i == 1 & !oral) { optres <- list( par = c(lmres[2], lmres[1]), value = mle.sumexp(unname(c(lmres[2], lmres[1])), x, y, 0.01), counts = NULL, convergence = 0, message = NULL ) } else { if (is.na(lmres[2])) { lmres <- c(max(y), -0.00001) } gares <- ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) optres <- optim( gares@solution[1, ], mle.sumexp, method = "BFGS", hessian = T, x = x, y = y, sigma = 0.01 ) repeat { vc_mat <- suppressWarnings(try(solve(optres$hessian))) if(class(vc_mat) != "try-error") { se <- suppressWarnings(sqrt(diag(vc_mat))) if (!any(is.nan(se))) { se_percent <- se/optres$par*100 if (max(se_percent) <= 100) { break } } 
} gares <- ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) optres <- optim( gares@solution[1, ], mle.sumexp, method = "BFGS", hessian = T, x = x, y = y, sigma = 0.01 ) } } slope.par <- optres$par[1:(i+oral)] slope.ord <- order(slope.par, decreasing = T) par.ord <- unname(c(slope.par[slope.ord], optres$par[(i+oral+1):length(optres$par)])) opt.par[[i]] <- par.ord opt.val[[i]] <- optres$value opt.gra[[i]] <- optres$counts opt.con[[i]] <- optres$convergence opt.mes[[i]] <- ifelse(is.null(optres$message), "NULL", optres$message) opt.hes[[i]] <- optres$hessian } res <- list(par = opt.par, value = opt.val, counts = opt.gra, convergence = opt.con, message = opt.mes, hessian = opt.hes) res } mle.sumexp.sig <- function(par, x, y, errmod, ga = F) { # Determine log likelihood of given model parameters # par = sum of exponential parameters # x = independent variable (time) # y = observed dependent variable (drug concentration) # errmod = error model to be used c("add", "prop", "both") # ga = genetic algorithm status nerr <- ifelse(errmod == "both", 2, 1) # set number of error parameters fit.par <- head(par, -nerr) # define sum of exponentail parameters err.par <- tail(par, nerr) # define error paramters z <- ifelse(ga, 2, -2) # adjust ofv for minimisation or maximisation yhat <- pred.sumexp(fit.par, x) # sum of exponential model prediction # Define standard deviation of normal distribution if (errmod == "add") sd <- err.par else if (errmod == "prop") sd <- abs(yhat)*err.par else if (errmod == "both") { add <- err.par[1] prop <- abs(yhat)*err.par[2] sd <- sqrt(add^2 + prop^2) } else stop("Please enter valid error model; \"add\", \"prop\" or \"both\"") # Determine log likelihood loglik <- suppressWarnings(dnorm(y, 
yhat, sd, log = T)) return(z*sum(loglik)) } optim.sumexp.sig <- function(data, oral = F, nexp = 3) { # Determines best sum of exponential for given data # data = mean pooled data; # first column independent variable; # second column dependent variable # oral = whether the drug displays an absorption curve # nexp = maximum fitted number of exponentials # Prepare data (remove t == 0, remove negative concentrations) x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] # Set up objects res <- list(par = NULL, sumexp = NULL, value = NULL, error = NULL, hessian = NULL, message = NULL) lm.sub <- which(y == max(y))[1]:length(y) repeat { lm.mod <- lm(log(y[lm.sub]) ~ x[lm.sub]) lmres <- unname(lm.mod$coefficients) if (is.na(lmres[2])) { lmres <- c(max(y), -pred.lambdaz(y, x)) break } if (lmres[2] < 0) break else lm.sub <- tail(lm.sub, -1) } # Estimate candidate model parameters lm.sd <- sd(residuals(lm.mod)) lm.add <- matrix(c(0, max(y)*lm.sd), nrow = 2) lm.prop <- matrix(c(lm.sd/50, lm.sd*50), nrow = 2) cand.mod <- expand.grid(1:nexp, c("add", "prop", "both")) # candidate models for (i in 1:nrow(cand.mod)) { mod <- cand.mod[i, ] if (mod[[2]] == "both") lm.err <- cbind(lm.add, lm.prop) else lm.err <- get(paste0("lm.", mod[[2]])) gares <- try(ga("real-valued", mle.sumexp.sig, x = x, y = y, ga = T, errmod = mod[[2]], min = c(rep(lmres[2]*50, mod[[1]] + oral), rep(lmres[1]-2, mod[[1]]), lm.err[1,]), max = c(rep(lmres[2]/50, mod[[1]] + oral), rep(lmres[1]+2, mod[[1]]), lm.err[2,]), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F )) if (class(gares) == "try-error") browser() optres <- try(optim( gares@solution[1, ], mle.sumexp.sig, method = "BFGS", hessian = T, x = x, y = y, errmod = mod[[2]] )) if (class(optres) == "try-error") { optres <- list( par = gares@solution[1,], value = mle.sumexp.sig(gares@solution[1,], x, y, errmod = mod[[2]]), 
counts = c("function" = 501, gradient = NA), convergence = 99, message = "zero gradient", hessian = matrix(NA, ncol = length(gares@solution[1,]), nrow = length(gares@solution[1,]) ) ) } # Create output fit.par <- head(optres$par, -ncol(lm.err)) err.par <- tail(optres$par, ncol(lm.err)) par.ord <- order.sumexp(fit.par, mod[[1]], oral) res$par[[i]] <- optres$par res$sumexp[[i]] <- par.ord res$value[[i]] <- c(ofv = optres$value, optres$counts) res$error[[i]] <- c(signif(err.par, 5), type = paste(mod[[2]])) res$hessian[[i]] <- optres$hessian res$message[[i]] <- c(convergence = optres$convergence, message = ifelse(is.null(optres$message), "NULL", optres$message) ) } res } mle.sumexp.err <- function(par, x, y, errmod, ga = F) { # Determine log likelihood of given model parameters # par = sum of exponential parameters # x = independent variable (time) # y = observed dependent variable (drug concentration) # errmod = error model to be used c("add", "prop", "both") # ga = genetic algorithm status nerr <- ifelse(errmod == "both", 2, 1) # set number of error parameters fit.par <- head(par, -nerr) # define sum of exponentail parameters err.par <- tail(par, nerr) # define error paramters z <- ifelse(ga, 2, -2) # adjust ofv for minimisation or maximisation yhat <- pred.sumexp(fit.par, x) # sum of exponential model prediction # Define standard deviation of normal distribution if (errmod == "add") sd <- err.par else if (errmod == "prop") sd <- abs(yhat)*err.par else if (errmod == "both") { add <- err.par[1] prop <- abs(yhat)*err.par[2] sd <- sqrt(add^2 + prop^2) } else stop("Please enter valid error model; \"add\", \"prop\" or \"both\"") # Determine log likelihood loglik <- suppressWarnings(dnorm(y, yhat, sd, log = T)) return(z*sum(loglik)) } optim.sumexp.new <- function(data, oral = F, nexp = 3) { # Determines best sum of exponential for given data # data = mean pooled data; # first column independent variable; # second column dependent variable # oral = whether the drug displays 
an absorption curve # nexp = maximum fitted number of exponentials # Set up objects res <- list(par = NULL, sumexp = NULL, value = NULL, error = NULL, hessian = NULL, message = NULL) # opt.par <- list(NULL) # opt.val <- list(NULL) # opt.gra <- list(NULL) # opt.con <- list(NULL) # opt.mes <- list(NULL) # opt.hes <- list(NULL) # Prepare data (remove t == 0, remove negative concentrations) x <- data[which(data[, 2] > 0 & data[, 1] != 0), 1] y <- data[which(data[, 2] > 0 & data[, 1] != 0), 2] lmres <- -pred.lambdaz(y, x)[1] lm.sub <- which(y == max(y))[1]:length(y) repeat { lm.mod <- lm(log(y[lm.sub]) ~ x[lm.sub]) lmres <- unname(lm.mod$coefficients) if (is.na(lmres[2])) { lmres <- c(max(y), -pred.lambdaz(y, x)) break } if (lmres[2] < 0) break else lm.sub <- tail(lm.sub, -1) } # Estimate candidate model parameters for (i in 1:nexp) { gares <- try(ga("real-valued", mle.sumexp, x = x, y = y, ga = T, sigma = 0.01, min = c(rep(lmres[2]*50, i + oral), rep(lmres[1]-2, i)), max = c(rep(lmres[2]/50, i + oral), rep(lmres[1]+2, i)), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F )) if (class(gares) == "try-error") browser() optres <- try(optim( gares@solution[1, ], mle.sumexp, method = "BFGS", hessian = T, x = x, y = y, sigma = 0.01 )) if (class(optres) == "try-error") { optres <- list( par = gares@solution[1,], value = mle.sumexp(gares@solution[1,], x, y, 0.01), counts = c("function" = 501, gradient = NA), convergence = 99, message = "zero gradient", hessian = matrix(NA, ncol = length(gares@solution[1,]), nrow = length(gares@solution[1,]) ) ) } # Create output par.ord <- order.sumexp(optres$par, i, oral) res$par[[i]] <- optres$par res$sumexp[[i]] <- par.ord res$value[[i]] <- c(ofv = optres$value, optres$counts) res$error[[i]] <- c(0.01, "fixed") res$hessian[[i]] <- optres$hessian res$message[[i]] <- c(convergence = optres$convergence, message = ifelse(is.null(optres$message), "NULL", 
optres$message) ) } res } # Chi-squared difference test # Takes a list of optim results and gives the best optim result chisq.sumexp <- function(opt) { i <- 1 for (j in 2:length(opt$par)) { degf <- length(opt$par[[j]]) - length(opt$par[[i]]) x <- opt$value[[i]] - opt$value[[j]] p <- pchisq(x, degf, lower.tail = F) if (p < 0.01) { i <- i + 1 } } return(sapply(opt, function(x) x[i])) } chisq.sumexp.aic <- function(opt) { x <- unlist(opt$value) k <- unlist(lapply(opt$par, length)) aic <- x + 2*k return(sapply(opt, function(x) x[which(aic == min(aic))])) } chisq.sumexp.bic <- function(opt, nobs) { x <- unlist(opt$value) k <- unlist(lapply(opt$par, length)) bic <- x + log(nobs)*k return(sapply(opt, function(x) x[which(bic == min(bic))])) } best.sumexp.lrt <- function(opt) { values <- unlist(opt$value) ofv <- values[which(names(values) == "ofv")] i <- 1 for (j in 2:length(opt$par)) { degf <- length(opt$par[[j]]) - length(opt$par[[i]]) x <- ofv[i] - ofv[j] p <- pchisq(x, degf, lower.tail = F) if (p < 0.01) { i <- i + 1 } } return(sapply(opt, function(x) x[i])) } best.sumexp.aic <- function(opt) { values <- unlist(opt$value) ofv <- values[which(names(values) == "ofv")] k <- unlist(lapply(opt$par, length)) aic <- ofv + 2*k return(sapply(opt, function(x) x[which(aic == min(aic))])) } best.sumexp.bic <- function(opt, nobs) { values <- unlist(opt$value) ofv <- values[which(names(values) == "ofv")] k <- unlist(lapply(opt$par, length)) bic <- ofv + log(nobs)*k return(sapply(opt, function(x) x[which(bic == min(bic))])) } # Trapezoidal error function for interval optimisation # standard using times err.interv <- function(par, exp.par, tfirst, tlast, tmax = NULL, a = F) { times <- c(tfirst, par, tlast, tmax) deltat <- diff(times) if (a) { all.secd <- abs(pred.sumexp(exp.par, times, 2)) secd <- c(NULL) for (i in 1:(length(times)-1)) { secd[i] <- all.secd[which(all.secd == max(all.secd[c(i, i + 1)]))][1] } } else { secd <- pred.sumexp(exp.par, times[-length(times)], 2) } err <- 
deltat^3*secd/12 sum(err^2) } # ga using times err.interv.ga <- function(par, exp.par, tfirst, tlast, tmax = NULL, a = F, ga = F) { z <- ifelse(ga, -1, 1) times <- unique(c(tfirst, par, tlast, tmax)) times <- times[order(times)] deltat <- diff(times) if (a) { all.secd <- abs(pred.sumexp(exp.par, times, 2)) secd <- c(NULL) for (i in 1:(length(times)-1)) { secd[i] <- all.secd[which(all.secd == max(all.secd[c(i, i + 1)]))][1] } } else { secd <- pred.sumexp(exp.par, times[-length(times)], 2) } err <- deltat^3*secd/12 return(z*sum(err^2)) } # standard using dt err.interv.dt <- function(par, exp.par, tfirst, tlast, a = F) { times <- c(tfirst, cumsum(par), tlast) deltat <- diff(times) if (a) { all.secd <- abs(pred.sumexp(exp.par, times, 2)) secd <- c(NULL) for (i in 1:(length(times)-1)) { secd[i] <- all.secd[which(all.secd == max(all.secd[c(i, i + 1)]))][1] } } else { secd <- pred.sumexp(exp.par, times[-length(times)], 2) } err <- deltat^3*secd/12 return(sum(err^2)) } err.interv.dtmax <- function(par, exp.par, tfirst, tlast, a = F, tmax = NULL) { times <- c(tfirst, cumsum(par), tmax, tlast) times <- sort(times) deltat <- diff(times) if (a) { all.secd <- abs(pred.sumexp(exp.par, times, 2)) secd <- c(NULL) for (i in 1:(length(times)-1)) { secd[i] <- all.secd[which(all.secd == max(all.secd[c(i, i + 1)]))][1] } } else { secd <- pred.sumexp(exp.par, times[-length(times)], 2) } err <- deltat^3*secd/12 sumerr <- sqrt(mean(err^2)) return(sumerr) } # Interval optimising function # standard using times optim.interv <- function(times, par, tmax = NULL) { x <- times[order(times)] init.par <- x[-c(1, length(x))] if (!is.null(tmax)) init.par <- init.par[-length(init.par)] xmin <- min(x) xmax <- max(x) absorp <- ifelse((length(par) %% 2) != 0, T, F) res <- optim( init.par, err.interv, method = "L-BFGS-B", control = c(maxit = 500), lower = xmin, upper = xmax, exp.par = par, tfirst = xmin + 0.01, tlast = xmax - 0.01, tmax = tmax, a = absorp ) return(res) } # using dt instead of times 
optim.interv.dt <- function(par, times, tmax = NULL) { tfirst <- min(times) tlast <- max(times) npar <- length(times) - 2 absorp <- ifelse((length(par) %% 2) != 0, T, F) init.par <- cumsum(rep(tlast/48, npar)) res <- optim( init.par, err.interv.dt, method = "L-BFGS-B", hessian = T, lower = tlast/48, upper = tlast - npar*tlast/48, exp.par = par, tfirst = tfirst, tlast = tlast, a = absorp ) res$times <- cumsum(res$par) return(res) } optim.interv.dtmax <- function(par, times, tmax = FALSE) { tfirst <- min(times) tlast <- max(times) tbound <- tlast - tfirst absorp <- ifelse((length(par) %% 2) != 0, T, F) npar <- length(times) - (2 + tmax*absorp) init.par <- cumsum(rep(tbound/48, npar)) if (tmax & absorp) tmax.val <- tmax.sumexp(par, tlast, tlast/720) # maximum length of 721 else tmax.val <- NULL res <- try(optim( init.par, err.interv.dtmax, method = "L-BFGS-B", hessian = T, lower = tbound/48, upper = tbound/2, exp.par = par, tfirst = tfirst, tlast = tlast, a = absorp, tmax = tmax.val )) if (class(res) == "try-error") { res <- try(optim( init.par, err.interv.dtmax, method = "L-BFGS-B", hessian = T, lower = tbound/48, upper = tbound/(npar/2), exp.par = par, tfirst = tfirst, tlast = tlast, a = absorp, tmax = tmax.val )) } res$times <- sort(c(cumsum(c(tfirst, res$par)), tmax.val, tlast)) return(res) } # using ga for initial parameters optim.interv.ga100 <- function(par, times, tmax = NULL) { tfirst <- min(times) tlast <- max(times) if (is.null(tmax)) { is.tmax <- 2 } else if (tmax == 0) { is.tmax <- 2 } else { is.tmax <- 3 } npar <- length(times) - is.tmax absorp <- ifelse((length(par) %% 2) != 0, T, F) flag <- 0 repeat { gares <- ga("real-valued", err.interv.ga, exp.par = par, a = absorp, tfirst = tfirst, tlast = tlast, tmax = tmax, ga = T, min = rep(tfirst + 0.01, npar), max = rep(tlast - 0.01, npar), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) res <- optim( 
gares@solution[order(gares@solution)], err.interv.ga, method = "BFGS", hessian = T, exp.par = par, tfirst = tfirst, tlast = tlast, tmax = tmax, a = absorp ) vc_mat <- try(solve(res$hessian)) if(class(vc_mat) != "try-error") { se <- sqrt(diag(vc_mat)) if (!any(is.nan(se))) { se_percent <- se/res$par*100 if (max(se_percent) <= 100) { res$flag <- flag res$se <- se_percent break } } } flag <- flag + 1 if (flag == 10) { res$flag <- flag res$se <- NA break } } return(res) } optim.interv.ga50 <- function(par, times, tmax = NULL) { tfirst <- min(times) tlast <- max(times) is.tmax <- ifelse(is.null(tmax), 2, 3) npar <- length(times) - is.tmax absorp <- ifelse((length(par) %% 2) != 0, T, F) flag <- 0 repeat { gares <- ga("real-valued", err.interv.ga, exp.par = par, a = absorp, tfirst = tfirst, tlast = tlast, tmax = tmax, ga = T, min = rep(tfirst + 0.01, npar), max = rep(tlast - 0.01, npar), selection = gareal_lrSelection, crossover = gareal_spCrossover, mutation = gareal_raMutation, maxiter = 50, popSize = 250, monitor = F ) res <- optim( gares@solution[order(gares@solution)], err.interv.ga, method = "BFGS", hessian = T, exp.par = par, tfirst = tfirst, tlast = tlast, tmax = tmax, a = absorp ) vc_mat <- try(solve(res$hessian)) if(class(vc_mat) != "try-error") { se <- sqrt(diag(vc_mat)) if (!any(is.nan(se))) { se_percent <- se/res$par*100 if (max(se_percent) <= 50 | flag == 10) { res$flag <- flag res$se <- se_percent break } } } flag <- flag + 1 } return(res) } pred.tlast <- function(par, tlast) { i <- round(tlast/12, 0) perc.term <- 1 timeloop <- seq(0, i*12, by = i*12/120) predloop <- pred.sumexp(par, timeloop) tmax <- timeloop[which(predloop == max(predloop))] while(perc.term > 0.2 & i < 730) { if (exists("init")) { repeat { i <- i + 1 timeloop <- seq(0, i*12, by = i*12/120) predloop <- pred.sumexp(par, timeloop) clast <- tail(predloop, 1) if (clast < cterm | i == 730) break } } clast <- tail(predloop, 1) auclast <- auc.interv.sumexp(timeloop, par, log = T) lambz <- 
max(head(par, ceiling(length(par)/2))) aucinf <- clast/-lambz perc.term <- aucinf/(auclast+aucinf) cterm <- clast*(0.18/perc.term) init <- 1 } return(c(i*12, 1-perc.term)) } pred.tlast.lam <- function(par) { nexp <- ceiling(length(par)/2) lambz <- abs(max(par[1:nexp])) hl <- log(2)/lambz ceiling(hl/4)*12 } obs.tlast.lam <- function(obs) { lambz <- pred.lambdaz(obs[, 2], obs[, 1]) hl <- log(2)/lambz ceiling(hl/4)*12 } pred.lambdaz <- function(dv, t) { if (t[1] == 0) dv[1] <- 0 mdv <- which(dv == 0) i <- 3 bestR2 <- -1 bestk <- 0 if (length(dv[-mdv]) >= i) { repeat { mod <- suppressWarnings(lm(log(tail(dv[-mdv], i)) ~ tail(unique(t)[-mdv], i))) k <- -1*mod$coefficients["tail(unique(t)[-mdv], i)"] R2 <- suppressWarnings(as.numeric(summary(mod)["adj.r.squared"])) if (is.na(k)) browser() if (is.nan(R2)) R2 <- suppressWarnings(as.numeric(summary(mod)["r.squared"])) if (k > 0) { if (R2 > bestR2) { if (i > 2) bestR2 <- R2 bestk <- k } else { break } } if (i == 5 & bestk == 0) { # mdv <- c(mdv, which(dv == max(tail(dv[-mdv], 3)))) i <- 1 } if (i == length(dv[-mdv])) { # last ditch effort (intended for simulation study only) if (bestk > 0) break else { mod <- suppressWarnings(lm(log(tail(dv, 2)) ~ tail(unique(t), 2))) bestk <- -1*mod$coefficients["tail(unique(t), 2)"] if (bestk > 0) break else { bestk <- log(2)/56 break } } } i <- i + 1 } } bestk } # ----------------------------------------------------------------------------- # Determine tmax given a set of sumexp parameters tmax.sumexp <- function(par, tlast = 24, res = 0.1) { times <- seq(0, tlast, by = res) yhat <- pred.sumexp(par, times) return(times[which(yhat == max(yhat))]) } # Determine auc given a set of intervals auc.interv <- function(times, par, fn, log = F) { C <- do.call(fn, list(x = times, p = par)) auc <- c(NULL) for (i in 2:length(C)) { h <- times[i] - times[i-1] dC <- C[i-1] - C[i] if (log & dC > 0) auc[i-1] <- dC*h/log(C[i-1]/C[i]) else auc[i-1] <- (C[i-1] + C[i])*h/2 } return(sum(auc)) } 
auc.interv.sumexp <- function(times, par, log = F) { C <- pred.sumexp(par, times) auc <- c(NULL) for (i in 2:length(C)) { h <- times[i] - times[i-1] dC <- C[i-1] - C[i] if (log & dC > 0) auc[i-1] <- dC*h/log(C[i-1]/C[i]) else auc[i-1] <- (C[i-1] + C[i])*h/2 } return(sum(auc)) } auc.interv.lam <- function(par, t) { dv <- pred.sumexp(par, unique(t)) lambz <- try(-pred.lambdaz(dv, t)[2]) if (inherits(lambz, "try-error")) return(0) else return(tail(dv, 1)/lambz) } # Without for loop # auc.interv <- function(times, par, fn, log = F) { # C <- do.call(fn, list(x = times, p = par)) # h <- diff(times) # EC <- sum(C[-1], C[-length(C)]) # dC <- diff(-C) # if (!log) auc <- EC*h/2 # else auc[i-1] <- dC*h/log(C[i-1]/C[i]) # return(sum(auc)) # } # Plot random data plot.rdata <- function(data, t, n, interv) { plotdata <- data.frame( id = rep(1:n, each = length(t)), time = rep(t, times = n), dv = as.vector(data) ) plotobj <- NULL plotobj <- ggplot(data = plotdata) plotobj <- plotobj + ggtitle("Random Concentration Time Curves") plotobj <- plotobj + geom_line(aes(x = time, y = dv), colour = "red") plotobj <- plotobj + geom_vline(xintercept = interv, colour = "green4", linetype = "dashed") plotobj <- plotobj + scale_y_continuous("Concentration (mg/mL)\n") plotobj <- plotobj + scale_x_continuous("\nTime after dose (hrs)") plotobj <- plotobj + facet_wrap(~id, ncol = round(sqrt(n)), scales = "free") return(plotobj) }
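The `auc.interv*` helpers above compute the area under the concentration-time curve interval by interval, switching between the linear trapezoid and the log trapezoid when the curve is declining. A minimal standalone sketch of the same rule (toy data and a function name of my own, not from the file above):

```r
# Linear-up / log-down trapezoidal AUC, mirroring the rule in auc.interv (sketch)
auc_lin_log <- function(times, C) {
  auc <- 0
  for (i in 2:length(C)) {
    h  <- times[i] - times[i - 1]
    dC <- C[i - 1] - C[i]
    if (dC > 0) {
      auc <- auc + dC * h / log(C[i - 1] / C[i])  # declining: log trapezoid
    } else {
      auc <- auc + (C[i - 1] + C[i]) * h / 2      # rising/flat: linear trapezoid
    }
  }
  auc
}

t <- c(0, 1, 2, 4, 8)
C <- 100 * exp(-0.2 * t)   # mono-exponential decline
auc_lin_log(t, C)
```

For a purely mono-exponential segment the log trapezoid is exact, which is why the functions above prefer it whenever the concentration falls across the interval.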
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/create_dd_experiment.R \name{errors_vs_dd} \alias{errors_vs_dd} \title{Execute pirouette's dd experiment} \usage{ errors_vs_dd(n_replicates = 2) } \arguments{ \item{n_replicates}{number of replicates} } \value{ nothing } \description{ Execute pirouette's dd experiment } \author{ Giovanni Laudanno }
/man/errors_vs_dd.Rd
no_license
Giappo/PirouetteExamples
R
false
true
378
rd
CyTOF_LDAtrain <- function (TrainingSamplesExt,TrainingLabelsExt,mode, RelevantMarkers,LabelIndex = FALSE,Transformation){ # CyTOF_LDAtrain function can be used to train a Linear Discriminant # Analysis (LDA) classifier, on the labeled CyTOF samples. The trained # classifier model can then be used to automatically annotate new CyTOF # samples. # # Input description # # TrainingSamplesExt: extension of the folder containing the training samples, # can be either in FCS or csv format. # # TrainingLabelsExt: extension of the folder containing the labels (cell types) # for the training samples, must be in csv format. Labels # files must be in the same order as the samples, to match # each labels file to the corresponding sample. # # mode: either 'FCS' or 'CSV', defining the samples format. # # RelevantMarkers: list of integers, enumerating the markers to be used for # classification. # # LabelIndex: Integer value indicating the column containing the labels of each cell, # in case it exists in the same files with the samples. In such case, 'TrainingLabelsExt' # is not used. # FALSE (default) = labels are stored in separate csv files found in 'TrainingLabelsExt' # # Transformation: 'arcsinh' = apply arcsinh transformation with cofactor of 5 prior to classifier training # 'log' = apply logarithmic transformation prior to classifier training # FALSE = no transformation applied. 
# # Example # Model <- CyTOF_LDAtrain('HMIS-2/Samples/','HMIS-2/Labels/','CSV',c(1:28),'arcsinh') # # For citation and further information please refer to this publication: # "Predicting cell populations in single cell mass cytometry data" Training.data = data.frame() Training.labels <- data.frame() if (mode == 'FCS'){ files = list.files(path = TrainingSamplesExt, pattern = '.fcs',full.names = TRUE) library(flowCore) for (i in files){ Temp <- read.FCS(i,transformation = FALSE, truncate_max_range = FALSE) colnames(Temp@exprs) <- Temp@parameters@data$name Training.data = rbind(Training.data,as.data.frame(Temp@exprs)[,RelevantMarkers]) if(LabelIndex){ Training.labels = rbind(Training.labels,as.data.frame(Temp@exprs)[,LabelIndex,drop = FALSE]) } } } else if (mode == 'CSV'){ files = list.files(path = TrainingSamplesExt, pattern = '.csv',full.names = TRUE) for (i in files){ Temp <- read.csv(i,header = FALSE) Training.data = rbind(Training.data,as.data.frame(Temp)[,RelevantMarkers]) if(LabelIndex){ Training.labels = rbind(Training.labels,as.data.frame(Temp)[,LabelIndex,drop = FALSE]) } } } else{ stop('Invalid file format mode, choose FCS or CSV') } if(Transformation != FALSE){ if(Transformation == 'arcsinh'){ Training.data = asinh(Training.data/5) } else if (Transformation == 'log'){ Training.data = log(Training.data) Training.data[sapply(Training.data,is.infinite)] = 0 } } if(!LabelIndex){ Labels.files <- list.files(path = TrainingLabelsExt, pattern = '.csv', full.names = TRUE) for (i in Labels.files){ Temp <- read.csv(i,header = FALSE, colClasses = "character") Training.labels = rbind(Training.labels,as.data.frame(Temp)) } if(dim(Training.data)[1]!=dim(Training.labels)[1]){ stop('Length of training data and labels does not match')} } LDAclassifier <- MASS::lda(Training.data,as.factor(Training.labels[,1])) Model <- list(LDAclassifier = LDAclassifier,Transformation = Transformation, markers = RelevantMarkers) return(Model) }
/code/autogating/lda/CyTOF_LDAtrain.R
no_license
liupeng2117/CyTOF-review-paper-data-and-code-
R
false
false
3,862
r
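After loading and transforming the marker expressions, `CyTOF_LDAtrain` above reduces to a single `MASS::lda` fit. A self-contained sketch of that core step, using the built-in `iris` data as a stand-in for CyTOF marker columns and applying the same cofactor-5 arcsinh transform the function uses:

```r
library(MASS)

X   <- asinh(iris[, 1:4] / 5)             # arcsinh transform, cofactor 5
fit <- lda(X, grouping = iris$Species)    # train the LDA classifier

# Annotate cells (here: resubstitution on the training data)
pred <- predict(fit, X)$class
mean(pred == iris$Species)                # training accuracy
```

In the real function the `grouping` factor comes from the label csv files (or the `LabelIndex` column), and the fitted model is returned inside a list together with the transformation and marker indices so that prediction can reproduce the same preprocessing.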
MAKECacheMatrix <- function(y = matrix()){ inv <- NULL set <- function(a){ y <<- a inv <<- NULL } get <- function() {y} setInverse <- function(inverse) {inv <<- inverse} getInverse <- function() {inv} list(set = set, get = get, setInverse = setInverse, getInverse = getInverse) } cacheSolve <- function(y, ...){ inv <- y$getInverse() if(!is.null(inv)){ message("CACHED DATA") return(inv) } mat <- y$get() inv <- solve(mat, ...) y$setInverse(inv) inv }
/myycachematrixx.R
no_license
Deepakspk1/ProgrammingAssignment2
R
false
false
608
r
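`MAKECacheMatrix`/`cacheSolve` above use the closure-plus-`<<-` caching pattern: the first `cacheSolve` call computes `solve()` and stores the result inside the closure; later calls return the stored inverse. A short usage sketch of the functions as defined above:

```r
m <- MAKECacheMatrix(matrix(c(2, 0, 0, 2), 2, 2))

inv1 <- cacheSolve(m)   # computes the inverse via solve()
inv2 <- cacheSolve(m)   # emits the "CACHED DATA" message, returns stored copy
identical(inv1, inv2)   # TRUE

m$set(diag(3))          # setting a new matrix resets the cached inverse to NULL
```

Note that `set` assigns with `<<-`, so both the matrix and the cached inverse live in the enclosing environment created by the original `MAKECacheMatrix` call, not in the caller's environment.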
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/export2gams.R \name{export_weights_gams} \alias{export_weights_gams} \alias{list_col_names} \title{Export neural nets weights and biases to .csv and write gams code} \usage{ export_weights_gams(model, inputs_vec, type) } \arguments{ \item{model}{keras .h5 saved model.} \item{inputs_vec}{vector with input column names} \item{type}{Letter used to identify the dataset being used} } \description{ Save and export model weights and biases for use in GAMS reconstruction of neural networks. } \examples{ ## Manually extracted column names inputs <- c("lsu", "lon", "lat", "tem", "pre", "rad", "soil") ## Suggested way to extract the inputs from the dataset inputs <- c(colnames(df.harvest)[-ncol(df.harvest)]) export_weights_gams(model, inputs, "s") } \author{ Marcos Alves \email{mppalves@gmail.com} }
/man/export_weights_gams.Rd
no_license
mppalves/GSTools
R
false
true
903
rd
#' Load a tile #' @title Load LAS tile #' @description loads the first tile where the point is located #' @param path is the path to the files #' @param coordX x-coordinate in RD coordinates #' @param coordY y-coordinate in RD coordinates #' @export #' loadTile <- function(path, coordX, coordY){ coordX<-str_pad(as.integer(floor(coordX/1000)*1000), 6, pad = "0") coordY<-str_pad(as.integer(floor(coordY/1000)*1000), 6, pad = "0") uuid<-UUIDgenerate() multifileFlag<-checkIfMultiTile(path, coordX, coordY) multifiles <- NULL if(multifileFlag){ multifiles<-list.files(path = path, pattern = paste0("ahn_", coordX,"_", coordY,"_.*"), full.names = T, recursive = T) } dir.create(paste0(temp_dir,uuid,"/")) centralFile<-list.files(path = path, paste0("ahn_", coordX,"_", coordY,".laz"), full.names = T, recursive = T) files<-c(centralFile,multifiles) if(length(files)!=0){ lapply(files,file.copy,to=paste0(temp_dir,uuid,"/")) currentFiles<-list.files(path = paste0(temp_dir, uuid,"/"), full.names = TRUE) #print(paste(currentFiles)) lapply(paste(lasZipLocation, currentFiles), system) #system("/usr/people/pagani/opt/testFile/LAStools/bin/laszip .laz") system(paste0("rm ", temp_dir, uuid,"/*.laz")) files_las<-list.files(paste0(temp_dir, uuid),pattern="*.las$",full.names = TRUE) out.matrix<-lapply(files_las, readLAS) outDF<-do.call(rbind.data.frame, out.matrix) #out<-data.frame(out.matrix) system(paste0("rm ", temp_dir, uuid,"/*.las")) rm(out.matrix,multifiles,uuid,centralFile,files,currentFiles,files_las) gc() return(outDF) } else { return(NULL) } }
/R/loadtile.R
no_license
AnneMossink/SkyViewFactor
R
false
false
1,628
r
\name{SC19022-package} \alias{SC19022-package} \alias{SC19022} \docType{package} \title{ The homework of the course: statistic computing. } \description{ The homework of the course: statistic computing. } \author{ SC19022 Maintainer: SC19022 <haoyue000000@qq.com> } \keyword{ package }
/man/SC19022-package.Rd
no_license
SC19022/SC19022
R
false
false
292
rd
'data.world-r Copyright 2017 data.world, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This product includes software developed at data.world, Inc.(http://www.data.world/).' library("data.world") library("testthat") source("testUtil.R") test_that("addFilesBySource making the correct HTTR request" , { request <- data.world::FileBatchUpdateRequest() request <- data.world::addFile(request = request, name = "file.csv", url = "https://data.world/some_file.csv") request <- data.world::addFile(request = request, name = "file2.csv", url = "https://data.world/some_file2.csv") response <- with_mock( `httr::POST` = function(url , body, header , progress, userAgent) { expect_equal(url, "https://api.data.world/v0/datasets/ownerid/datasetid/files") expect_equal(header$headers[["Authorization"]], "Bearer API_TOKEN") expect_equal(rjson::toJSON(request), body) expect_equal(userAgent$options$useragent, data.world::userAgent()) return(successMessageResponse()) }, `mime::guess_type` = function(...) 
NULL, data.world::addFilesBySource(data.world(token = "API_TOKEN"), dataset = "ownerid/datasetid", fileBatchUpdateRequest = request) ) expect_equal(class(response), "SuccessMessage") }) test_that("addFileBySource making the correct HTTR request" , { request <- data.world::FileBatchUpdateRequest() request <- data.world::addFile(request = request, name = "file.csv", url = "https://data.world/some_file.csv") response <- with_mock( `httr::POST` = function(url , body, header , progress, userAgent) { expect_equal(url, "https://api.data.world/v0/datasets/ownerid/datasetid/files") expect_equal(header$headers[["Authorization"]], "Bearer API_TOKEN") expect_equal(rjson::toJSON(request), body) expect_equal(userAgent$options$useragent, data.world::userAgent()) return(successMessageResponse()) }, `mime::guess_type` = function(...) NULL, data.world::addFileBySource(data.world(token = "API_TOKEN"), dataset = "ownerid/datasetid", name = "file.csv", url = "https://data.world/some_file.csv") ) expect_equal(class(response), "SuccessMessage") })
/tests/testthat/testAddFilesBySource.R
permissive
mrhelmus/data.world-r
R
false
false
2,685
r
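The tests above rely on `testthat::with_mock` to intercept `httr::POST` so no real network request is made: the stub asserts on the URL, headers, and JSON body, then returns a canned response. A pared-down sketch of the same pattern (`ping` and the stub values are illustrative, not from the file above; note that `with_mock` is soft-deprecated in third-edition testthat in favour of `local_mocked_bindings`):

```r
library(testthat)

# Function under test (illustrative): wraps an HTTP call
ping <- function(url) httr::POST(url)

test_that("ping posts to the expected URL", {
  with_mock(
    `httr::POST` = function(url, ...) {
      expect_equal(url, "https://example.org/v0/ping")  # assert on the request
      "stubbed-response"                                # canned return value
    },
    expect_equal(ping("https://example.org/v0/ping"), "stubbed-response")
  )
})
```

This is why the original tests can check the exact endpoint, the `Authorization` header, and the serialized request body while returning `successMessageResponse()` without any server.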
#----- data ------- fer<-read.table("input_data/Cas9_fertility_Niki.txt", header=T, sep="\t") fer <- fer[,0:7] fer$replicate <- as.factor(fer$replicate) fer <- within(fer, strain <- relevel(strain, ref = 12)) fer <- within(fer, sex <- relevel(sex, ref = 3)) fer <- subset(fer, sex != "female") fer <- aggregate(.~strain+cross+sex, data=fer, median) fer$replicate <- NULL fer$survival <- fer$adults/fer$embryos fer$crsex <- with(fer, interaction(cross, sex)) fer <- droplevels(fer) #----- glm model <- glm(cbind(as.integer(adults),(as.integer(embryos)-as.integer(adults))) ~ crsex, family=binomial, data=fer) print(summary(model))
/Figure_3/Figure3_fertility_stats.R
no_license
genome-traffic/medflyXpaper
R
false
false
632
r
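The `glm` call above uses the two-column `cbind(successes, failures)` response form, which fits a binomial model to grouped counts rather than 0/1 outcomes. A minimal reproducible sketch of that form on made-up counts (variable names are illustrative, not the fertility data):

```r
d <- data.frame(
  group   = factor(c("a", "a", "b", "b")),
  embryos = c(100, 120, 90, 110),   # trials per row
  adults  = c(80, 95, 40, 55)       # successes per row
)

# Successes and failures bound column-wise as the response
fit <- glm(cbind(adults, embryos - adults) ~ group,
           family = binomial, data = d)
summary(fit)$coefficients
```

The coefficients are on the logit scale, so in the fertility analysis each `crsex` contrast is a log-odds difference in embryo-to-adult survival relative to the reference cross/sex combination.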
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/cstab.R \docType{package} \name{cstab} \alias{cstab} \alias{cstab-package} \title{cstab: Selection of number of clusters via normalized clustering instability} \description{ Selection of the number of clusters in cluster analysis using stability methods. } \details{ \tabular{ll}{ Package: \tab cstab\cr Type: \tab Package\cr Version: \tab 0.01\cr Date: \tab 2016-07-26\cr License: \tab GPL (>= 2)\cr } } \author{ Dirk U. Wulff <dirk.wulff@gmail.com> Jonas M. B. Haslbeck <jonas.haslbeck@gmail.com> }
/man/cstab.Rd
no_license
dwulff/cstab
R
false
true
604
rd
data1 = read.csv(file = "C:\\Users\\Desktop\\file2.csv",header = TRUE,sep = ",") data1 var1=data1$X1 var2=data1$X2 var3=data1$X3 var4=data1$X4 var1 var2 var3 var4 subset = t(data.frame(var4,var2,var3)) row_name = data1$X1 barplot(subset, beside=TRUE,col=c("green","darkblue","red"),names.arg = row_name,las = 2,ylim = c(0,25),angle = 45) legend("topleft", legend = c("Scalearc Cache","Direct Non-Cache", "Scalearc Non-Cache"),fill = c("darkblue","red","green")) title(main="Sysbench Throughput", col.main="black", font.main=4) title(xlab= "Threads", col.lab=rgb(0,0,0)) title(ylab= " Throughput in Million Requests/Second", col.lab=rgb(0,0,0))
/Scripts/R scripts/Bar_Plots_Scalearc.R
no_license
cruizen/ScaleArc_MySQL_perf_benchmark
R
false
false
645
r
## This file contains functions, written as part of Coursera's ## R Programming course Programming Assignment 2. ## These functions were written as an exercise to demonstrate R's ## lexical scoping capabilities. ## These functions follow the getter and setter style ## This function makes a cache matrix, consisting of getters and ## setters. In effect, then, getting a matrix's inverse ## gets it from the cache rather than recalculating the same. makeCacheMatrix <- function(x = matrix()) { # initializing the inverse matrix to NULL inv_mat <- NULL # getter function for the inverse matrix get_mat.invr <- function() { inv_mat } # setter function for the inverse matrix set_mat.invr <- function(setinv) { inv_mat <<- setinv } # getter function for matrix get_mat <- function() x # setter function for matrix set_mat <- function(mx) { x <<- mx inv_mat <<- NULL } # returns a list of getter and setter functions list(set = set_mat, get = get_mat,set.invr = set_mat.invr,get.invr = get_mat.invr) } ## This function returns the inverse of the cached matrix, using the stored ## copy when one exists and computing (then caching) it otherwise cacheSolve <- function(x, ...) { # CHECK IF CACHE VERSION OF MATRIX EXISTS cache_inv <- x$get.invr() #CASE : CACHE VERSION EXISTS if(!is.null(cache_inv)) { message("Cache Version Found. Returning....") return(cache_inv) } # CASE : CACHE VERSION DOES NOT EXIST # solve is called to compute inverse orig_mat <- x$get() inv <- solve(orig_mat, ...) x$set.invr(inv) # returns the inverse inv }
/cachematrix.R
no_license
keertank189/ProgrammingAssignment2
R
false
false
1,555
r
#Set the working directory setwd('E://StorageSync//Grainger Files//2016//Projects//PPC Incremental Test//') #Load the choroplethr package library(devtools) #install_github('arilamstein/choroplethrZip@v1.5.0') library(choroplethrZip) library(choroplethr) library(RColorBrewer) library(plyr) library(ggplot2) #Read in all zipcodes and state regions data(zip.regions) data(continental_us_states) #De-duplicate the zipcodes uniq_zip<-count(zip.regions,c('region')) #Read in GIS zipcodes with mappings to DMAs ztd<-read.table('ZIP_TO_DMA_PPCI3.dat',sep='\t',header = TRUE) #Merge the data together zc<-merge(uniq_zip,ztd,by = 'region', all.x = TRUE) #Replace N/A values in the TC column with 0 zc$TC[is.na(zc$TC)]<- 0 #TC should be a factor zc$TC<-as.factor(zc$TC) #read in only region and the value we are interested in (test and control) zcp<-zc[,c(1,4)] colnames(zcp)[2]<-"value" choro <- ZipChoropleth$new(zcp) choro$set_zoom_zip(state_zoom=continental_us_states,county_zoom = NULL, msa_zoom = NULL, zip_zoom = NULL) choro$ggplot_polygon<-geom_polygon(aes(fill = value),color = NA) choro$ggplot_scale <- scale_fill_manual(name='value',values = c('gray','blue','red', 'green','purple'), drop=FALSE) choro$render()
/Plot Test and Control DMAs for Display Incremental.R
no_license
sralbrecht/Display-GeoTest-2016
R
false
false
1,389
r
# Machine learning # Overview # Preparing training/test data # When building a supervised-learning model, the full dataset is not analysed in one go; it is first split into training and test sets. The split is usually done at a fixed ratio, commonly 7:3 or 8.5:2.5. ex) Splitting the iris dataset into train/test data 1) Simple split: iris has 150 rows in total, so a 7:3 split gives 105:45 train <- iris[1:105,] test <- iris[106:150,] table(train[,5]) table(test[,5]) 2) Random split set.seed(20181119) idx <- sample(1:150, size=105, replace=F) idx # draw 105 of the numbers 1-150 without replacement train <- iris[idx, ] test <- iris[-idx, ] table(train$Species) table(test$Species) # the resulting subsets can still be somewhat inconsistent and skewed -> hard to get good model performance 3) The data should be split with the characteristics and distribution of each class taken into account # use the createDataPartition function from the caret package install.packages('caret') library(caret) idx <- createDataPartition(iris[,5], p=0.7, list=F) train <- iris[idx,] test <- iris[-idx,] table(train$Species) table(test$Species) # Cross-validation # Building a model from the entire dataset causes overfitting: the model fits the training data well but fails to generalise to new data. To reduce overfitting, use part of the data for training and keep the rest for validation. One way to separate training/validation data when building a model is cross-validation: split the data into n folds and repeat training and validation across them. install.packages('cvTools') library(cvTools) # cvFolds(15, K=3, type='') # random, consecutive, interleaved cvFolds(15,K=3, type='random') # validation idx are assigned to the K folds at random cvFolds(15,K=3, type='consecutive') # fold 1 gets the first consecutive block of idx cvFolds(15,K=3, type='interleaved') # folds 1,2,3 get idx 1,2,3,... in turn set.seed(2018111817) cv <- cvFolds(150, K=5, type= "interleaved") cv <- cvFolds(nrow(iris), K=5, type='interleaved') cv$which # fold membership of each observation head(cv$subsets) # data idx used in cross-validation # data for cross-validation fold 1 idxK <- cv$subsets[which(cv$which==1),1] idxK trainK1 <- iris[-idxK,] testK1 <- iris[idxK,] table(trainK1$Species) table(testK1$Species) # cross-validation using the caret package cv <- createFolds(iris[,5], k=5) cv$Fold1 trainK1 <- iris[-cv$Fold1,] testK1 <- iris[cv$Fold1,] table(trainK1$Species) table(testK1$Species)
/R22-MachineLearning.R
no_license
soonyoungcheon/R
R
false
false
2,879
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/at_pm.R
\docType{data}
\name{at_pm}
\alias{at_pm}
\title{Particulate Matter (PM)}
\format{Data frame with columns
\describe{
\item{stazione}{Stazione, first two characters are the Provincia}
\item{parameter}{inquinante}
\item{year}{Year}
\item{month}{Month}
\item{day}{Day}
\item{value}{Measure}
\item{valid}{0 = measure is not valid, 1 = measure is valid}
}}
\usage{
at_pm
}
\description{
This dataset contains the daily measures of PM10 and PM2.5 for the years 2008-2014.
}
\author{
Patrick Hausmann, Source: ARPAT Toscana
}
\keyword{datasets}
/man/at_pm.Rd
no_license
patperu/AriaToscana
R
false
true
625
rd
##################
# Intro to R extra session
# April 2020
# 1. reproducibility
##################

# Reproducibility
# http://christophergandrud.github.io/RepResR-RStudio/
# file:///Users/Maggie/Desktop/ReproducibleResearch_Chapter2.pdf
# p.22 Chapter 2. Reproducible Research with R and RStudio Second Edition

#1. Document everything!
#2. Everything is a (text) file.
#3. All files should be human readable.
#4. Explicitly tie your files together.
#5. Have a plan to organize, store, and make your files available.

# You should record your session info. Many things in R have
# stayed the same since it was introduced in the early 1990s. This makes it easy
# for future researchers to recreate what was done in the past. However, things
# can change from one version of R to another and especially from one version
# of an R package to another. Also, the way R functions and how R packages
# are handled can vary across different operating systems, so it's important to
# note what system you used. Finally, you may have R set to load packages
# by default (see Section 3.1.8 for information about packages). These packages
# might be necessary to run your code, but other people might not know what
# packages and what versions of the packages were loaded from just looking at
# your source code. The sessionInfo command in R prints a record of all of
# these things.

# Print R session info
sessionInfo()  # view what you have loaded along with the versions
writeLines(capture.output(sessionInfo()), "my_session.txt")

# Get a list of your full system environment, including paths.
Sys.getenv()

# Start all scripts with basic info:
##################
# R Source code file used to create Figure 3 in My Article
# Created by Christopher Gandrud
# MIT License
##################
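The `writeLines(capture.output(...))` idiom above is the whole mechanism: `capture.output` turns the printed `sessionInfo()` record into a character vector, and `writeLines` persists it. A small self-contained sketch, writing to a temporary file rather than the working directory:

```r
# Capture the printed session record as a character vector...
info <- capture.output(sessionInfo())

# ...and write it next to the analysis outputs (tempdir() here for the sketch).
out <- file.path(tempdir(), "my_session.txt")
writeLines(info, out)

# The file now holds the R version, OS, and loaded-package versions.
head(readLines(out))
```

Committing this file alongside the code is what lets a future reader reconstruct the exact package versions your results depended on.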
/7_Extra_Session/Extra_Session_Reproducibility.R
no_license
MagB/R_Course1
R
false
false
1,794
r
# cerner_2^5_2018
numerals <- readline(prompt="Enter roman numerals: ")
print(as.integer(as.roman(numerals)))
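The conversion itself is done by base R's `utils::as.roman`, which parses a roman-numeral string into a `roman` object that `as.integer` then turns into a plain integer. A few non-interactive spot checks of the same pipeline:

```r
# as.roman parses the string; as.integer extracts the arabic value.
as.integer(as.roman("XIV"))      # 14
as.integer(as.roman("MMXVIII"))  # 2018
as.integer(as.roman("IV"))       # 4 (subtractive notation is handled)
```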
/TwentyOne/FromRomanNumerals.r
permissive
jgrieger/2-to-the-5th
R
false
false
110
r
#' This function: Zmix_multi_tempered
#' @param stuff and more
#' @keywords multi
#' @export
#' @examples
#' #not run
Zmix_multi_CSIRO <- function(YZ, iter, k, alphas, sim=FALSE, Propmin=0.05,
                             simlabel="sim", savelabel="PPplot", DATA_ADDITIONAL){

  parallelAccept <- function(w1, w2, a1, a2){
    # truncate so super small values dont crash everything
    w1[w1 < 1e-200] <- 1e-200
    w2[w2 < 1e-200] <- 1e-200
    T1 <- dDirichlet(w2, a1, log=TRUE)
    T2 <- dDirichlet(w1, a2, log=TRUE)
    B1 <- dDirichlet(w1, a1, log=TRUE)
    B2 <- dDirichlet(w2, a2, log=TRUE)
    MH <- min(1, exp(T1 + T2 - B1 - B2))
    Ax <- sample(c(1, 0), 1, prob=c(MH, 1 - MH))
    return(Ax)
  }

  CovSample <- function(nsk, WkZk, ybark){  # NEW 11 June 2015
    ck <- c0 + nsk
    if (nsk == 0) {
      MCMCpack::riwish(c0, C0)
    } else {
      Ck <- C0 + ((nsk*n0)/(nsk+n0)*crossprod(ybark - b0)) + WkZk
      MCMCpack::riwish(ck, Ck)
    }
  }

  MuSample <- function(newCovLISTk, nsk, WkZk, ybark){
    newCovLISTk <- matrix(newCovLISTk, nrow=r, byrow=T)
    if (nsk == 0) {
      rmvnorm(1, b0, newCovLISTk/n0)
    } else {
      bk <- (n0/(nsk+n0))*b0 + (nsk/(n0+nsk))*ybark
      Bk <- (1/(nsk+n0))*newCovLISTk
      rmvnorm(1, t(bk), Bk)
    }
  }

  minions <- function(ZZ){
    IndiZ <- (ZZ == matrix((1:k), nrow = n, ncol = k, byrow = T))
    ns <- apply(IndiZ, 2, sum)        # size of each group
    .Ysplit <- replicate(k, list())   # storage to group Y's
    WkZ <- replicate(k, list(0))      # storage for within group variability
    ybar <- replicate(k, list(0))
    for (.i in 1:k){
      .Ysplit[[.i]] <- Y[ZZ == .i,]
      if (ns[.i] > 1){  # for groups with >1 observations
        ybar[[.i]] <- as.matrix(t(apply(.Ysplit[[.i]], 2, mean)))
      } else if (ns[.i] == 1){
        ybar[[.i]] <- t(as.matrix(.Ysplit[[.i]]))
      } else {
        ybar[[.i]] <- NA
      }
      # Within-group unexplained variability
      if (ns[.i] == 0) {
        WkZ[[.i]] <- NA
      } else if (ns[.i] == 1){
        WkZ[[.i]] <- crossprod(as.matrix(.Ysplit[[.i]] - ybar[[.i]]))
      } else {
        for (.n in 1:ns[.i]){
          WkZ[[.i]] <- WkZ[[.i]] + crossprod(.Ysplit[[.i]][.n,] - ybar[[.i]])
        }
      }
    }
    list('ns'=ns, 'ybar'=ybar, 'WkZ'=WkZ)
  }

  maxZ <- function(x) as.numeric(names(which.max(table(x))))

  if (sim == TRUE){
    Y <- as.matrix(YZ$Y)
  } else {
    Y <- as.matrix(YZ)
  }
  # change as needed for sim VS case studies, could add an option in function

  EndSize <- (2*iter)/3
  nCh <- length(alphas)
  r <- dim(Y)[2]; n <- dim(Y)[1]
  n0 = 1  # this is tau, right?
  Mus <- replicate(k, list())
  Covs <- replicate(k, list())
  # k vector per iteration (fix so as to save iterations)
  # storing final:
  Ps <- replicate(nCh, matrix(0, nrow = iter, ncol = k), simplify=F)
  Zs <- replicate(nCh, matrix(0, nrow = iter, ncol = n), simplify=F)
  v <- data.frame(matrix(NA, nrow=0, ncol=r*r+2))
  FINcov <- replicate(nCh, v, simplify=F)
  v2 <- data.frame(matrix(NA, nrow=0, ncol=r+2))
  FINmu <- replicate(nCh, v2, simplify=F)
  Loglike <- matrix(0, nrow = iter, ncol = 1)
  SteadyScore <- data.frame("Iteration"=c(1:iter), "K0"=0)
  # hyperpars
  Ck <- replicate(k, list())
  b0 <- apply(Y, 2, mean)
  c0 <- r + 1
  C0 <- 0.75*cov(Y)
  d <- sum(c(1:r)) + r
  # FOR PP
  mydata <- Y
  require(wq)

  # STEP 1: initiate groups (iteration 1 only)
  pb <- txtProgressBar(min = 0, max = iter, style = 3)
  for (.it in 1:iter){  # for each iteration
    if (.it %% 10 == 0) {
      Sys.sleep(0.01)
      par(mfrow=c(2,1))
      plot(SteadyScore$K0 ~ SteadyScore$Iteration, main='#non-empty groups', type='l')
      ts.plot(Ps[[nCh]], main='emptying', col=rainbow(k))
      Sys.sleep(0)
      setTxtProgressBar(pb, .it)
    }
    for (.ch in 1:nCh){  # for each chain
      if (.it == 1){
        .Zs <- as.vector(kmeans(Y, centers=k)$cluster)
        bits <- minions(.Zs)
        ns <- bits$ns; ybar <- bits$ybar
        WkZ <- bits$WkZ
        ns <- apply((.Zs == matrix((1:k), nrow = n, ncol = k, byrow = T)), 2, sum)
      } else {  # uses the allocations from end of last iteration
        bits <- minions(Zs[[.ch]][.it-1,])
        ns <- bits$ns; ybar <- bits$ybar; WkZ <- bits$WkZ
        .Zs <- Zs[[.ch]][.it-1,]
      }
      # STEP 2.1: GENERATE samples for WEIGHTS from DIRICHLET dist
      # COUNTER FOR ALPHA CHANGE if (ShrinkAlpha==TRUE){
      Ps[[.ch]][.it,] = rdirichlet(m=1, par= ns + alphas[.ch])
      # STEP 2.2: GENERATE samples from covariance matrix for each component
      newCov <- mapply(CovSample, ns, WkZ, ybar)
      newCovLIST <- as.list(as.data.frame(newCov))
      FINcov[[.ch]] <- rbind(FINcov[[.ch]], cbind(t(newCov), 'K'=1:k, 'Iteration'=.it))
      # STEP 2.3: GENERATE samples of the component-specific Mu's from
      # multivariate normal (bk, Bk)
      newMu <- mapply(MuSample, newCovLIST, ns, WkZ, ybar)
      newMuLIST <- as.list(as.data.frame(newMu))
      FINmu[[.ch]] <- rbind(FINmu[[.ch]], cbind(t(newMu), 'K'=1:k, 'Iteration'=.it))
      # STEP 3: Draw new classification probabilities:
      PZs <- matrix(0, ncol=k, nrow=n)
      for (i in 1:n) {
        for (.k in 1:k){
          PZs[,.k] <-
            #dmvnorm(Y, newMuLIST[[.k]], matrix(newCovLIST[[.k]], nrow=r, byrow=T))*Ps[[.ch]][.it,.k]
            dMvn(Y, newMuLIST[[.k]], matrix(newCovLIST[[.k]], nrow=r, byrow=T))*Ps[[.ch]][.it,.k]
        }
      }
      # scale each row to one
      for (i in 1:n) {
        if (sum(PZs[i,]) == 0) {
          # if all probs are zero, randomly allocate obs. very rare, might lead to crap results
          PZs[i,] <- rep(1/k, k)
        } else {
          PZs[i,] <- PZs[i,]/sum(PZs[i,])
        }
      }
      ## NEW allocations based on probabilities
      for (i in 1:n){
        Zs[[.ch]][.it,i] = sample((1:k), 1, prob=PZs[i,])
      }
    }  # end of chain loop
    SteadyScore$K0[.it] <- sum(table(Zs[[nCh]][.it,]) > 0)

    ## PARALLEL TEMPERING MOVES
    if (.it > 10){
      if (.it %% 2 == 0){
        chainset <- c(1:(nCh-1))[c(1:(nCh-1)) %% 2 == 0]
      } else {
        chainset <- c(1:(nCh-1))[c(1:(nCh-1)) %% 2 != 0]  # odds
      }
      for (eachChain in 1:length(chainset)){
        Chain1 <- chainset[eachChain]
        Chain2 <- Chain1 + 1
        MHratio <- parallelAccept(Ps[[Chain1]][.it,], Ps[[Chain2]][.it,],
                                  rep(alphas[Chain1], k), rep(alphas[Chain2], k))
        if (MHratio == 1){
          .z1 <- Zs[[Chain1]][.it,]; .z2 <- Zs[[Chain2]][.it,]
          Zs[[Chain1]][.it,] <- .z2; Zs[[Chain2]][.it,] <- .z1
        }
      }
    }
    for (i in 1:n){
      non0id <- c(1:k)[ns > 0]
      .ll <- 0
      for (numK in 1:length(non0id)){
        .ll <- .ll + Ps[[nCh]][.it, non0id[numK]] *
          dmvnorm(Y[i,], newMu[, non0id[numK]],
                  matrix(newCov[, non0id[numK]], ncol=r, nrow=r, byrow=T))
      }
      Loglike[.it] <- Loglike[.it] + log(.ll)
    }
  }
  close(pb)

  ps <- Ps[[nCh]][c(iter-EndSize+1):iter,]
  mu <- subset(FINmu[[nCh]], Iteration > c(iter-EndSize))
  covs <- subset(FINcov[[nCh]], Iteration > c(iter-EndSize))
  zs <- Zs[[nCh]][c(iter-EndSize):(iter-1),]
  Loglike <- Loglike[c(iter-EndSize+1):iter]
  SteadyScore <- SteadyScore$K[c(iter-EndSize+1):iter]
  Grun <- list(Mu = mu, Cov=covs, P= ps, Zs=zs, Y=YZ,
               Loglike=Loglike, SteadyScore=SteadyScore)

  #################################
  # Post Process
  #
  ## 1. split by K0
  K0 <- as.numeric(names(table(Grun$SteadyScore)))
  # SAVE table of tests, parameter estimates and clustering (Z's)
  p_vals <- data.frame("K0"=K0,
                       "Probability"=as.numeric(table(Grun$SteadyScore))/dim(Grun$P)[1],
                       "RAND"=NA, "MAE"=NA, "MSE"=NA, "Pmin"=NA, "Pmax"=NA,
                       "Concordance"=NA, "MAPE"=NA, "MSPE"=NA)
  K0estimates <- vector("list", length(K0))
  GrunK0us_FIN <- vector("list", length(K0))
  ZTable <- vector("list", length(K0))
  # for each K0:
  for (.K0 in 1:length(K0)){
    if (p_vals$Probability[.K0] > 0.05){
      GrunK0 <- Grun
      # split data by K0
      .iterK0 <- c(1:dim(Grun$P)[1])[Grun$SteadyScore == K0[.K0]]
      GrunK0$Mu <- Grun$Mu[Grun$Mu$Iteration %in% (min(Grun$Mu$Iteration)+.iterK0-1),]
      GrunK0$Cov <- Grun$Cov[Grun$Mu$Iteration %in% (min(Grun$Cov$Iteration)+.iterK0-1),]
      GrunK0$P <- Grun$P[.iterK0,]
      GrunK0$Loglike <- Grun$Loglike[.iterK0]
      GrunK0$Zs <- Grun$Zs[.iterK0,]
      GrunK0$SteadyScore <- Grun$SteadyScore[.iterK0]
      GrunK0us <- QuickSwitchMVN(GrunK0, Propmin)
      GrunK0us_FIN[[.K0]] <- GrunK0us
      Zhat <- factor(apply(t(GrunK0us$Zs), 1, maxZ))
      names(GrunK0us$Pars)[1] <- 'k'
      GrunK0us$Pars$k <- as.numeric(as.character(GrunK0us$Pars$k))
      #
      Ztemp <- GrunK0us$Zs
      # ALLOCATION PROBABILITIES
      ZTable[[.K0]] <- data.frame("myY"=NULL, "k"=NULL, "Prob"=NULL)
      maxK <- max(Ztemp)
      for (i in 1:n){
        rr <- factor(Ztemp[,i], levels=1:maxK)
        ZTable[[.K0]] <- rbind(ZTable[[.K0]],
                               cbind(i, c(1:maxK), matrix(table(rr)/length(rr))))
      }
      names(ZTable[[.K0]]) <- c("Yid", "k", "Prob")
      ZTable[[.K0]]$k <- as.factor(ZTable[[.K0]]$k)
      # COMPUTE means of parameters
      #.par<-melt(GrunK0us$Pars, id.vars=c("Iteration", "k"))
      # Zetc<-aggregate( value~variable+factor(k), mean ,data=.par)
      # CI
      .par <- melt(GrunK0us$Pars, id.vars=c("Iteration", "k"))
      theta <- aggregate(value ~ variable + factor(k), mean, data=.par)
      mu <- round(aggregate(value ~ variable + factor(k), mean, data=.par)[,3], 2)
      ci <- round(aggregate(value ~ variable + factor(k), quantile,
                            c(0.025, 0.975), data=.par)[,3], 2)
      # use this:
      thetaCI <- data.frame("variable"=as.factor(theta[,1]), "k"=theta[,2],
                            "Estimate"=mu, "CI_025"=ci[,1], "CI_975"=ci[,2])
      # thetaCI<-data.frame( theta[,c(1,2)] , "value"=paste( mu, "(", ci[,1] , "," ,ci[,2] ,")", sep="" ))  # old
      K0estimates[[.K0]] <- cbind(thetaCI, "K0"=K0[.K0])
      # PLOTS density pars
      GrunK0us$Pars$k <- as.factor(GrunK0us$Pars$k)
      if (p_vals$Probability[.K0] == max(p_vals$Probability)){
        plot_P <- ggplot(data=GrunK0us$Pars, aes(y=P, x=k)) +
          geom_boxplot(aes(fill=k), outlier.shape = NA) +
          ggtitle(bquote(atop(italic(.(simlabel)), atop("Weights")))) +
          ylab("") + xlab("Components (k)") + theme_bw() +
          theme(legend.position = "none")
        plot_Mu1 <- ggplot(data=GrunK0us$Pars, aes(y=Mu_1, x=k)) +
          geom_boxplot(aes(fill=k), outlier.shape = NA) +
          ggtitle(bquote(atop(italic("Posterior summaries"), atop("Means (Dim 1)")))) +
          ylab("Mean (Dim 1)") + xlab("Cluster (k)") + theme_bw() +
          theme(legend.position = "none")
        plot_Mu2 <- ggplot(data=GrunK0us$Pars, aes(y=Mu_2, x=k)) +
          geom_boxplot(aes(fill=k), outlier.shape = NA) +
          ggtitle(bquote(atop(italic(paste("p(K=", .(K0[.K0]), ")=",
                                           .(p_vals$Probability[.K0]), sep="")),
                              atop("Means (Dim 2)")))) +
          ylab("Mean (Dim 2)") + xlab("Cluster (k)") + theme_bw() +
          theme(legend.position = "none")
        # plot clusters
        names(Y) <- paste("Y", 1:r, sep="_")
        Clusplot <- ggplot(data.frame(Y, "Cluster"=Zhat),
                           aes(y=princomp(Y)$scores[,1], x=princomp(Y)$scores[,2],
                               shape=Cluster, colour=Cluster)) +
          geom_point() + theme_bw() + theme(legend.position = "none")
        Clusplot2 <- ggplot(data.frame(Y, "Cluster"=Zhat),
                            aes(y=princomp(Y)$scores[,1], x=princomp(Y)$scores[,3],
                                shape=Cluster, colour=Cluster)) +
          geom_point() + theme_bw() + theme(legend.position = "none")
        Clusplot3 <- ggplot(data.frame(Y, "Cluster"=Zhat),
                            aes(y=princomp(Y)$scores[,2], x=princomp(Y)$scores[,3],
                                shape=Cluster, colour=Cluster)) +
          geom_point() + theme_bw()
        pdf(file=paste("PPplots_", savelabel, "K_", K0[.K0], ".pdf", sep=""),
            width=10, height=5)
        print(layOut(list(plot_P, 1, 1), list(plot_Mu1, 1, 2), list(plot_Mu2, 1, 3),
                     list(Clusplot, 2, 1), list(Clusplot2, 2, 2), list(Clusplot3, 2, 3)))
        dev.off()
      }
    }
  }
  Final_Pars <- do.call(rbind, K0estimates)
  # BEFORE: list( Final_Pars, p_vals, "Z"=Zhat)

  ###################################
  #
  # TO UPDATE FOR MVN
  #
  # finish PP LOOP
  ########################
  RegionName <- simlabel
  Specs <- DATA_ADDITIONAL
  # number of models found
  NumModel <- length(K0)
  Part1 <- data.frame("Region"=RegionName, "Model_ID"=1:length(p_vals$K0),
                      "P_model"=p_vals$Probability)
  # for each model, get allocation probs and join with ids
  for (ModelID in 1:NumModel){
    modelK0now <- as.numeric(levels(factor(p_vals$K0)))[ModelID]
    kProb <- ZTable[[ModelID]][, -1]  # k and probability of allocation
    names(kProb)[2] <- "P_Allocation"
    kPars <- K0estimates[[ModelID]]   # PARAMETERS
    for (j in 1:modelK0now){
      # BIND ID with allocation probability
      Parameters <- data.frame(subset(K0estimates[[ModelID]], k==j), Part1[ModelID,])
      if (j == 1 & ModelID == 1){
        .df <- merge(cbind("Score"=RegionName, "Model_ID"=ModelID, Specs,
                           subset(kProb, k==j)), Parameters)
      } else {
        .df <- rbind(.df, merge(cbind("Score"=RegionName, "Model_ID"=ModelID, Specs,
                                      subset(kProb, k==j)), Parameters))
      }
    }
  }
  Final_Pars <- do.call(rbind, K0estimates)
  print(p_vals)
  Final_Result <- list(Final_Pars, p_vals, "All"=.df, Zestimates, ZTable,
                       "Pars_us"=GrunK0us_FIN)
  write.csv(.df, file=paste("Zmix_", SaveFileName, ".csv", sep=""))
  save(Final_Result, file=paste("Zmix_", SaveFileName, ".RDATA", sep=""))
  return(Final_Result)
}
/R/Zmix_multi_CSIRO.r
no_license
zoevanhavre/Zmix_devVersion2
R
false
false
12,482
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ped.R
\docType{data}
\name{ped}
\alias{ped}
\title{Simulated data on 20 families.}
\format{
A data frame with 288 rows (corresponding to persons) and 8 variables:
\describe{
\item{family}{an identifier for the person's family}
\item{indiv}{an identifier (ID) for the person}
\item{mother}{the individual ID of the person's mother}
\item{father}{the individual ID of the person's father}
\item{sex}{the person's sex (M = male, F = female)}
\item{aff}{the person's affected status (1 = case, 0 = control)}
\item{age}{the person's age, in years}
\item{typed}{a flag indicating if methylation data is available (1 = available, 0 = unavailable)}
}
}
\source{
Simulated
}
\usage{
ped
}
\description{
A dataset giving the relationship structure of 20 families and phenotypic data
on the family members
}
\keyword{datasets}
/man/ped.Rd
no_license
cran/heritEWAS
R
false
true
894
rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/dbplyHelper.R
\name{unarrange}
\alias{unarrange}
\title{Unarrange an ordered data set}
\usage{
unarrange(remote_df)
}
\arguments{
\item{remote_df}{a df which may be a dbplyr table}
}
\value{
a dbplyr dataframe without any ordering
}
\description{
Unarrange an ordered data set
}
/man/unarrange.Rd
permissive
terminological/tidy-info-stats
R
false
true
359
rd
#' Prediction Intervals and Estimates for New Data - Parallelized #' #' This function takes in a list of linear regression coefficient estimates generated #' by a Bag of Little Bootstraps procedure, and a dataframe of observations without the #' response variable. The response variable for each observation is predicted using #' each vector of coefficient estimates for each sample. Then, empirical prediction #' intervals and point estimates for the response variable of each observation are #' determined for each sample. Afterwards, the endpoints of all intervals are averaged #' to form overall prediction intervals, and point estimates are averaged to form #' overall predictions. Note that the prediction intervals are not simultaneous #' (multiplicity-adjusted) intervals; for Bonferroni-corrected prediction intervals, divide #' the desired value of alpha by the number of observations. The difference between #' this function and PI is that this function uses parallel processing through furrr's #' future_map function. #' #' @param lrbs A linear_reg_bs or linear_reg_bs_par object containing BLB regression #' coefficient estimates. #' @param x A dataframe of the explanatory variables of unseen observations. #' @param alpha The significance level. Default value is 0.05. #' @return The prediction intervals and estimates for the response variable of each #' unseen observation.
#' @importFrom furrr future_map #' @importFrom purrr reduce #' @export PI_par <- function(lrbs, x, alpha = 0.05) { coefs <- lrbs$bootstrap_coefficient_estimates x1 <- as.matrix(cbind(Intercept = 1, x)) preds <- future_map(seq_along(coefs), function(i) { future_map(seq_len(ncol(coefs[[i]])), function(j) x1 %*% coefs[[i]][, j]) }) preds <- future_map(preds, function(sample) matrix(unlist(sample), nrow = nrow(x1))) PIs <- future_map(preds, function(p) apply(p, 1, quantile, probs = c(alpha / 2, 1 - alpha / 2))) fits <- future_map(preds, function(p) apply(p, 1, mean)) PI <- reduce(PIs, `+`) / length(PIs) fit <- reduce(fits, `+`) / length(fits) cbind(Lower_Bounds = PI[1, ], Estimates = fit, Upper_Bounds = PI[2, ]) }
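The furrr pipeline above can be sketched sequentially in base R to show the interval-averaging logic. The coefficient list below is simulated stand-in data, not the output of a real BLB fit.

```r
# Sequential base-R sketch of PI_par's interval-averaging step,
# using simulated stand-ins for the BLB coefficient estimates.
set.seed(1)
coefs <- replicate(3, matrix(rnorm(2 * 100, mean = c(1, 2)), nrow = 2),
                   simplify = FALSE)            # one 2 x 100 matrix per subsample
x1 <- cbind(Intercept = 1, x = c(0.5, 1.5))     # two new observations
alpha <- 0.05

preds <- lapply(coefs, function(cf) x1 %*% cf)  # obs x draws, per subsample
PIs <- lapply(preds, function(p)
  apply(p, 1, quantile, probs = c(alpha / 2, 1 - alpha / 2)))
fits <- lapply(preds, rowMeans)

PI  <- Reduce(`+`, PIs)  / length(PIs)          # average interval endpoints
fit <- Reduce(`+`, fits) / length(fits)         # average point estimates
out <- cbind(Lower_Bounds = PI[1, ], Estimates = fit, Upper_Bounds = PI[2, ])
out
```

Swapping `lapply` for `furrr::future_map` and `Reduce` for `purrr::reduce` recovers the parallel version.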
/R/PI_par.R
no_license
STA141c-LNRW/STA141CFinal
R
false
false
2,120
r
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/api_url.R \name{query_string} \alias{query_string} \title{Build a query with parameters} \usage{ query_string(...) } \arguments{ \item{...}{named arguments to be turned into query parameters.} } \description{ Build a query with parameters } \examples{ query_string(foo="a", bar=NULL, bob=1) }
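The example hints that NULL parameters (`bar`) are dropped. Below is a hypothetical base-R sketch of such a helper; `query_string_sketch`, the leading "?", and the encoding choice are assumptions, not the actual ecotaxar implementation.

```r
# Hypothetical sketch of a query-string builder: NULL arguments are
# dropped, the rest are URL-encoded and joined with "&".
query_string_sketch <- function(...) {
  args <- list(...)
  args <- args[!vapply(args, is.null, logical(1))]  # bar = NULL is dropped
  if (length(args) == 0) return("")
  paste0(
    "?",
    paste(names(args),
          vapply(args, function(v)
            utils::URLencode(as.character(v), reserved = TRUE),
            character(1)),
          sep = "=", collapse = "&")
  )
}
query_string_sketch(foo = "a", bar = NULL, bob = 1)
# -> "?foo=a&bob=1"
```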
/man/query_string.Rd
no_license
wilson0106/ecotaxar
R
false
true
371
rd
#' Top Min-Max Scaled TF-IDF terms #' #' View the top n min-max scaled tf-idf weighted terms in a text. #' #' @param text.var A vector of character strings. #' @param n The number of rows to print. If an integer, selects the frequency at #' the nth row and prints all rows >= that value. If proportional (less than 1), #' the frequency value for the nth\% row is selected and all rows >= that #' value are printed. #' @param stopwords A vector of stopwords to exclude. #' @param stem logical. If \code{TRUE} the \code{\link[SnowballC]{wordStem}} #' is used with \code{language = "porter"} as the default. Note that stopwords #' will be stemmed as well. #' @param language The stem language to use (see \code{\link[SnowballC]{wordStem}}). #' @param strip logical. If \code{TRUE} all values that are not alpha, apostrophe, #' or spaces are stripped. This regex can be changed via the \code{strip.regex} #' argument. #' @param strip.regex A regular expression used for stripping undesired characters. #' @param \ldots Additional arguments passed to \code{\link[gofastr]{remove_stopwords}}. #' @return Returns a \code{\link[base]{data.frame}} of terms and min-max scaled tf-idf weights. #' @keywords important #' @export #' @examples #' x <- presidential_debates_2012[["dialogue"]] #' #' frequent_terms(x) #' important_terms(x) #' important_terms(x, n=899) #' important_terms(x, n=.1) #' important_terms(x, min.char = 7) #' important_terms(x, min.char = 6, stem=TRUE) important_terms <- function (text.var, n = 20, stopwords = tm::stopwords("en"), stem = FALSE, language = "porter", strip = TRUE, strip.regex = "[^A-Za-z' ]", ...) { if (isTRUE(strip)) text.var <- gsub(strip.regex, " ", text.var) if (isTRUE(stem)){ dtm <- gofastr::q_dtm_stem(text.var) stopwords <- gofastr:::stem(stopwords) } else { dtm <- gofastr::q_dtm(text.var) } if (!is.null(stopwords)) dtm <- gofastr::remove_stopwords(dtm, stopwords = stopwords, stem = stem, ...)
dtm <- suppressWarnings(tm::weightTfIdf(dtm)) sorted <- sort(minmax_scale(slam::col_sums(dtm)/nrow(dtm)), TRUE) out <- data.frame(term = names(sorted), tf_idf = unlist(sorted, use.names=FALSE), stringsAsFactors = FALSE, row.names=NULL) class(out) <- c("important_terms", class(out)) attributes(out)[["n"]] <- n out } #' Prints an important_terms Object #' #' Prints an important_terms object #' #' @param x An important_terms object. #' @param n The number of rows to print. If an integer, selects the frequency at #' the nth row and prints all rows >= that value. If proportional (less than 1), #' the frequency value for the nth\% row is selected and all rows >= that #' value are printed. #' @param \ldots Ignored. #' @method print important_terms #' @export print.important_terms <- function(x, n = NULL, ...){ if (is.null(n)) n <- attributes(x)[['n']] if (n < 1) n <- ceiling(n * nrow(x)) if (n > nrow(x)) n <- nrow(x) x <- rm_class(x, 'important_terms') print(x[x[['tf_idf']] >= x[n, 'tf_idf'], ] ) }
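`minmax_scale()` is called above but defined elsewhere in the package; the sketch below shows the standard min-max rescaling to [0, 1] it presumably performs (an assumption), together with the proportional-n rule used by the print method.

```r
# Presumed behaviour of minmax_scale(): rescale weights to [0, 1].
minmax_scale_sketch <- function(x) (x - min(x)) / (max(x) - min(x))

w <- c(0.2, 0.5, 1.1, 2.0)
round(minmax_scale_sketch(w), 3)
# -> 0.000 0.167 0.500 1.000

# The print method treats n below 1 as a fraction of the rows:
n <- 0.1; nrow_x <- 899
if (n < 1) n <- ceiling(n * nrow_x)
n
# -> 90
```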
/R/important_terms.R
no_license
jimhester/termco
R
false
false
3,026
r
% Generated by roxygen2: do not edit by hand % Please edit documentation in R/Met_distributionH.R \name{compP} \alias{compP} \alias{compP,distributionH,numeric-method} \alias{compP,distributionH-method} \title{Method \code{compP}} \usage{ compP(object, q) \S4method{compP}{distributionH,numeric}(object, q) } \arguments{ \item{object}{is an object of \env{distributionH} class} \item{q}{is a numeric value} } \value{ Returns a value between 0 and 1. } \description{ Compute the cdf probability at a given value for a histogram } \examples{ ## ---- A mydist distribution ---- mydist <- distributionH(x = c(1, 2, 3, 10), p = c(0, 0.1, 0.5, 1)) ## ---- Compute the cdf value for q=5 (not observed) ---- p <- compP(mydist, 5) } \keyword{distribution}
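For a histogram-valued distribution the cdf stored as (x, p) breakpoints is piecewise linear within bins (density uniform in each bin), so evaluating it at an unobserved q amounts to linear interpolation. A base-R sketch (not the HistDAWass internals) reproduces the example:

```r
# The cdf of a histogram distribution is piecewise linear between the
# stored (x, p) breakpoints, so compP() at an unobserved q is, in effect,
# linear interpolation over those pairs.
x <- c(1, 2, 3, 10)
p <- c(0, 0.1, 0.5, 1)
compP_sketch <- function(x, p, q) stats::approx(x, p, xout = q)$y
compP_sketch(x, p, 5)
# -> 0.6428571  (0.5 + (5 - 3) / (10 - 3) * (1 - 0.5))
```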
/man/compP-methods.Rd
no_license
Airpino/HistDAWass
R
false
true
750
rd
library("dplyr", warn.conflicts = FALSE) library("readr") library("latlon2map") options(timeout = 6000) params <- readr::read_csv(file = "params.csv", col_types = readr::cols( param = readr::col_character(), value = readr::col_double() )) dataset_eu_folder <- paste0("07-dataset_eu", "-", "lau_", params %>% dplyr::filter(param == "lau_year") %>% dplyr::pull(value), "-", "pop_grid_", params %>% dplyr::filter(param == "pop_grid_year") %>% dplyr::pull(value) ) dataset_eu_file <- fs::path(dataset_eu_folder, "by_lau_all_years.csv.gz") original_df <- readr::read_csv("~/Downloads/datasets/dashboard_data_source/lau_lvl_data_temperatures_eu.csv") # gisco id with valid id and within the relevant area gisco_id_within_area_covered_2018_df <- ll_get_lau_eu(year = 2018) %>% sf::st_transform(crs = 4326) %>% sf::st_filter(y = readr::read_rds("area_covered_sf.rds") %>% sf::st_transform(crs = 4326), .predicate = sf::st_within) %>% sf::st_drop_geometry() %>% dplyr::filter(is.na(LAU_ID)==FALSE) # exclude LAU in Montenegro and Kosovo with no valid data # all(sf::st_is_valid( ll_get_lau_eu(year = 2019) %>% sf::st_make_valid())) # to deal with issues in the geometry, see also: https://gis.stackexchange.com/questions/404385/r-sf-some-edges-are-crossing-in-a-multipolygon-how-to-make-it-valid-when-using/404454 # or just crs to 3857 # sf_use_s2(TRUE) gisco_id_within_area_covered_2019_df <- ll_get_lau_eu(year = 2019) %>% sf::st_transform(crs = 3857) %>% sf::st_make_valid() %>% sf::st_filter(y = readr::read_rds("area_covered_sf.rds") %>% sf::st_transform(crs = 3857), .predicate = sf::st_within) %>% sf::st_drop_geometry() %>% dplyr::filter(is.na(LAU_ID)==FALSE) # exclude LAU in Montenegro and Kosovo with no valid data gisco_id_within_area_covered_2020_df <- ll_get_lau_eu(year = 2020) %>% sf::st_transform(crs = 3857) %>% sf::st_filter(y = readr::read_rds("area_covered_sf.rds") %>% sf::st_transform(crs = 3857), .predicate = sf::st_within) %>% sf::st_drop_geometry() %>% 
dplyr::filter(is.na(LAU_ID)==FALSE) # exclude LAU in Montenegro and Kosovo with no valid data check_df <- readr::read_csv(file = dataset_eu_file, col_types = cols( CNTR_CODE = col_character(), NUTS_2_ID = col_character(), NUTS_3_ID = col_character(), GISCO_ID = col_character(), LAU_LABEL = col_character(), avg_1961_1970 = col_double(), year = col_double(), avg_year = col_double(), variation_year = col_double(), avg_2009_2018 = col_double(), variation_periods = col_double(), lon = col_double(), lat = col_double() )) %>% dplyr::filter(year>1970) ### temporarily exclude missing countries#### nrow(check_df %>%distinct(CNTR_CODE)) check_df <- check_df %>% dplyr::filter(!is.element(CNTR_CODE, c("CY", "IS", "LU", "MK","RS"))) nrow(check_df %>%distinct(CNTR_CODE)) gisco_id_within_area_covered_2018_df <- gisco_id_within_area_covered_2018_df %>% dplyr::filter(!is.element(CNTR_CODE, c("CY", "IS", "LU", "MK","RS"))) gisco_id_within_area_covered_2019_df <- gisco_id_within_area_covered_2019_df %>% dplyr::filter(!is.element(CNTR_CODE, c("CY", "IS", "LU", "MK","RS"))) #### testing ##### library("testthat") test_that("check if same column names", { expect_equal(object = colnames(original_df), expected = colnames(check_df)) }) test_that("Same countries included in the dataset", { expect_equal(object = { original_countries <- original_df %>% dplyr::distinct(CNTR_CODE) %>% dplyr::arrange(CNTR_CODE) %>% dplyr::pull(CNTR_CODE) original_countries }, expected = { check_countries <- check_df %>% dplyr::distinct(CNTR_CODE) %>% dplyr::arrange(CNTR_CODE) %>% dplyr::pull(CNTR_CODE) check_countries }) check_countries[!is.element(check_countries, original_countries)] original_countries[!is.element(original_countries, check_countries)] }) test_that("Same NUTS2 included", { original_nuts2_df <- original_df %>% dplyr::distinct(NUTS_2_ID) %>% dplyr::arrange(NUTS_2_ID) %>% dplyr::select(NUTS_2_ID) check_nuts2_df <- check_df %>% dplyr::distinct(NUTS_2_ID) %>% dplyr::arrange(NUTS_2_ID) %>% 
dplyr::select(NUTS_2_ID) expect_equal(object = { nuts2_not_in_check <- original_nuts2_df %>% dplyr::anti_join(check_nuts2_df, by = "NUTS_2_ID") nuts2_not_in_original <- check_nuts2_df %>% dplyr::anti_join(original_nuts2_df, by = "NUTS_2_ID") sum(nrow(nuts2_not_in_check), nrow(nuts2_not_in_original)) original_df %>% dplyr::distinct(NUTS_2_ID, .keep_all = TRUE) %>% dplyr::filter(CNTR_CODE == "AL") original_df %>% dplyr::distinct(GISCO_ID, .keep_all = TRUE) %>% dplyr::filter(CNTR_CODE == "EE") }, expected = 0 ) expect_equal(object = { original_nuts2_df }, expected = { check_nuts2_df }) }) test_that("Same number of rows", { expect_equal(object = { nrow_original <- original_df %>% nrow() nrow_original }, expected = { nrow_check <- check_df %>% nrow() nrow_check }) }) test_that("Same LAU included in the dataset", { library("latlon2map") library("sf") options(timeout = 6000) # # lau_2018_v <- ll_get_lau_eu(year = 2018) %>% # sf::st_drop_geometry() %>% # dplyr::distinct(GISCO_ID) %>% # dplyr::arrange(GISCO_ID) %>% # dplyr::pull(GISCO_ID) # # lau_2019_v <- ll_get_lau_eu(year = 2019) %>% # sf::st_drop_geometry() %>% # dplyr::distinct(GISCO_ID) %>% # dplyr::arrange(GISCO_ID) %>% # dplyr::pull(GISCO_ID) # check_df %>% dplyr::filter(GISCO_ID == "ME_") check_df %>% dplyr::filter(NUTS_2_ID == "PT20") gisco_id_within_area_covered_2018_df %>% dplyr::anti_join(y = check_df, by = "GISCO_ID") %>% distinct(CNTR_CODE) check_df %>% dplyr::anti_join(y = gisco_id_within_area_covered_2018_df, by = "GISCO_ID") %>% View() # dplyr::distinct(GISCO_ID) %>% # dplyr::arrange(GISCO_ID) %>% # dplyr::pull(GISCO_ID) expect_true(object = { original_gisco_id <- original_df %>% dplyr::distinct(GISCO_ID) %>% dplyr::arrange(GISCO_ID) %>% dplyr::pull(GISCO_ID) check_gisco_id <- check_df %>% dplyr::distinct(GISCO_ID) %>% dplyr::arrange(GISCO_ID) %>% dplyr::pull(GISCO_ID) theoretic_gisco_id <- gisco_id_within_area_covered_2019_df %>% dplyr::distinct(GISCO_ID) %>% dplyr::arrange(GISCO_ID) %>% 
dplyr::pull(GISCO_ID) length(check_gisco_id) length(original_gisco_id) length(theoretic_gisco_id) unmatched_gisco_id_original <- original_gisco_id[is.element(check_gisco_id, original_gisco_id)==FALSE] unmatched_gisco_id_check <- check_gisco_id[is.element(original_gisco_id, check_gisco_id)==FALSE] length(unmatched_gisco_id_check) ll_get_lau_nuts_concordance(lau_year = 2018) %>% dplyr::filter(is.element(gisco_id, unmatched_gisco_id_check)) %>% dplyr::distinct(nuts_2) }) original_df %>% dplyr::filter(CNTR_CODE == "PT") %>% dplyr::distinct(GISCO_ID) %>% nrow() gisco_id_within_area_covered_2019_df %>% dplyr::filter(CNTR_CODE == "PT") %>% dplyr::distinct(GISCO_ID) %>% nrow() }) test_that("Same data for the same LAU", { expect_true(object = { common_gisco_df <- dplyr::semi_join(check_df %>% dplyr::distinct(GISCO_ID), original_df %>% dplyr::distinct(GISCO_ID), by = "GISCO_ID") combo_df <- dplyr::bind_rows(original = original_df, check = check_df, .id = "source") head(combo_df) combo_df %>% dplyr::distinct(source, GISCO_ID, avg_1961_1970) %>% tidyr::pivot_wider(names_from = source, values_from = avg_1961_1970) %>% dplyr::mutate(difference = original-check) %>% dplyr::arrange(dplyr::desc(difference)) %>% dplyr::filter(difference>0.1) %>% dplyr::arrange(difference) combo_df %>% dplyr::distinct(source, GISCO_ID, variation_periods) %>% tidyr::pivot_wider(names_from = source, values_from = variation_periods) %>% dplyr::mutate(difference = original-check) %>% dplyr::arrange(dplyr::desc(difference)) %>% dplyr::filter(difference>0.1) %>% dplyr::arrange(difference) double_gisco_id <- ll_get_lau_eu(year = 2018) %>% dplyr::group_by(CNTR_CODE, LAU_NAME) %>% add_count() %>% ungroup() %>% filter(n>2) %>% dplyr::select(GISCO_ID) %>% sf::st_drop_geometry() combo_df %>% dplyr::anti_join(y = double_gisco_id, by = "GISCO_ID") %>% dplyr::distinct(source, GISCO_ID, avg_1961_1970) %>% tidyr::pivot_wider(names_from = source, values_from = avg_1961_1970) %>% dplyr::mutate(difference = 
original-check) %>% dplyr::arrange(dplyr::desc(difference)) %>% dplyr::filter(difference>0.5) %>% dplyr::arrange(difference) combo_df %>% dplyr::anti_join(y = double_gisco_id, by = "GISCO_ID") %>% dplyr::distinct(source, GISCO_ID, variation_periods) %>% tidyr::pivot_wider(names_from = source, values_from = variation_periods) %>% dplyr::mutate(difference = original-check) %>% dplyr::arrange(dplyr::desc(difference)) %>% dplyr::filter(difference>0.5) %>% dplyr::arrange(difference) reference_geometry_sf <- readRDS("reference_geometry_sf.rds") combo_df %>% dplyr::anti_join(y = double_gisco_id, by = "GISCO_ID") %>% dplyr::distinct(source, GISCO_ID, lon) %>% tidyr::pivot_wider(names_from = source, values_from = lon) %>% dplyr::mutate(difference = original-check) %>% dplyr::arrange(dplyr::desc(difference)) %>% dplyr::filter(difference>0.5) %>% dplyr::arrange(difference) }) }) dplyr::all_equal(original_df, check_df) dplyr::setdiff(original_df, check_df)
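The anti_join / is.element comparisons above boil down to set differences on the GISCO_ID vectors; a base-R sketch with invented IDs:

```r
# Finding IDs present in one dataset but not the other reduces to
# setdiff() on the ID vectors (IDs below are made up for illustration).
original_ids <- c("IT_022001", "IT_022002", "PT_1101")
check_ids    <- c("IT_022001", "PT_1101", "SK_511188")

setdiff(original_ids, check_ids)  # in original but not in check
# -> "IT_022002"
setdiff(check_ids, original_ids)  # in check but not in original
# -> "SK_511188"
identical(sort(original_ids), sort(check_ids))
# -> FALSE
```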
/step_08b.R
permissive
giocomai/mescan_surfex_2m
R
false
false
11,340
r
# Copyright 2017 Province of British Columbia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

## Load in packages
library(purrr)
library(dplyr)
library(dbplyr)

## Create a subset of the data
hydat_con <- DBI::dbConnect(RSQLite::SQLite(), file.path(hy_dir(), "Hydat.sqlite3"))

all_tables <- DBI::dbListTables(hydat_con)
#table_vector <- DBI::dbListTables(hydat_con)

# Don't need all the tables. I will export what I need for testing
table_vector <- c("ANNUAL_INSTANT_PEAKS", "ANNUAL_STATISTICS", "DLY_FLOWS",
                  "DLY_LEVELS", "SED_DLY_LOADS", "SED_DLY_SUSCON", "SED_SAMPLES",
                  "SED_SAMPLES_PSD", "STATIONS", "STN_REGULATION", "STN_REMARKS",
                  "STN_DATUM_CONVERSION", "STN_DATA_RANGE", "STN_DATA_COLLECTION",
                  "STN_OPERATION_SCHEDULE", "STN_DATUM_UNRELATED")

## List of tables with STATION_NUMBER information
list_of_small_tables <- table_vector %>%
  map(~ tbl(src = hydat_con, .) %>%
        filter(STATION_NUMBER %in% c("08MF005", "08NM083", "08NE102",
                                     "08AA003", "05AA008", "01AP003")) %>%
        collect()) %>%
  set_names(table_vector)

## All tables without STATION_NUMBER
no_stn_table_vector <- all_tables[!all_tables %in% table_vector]

list_of_no_stn_tables <- no_stn_table_vector %>%
  map(~ tbl(src = hydat_con, .) %>%
        head(50) %>%
        collect()) %>%
  set_names(no_stn_table_vector)

SED_DATA_TYPES <- dplyr::tbl(hydat_con, "SED_DATA_TYPES") %>% collect()

DBI::dbDisconnect(hydat_con)

## Create the new smaller database
createIndex <- TRUE
db_path <- "./inst/test_db/tinyhydat.sqlite3"
con <- DBI::dbConnect(RSQLite::SQLite(), db_path)

## Do this in a loop - uncertain how to do it with purrr.
## Because this isn't a regularly run item I'll leave it as is.
## Loop for tables with station info
for (i in seq_along(table_vector)) {
  DBI::dbWriteTable(con, table_vector[i], list_of_small_tables[[i]], overwrite = TRUE)
}

## Tables without station info
for (i in seq_along(no_stn_table_vector)) {
  DBI::dbWriteTable(con, no_stn_table_vector[i], list_of_no_stn_tables[[i]], overwrite = TRUE)
}

#DBI::dbWriteTable(con, "SED_DATA_TYPES", SED_DATA_TYPES, overwrite = TRUE)

DBI::dbDisconnect(con)

## Check to make sure the tables that I want are in the db
testdb <- DBI::dbConnect(RSQLite::SQLite(), db_path)

## Check to make sure all the tables from HYDAT are in the test db
all(DBI::dbListTables(testdb) %in% all_tables) == TRUE

DBI::dbDisconnect(testdb)
/data-raw/HYDAT_internal_data/tinyhydat_proc.R
permissive
JingningShi/tidyhydat
R
false
false
2,940
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/plotting_functions.R
\name{osb_DotPlot}
\alias{osb_DotPlot}
\title{generate dot plot}
\usage{
osb_DotPlot(dataset, changes_in, for_the, across, formatter = function(x) x)
}
\arguments{
\item{dataset}{The source dataset}

\item{changes_in}{The variable to show changes in}

\item{for_the}{The variable to split those changes by}

\item{across}{The two timeframes for the changes to occur}

\item{formatter}{A formatter function to control the labels}
}
\value{
}
\description{
generate dot plot
}
/man/osb_DotPlot.Rd
permissive
JDOsborne1/OSButils
R
false
true
575
rd
# Exercise 3: writing and executing functions

# Define a function `add_three` that takes a single argument and
# returns a value 3 greater than the input
add_three <- function(x) {
  result <- x + 3
  result
}

# Create a variable `ten` that is the result of passing 7 to your `add_three`
# function
ten <- add_three(7)

# Define a function `imperial_to_metric` that takes in two arguments: a number
# of feet and a number of inches
# The function should return the equivalent length in meters
imperial_to_metric <- function(feet, inches) {
  total_inches <- 12 * feet + inches
  meters <- .025 * total_inches
  meters
}

# Create a variable `height_in_meters` by passing your height in imperial to the
# `imperial_to_metric` function
height_in_meters <- imperial_to_metric(6, 2)
/week-2/exercise.R
no_license
Soitzai/lab-exercises
R
false
false
782
r
library(umx)

### Name: umxMatrix
### Title: Make a mxMatrix with automatic labels. Also takes name as the
###   first parameter for more readable code.
### Aliases: umxMatrix

### ** Examples

umxMatrix("test", "Full", 1, 1)
/data/genthat_extracted_code/umx/examples/umxMatrix.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
230
r
#' Visualize the change of connection weights between a specific outcome and all
#' cues.
#'
#' @description Visualize the change of connection weights between a specific
#' outcome and all cues.
#' @export
#' @import graphics
#' @import plotfunctions
#' @import grDevices
#' @param wmlist A list with weightmatrices, generated by
#' \code{\link{RWlearning}} or \code{\link{updateWeights}}.
#' @param outcome String: outcome for which to extract the connection weights.
#' @param select.cues Optional selection of cues to limit the number
#' of connection weights that are returned. The value of NULL (default) will
#' return all connection weights. Note that specified values that are not in
#' the weightmatrices will return the initial value without error or warning.
#' Please use \code{\link{getOutcomes}} for returning all outcomes from the
#' data, and \code{\link{getValues}} for returning all outcomes in the data.
#' @param init.value Value of connection weights for non-existing connections.
#' Typically set to 0.
#' @param add.labels Logical: whether or not to add labels for the lines.
#' Defaults to TRUE, see examples.
#' @param add Logical: whether or not to add the lines to an existing plot.
#' Defaults to FALSE (starting a new plot).
#' @param ... Optional graphical arguments, as specified in
#' \code{\link[graphics]{par}}. These parameters are forwarded to the functions
#' \code{\link[plotfunctions]{emptyPlot}}, \code{\link[graphics]{lines}}, and
#' \code{\link[graphics]{text}}.
#' @return Optionally a list with label specifications is returned, which
#' allows to plot your own labels. This may be helpful for very long labels,
#' and for overlapping lines.
#' @seealso \code{\link{plotCueWeights}}, \code{\link{getWeightsByOutcome}},
#' \code{\link{getWeightsByCue}}
#' @author Jacolien van Rij
#' @examples
#' # load example data:
#' data(dat)
#'
#' # add obligatory columns Cues, Outcomes, and Frequency:
#' dat$Cues <- paste("BG", dat$Shape, dat$Color, sep="_")
#' dat$Outcomes <- dat$Category
#' dat$Frequency <- dat$Frequency1
#' head(dat)
#' dim(dat)
#'
#' # now use createTrainingData to sample from the specified frequencies:
#' train <- createTrainingData(dat)
#'
#' # this training data can actually be used to train the network:
#' wm <- RWlearning(train)
#'
#' # plot connection weights for outcome = 'vehicle':
#' plotOutcomeWeights(wm, outcome="vehicle")
#'
#' # plot your own labels:
#' labels <- plotOutcomeWeights(wm, outcome="vehicle", add.labels=FALSE)
#' legend_margin('topright', legend=labels$labels, col=labels$col,
#'     lwd=1, bty='n')
#'
#' # change color and select cues:
#' out <- getValues(train$Cues, unique=TRUE)
#' out <- out[! out %in% c("car", "bicycle")]
#' labels <- plotOutcomeWeights(wm, outcome="vehicle", add.labels=FALSE,
#'     ylim=c(-.5,1), col=alpha(1), select.cues=out)
#' lab2 <- plotOutcomeWeights(wm, outcome="vehicle", add.labels=FALSE,
#'     select.cues=c("car", "bicycle"), add=TRUE, col=2, lwd=2, xpd=TRUE)
#' legend_margin('topright', legend=c(labels$labels, c("car", "bicycle")),
#'     col=c(labels$col, lab2$col), lwd=c(labels$lwd, lab2$lwd),
#'     lty=c(labels$lty, lab2$lty))
#'
plotOutcomeWeights <- function(wmlist, outcome, select.cues=NULL,
    init.value=0, add.labels=TRUE, add=FALSE, ...){

    dat <- getWeightsByOutcome(wmlist=wmlist, outcome=outcome,
        select.cues=select.cues, init.value=init.value)

    par <- list(...)
    xrange <- c(1, nrow(dat))
    yrange <- range(dat, na.rm=TRUE)
    col <- rainbow(ncol(dat))
    lty <- 1:ncol(dat)
    lwd <- rep(1, ncol(dat))

    if('xlim' %in% names(par)){
        xrange <- par[['xlim']]
        par[['xlim']] <- NULL
    }
    if('ylim' %in% names(par)){
        yrange <- par[['ylim']]
        par[['ylim']] <- NULL
    }
    if('col' %in% names(par)){
        col <- par[['col']]
        if(length(col)==1){
            col <- rep(col, ceiling(ncol(dat) / length(col)))[1:ncol(dat)]
        }
        par[['col']] <- NULL
    }
    if('lty' %in% names(par)){
        lty <- par[['lty']]
        if(length(lty)==1){
            lty <- rep(lty, ceiling(ncol(dat) / length(lty)))[1:ncol(dat)]
        }
        par[['lty']] <- NULL
    }
    if('lwd' %in% names(par)){
        lwd <- par[['lwd']]
        if(length(lwd)==1){
            lwd <- rep(lwd, ceiling(ncol(dat) / length(lwd)))[1:ncol(dat)]
        }
        par[['lwd']] <- NULL
    }
    if(!'main' %in% names(par)){
        par[['main']] <- sprintf("Outcome: \"%s\"", outcome)
    }
    if(!'xlab' %in% names(par)){
        par[['xlab']] <- "Input"
    }
    if(!'ylab' %in% names(par)){
        par[['ylab']] <- "Connection weight"
    }
    if(!'h0' %in% names(par)){
        par[['h0']] <- 0
    }

    line.par  <- c("type", "pch", "bg", "cex", "lend", "ljoin", "lmitre")
    label.par <- c("font", "adj", "pos", "offset", "vfont", "cex", "srt",
        "family", "crt", "lheight")

    plotspec <- list2str(x=names(par)[!names(par) %in% c(line.par, label.par)], par)
    if(plotspec != ""){
        plotspec <- paste(",", plotspec)
    }
    linespec <- list2str(line.par, par)
    if(linespec != ""){
        linespec <- paste(",", linespec)
    }
    labelspec <- list2str(label.par, par)
    if(labelspec != ""){
        labelspec <- paste(",", labelspec)
    }

    if(add==FALSE){
        eval(parse(text=sprintf("emptyPlot(xrange, yrange %s)", plotspec)))
    }
    for(i in 1:ncol(dat)){
        eval(parse(text=sprintf(
            "lines(dat[,i], col=col[i], lty=lty[i], lwd=lwd[i] %s, xpd=NA)",
            linespec)))
    }
    if(add.labels){
        for(i in 1:ncol(dat)){
            eval(parse(text=sprintf(
                "text(nrow(dat), dat[nrow(dat),i], labels=names(dat)[i], col=col[i] %s, xpd=NA)",
                labelspec)))
        }
    }
    invisible(list(labels=names(dat), x=rep(nrow(dat), ncol(dat)),
        y=unlist(dat[nrow(dat),]), col=col, lty=lty, lwd=lwd))
}
/R/plotOutcomeWeights.R
no_license
cran/edl
R
false
false
5,554
r
library(tidyverse)
library(sf)

hospital <- tibble(
  longitude = c(80.15998, 72.89125, 77.65032, 77.60599),
  latitude = c(12.90524, 19.08120, 12.97238, 12.90927)
)

people <- tibble(
  longitude = c(72.89537, 77.65094, 73.95325, 72.96746, 77.65058, 77.66715,
                77.64214, 77.58415, 77.76180, 76.65470, 76.65480, 76.65490,
                76.65500, 76.65560, 76.65560),
  latitude = c(19.07726, 13.03902, 18.50330, 19.16764, 12.90871, 13.01693,
               13.00954, 12.92079, 13.02212, 12.81447, 12.81457, 12.81467,
               12.81477, 12.81487, 12.81497)
)

hospital_sf <- hospital %>%
  st_as_sf(coords = c("longitude", "latitude")) %>%
  st_set_crs(4326)

people_sf <- people %>%
  st_as_sf(coords = c("longitude", "latitude")) %>%
  st_set_crs(4326)

distances <- st_distance(people_sf, hospital_sf) %>%
  as_tibble() %>%
  mutate_at(vars(V1:V4), as.numeric) %>%
  mutate_at(vars(V1:V4), function(x) x < 2000) %>%
  mutate(within_2km = pmap_lgl(., function(V1, V2, V3, V4) any(V1, V2, V3, V4)))
/stackoverflow/hospital_distances.R
no_license
Zedseayou/reprexes
R
false
false
1,025
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/parallelization.R
\name{time_display}
\alias{time_display}
\title{Display time elapsed and projected completion time.}
\usage{
time_display(i, n_chunks, start_time)
}
\arguments{
\item{i}{The current chunk under analysis}

\item{n_chunks}{number of total chunks}

\item{start_time}{The start time of the first chunk}
}
\value{
a character object.
}
\description{
Display time elapsed and projected completion time.
}
/man/time_display.Rd
permissive
adrisede/lowcat
R
false
true
495
rd
library(odds.converter)

### Name: odds.dec2hk
### Title: Convert Decimal Odds to Hong Kong odds
### Aliases: odds.dec2hk

### ** Examples

odds.dec2hk(c(1.93, 2.05))
/data/genthat_extracted_code/odds.converter/examples/odds.dec2hk.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
170
r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/zzz.R
\docType{data}
\name{dwdbase}
\alias{dwdbase}
\alias{gridbase}
\title{DWD FTP Server base URL}
\format{
An object of class \code{character} of length 1.
}
\usage{
dwdbase
}
\description{
base URLs to the DWD FTP Server\cr\cr
\strong{\code{dwdbase}}: observed climatic records at\cr
\url{ftp://opendata.dwd.de/climate_environment/CDC/observations_germany/climate}\cr
An overview of available datasets and usage suggestions can be found at\cr
\url{https://bookdown.org/brry/rdwd/available-datasets.html}\cr
\url{https://bookdown.org/brry/rdwd/station-selection.html}\cr\cr\cr
\strong{\code{gridbase}}: spatially interpolated gridded data at\cr
\url{ftp://opendata.dwd.de/climate_environment/CDC/grids_germany}\cr
Usage instructions can be found at\cr
\url{https://bookdown.org/brry/rdwd/raster-data.html}
}
\keyword{datasets}
/man/dwdbase.Rd
no_license
NandhiniS08/rdwd
R
false
true
908
rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Result.R
\docType{methods}
\name{dbFetch}
\alias{dbFetch}
\alias{dbFetch,AthenaResult-method}
\title{Fetch records from previously executed query}
\usage{
\S4method{dbFetch}{AthenaResult}(res, n = -1, ...)
}
\arguments{
\item{res}{An object inheriting from \linkS4class{DBIResult}, created by
\code{\link[DBI:dbSendQuery]{dbSendQuery()}}.}

\item{n}{maximum number of records to retrieve per fetch. Use \code{n = -1}
or \code{n = Inf} to retrieve all pending records. Some implementations may
recognize other special values. Currently chunk sizes range from 0 to 999;
if the entire data frame is required, use \code{n = -1} or \code{n = Inf}.}

\item{...}{Other arguments passed on to methods.}
}
\value{
\code{dbFetch()} returns a data frame.
}
\description{
Currently returns the top n elements (rows) from the result set, or returns
the entire table from Athena.
}
\examples{
\dontrun{
# Note:
# - Requires an AWS account to run the example below.
# - Different connection methods can be used; please see the
#   `RAthena::dbConnect` documentation.
library(DBI)

# Demo connection to Athena using profile name
con <- dbConnect(RAthena::athena())

res <- dbSendQuery(con, "show databases")
dbFetch(res)
dbClearResult(res)

# Disconnect from Athena
dbDisconnect(con)
}
}
\seealso{
\code{\link[DBI]{dbFetch}}
}
/man/dbFetch.Rd
no_license
mstei4176/RAthena
R
false
true
1,361
rd
library(qcc)

### Name: process.capability
### Title: Process capability analysis
### Aliases: process.capability
### Keywords: htest hplot

### ** Examples

data(pistonrings)
attach(pistonrings)
diameter <- qcc.groups(diameter, sample)

q <- qcc(diameter[1:25,], type="xbar", nsigmas=3, plot=FALSE)
process.capability(q, spec.limits=c(73.95, 74.05))
process.capability(q, spec.limits=c(73.95, 74.05), target=74.02)
process.capability(q, spec.limits=c(73.99, 74.01))
process.capability(q, spec.limits=c(73.99, 74.1))
/data/genthat_extracted_code/qcc/examples/process.capability.Rd.R
no_license
surayaaramli/typeRrh
R
false
false
519
r
#-----------------------------------------------------------#
#------------------ Making the BK Database -----------------#
#-----------------------------------------------------------#

#Load data collected from EZ Brew spreadsheet
suppressWarnings(suppressMessages(library(gdata))) #Suppress loading info on Excel reading package
suppressWarnings(suppressMessages(library(sqldf)))

Grains_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Grains", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")
Extracts_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Extracts", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")
Adjuncts_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Adjuncts", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")
Hops_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Hops", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")
Yeast_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Yeast", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")
Style_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Styles", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")
Spices_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Spices", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")
SystemSpecific_Info <- read.xls("Brewing_Constants.xlsx", sheet = "SystemSpecificInformation", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")
Gravity_Correction_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Gravity_Correction_Chart", stringsAsFactors = F)#, perl="C:/Strawberry/perl/bin/perl.exe")

setwd("Brewing_Constants")
Grains_Info <- read.csv(file = "grains.csv", header = T, stringsAsFactors = F)
Extracts_Info <- read.csv(file = "extracts.csv", header = T, stringsAsFactors = F)
Adjuncts_Info <- read.csv(file = "adjuncts.csv", header = T, stringsAsFactors = F)
Hops_Info <- read.csv(file = "hops.csv", header = T, stringsAsFactors = F)
Yeast_Info <- read.csv(file = "yeast.csv", header = T, stringsAsFactors = F)
Style_Info <- read.csv(file = "styles.csv", header = T, stringsAsFactors = F)
Spices_Info <- read.csv(file = "spices.csv", header = T, stringsAsFactors = F)
SystemSpecific_Info <- read.csv(file = "systemSpecificInformation.csv", header = T, stringsAsFactors = F)
Gravity_Correction_Info <- read.csv(file = "gravityCorrectionChart.csv", header = T, stringsAsFactors = F)
setwd("../")

dbIngredients <- dbConnect(SQLite(), dbname="Ingredients.sqlite")

#Load in *.csv data to Ingredients Database
dbWriteTable(conn = dbIngredients, name = "Grains", value = Grains_Info, overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Extracts", value = Extracts_Info, overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Adjuncts", value = Adjuncts_Info, overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Hops", value = Hops_Info, overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Yeast", value = Yeast_Info, overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Styles", value = Style_Info, overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Spices", value = Spices_Info, overwrite = F, append = T)

dbBKBrewHouse <- dbConnect(SQLite(), dbname="BKBrewHouse.sqlite")

#Load in *.csv data to BKBrew House Database
dbWriteTable(conn = dbBKBrewHouse, name = "SystemSpecificInformation", value = SystemSpecific_Info, overwrite = F, append = T)
dbWriteTable(conn = dbBKBrewHouse, name = "GravityCorrectionChart", value = Gravity_Correction_Info, overwrite = F, append = T)

closeAllConnections()

# #Create Tables...Deprecated...functions above do a great job
# #Grains
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Grains
#             (Ingredients CHAR,
#             Value DOUBLE,
#             PPG DOUBLE,
#             SRM DOUBLE,
#             EZWaterCode INT,
#             FlavorProfile TEXT,
#             DP INT,
#             isGrain INT)")
# #Extracts
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Extracts
#             (Ingredients CHAR,
#             Value DOUBLE,
#             PPG DOUBLE,
#             SRM DOUBLE,
#             EZWaterCode INT,
#             FlavorProfile TEXT,
#             DP INT,
#             isGrain INT)")
# #Adjuncts
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Adjuncts
#             (Ingredients CHAR,
#             Value DOUBLE,
#             PPG DOUBLE,
#             SRM DOUBLE,
#             EZWaterCode INT,
#             FlavorProfile TEXT,
#             DP INT,
#             isGrain INT)")
# #Hops
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Hops
#             (Hops CHAR,
#             Value DOUBLE,
#             TypicalAlphaAcidPercent DOUBLE,
#             FlavorProfile TEXT,
#             PossibleSubstitutions CHAR,
#             Origin CHAR,
#             Storage CHAR,
#             AdditionalInformation_History TEXT)")
# #Yeast
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Yeast
#             (YeastStrain CHAR,
#             Value DOUBLE,
#             ATT DOUBLE,
#             TemperatureRange CHAR,
#             Flocculation CHAR,
#             AlcoholTolerancePercent DOUBLE,
#             FlavorCharacteristics TEXT,
#             RecommendedStyles TEXT,
#             Brewery CHAR)")
# #Styles
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Styles
#             (GeneralStyle CHAR,
#             Styles CHAR,
#             GravityRange CHAR,
#             StyleFinal CHAR,
#             BitterRange CHAR,
#             SRMRange CHAR)")
# #Spices
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Spices
#             (Spice CHAR,
#             Value INT,
#             FlavorProfile TEXT)")
#
# dbBKBrewHouse <- dbConnect(SQLite(), dbname="BKBrewHouse.sqlite")
# #Create Tables
# #System Specific Information
# dbSendQuery(conn = dbBKBrewHouse,
#             "CREATE TABLE SystemSpecificInformation
#             (BatchSize_Gal DOUBLE,
#             EvapRate_Percent_per_hr DOUBLE,
#             ShrinkageFromCooling_Percent DOUBLE,
#             BrewHouseEfficiency_Percent DOUBLE,
#             WeightOfMashTun_lb DOUBLE,
#             ThermalMassOfMAshTun_btu_per_lb_DegreeF DOUBLE,
#             BoilKettleDeadSpace_Gal DOUBLE,
#             LauterTunDeadSpace_Gal DOUBLE)")
# #Gravity Correction Chart
# dbSendQuery(conn = dbBKBrewHouse,
#             "CREATE TABLE GravityCorrectionChart
#             (Temperature_F INT,
#             Temperature_C INT,
#             Add_SG DOUBLE)")
#
# #dbListFields(db, "Grains") # The columns in a table; for a reference if you need it later
/Database/BK Database Construction.R
no_license
BenjaminBearce/BK_Brew
R
false
false
6,754
r
#-----------------------------------------------------------#
#------------------ Making the BK Database -----------------#
#-----------------------------------------------------------#

#Load data collected from EZ Brew spreadsheet
suppressWarnings(suppressMessages(library(gdata)))   #Suppress loading info on Excel reading package
suppressWarnings(suppressMessages(library(sqldf)))   #also attaches RSQLite, which provides dbConnect()/SQLite()

Grains_Info             <- read.xls("Brewing_Constants.xlsx", sheet = "Grains", stringsAsFactors = F)
Extracts_Info           <- read.xls("Brewing_Constants.xlsx", sheet = "Extracts", stringsAsFactors = F)
Adjuncts_Info           <- read.xls("Brewing_Constants.xlsx", sheet = "Adjuncts", stringsAsFactors = F)
Hops_Info               <- read.xls("Brewing_Constants.xlsx", sheet = "Hops", stringsAsFactors = F)
Yeast_Info              <- read.xls("Brewing_Constants.xlsx", sheet = "Yeast", stringsAsFactors = F)
Style_Info              <- read.xls("Brewing_Constants.xlsx", sheet = "Styles", stringsAsFactors = F)
Spices_Info             <- read.xls("Brewing_Constants.xlsx", sheet = "Spices", stringsAsFactors = F)
SystemSpecific_Info     <- read.xls("Brewing_Constants.xlsx", sheet = "SystemSpecificInformation", stringsAsFactors = F)
Gravity_Correction_Info <- read.xls("Brewing_Constants.xlsx", sheet = "Gravity_Correction_Chart", stringsAsFactors = F)
#On Windows, read.xls may need an explicit perl path, e.g. perl="C:/Strawberry/perl/bin/perl.exe"

#The *.csv reads below overwrite the spreadsheet reads above with the same tables
setwd("Brewing_Constants")
Grains_Info             <- read.csv(file = "grains.csv", header = T, stringsAsFactors = F)
Extracts_Info           <- read.csv(file = "extracts.csv", header = T, stringsAsFactors = F)
Adjuncts_Info           <- read.csv(file = "adjuncts.csv", header = T, stringsAsFactors = F)
Hops_Info               <- read.csv(file = "hops.csv", header = T, stringsAsFactors = F)
Yeast_Info              <- read.csv(file = "yeast.csv", header = T, stringsAsFactors = F)
Style_Info              <- read.csv(file = "styles.csv", header = T, stringsAsFactors = F)
Spices_Info             <- read.csv(file = "spices.csv", header = T, stringsAsFactors = F)
SystemSpecific_Info     <- read.csv(file = "systemSpecificInformation.csv", header = T, stringsAsFactors = F)
Gravity_Correction_Info <- read.csv(file = "gravityCorrectionChart.csv", header = T, stringsAsFactors = F)
setwd("../")

dbIngredients <- dbConnect(SQLite(), dbname = "Ingredients.sqlite")

#Load in *.csv data to Ingredients Database
dbWriteTable(conn = dbIngredients, name = "Grains",   value = Grains_Info,   overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Extracts", value = Extracts_Info, overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Adjuncts", value = Adjuncts_Info, overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Hops",     value = Hops_Info,     overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Yeast",    value = Yeast_Info,    overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Styles",   value = Style_Info,    overwrite = F, append = T)
dbWriteTable(conn = dbIngredients, name = "Spices",   value = Spices_Info,   overwrite = F, append = T)

dbBKBrewHouse <- dbConnect(SQLite(), dbname = "BKBrewHouse.sqlite")

#Load in *.csv data to BK Brew House Database
dbWriteTable(conn = dbBKBrewHouse, name = "SystemSpecificInformation", value = SystemSpecific_Info,     overwrite = F, append = T)
dbWriteTable(conn = dbBKBrewHouse, name = "GravityCorrectionChart",    value = Gravity_Correction_Info, overwrite = F, append = T)

#closeAllConnections() only closes base-R connections; DBI connections need dbDisconnect()
dbDisconnect(dbIngredients)
dbDisconnect(dbBKBrewHouse)

# #Create Tables...Deprecated...the dbWriteTable() calls above do a great job
# #Grains
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Grains
#              (Ingredients CHAR, Value DOUBLE, PPG DOUBLE, SRM DOUBLE,
#               EZWaterCode INT, FlavorProfile TEXT, DP INT, isGrain INT)")
# #Extracts
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Extracts
#              (Ingredients CHAR, Value DOUBLE, PPG DOUBLE, SRM DOUBLE,
#               EZWaterCode INT, FlavorProfile TEXT, DP INT, isGrain INT)")
# #Adjuncts
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Adjuncts
#              (Ingredients CHAR, Value DOUBLE, PPG DOUBLE, SRM DOUBLE,
#               EZWaterCode INT, FlavorProfile TEXT, DP INT, isGrain INT)")
# #Hops
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Hops
#              (Hops CHAR, Value DOUBLE, TypicalAlphaAcidPercent DOUBLE,
#               FlavorProfile TEXT, PossibleSubstitutions CHAR, Origin CHAR,
#               Storage CHAR, AdditionalInformation_History TEXT)")
# #Yeast
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Yeast
#              (YeastStrain CHAR, Value DOUBLE, ATT DOUBLE, TemperatureRange CHAR,
#               Flocculation CHAR, AlcoholTolerancePercent DOUBLE,
#               FlavorCharacteristics TEXT, RecommendedStyles TEXT, Brewery CHAR)")
# #Styles
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Styles
#              (GeneralStyle CHAR, Styles CHAR, GravityRange CHAR,
#               StyleFinal CHAR, BitterRange CHAR, SRMRange CHAR)")
# #Spices
# dbSendQuery(conn = dbIngredients,
#             "CREATE TABLE Spices
#              (Spice CHAR, Value INT, FlavorProfile TEXT)")
#
# dbBKBrewHouse <- dbConnect(SQLite(), dbname = "BKBrewHouse.sqlite")
# #Create Tables
# #System Specific Information
# dbSendQuery(conn = dbBKBrewHouse,
#             "CREATE TABLE SystemSpecificInformation
#              (BatchSize_Gal DOUBLE, EvapRate_Percent_per_hr DOUBLE,
#               ShrinkageFromCooling_Percent DOUBLE, BrewHouseEfficiency_Percent DOUBLE,
#               WeightOfMashTun_lb DOUBLE, ThermalMassOfMAshTun_btu_per_lb_DegreeF DOUBLE,
#               BoilKettleDeadSpace_Gal DOUBLE, LauterTunDeadSpace_Gal DOUBLE)")
# #Gravity Correction Chart
# dbSendQuery(conn = dbBKBrewHouse,
#             "CREATE TABLE GravityCorrectionChart
#              (Temperature_F INT, Temperature_C INT, Add_SG DOUBLE)")
#
# #dbListFields(db, "Grains")  # The columns in a table; for a reference if you need it later
npoints <- 100
n <- c(10, 100, 1000)
p <- rep(seq(0, 1, length.out = npoints), length(n))
n <- rep(n, each = npoints)

library(ggplot2)
plot1 <- ggplot(data = data.frame(p, n, y = qnorm(0.975) * sqrt(p * (1 - p) / n), Size = factor(n)),
                aes(x = p, y = y, group = Size, colour = Size)) +
  geom_line()
/demo/marginoferror.R
no_license
robsalasco/JPSMSurv662
R
false
false
271
r
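The quantity plotted is the half-width of a 95% normal-approximation confidence interval for a proportion, `qnorm(0.975) * sqrt(p*(1-p)/n)`. As a quick numeric check of that formula (a sketch; the helper name and the inputs are illustrative, not part of the original script):

```r
# 95% margin of error for a sample proportion, normal approximation
margin_of_error <- function(p, n, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)   # 1.959964 for conf = 0.95
  z * sqrt(p * (1 - p) / n)
}

# The margin is widest at p = 0.5 and shrinks like 1/sqrt(n)
margin_of_error(0.5, 100)    # about 0.098, i.e. roughly +/- 10 points
margin_of_error(0.5, 1000)   # about 0.031
```

This is why the three curves in the plot peak at p = 0.5 and flatten as the sample size grows.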
## This script builds a histogram of Global Active Power and saves it to a .png
## I plan to always use dplyr for data manipulation!
library(dplyr)

## read in from file and avoid factors!
elec <- read.csv("household_power_consumption.txt", header = TRUE, sep = ";",
                 na.strings = "?", stringsAsFactors = FALSE)

## turn it into a table for use with dplyr
elec <- tbl_df(elec)

## get rid of unwanted rows (keep 2007-02-01 and 2007-02-02)
elec <- filter(elec, as.Date(Date, "%d/%m/%Y") == as.Date("2/1/2007", "%m/%d/%Y") |
                     as.Date(Date, "%d/%m/%Y") == as.Date("2/2/2007", "%m/%d/%Y"))

## Set the dates right
elec$Date <- as.Date(elec$Date, "%d/%m/%Y")

## Create a histogram
par(mar = c(5, 5, 5, 5))
hist(elec$Global_active_power, col = "red",
     xlab = "Global Active Power (kilowatts)",
     main = "Global Active Power", xaxt = "n")
axis(1, at = c(0, 2, 4, 6), labels = c(0, 2, 4, 6))

## Copy to .png
dev.copy(png, file = "plot1.png", width = 480, height = 480)
dev.off()
/plot1.r
no_license
tarzan11/ExData_Plotting1
R
false
false
924
r
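`dev.copy(png, ...)` re-renders the screen device into a file after the fact. An alternative pattern (a sketch, not part of the author's script; the file name and the stand-in data are made up) is to open the `png()` device first, so the plot is drawn straight into the file at its native size:

```r
# Drawing directly into a png device avoids re-rendering a screen plot
png("plot1_direct.png", width = 480, height = 480)   # hypothetical output name
hist(rnorm(1000),                                    # stand-in data for illustration
     col = "red", main = "Histogram drawn directly into the device")
dev.off()                                            # writes and closes the file
```

Both approaches end with `dev.off()`, which is what actually flushes and closes the file.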
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/SIMULE.R
\name{simule}
\alias{simule}
\title{A constrained l1 minimization approach for estimating multiple
Sparse Gaussian or Nonparanormal Graphical Models

Estimate multiple, related sparse Gaussian or Nonparanormal graphical}
\usage{
simule(X, lambda, epsilon = 1, covType = "cov", parallel = FALSE)
}
\arguments{
\item{X}{A list of input matrices. They can be data matrices or
covariance/correlation matrices. If every matrix in X is a symmetric
matrix, the matrices are assumed to be covariance/correlation matrices.
More details at <https://github.com/QData/SIMULE>}

\item{lambda}{A positive number. This hyperparameter controls the sparsity
level of the matrices; it is the \eqn{\lambda_n} in the Details section.}

\item{epsilon}{A positive number. This hyperparameter controls the
difference between the shared pattern among the graphs and the individual
part of each graph; it is the \eqn{\epsilon} in the Details section. The
larger epsilon is, the more similar the generated graphs will be to each
other. The default value is 1, which gives equal weight to the shared
pattern among the graphs and to the individual part of each graph.}

\item{covType}{A parameter deciding which graphical model to estimate from
the input data. If covType = "cov", multiple sparse Gaussian graphical
models are estimated: the sample covariance matrices are calculated (when
X holds data matrices) or used directly (when the elements of X are
symmetric covariance matrices) as input to the simule algorithm. If
covType = "kendall", multiple nonparanormal graphical models are
estimated: Kendall's tau correlation matrices are calculated (when X holds
data matrices) or used directly (when the elements of X are symmetric
correlation matrices) as input to the simule algorithm.}

\item{parallel}{A boolean. This parameter decides whether the package uses
the multithreading architecture or not.}
}
\value{
\item{Graphs}{A list of the estimated inverse covariance/correlation matrices.}

\item{share}{The shared graph among the multiple tasks.}
}
\description{
models from multiple related datasets using the SIMULE algorithm. Please
run demo(simule) to learn the basic functions provided by this package. For
further details, please read the original paper: Beilun Wang, Ritambhara
Singh, Yanjun Qi (2017) <DOI:10.1007/s10994-017-5635-7>.
}
\details{
The SIMULE algorithm is a constrained l1 minimization method that can
detect both the shared and the task-specific parts of multiple graphs
explicitly from data (through jointly estimating multiple sparse Gaussian
graphical models or nonparanormal graphical models). It solves the
following equation:
\deqn{
\hat{\Omega}^{(1)}_I, \hat{\Omega}^{(2)}_I, \dots, \hat{\Omega}^{(K)}_I,
\hat{\Omega}_S = \min\limits_{\Omega^{(i)}_I,\Omega_S}\sum\limits_i
||\Omega^{(i)}_I||_1 + \epsilon K||\Omega_S||_1
}
Subject to:
\deqn{
||\Sigma^{(i)}(\Omega^{(i)}_I + \Omega_S) - I||_{\infty} \le \lambda_{n},
i = 1,\dots,K \nonumber
}
Please also see equation (7) in our paper. The \eqn{\lambda_n} is the
hyperparameter controlling the sparsity level of the matrices and it is the
\code{lambda} in our function. The \eqn{\epsilon} is the hyperparameter
controlling the difference between the shared pattern among the graphs and
the individual part of each graph. It is the \code{epsilon} parameter in
our function and its default value is 1. For further details, please see
our paper: <http://link.springer.com/article/10.1007/s10994-017-5635-7>.
}
\examples{
\dontrun{
data(exampleData)
result = simule(X = exampleData, lambda = 0.1, epsilon = 0.45,
                covType = "cov", parallel = FALSE)
plot.simule(result)
}
}
\references{
Beilun Wang, Ritambhara Singh, Yanjun Qi (2017). A constrained L1
minimization approach for estimating multiple Sparse Gaussian or
Nonparanormal Graphical Models.
http://link.springer.com/article/10.1007/s10994-017-5635-7
}
\author{
Beilun Wang
}
/man/simule.Rd
no_license
cran/simule
R
false
true
4,211
rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/zzz.R
\name{zzz_bm}
\alias{zzz_bm}
\title{Dummy function to clean working directory after package checks}
\usage{
zzz_bm()
}
\value{
nothing returned
}
\description{
This is the last function that will be checked; it removes residual files
left over from the checks of other examples (such as 'GuloGulo'). This is
required to pass CRAN checks.
}
\examples{
zzz_bm()
}
\keyword{internal}
/man/zzz_bm.Rd
no_license
biomodhub/biomod2
R
false
true
452
rd
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ch4-fn.R
\name{cont.trans}
\alias{cont.trans}
\title{Transformed PDF of a Continuous Random Variable}
\usage{
cont.trans(fx, TF, FTF, a, b, lo = 0, up = 1, plot = FALSE, ...)
}
\arguments{
\item{fx}{PDF of the original random variable}

\item{TF}{List of transform functions (1~8)}

\item{FTF}{List of transformed PDFs (1~8)}

\item{a}{Lower limit of the original random variable for calculating P(a<X<b)}

\item{b}{Upper limit of the original random variable for calculating P(a<X<b)}

\item{lo}{Lower limit of the original random variable, Default: 0}

\item{up}{Upper limit of the original random variable, Default: 1}

\item{plot}{Plot the PDF? Default: FALSE}

\item{...}{Graphic parameters}
}
\value{
None.
}
\description{
Transformed PDF of a Continuous Random Variable
}
\examples{
fx = function(x) 2*x*(x>=0 & x<=1)
ty = function(x) 10*x - 4
tw = function(x) -10*x + 4
fy = function(y) fx((y+4)/10)/10
fw = function(w) fx((-w+4)/10)/10
cont.trans(fx, list(y=ty, w=tw), list(fy, fw), 0.3, 0.7, plot=TRUE, cex=1.3)
}
/man/cont.trans.Rd
no_license
zlfn/Rstat-1
R
false
true
1,101
rd
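The `fy` in the example above follows the change-of-variables rule for Y = 10X - 4: f_Y(y) = f_X((y+4)/10) * (1/10), where 1/10 is the Jacobian of the inverse transform. A numeric sanity check with base R's `integrate()` (a sketch reusing the example's densities, not the package's internals):

```r
fx <- function(x) 2 * x * (x >= 0 & x <= 1)   # original PDF on [0, 1]
fy <- function(y) fx((y + 4) / 10) / 10       # PDF of Y = 10X - 4, supported on [-4, 6]

# Both densities integrate to 1 over their supports
integrate(fx, 0, 1)$value      # 1
integrate(fy, -4, 6)$value     # 1

# P(0.3 < X < 0.7) equals P(-1 < Y < 3) under the transform
integrate(fx, 0.3, 0.7)$value  # 0.4  (= 0.7^2 - 0.3^2)
integrate(fy, -1, 3)$value     # 0.4
```

The matching probabilities confirm that the interval (a, b) = (0.3, 0.7) used in the example maps to (-1, 3) on the Y scale.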
# ------------------------------------------------------------------------------
# File and Script Created by: Alaine A. Gulles 08.05.2014
# for International Rice Research Institute
# ------------------------------------------------------------------------------
# Description: determine if the linear contrasts are pairwise orthogonal
# Arguments: contrast - a contrast matrix (one contrast per row)
# Returned Value: logical
# ------------------------------------------------------------------------------

isOrthogonal <- function(contrast) UseMethod("isOrthogonal")

isOrthogonal.default <- function(contrast) {
  numContrast <- nrow(contrast)
  if (numContrast == 1) {
    stop("The number of contrasts should be greater than or equal to 2.")
  }
  result <- FALSE
  theContrastName <- rownames(contrast)
  pairedContrast <- combn(theContrastName, 2)
  # Bug fix: the original looped `for (i in ncol(pairedContrast))`, which
  # iterates over the single value ncol(...) and so checked only the last
  # pair; `1:ncol(...)` visits every pair of contrasts.
  for (i in 1:ncol(pairedContrast)) {
    index <- match(pairedContrast[, i], theContrastName)
    # Two contrasts are orthogonal when their dot product is zero
    if (sum(apply(contrast[index, ], 2, prod)) != 0) {
      result <- FALSE
      break
    } else {
      result <- TRUE
    }
  }
  return(result)
}
/R3.0.2 Package Creation/PBTools/R/isOrthogonal.R
no_license
djnpisano/RScriptLibrary
R
false
false
1,159
r
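A small check with three-group contrasts; the function is repeated here in compact form so the sketch is self-contained, and the example matrices are made up:

```r
# Self-contained copy of the (fixed) isOrthogonal logic from the script above
isOrthogonal <- function(contrast) {
  pairedContrast <- combn(rownames(contrast), 2)
  result <- FALSE
  for (i in 1:ncol(pairedContrast)) {
    index <- match(pairedContrast[, i], rownames(contrast))
    # column-wise products summed = dot product of the two contrast rows
    if (sum(apply(contrast[index, ], 2, prod)) != 0) return(FALSE)
    result <- TRUE
  }
  result
}

# Orthogonal contrasts for three groups: (1,-1,0) . (1,1,-2) = 0
ortho    <- rbind(c1 = c(1, -1, 0), c2 = c(1, 1, -2))
# Non-orthogonal pair: (1,-1,0) . (1,0,-1) = 1
nonortho <- rbind(c1 = c(1, -1, 0), c2 = c(1, 0, -1))

isOrthogonal(ortho)     # TRUE
isOrthogonal(nonortho)  # FALSE
```

Note that the matrix must have row names, since the pairing is built from `rownames(contrast)`.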
# A basic script for calculating your required final exam grade in order to
# obtain a B- or higher in a course based on your current progress. Allows
# for percentage or fractional grade input.
# For simplicity, no action has been taken to prevent invalid input from
# the user, such as negative/textual grades, invalid weights, etc.

library(stringr)

# Collection step
marks <- numeric(0)
weights <- numeric(0)
running <- TRUE

while (running) {
  m <- readline("Enter mark as percentage or fraction (`e` to exit): ")
  if (m == "e") {
    running <- FALSE
  } else {
    w <- as.numeric(readline("Enter weight (%): "))
    weights <- append(weights, values = w)
    if (str_detect(m, pattern = "/")) {
      ## Fractional grade entered
      frac <- as.numeric(str_split(m, pattern = "/", n = 2, simplify = TRUE))
      decimalGrade <- frac[1] / frac[2]
      marks <- append(marks, values = decimalGrade)
    } else {
      ## Percentage grade entered
      decimalGrade <- as.numeric(m) / 100
      marks <- append(marks, values = decimalGrade)
    }
  }
}

# Computation step
examWeight <- 100 - sum(weights)
points <- sum(marks * weights)

## `gradeCut` can be modified to suit your needs
gradeCut <- c(70, 75, 80, 85, 90)
finalExamGrade <- (gradeCut - points) / (examWeight / 100)

# Output
for (i in 1:length(gradeCut)) {
  cat("\n",
      sprintf("For a final grade of %d%% you need a %.2f%% on the final exam.",
              gradeCut[i], finalExamGrade[i]),
      sep = "")
}
/final-grade.R
no_license
adamoshen/final-grade
R
false
false
1,517
r
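The computation step rearranges a weighted average: `points` percentage points are already banked, so the final exam (worth `examWeight`% of the course) must supply the remainder. A non-interactive check with made-up marks and weights:

```r
# Illustrative inputs: 80% on a 20%-weight item, 75% on a 30%-weight item
marks   <- c(0.80, 0.75)   # as decimals, matching the script
weights <- c(20, 30)       # as percentages

examWeight <- 100 - sum(weights)     # 50 (% of the course still on the table)
points     <- sum(marks * weights)   # 38.5 points banked so far
required   <- (70 - points) / (examWeight / 100)

required   # 63: a 63% final gives 0.63 * 50 + 38.5 = 70 overall
```

Working it backwards, as the comment shows, confirms the rearrangement is consistent with the weighted-average definition of the final grade.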
### Fitting Logistic Regression
### Predict the Stock Market
library(ISLR)
attach(Smarket)
?Smarket
head(Smarket)
round(cor(Smarket[, -9]), digits = 2)
pairs(Smarket[, -9])

### Split the data into training (2001-2004),
### and testing (2005)
train = (Smarket$Year < 2005)
test = !train
training_data = Smarket[train, -8]
testing_data = Smarket[test, -8]

## select the y (Direction) from the testing data
testing_y = testing_data[, 8]

### Fit a logistic regression model using glm()
model = glm(Direction ~ ., data = training_data, family = "binomial")
summary(model)

## Assess the model
predicted_probs = predict(model, testing_data, type = "response")
head(predicted_probs)

### we don't know how R coded the Up and Down
### we can use function contrasts()
contrasts(Direction)
### Up is 1, Down is 0

## we need to create a new vector of Ups and Downs
## based on the predicted probs. Use a threshold of 0.5
### initialize y_hat with a vector full of Ups
y_hat = rep("Up", length(testing_y))
y_hat[predicted_probs < 0.5] = "Down"

### Find the misclassification rate
mean(testing_y != y_hat)
table(testing_y, y_hat)

# Chapter 5 Lab: Cross-Validation and the Bootstrap

# The Validation Set Approach
library(ISLR)
set.seed(1)
train = sample(392, 196)
lm.fit = lm(mpg ~ horsepower, data = Auto, subset = train)
attach(Auto)
mean((mpg - predict(lm.fit, Auto))[-train]^2)
lm.fit2 = lm(mpg ~ poly(horsepower, 2), data = Auto, subset = train)
mean((mpg - predict(lm.fit2, Auto))[-train]^2)
lm.fit3 = lm(mpg ~ poly(horsepower, 3), data = Auto, subset = train)
mean((mpg - predict(lm.fit3, Auto))[-train]^2)

set.seed(2)
train = sample(392, 196)
lm.fit = lm(mpg ~ horsepower, subset = train)
mean((mpg - predict(lm.fit, Auto))[-train]^2)
lm.fit2 = lm(mpg ~ poly(horsepower, 2), data = Auto, subset = train)
mean((mpg - predict(lm.fit2, Auto))[-train]^2)
lm.fit3 = lm(mpg ~ poly(horsepower, 3), data = Auto, subset = train)
mean((mpg - predict(lm.fit3, Auto))[-train]^2)

# Leave-One-Out Cross-Validation
glm.fit = glm(mpg ~ horsepower, data = Auto)
coef(glm.fit)
lm.fit = lm(mpg ~ horsepower, data = Auto)
coef(lm.fit)
library(boot)
glm.fit = glm(mpg ~ horsepower, data = Auto)
cv.err = cv.glm(Auto, glm.fit)
cv.err$delta
cv.error = rep(0, 5)
for (i in 1:5) {
  glm.fit = glm(mpg ~ poly(horsepower, i), data = Auto)
  cv.error[i] = cv.glm(Auto, glm.fit)$delta[1]
}
cv.error

# k-Fold Cross-Validation
set.seed(17)
cv.error.10 = rep(0, 10)
for (i in 1:10) {
  glm.fit = glm(mpg ~ poly(horsepower, i), data = Auto)
  cv.error.10[i] = cv.glm(Auto, glm.fit, K = 10)$delta[1]
}
cv.error.10

# The Bootstrap
alpha.fn = function(data, index) {
  X = data$X[index]
  Y = data$Y[index]
  return((var(Y) - cov(X, Y)) / (var(X) + var(Y) - 2 * cov(X, Y)))
}
alpha.fn(Portfolio, 1:100)
set.seed(1)
alpha.fn(Portfolio, sample(100, 100, replace = T))
boot(Portfolio, alpha.fn, R = 1000)

# Estimating the Accuracy of a Linear Regression Model
boot.fn = function(data, index)
  return(coef(lm(mpg ~ horsepower, data = data, subset = index)))
boot.fn(Auto, 1:392)
set.seed(1)
boot.fn(Auto, sample(392, 392, replace = T))
boot.fn(Auto, sample(392, 392, replace = T))
boot(Auto, boot.fn, 1000)
summary(lm(mpg ~ horsepower, data = Auto))$coef
boot.fn = function(data, index)
  coefficients(lm(mpg ~ horsepower + I(horsepower^2), data = data, subset = index))
set.seed(1)
boot(Auto, boot.fn, 1000)
summary(lm(mpg ~ horsepower + I(horsepower^2), data = Auto))$coef
/LogisticRegression.R
no_license
eslh2050/Machine-Learning
R
false
false
3,298
r
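The 0.5-threshold step used above can be exercised without the Smarket data. This sketch applies the same rule to simulated labels and probabilities (all values made up) and reports the misclassification rate and confusion matrix:

```r
# Toy illustration of thresholding predicted probabilities at 0.5
set.seed(42)
truth <- sample(c("Up", "Down"), 20, replace = TRUE)
# fake "predicted probabilities": higher on average when the truth is Up
probs <- ifelse(truth == "Up", 0.7, 0.3) + rnorm(20, sd = 0.25)

y_hat <- rep("Up", length(truth))
y_hat[probs < 0.5] <- "Down"     # same rule as in the script

mean(truth != y_hat)             # misclassification rate
table(truth, y_hat)              # confusion matrix
```

The same pattern generalizes to any cutoff: raising it trades false "Up" predictions for false "Down" predictions.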
#Assignment: caching the inverse of a matrix
#makeCacheMatrix & cacheSolve
#The two functions are useful tools for caching the inverse of a matrix. As matrix
#inversion can be a costly computation, these functions help to compute the
#inverse only once and reuse the result.
#makeCacheMatrix creates a special "matrix" object that can cache its inverse.
#cacheSolve computes the inverse of the special "matrix" created by makeCacheMatrix.
#makeCacheMatrix consists of four functions.

#solution
makeCacheMatrix <- function(x = matrix()) {
  inv <- NULL
  set <- function(y) {
    x <<- y
    inv <<- NULL
  }
  get <- function() x
  setinver <- function(inverse) inv <<- inverse
  getinver <- function() inv
  list(set = set, get = get,
       setinver = setinver,
       getinver = getinver)
}

#cacheSolve is a function that computes the inverse of the special "matrix"
#returned by makeCacheMatrix. If the inverse has already been calculated
#(and the matrix has not changed), cacheSolve retrieves the inverse from the cache.
cacheSolve <- function(x, ...) {
  inv <- x$getinver()
  if (!is.null(inv)) {
    message("get the cached data")
    return(inv)
  }
  data <- x$get()
  inv <- solve(data, ...)
  x$setinver(inv)
  inv
}

#testing
#creating a new matrix
new_mat <- makeCacheMatrix(matrix(1:4, 2, 2))
new_mat$get()

#getinver (get inverse) before any solve
new_mat$getinver()   #it returns NULL

#moving on to cacheSolve
cacheSolve(new_mat)
new_mat$getinver()

#setting new values for my matrix (which resets the cache) and recomputing
new_mat$set(matrix(c(12, 16, 20, 24), 2, 2))
cacheSolve(new_mat)

#trying cacheSolve again
new_mat$getinver()   #it returns the cached matrix
/cachematrix.R
no_license
Zdravko1212/ProgrammingAssignment2
R
false
false
1,791
r
###############################################
###############################################
################ FUNCTIONS ####################
###############################################

############ Calculate an Empirical Loss Curve (for costs)
LossCurveC <- function(inp, ThresFun) {
  resS <- RESOLUTION_EMPIRICAL_CURVES
  range <- (0:resS) / resS
  ELoss <- 1:(resS + 1)
  for (c in range) {
    if (THRESHOLD_CHOICE_METHOD == 4) { # Uniform. We need to average
      RESOL <- 100
      LV <- rep(0, RESOL)
      for (i in 1:RESOL) {
        t <- ThresFun(inp, c)
        LV[i] <- LossC(inp, t, c)
      }
      ELoss[round(c * resS + 1)] <- mean(LV)
    } else if (THRESHOLD_CHOICE_METHOD == 7) { # Interpolating
      vec <- RateDrivenThresFun3C(inp, c)
      tL <- vec[1]
      pL <- vec[2]
      tR <- vec[3]
      pR <- vec[4]
      ELL <- LossC(inp, tL, c)
      ELR <- LossC(inp, tR, c)
      ELoss[round(c * resS + 1)] <- ELL * pL + ELR * pR
    } else {
      t <- ThresFun(inp, c)
      ELoss[round(c * resS + 1)] <- LossC(inp, t, c)
    }
  }
  ELoss
}

#############################################################
############ Calculate Loss (for costs) given a classifier and a threshold
LossC <- function(inp, t, c) {
  lo0 <- 0
  lo1 <- 0
  n0n1 <- nrow(inp)
  pi0 <- sum(inp[,1] == 0) / n0n1
  pi1 <- sum(inp[,1] == 1) / n0n1
  for (i in 1:n0n1) {
    if (inp[i,2] < t) {
    # if (inp[i,2] <= t) {
      pred <- 0
    } else
      pred <- 1
    if (pred != inp[i,1]) {
      if (pred == 1) {
        l0 <- c # 1 for 0-1 loss
        lo0 <- lo0 + l0
      } else {
        l1 <- (1 - c) # 1 for 0-1 loss
        lo1 <- lo1 + l1
      }
    }
  }
  lo0 <- bfactor * lo0
  lo1 <- bfactor * lo1
  (lo0 + lo1) / n0n1
}

############ Calculate ExpLoss (for skews)
ExpLossZ <- function(inp, ThresFun) {
  ELoss <- 0
  resS <- RESOLUTION_EMPIRICAL_CURVES
  range <- (0:resS) / resS
  for (z in range) {
    t <- ThresFun(inp, z)
    ELoss <- ELoss + LossZ(inp, t, z)
  }
  ELoss / (resS + 1)
}
#
# Example of use:
#
# ExpLossZ(inp,ProbThresFunZ)
# ExpLossZ(inp,NatThresFunZ)

############ Calculate ExpLoss (for costs)
ExpLossC <- function(inp, ThresFun) {
  ELoss <- 0
  resS <- RESOLUTION_EMPIRICAL_CURVES
  range <- (0:resS) / resS
  for (c in range) {
    t <- ThresFun(inp, c)
    ELoss <- ELoss + LossC(inp, t, c)
  }
  ELoss / (resS + 1)
}
#
# Example of use:
#
# ExpLossC(inp,ProbThresFunC)
# ExpLossC(inp,NatThresFunC)

############ Calculate an Empirical Loss Curve (for skews)
LossCurveZ <- function(inp, ThresFun) {
  resS <- RESOLUTION_EMPIRICAL_CURVES
  range <- (0:resS) / resS
  ELoss <- 1:(resS + 1)
  for (z in range) {
    if (THRESHOLD_CHOICE_METHOD == 4) { # Uniform. We need to average
      RESOL <- 100
      LV <- rep(0, RESOL)
      for (i in 1:RESOL) {
        t <- ThresFun(inp, z)
        LV[i] <- LossZ(inp, t, z)
      }
      ELoss[round(z * resS + 1)] <- mean(LV)
    } else if (THRESHOLD_CHOICE_METHOD == 7) { # Interpolating
      vec <- RateDrivenThresFun3Z(inp, z)
      tL <- vec[1]
      pL <- vec[2]
      tR <- vec[3]
      pR <- vec[4]
      ELL <- LossZ(inp, tL, z)
      ELR <- LossZ(inp, tR, z)
      ELoss[round(z * resS + 1)] <- ELL * pL + ELR * pR
    } else {
      t <- ThresFun(inp, z)
      ELoss[round(z * resS + 1)] <- LossZ(inp, t, z)
    }
  }
  ELoss
}
#
# Example of use:
#
# LossCurveZ(inp,ProbThresFunZ)
# LossCurveZ(inp,NatThresFunZ)

############ Calculate Loss (for skews) given a classifier and a threshold
LossZ <- function(inp, t, z) {
  lo0 <- 0
  lo1 <- 0
  n0n1 <- nrow(inp)
  pi0 <- sum(inp[,1] == 0) / n0n1
  pi1 <- sum(inp[,1] == 1) / n0n1
  for (i in 1:n0n1) {
    if (inp[i,2] < t) {
    # if (inp[i,2] <= t) {
      pred <- 0
    } else
      pred <- 1
    if (pred != inp[i,1]) {
      if (pred == 1) {
        l0 <- z # 1 for 0-1 loss
        lo0 <- lo0 + l0
      } else {
        l1 <- (1 - z) # 1 for 0-1 loss
        lo1 <- lo1 + l1
      }
    }
  }
  lo0 <- bfactor * 0.5 * lo0 / pi0
  lo1 <- bfactor * 0.5 * lo1 / pi1
  (lo0 + lo1) / n0n1
}
### The beast!!
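For intuition, here is a self-contained toy version of the cost-weighted loss that LossC computes, assuming bfactor = 1 (toy_loss and its test data are hypothetical names, not part of the original file):

```r
# Toy version of the cost-weighted misclassification loss at threshold t.
# Column 1: true class (0/1); column 2: score.
# A false positive costs c, a false negative costs (1 - c).
toy_loss <- function(inp, t, c) {
  pred <- ifelse(inp[, 2] < t, 0, 1)
  fp <- sum(pred == 1 & inp[, 1] == 0)   # class-0 examples predicted 1
  fn <- sum(pred == 0 & inp[, 1] == 1)   # class-1 examples predicted 0
  (c * fp + (1 - c) * fn) / nrow(inp)
}
inp <- cbind(c(1, 1, 0, 0), c(0.9, 0.4, 0.6, 0.1))
toy_loss(inp, 0.5, 0.5)  # one FP (score 0.6) and one FN (score 0.4) -> 0.25
```

At c = 0.5 this reduces to half the error rate, which is why the curves coincide with 0-1 loss at the midpoint of the cost axis.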
############ Calculate an Empirical Loss Curve (for costs)
MinCost <- function(inp, title = "") {
  ###############################################
  #### SOME GENERAL VARIABLES USED ELSEWHERE ####
  n0n1 <- nrow(inp)
  x <- t(inp)
  zord <- order(x[2,])
  sc <- x[,zord]
  n1 <- sum(sc[1,])
  n0 <- n0n1 - n1
  pi0 <- n0 / n0n1
  pi1 <- n1 / n0n1

  # We modify inp to be ordered by scores (this is needed for several plots)
  # order data into increasing scores
  ####### zord <- order(x[2,])
  zordrev <- rev(zord)
  # zordrev <- zord # NO REVERSE
  screv <- x[,zordrev]
  inp <- t(screv) # Decreasing order

  if (n0 == 0)
    print("No elements of class 0")
  if (n1 == 0)
    print("No elements of class 1")

  # We calculate means
  class0 <- x[,x[1,] == 0]
  class1 <- x[,x[1,] == 1]
  if (n0 > 1) {
    s0 <- mean(class0[2,])
  } else
    s0 <- class0[2]
  if (n1 > 1) {
    s1 <- mean(class1[2,])
  } else
    s1 <- class1[2]
  calpi1 <- pi0 * s0 + pi1 * s1

  # alpha and betad are the parameters in the beta
  # cost distribution ~ c^alpha * (1-c)^betad
  alpha <- 2
  betad <- 2

  ###############################################
  ##### ACCUMULATED EMPIRICAL DIST. F0, F1 ######
  # Calculate the raw ROC, replacing any tied
  # sequences by a 'diagonal' in the ROC curve.
  # The raw ROC starts at F0[1]=0, F1[1]=0, and ends at
  # F0[K1]=n0, F1[K1]=n1.
  # F0 and F1 are counts, G0 and G1 are normalised in (0,1)
  # F0 and F1 merge repeated values.
  # F0rep and F1rep keep the repeated values as different elements in the vector.
  # K1 is the number of non-repeated scores. It will be used everywhere
  sc <- cbind(sc, sc[,n0n1]) # Adds the last example. Now sc is one element longer.
  F0 <- c(0:n0n1)
  F1 <- c(0:n0n1)
  K1 <- 1
  k <- 2
  for (i in 1:n0n1) {
    F0[k] <- F0[K1] + (1 - sc[1,i])
    F1[k] <- F1[K1] + sc[1,i]
    K1 <- k
    k <- if (sc[2,i + 1] == sc[2,i]) (k) else (k + 1)
  }
  F0 <- F0[1:K1]
  F1 <- F1[1:K1]
  G0nomin <- F0 / n0
  G1nomin <- F1 / n1

  scshared <- sc[1,]
  j <- 1
  Ac <- sc[1,1]
  for (i in 2:n0n1) {
    if (sc[2,i] == sc[2,i - 1]) {
      j <- j + 1
      Ac <- Ac + sc[1,i]
    } else {
      for (k in (i - 1):(i - j)) {
        scshared[k] <- Ac / j
      }
      j <- 1
      Ac <- sc[1,i]
    }
  }
  F0rep <- c(0:n0n1)
  F1rep <- c(0:n0n1)
  K1rep <- 1
  krep <- 2
  for (i in 1:n0n1) {
    F0rep[krep] <- F0rep[K1rep] + (1 - scshared[i])
    F1rep[krep] <- F1rep[K1rep] + scshared[i]
    K1rep <- krep
    krep <- krep + 1
  }
  F0rep <- F0rep[1:K1rep]
  F1rep <- F1rep[1:K1rep]
  G0nominrep <- F0rep / n0
  G1nominrep <- F1rep / n1

  ###############################################
  ###############################################
  ######## CONVEX HULL - MINIMUM COSTS ##########
  # Find the upper concave hull
  # G0 and G1 are normalised in (0,1) and optimal
  # Repeated values are merged
  # hc will be the number of segments in the convex hull
  G0 <- c(0:(K1 - 1))
  G1 <- c(0:(K1 - 1))
  i <- 1
  hc <- 1
  while (i < K1) {
    c1 <- c((i + 1):K1)
    for (j in (i + 1):K1) {
      u1 <- (F1[j] - F1[i])
      u0 <- (F0[j] - F0[i])
      c1[j] <- u1 / (u1 + u0)
    }
    argmin <- i + 1
    c1min <- c1[i + 1]
    for (k in (i + 1):K1) {
      argmin <- if (c1[k] <= c1min) (k) else (argmin)
      c1min <- c1[argmin]
    }
    hc <- hc + 1
    G0[hc] <- F0[argmin]
    G1[hc] <- F1[argmin]
    i <- argmin
  }
  G0 <- G0[1:hc] / n0
  G1 <- G1[1:hc] / n1

  ###############################################
  ###############################################
  ##### ACCUMULATED EMPIRICAL DIST. F0, F1 ######
  # Calculate the raw ROC, replacing any tied
  # sequences by a 'diagonal' in the ROC curve.
  # The raw ROC starts at F0[1]=0, F1[1]=0, and ends at
  # F0[K1]=n0, F1[K1]=n1.
  # F0 and F1 are counts, G0 and G1 are normalised in (0,1)
  # F0 and F1 merge repeated values.
  # F0rep and F1rep keep the repeated values as different elements in the vector.
  # K1 is the number of non-repeated scores. It will be used everywhere
  sc <- cbind(sc, sc[,n0n1]) # Adds the last example. Now sc is one element longer.
  F0 <- c(0:n0n1)
  F1 <- c(0:n0n1)
  K1 <- 1
  k <- 2
  for (i in 1:n0n1) {
    F0[k] <- F0[K1] + (1 - sc[1,i])
    F1[k] <- F1[K1] + sc[1,i]
    K1 <- k
    k <- if (sc[2,i + 1] == sc[2,i]) (k) else (k + 1)
  }
  F0 <- F0[1:K1]
  F1 <- F1[1:K1]
  G0nomin <- F0 / n0
  G1nomin <- F1 / n1

  scshared <- sc[1,]
  j <- 1
  Ac <- sc[1,1]
  for (i in 2:n0n1) {
    if (sc[2,i] == sc[2,i - 1]) {
      j <- j + 1
      Ac <- Ac + sc[1,i]
    } else {
      for (k in (i - 1):(i - j)) {
        scshared[k] <- Ac / j
      }
      j <- 1
      Ac <- sc[1,i]
    }
  }
  F0rep <- c(0:n0n1)
  F1rep <- c(0:n0n1)
  K1rep <- 1
  krep <- 2
  for (i in 1:n0n1) {
    F0rep[krep] <- F0rep[K1rep] + (1 - scshared[i])
    F1rep[krep] <- F1rep[K1rep] + scshared[i]
    K1rep <- krep
    krep <- krep + 1
  }
  F0rep <- F0rep[1:K1rep]
  F1rep <- F1rep[1:K1rep]
  G0nominrep <- F0rep / n0
  G1nominrep <- F1rep / n1

  # x-axis using the uniform of the convex hull for cost ratios
  cost <- c(1:(hc + 1))
  cost[1] <- 0
  cost[hc + 1] <- 1
  for (i in 2:hc) {
    cost[i] <- 1 * pi1 * (G1[i] - G1[i - 1]) /
      (pi0 * (G0[i] - G0[i - 1]) + pi1 * (G1[i] - G1[i - 1]))
  }

  ###############################################
  ###### Expected cost Q (minimal) ##############
  ############## for cost ratio #################
  # CALCULATE THE MINIMUM LOSS VS c CURVE (UNNORMALISED, HAND)
  Q <- c(1:(hc + 1))
  for (i in 1:hc) {
    Q[i] <- bfactor * (cost[i] * pi0 * (1 - G0[i]) + (1 - cost[i]) * pi1 * G1[i])
  }
  Q[(hc + 1)] <- 0

  if (DRAW_COSTRATIO_PLOTS) {
    # only plots the space
    if (PLOT_SPACE) {
      # tit <- paste(EXPECTEDLOSS,BYCOST)
      tit <- title
      if (NOTITLE) {
        par(mar = MARGINSNOTITLE)
        tit <- ""
      }
      plot(c(0,1), c(0,0),
           ylim = c(0,YLIM),
           main = tit, xlab = COSTRATIO, ylab = LOSS,
           col = "white")
      if (LEGEND) {
        for (i in 1:nrow(inp)) {
          text(0.2 + i * 0.08, 0.9, inp[i,1])
          text(0.2 + i * 0.08, 0.8, inp[i,2])
        }
      }
    }
    if (DRAW_OPTIMAL_CURVES) {
      if (OVERLAP_PROB_OPTIMAL) {
        points(cost, Q,
               type = "o",
               lwd = LWD_OPTIMAL_CURVES,
               pch = PCH_OPTIMAL_CURVES,
               col = COLOUR_OPTIMAL_CURVES,
               lty = LTY_OPTIMAL_CURVES)
      } else {
        tit <- paste(EXPECTEDLOSS, BYCOST)
        if (NOTITLE) {
          par(mar = MARGINSNOTITLE)
          tit <- ""
        }
        plot(cost, Q,
             type = "l",
             lwd = LWD_OPTIMAL_CURVES,
             pch = PCH_OPTIMAL_CURVES,
             main = tit, xlab = COSTRATIO, ylab = LOSS,
             lty = LTY_OPTIMAL_CURVES,
             col = COLOUR_OPTIMAL_CURVES,
             ylim = c(0,YLIM))
      }
    }
  }

  # AUCC
  AUCC <- 0
  for (i in 2:(hc + 1)) {
    AUCC <- AUCC + (cost[i] - cost[i - 1]) * (Q[i] + Q[i - 1]) / 2
  }

  ###############################################
  ###### Expected cost Qnomintots (ALL) #########
  ############## for cost ratio #################
  if (!COSTLINESREP) {
    # Reduces the number of lines (eliminates repeated ones), but it computes
    # the red line in a wrong way if there are ties.
    # AUCCnomintots
    xnomintots <- rep(0, K1 * 4)
    ynomintots <- rep(0, K1 * 4)
    Leftacc <- 0
    Rightacc <- 0
    Leftmax <- 0
    Rightmax <- 0
    AUCCnomintots <- 0
    for (i in 1:K1) # for each cost line
    {
      # extremes of the cost line
      Leftv <- G1nomin[i] * pi1 * bfactor
      Rightv <- (1 - G0nomin[i]) * pi0 * bfactor
      # cesar: we weight by the length of the X segments, taking the
      # threshold methods of the Brier curves
      if (WEIGHTED_LOSSLINE) {
        wpm <- 0
        if (i == 1)
          wpm <- inp[,2][K1rep - 1]
        else if (i == K1rep)
          wpm <- 1 - inp[,2][1]
        else
          wpm <- inp[,2][(K1rep + 1 - i) - 1] - inp[,2][(K1rep + 1 - i)]
        # print(wpm)
        w <- wpm * K1rep # weight
        print(paste("line", i))
        print(w / K1)
        # print(inp)
        # print(K1rep)
        # print(K1)
      } else {
        w <- 1 # weight
      }
      Leftacc <- Leftacc + w * Leftv
      Rightacc <- Rightacc + w * Rightv
      Leftmax <- max(Leftmax, Leftv)
      Rightmax <- max(Rightmax, Rightv)
      xnomintots[(i - 1) * 4 + 1] <- 0
      xnomintots[(i - 1) * 4 + 2] <- 1
      xnomintots[(i - 1) * 4 + 3] <- 1
      xnomintots[(i - 1) * 4 + 4] <- 0
      ynomintots[(i - 1) * 4 + 1] <- Leftv
      ynomintots[(i - 1) * 4 + 2] <- Rightv
      ynomintots[(i - 1) * 4 + 3] <- 0
      ynomintots[(i - 1) * 4 + 4] <- 0
      linia <- (Rightv + Leftv) / 2
      # print(linia)
      AUCCnomintots <- AUCCnomintots + linia
      # print(AUCCnomintots)
    }
    AUCCnomintots <- AUCCnomintots / K1
    Leftacc <- Leftacc / K1
    Rightacc <- Rightacc / K1
    # print(c(Leftacc,Rightacc))
    if (DRAW_COSTRATIO_PLOTS) {
      if (DRAW_LOSSLINE) {
        points(c(0,1), c(Leftacc,Rightacc),
               type = "l",
               lty = LTY_LOSSLINE,
               col = COLOUR_LOSSLINE,
               lwd = LWD_LOSSLINE)
      }
      if (DRAW_COST_LINES) {
        if (!(OVERLAP_NONOPTIMAL)) {
          tit <- paste(EXPECTEDLOSS, BYCOST)
          if (NOTITLE) {
            par(mar = MARGINSNOTITLE)
            tit <- ""
          }
          plot(xnomintots, ynomintots,
               type = "l",
               main = tit, xlab = COSTRATIO, ylab = LOSS,
               lty = 2,
               col = COLOUR_COST_LINES)
        } else {
          points(xnomintots, ynomintots,
                 type = "l",
                 lty = 2,
                 col = COLOUR_COST_LINES)
        }
        if (DRAW_LOSSLINE)
          points(c(0,1), c(Leftacc,Rightacc),
                 type = "l",
                 lty = LTY_LOSSLINE,
                 col = COLOUR_LOSSLINE,
                 lwd = LWD_LOSSLINE)
        if (DRAW_COSTSQUARE) {
          points(c(0,1), c(0,0), type = "l", col = "grey", lwd = 1) # horizontal
          points(c(0,0), c(0,Leftmax), type = "l", col = "grey", lwd = 1) # left vertical
          # points(c(0,1),c(0,0), type= "l", col = "grey", lwd=1)
          points(c(1,1), c(0,Rightmax), type = "l", col = "grey", lwd = 1) # right vertical
          # points(c(1,1),c(0,1), type= "l", col = "grey", lwd=1)
        }
      }
      if (DRAW_OPTIMAL_CURVES_AGAIN) {
        points(cost, Q,
               type = "o",
               lwd = LWD_OPTIMAL_CURVE_AGAIN,
               col = COLOUR_OPTIMAL_CURVE_AGAIN,
               lty = LTY_OPTIMAL_CURVES,
               pch = PCH_OPTIMAL_CURVES)
      }
      if (DRAW_PROB_CURVES_AGAIN) {
        lines(costprobnew, Qprobnew,
              type = TYPE_PROB_CURVES,
              lwd = LWD_PROB_CURVE_AGAIN,
              col = COLOUR_PROB_CURVE_AGAIN,
              pch = PCH_PROB_CURVES)
      }
    }
    ## xnomintots and ynomintots have non-relevant elements
    hh <- length(xnomintots)
    xf <- c(1:(hh / 2))
    yf <- c(1:(hh / 2))
    kk <- 1
    for (i in 1:hh) {
      if (((i - 1) %% 4) < 2) {
        xf[kk] <- xnomintots[i]
        yf[kk] <- ynomintots[i]
        kk <- kk + 1
      }
    }
    # print(xnomintots)
    # print(ynomintots)
    # print(xf)
    # print(yf)
    # print(cost)
    # print(Q)
  }
  # plot(cost,xlim=c(0,1),ylim=c(0,1),Q, type= "l", main= "kkk",xlab="oo",axes=FALSE)
  resx = list("res" = AUCC, "yf" = yf) # returns a list with the value and the cost lines
  MinCost <- resx
}

############ Calculate MSE (Brier score)
MSE <- function(inp) {
  MSE <- 0
  for (i in 1:nrow(inp)) {
    MSE = MSE + (inp[i,1] - inp[i,2]) ^ 2
  }
  MSE / nrow(inp)
}

############ Calculate FMeasure
fmeas <- function(inp) {
  fm <- 0
  thr <- 0.5
  mat <- matrix(0, 2, 2)
  for (i in 1:nrow(inp)) {
    if (inp[i,2] > thr) {
      if (inp[i,1] == 1)
        mat[1,1] <- mat[1,1] + 1
      else
        mat[1,2] <- mat[1,2] + 1
    } else {
      if (inp[i,1] == 0)
        mat[2,2] <- mat[2,2] + 1
      else
        mat[2,1] <- mat[2,1] + 1
    }
  }
  pr <- mat[1,1] / (mat[1,1] + mat[1,2])
  rec <- mat[1,1] / (mat[1,1] + mat[2,1])
  fm <- 2 * pr * rec / (pr + rec)
  fm
}

############ Calculate Refinement and Calibration loss
############ (following the Peter & Katsuma approach, i.e., ROC curve segments)
MSEdecomp <- function(inp) {
  # BS = CAL + REF
  # We have T examples.
  # We split data according to segments in ROC space.
  # For each segment i we have ni examples, pi the proportion of positive
  # examples and p^i the average probability of positive examples in the segment.
  # CAL: 1/T * sum over each segment i of ni * (pi - p^i)^2
  # REF: 1/T * sum over each segment i of ni * pi * (1 - pi)

  ## find the segments
  nseg <- rep(0, nrow(inp)) ### a vector with an index indicating the segment
  k <- 1
  nseg[1] = 1
  for (i in 2:nrow(inp)) {
    if (inp[i,2] != inp[i - 1,2]) {
      k <- k + 1
      nseg[i] = k
    } else
      nseg[i] = k
  }
  # print(nseg)
  # inp  0.8 0.8 0.5 0.4 0.4 0.4
  # nseg 1   1   2   3   3   3
  numsegs <- nseg[nrow(inp)]
  # calibration
  cal <- 0
  ref <- 0
  for (i in 1:numsegs) {
    segini <- 0
    segfin <- 0
    for (j in 1:nrow(inp)) {
      if (nseg[j] > i)
        break
      if (nseg[j] == i && segini == 0)
        segini <- j
      segfin <- j
    }
    # print(segini)
    # print(segfin)
    tamseg <- segfin - segini + 1
    phati <- inp[segini,2]
    psi <- sum(inp[,1][segini:segfin]) / tamseg
    cal <- cal + tamseg * (phati - psi) ^ 2
    ref <- ref + tamseg * psi * (1 - psi)
    # print(ref)
  }
  # print(cal/nrow(inp))
  # print(ref/nrow(inp))
  res <- list("cal" = cal / nrow(inp), "ref" = ref / nrow(inp))
  res
}

############ Calculate AUC (Area under the ROC curve)
AUC <- function(inp) {
  AUC <- 0
  np <- 0
  nn <- 0
  for (i in 1:nrow(inp)) {
    if (inp[i,1] == 1) {
      np = np + 1
      for (j in 1:nrow(inp)) {
        if (inp[j,1] == 0) {
          if (inp[i,2] > inp[j,2])
            AUC = AUC + 1
          if (inp[i,2] == inp[j,2])
            AUC = AUC + 0.5
        }
      }
    } else
      nn = nn + 1
  }
  AUC = AUC / (nn * np)
  AUC
}

######## compute the cost
comp_cost <- function(classes, probs, threds) {
  tam <- length(classes)
  # km is the resolution
  res <- c(1:km)
  for (i in 1:km) {
    cost <- 0
    acc <- 0
    sk <- i / km
    for (j in 1:tam) {
      if (probs[j] > threds[i])
        classe <- 1
      else
        classe <- 0
      if (classes[j] == classe)
        acc <- acc + 1
      if (classes[j] != classe)
        if (classe == 0)
          cost <- cost + (1 - sk)
        else
          cost <- cost + sk
    }
    res[i] <- 2 * cost / (tam)
  }
  # print(acc)
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km - 1))
    fff <- fff + (1 / km) * ((res[i] + res[i + 1]) / 2)
  # print("area comp_cost")
  # print(fff)
  comp_cost <- list("points" = res, "area" = fff)
}

######## compute the cost with noise
comp_cost_noise <- function(classes, probs, threds) {
  tam <- length(classes)
  alfa <- 0.2
  # km is the resolution
  res <- c(1:km)
  for (i in 1:km) {
    cost <- 0
    acc <- 0
    sk <- i / km
    sk <- sk + runif(1, -alfa, alfa)
    for (j in 1:tam) {
      if (probs[j] > threds[i])
        classe <- 1
      else
        classe <- 0
      if (classes[j] == classe)
        acc <- acc + 1
      if (classes[j] != classe)
        if (classe == 0)
          cost <- cost + (1 - sk)
        else
          cost <- cost + sk
    }
    res[i] <- 2 * cost / (tam)
  }
  # print(acc)
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km - 1))
    fff <- fff + (1 / km) * ((res[i] + res[i + 1]) / 2)
  # print("area comp_cost")
  # print(fff)
  comp_cost_noise <- list("points" = res, "area" = fff)
}

# sample classifier, no ties, scores > 0 and < 1
# relatively good
# cls<-c(1,1,1,1,1,0,0,0,0,0)
# prbs<-c(1,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.2,0.1)
# thrs<-rep(0.44,100)
# print(comp_cost(cls,prbs,0.44))

######## compute the cost with noise using a beta function
# Takes C to compute the threshold and Chat for the costs
# chat to c model
comp_cost_beta_f <- function(classes, probs, threds, alphabeta = 10, mode = TRUE) {
  tam <- length(classes)
  # km is the resolution
  res <- c(1:km)
  vnoise <- c(1:km)
  # ka: number of different values for the beta distribution
  ka <- 1
  # kp: points per each ka
  # kp <- 100
  kp <- 10
  costtotal = 0
  for (i in 1:km) {
    ct = 0
    # for (j in 1:ka) {
    cr = i / km
    # beta=(alfa*(1-cr)-1+2*cr)/cr
    if (mode == TRUE) {
      alpha <- cr * alphabeta + 1
      beta <- alphabeta - alpha + 2 # beta dist mode in cr
    } else {
      # alpha <- alphabeta * cr
      # beta <- alphabeta * (1 - (cr-0.0000001)) # beta dist mean in cr
      alpha <- (cr - 0.0000001) * (alphabeta + 2)
      beta <- alphabeta - alpha + 2
    }
    ca = 0
    ask <- 0
    for (z in 1:kp) {
      cost <- 0
      acc <- 0
      # print(paste("alpha",alpha,"beta",beta))
      if (alphabeta != Inf)
        sk <- rbeta(1, alpha, beta)
      else
        sk <- i / km
      ask <- ask + sk
      for (j in 1:tam) {
        if (probs[j] > threds[i])
          classe <- 1
        else
          classe <- 0
        if (classes[j] == classe)
          acc <- acc + 1
        if (classes[j] != classe)
          if (classe == 0)
            cost <- cost + (1 - sk)
          else
            cost <- cost + sk
        # print("costp")
        # print(cost)
      }
      # print("cost")
      # print(cost)
      ca = ca + cost
    }
    # print("sk")
    # print(paste(i/km,ask/kp))
    # print(threds[i])
    vnoise[i] = ask / kp
    ct = ct + (ca / kp)
    # }
    costtotal = (ct / ka)
    res[i] <- 2 * costtotal / (tam)
    # print("res")
    # print(res[i])
  }
  # print(acc)
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km - 1))
    fff <- fff + (1 / km) * ((res[i] + res[i + 1]) / 2)
  # print("area comp_cost")
  # print(fff)
  comp_cost_beta <- list("points" = res, "area" = fff, "noise" = vnoise)
}

######## compute the cost with noise using a beta function
# Takes Chat to compute the threshold and C for the costs
# c to chat model
comp_cost_beta <- function(classes, probs, threds, alphabeta = 10, mode = TRUE) {
  tam <- length(classes)
  # km is the resolution
  res <- c(1:km)
  vnoise <- c(1:km)
  # ka: number of different values for the beta distribution
  ka <- 1
  # kp: points per each ka
  # kp <- 100
  kp <- ITERATIONS_BETA_NOISE
  costtotal = 0
  for (i in 1:km) {
    ct = 0
    # for (j in 1:ka) {
    cr = i / km
    # beta=(alfa*(1-cr)-1+2*cr)/cr
    if (mode == TRUE) {
      alpha <- cr * alphabeta + 1
      beta <- alphabeta - alpha + 2 # beta dist mode in cr
    } else {
      # alpha <- alphabeta * cr
      # beta <- alphabeta * (1 - (cr-0.0000001)) # beta dist mean in cr
      alpha <- (cr - 0.0000001) * (alphabeta + 2)
      beta <- alphabeta - alpha + 2
    }
    ca = 0
    ask <- 0
    for (z in 1:kp) {
      cost <- 0
      acc <- 0
      # print(paste("alpha",alpha,"beta",beta))
      if (alphabeta != Inf)
        sk <- rbeta(1, alpha, beta)
      else
        sk <- i / km
      ask <- ask + sk
      tt <- 1 + round(sk * (km - 1)) # thresholds go from 1 to km, and sk from 0 to 1
      for (j in 1:tam) {
        # print(tt)
        if (probs[j] > threds[tt])
          classe <- 1
        else
          classe <- 0
        if (classes[j] == classe)
          acc <- acc + 1
        if (classes[j] != classe)
          if (classe == 0)
            cost <- cost + (1 - cr)
          else
            cost <- cost + cr
        # print("costp")
        # print(cost)
      }
      # print("cost")
      # print(cost)
      ca = ca + cost
    }
    # print("sk")
    # print(paste(i/km,ask/kp))
    # print(threds[i])
    vnoise[i] = ask / kp
    ct = ct + (ca / kp)
    # }
    costtotal = (ct / ka)
    res[i] <- 2 * costtotal / (tam)
    # print("res")
    # print(res[i])
  }
  # print(acc)
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km - 1))
    fff <- fff + (1 / km) * ((res[i] + res[i + 1]) / 2)
  # print("area comp_cost")
  # print(fff)
  comp_cost_beta <- list("points" = res, "area" = fff, "noise" = vnoise)
}

######## compute the cost with noise using a beta function
######## for the rate-driven threshold method
# Takes Chat to compute the threshold and C for the costs
# c to chat model
comp_cost_beta_rd <- function(classes, probs, threds1, threds2, pes,
                              alphabeta = 10, mode = TRUE) {
  tam <- length(classes)
  # km is the resolution
  res <- c(1:km)
  vnoise <- c(1:km)
  # ka: number of different values for the beta distribution
  ka <- 1
  # kp: points per each ka
  # kp <- 100
  kp <- ITERATIONS_BETA_NOISE
  costtotal = 0
  for (i in 1:km) {
    ct = 0
    # for (j in 1:ka) {
    cr = i / km
    # beta=(alfa*(1-cr)-1+2*cr)/cr
    if (mode == TRUE) {
      alpha <- cr * alphabeta + 1
      beta <- alphabeta - alpha + 2 # beta dist mode in cr
    } else {
      # alpha <- alphabeta * cr
      # beta <- alphabeta * (1 - (cr-0.0000001)) # beta dist mean in cr
      alpha <- (cr - 0.0000001) * (alphabeta + 2)
      beta <- alphabeta - alpha + 2
    }
    ca = 0
    ask <- 0
    for (z in 1:kp) {
      cost1 <- 0
      cost2 <- 0
      acc <- 0
      # print(paste("alpha",alpha,"beta",beta))
      if (alphabeta != Inf)
        sk <- rbeta(1, alpha, beta)
      else
        sk <- i / km
      ask <- ask + sk
      tt <- 1 + round(sk * (km - 1)) # thresholds go from 1 to km, and sk from 0 to 1
      for (j in 1:tam) {
        # first threshold
        # print(tt)
        if (probs[j] > threds1[tt])
          classe <- 1
        else
          classe <- 0
        if (classes[j] == classe)
          acc <- acc + 1
        if (classes[j] != classe)
          if (classe == 0)
            cost1 <- cost1 + (1 - cr)
          else
            cost1 <- cost1 + cr
        # second threshold
        # print(tt)
        if (probs[j] > threds2[tt])
          classe <- 1
        else
          classe <- 0
        if (classes[j] == classe)
          acc <- acc + 1
        if (classes[j] != classe)
          if (classe == 0)
            cost2 <- cost2 + (1 - cr)
          else
            cost2 <- cost2 + cr
        # print("costp")
        # print(cost)
      }
      # print("cost")
      # print(cost)
      cost <- (pes[tt] * cost1) + ((1 - pes[tt]) * cost2) # weighted average according to pes
      ca = ca + cost
    }
    # print("sk")
    # print(paste(i/km,ask/kp))
    # print(threds[i])
    vnoise[i] = ask / kp
    ct = ct + (ca / kp)
    # }
    costtotal = (ct / ka)
    res[i] <- 2 * costtotal / (tam)
    # print("res")
    # print(res[i])
  }
  # print(acc)
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km - 1))
    fff <- fff + (1 / km) * ((res[i] + res[i + 1]) / 2)
  # print("area comp_cost")
  # print(fff)
  comp_cost_beta <- list("points" = res, "area" = fff, "noise" = vnoise)
}

#######################
# computes the cost given a threshold and a cost ratio cr
comp_cost_single <- function(classes, probs, thred, cr) {
  tam <- length(classes)
  cost <- 0
  for (j in 1:tam) {
    # print(tt)
    if (probs[j] > thred)
      classe <- 1
    else
      classe <- 0
    # if (classes[j]==classe) acc<-acc+1
    if (classes[j] != classe)
      if (classe == 0)
        cost <- cost + (1 - cr)
      else
        cost <- cost + cr
    # print("costp")
    # print(cost)
  }
  comp_cost_single <- 2 * cost / tam
}

#######################
# computes the cost given a threshold and a cr for the rate-driven case (interpolates)
comp_cost_single_rd <- function(classes, probs, threds1, threds2, pes, cr) {
  tam <- length(classes)
  cost1 <- 0
  cost2 <- 0
  for (j in 1:tam) {
    # first threshold
    # print(tt)
    if (probs[j] > threds1)
      classe <- 1
    else
      classe <- 0
    if (classes[j] != classe)
      if (classe == 0)
        cost1 <- cost1 + (1 - cr)
      else
        cost1 <- cost1 + cr
    # second threshold
    # print(tt)
    if (probs[j] > threds2)
      classe <- 1
    else
      classe <- 0
    if (classes[j] != classe)
      if (classe == 0)
        cost2 <- cost2 + (1 - cr)
      else
        cost2 <- cost2 + cr
  }
  # print("cost")
  # print(cost)
  cost <- (pes * cost1) + ((1 - pes) * cost2) # weighted average according to pes
  comp_cost_single_rd <- 2 * cost / tam
}

###################
# From a list of cost lines, a point, and a cost point, returns which cost line is selected
whichLC <- function(costlines, selpoint, cr) {
  nlin <- length(costlines) / 2
  pos <- c(1:nlin)
  k <- 1
  for (j in 1:nlin) {
    pos[j] = cr * (costlines[k + 1] - costlines[k]) + costlines[k]
    k <- k + 2
  }
  # print(cr)
  # print(pos)
  # print(selpoint)
  pos <- round(pos, digits = 6) ## rounding needed, otherwise the match fails ???
  selpoint <- round(selpoint, digits = 6) ## rounding needed, otherwise the match fails ???
  # print((which(pos==selpoint)))
  whichLC <- (which(pos == selpoint)[1])
}
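The AUC function above is the pairwise (Mann-Whitney) formulation: the fraction of (positive, negative) pairs ranked correctly, with ties counting 0.5. A compact vectorised sketch of the same quantity (pairwise_auc is a hypothetical helper, not part of the original file):

```r
# Pairwise (Mann-Whitney) AUC: fraction of (positive, negative) score pairs
# where the positive outranks the negative; ties contribute 0.5.
pairwise_auc <- function(labels, scores) {
  pos <- scores[labels == 1]
  neg <- scores[labels == 0]
  mean(outer(pos, neg, function(p, n) (p > n) + 0.5 * (p == n)))
}
pairwise_auc(c(1, 1, 0, 0), c(0.9, 0.4, 0.6, 0.1))  # 3 of 4 pairs correct -> 0.75
```

This should agree with the double-loop AUC() above on any input without NA scores.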
/cost_functions.R
no_license
ceferra/ThresholdChoiceMethods
R
false
false
30,221
r
############################################### ############################################### ################ FUNCTIONS #################### ############################################### ############ Calculate an Empirical Loss Curve (for costs) LossCurveC <- function(inp, ThresFun) { resS <- RESOLUTION_EMPIRICAL_CURVES range <- (0:resS) / resS ELoss <- 1:(resS + 1) for (c in range) { if (THRESHOLD_CHOICE_METHOD == 4) { #Uniform. We need to average RESOL <- 100 LV <- rep(0, RESOL) for (i in 1:RESOL) { t <- ThresFun(inp, c) LV[i] <- LossC(inp, t, c) } ELoss[round(c * resS + 1)] <- mean(LV) } else if (THRESHOLD_CHOICE_METHOD == 7) { # Interpolating vec <- RateDrivenThresFun3C(inp, c) tL <- vec[1] pL <- vec[2] tR <- vec[3] pR <- vec[4] ELL <- LossC(inp, tL, c) ELR <- LossC(inp, tR, c) ELoss[round(c * resS + 1)] <- ELL * pL + ELR * pR } else { t <- ThresFun(inp, c) ELoss[round(c * resS + 1)] <- LossC(inp, t, c) } } ELoss } # ############################################################# ############ Calculate Loss (for costs) given a classifier and a threshold LossC <- function(inp, t, c) { lo0 <- 0 lo1 <- 0 n0n1 <- nrow(inp) pi0 <- sum(inp[,1] == 0) / n0n1 pi1 <- sum(inp[,1] == 1) / n0n1 for (i in 1:n0n1) { if (inp[i,2] < t) { # if (inp[i,2] <= t) { pred <- 0 } else pred <- 1 if (pred != inp[i,1]) { if (pred == 1) { l0 <- c # 1 for 0-1 loss lo0 <- lo0 + l0 } else { l1 <- (1 - c) # 1 for 0-1 loss lo1 <- lo1 + l1 } } } lo0 <- bfactor * lo0 lo1 <- bfactor * lo1 (lo0 + lo1) / n0n1 } ############ Calculate ExpLoss (for skews) ExpLossZ <- function(inp, ThresFun) { ELoss <- 0 resS <- RESOLUTION_EMPIRICAL_CURVES range <- (0:resS) / resS for (z in range) { t <- ThresFun(inp, z) ELoss <- ELoss + LossZ(inp, t, z) } ELoss / (resS + 1) } # # Example of use: # # ExpLossZ(inp,ProbThresFunZ) # ExpLossZ(inp,NatThresFunZ) ############ Calculate ExpLoss (for costs) ExpLossC <- function(inp, ThresFun) { ELoss <- 0 resS <- RESOLUTION_EMPIRICAL_CURVES range <- (0:resS) / resS for (c in 
range) { t <- ThresFun(inp, c) ELoss <- ELoss + LossC(inp, t, c) } ELoss / (resS + 1) } # # Example of use: # # ExpLossC(inp,ProbThresFunC) # ExpLossC(inp,NatThresFunC) ############ Calculate an Empirical Loss Curve (for skews) LossCurveZ <- function(inp, ThresFun) { resS <- RESOLUTION_EMPIRICAL_CURVES range <- (0:resS) / resS ELoss <- 1:(resS + 1) for (z in range) { if (THRESHOLD_CHOICE_METHOD == 4) { #Uniform. We need to average RESOL <- 100 LV <- rep(0, RESOL) for (i in 1:RESOL) { t <- ThresFun(inp, z) LV[i] <- LossZ(inp, t, z) } ELoss[round(z * resS + 1)] <- mean(LV) } else if (THRESHOLD_CHOICE_METHOD == 7) { # Interpolating vec <- RateDrivenThresFun3Z(inp, z) tL <- vec[1] pL <- vec[2] tR <- vec[3] pR <- vec[4] ELL <- LossZ(inp, tL, z) ELR <- LossZ(inp, tR, z) ELoss[round(z * resS + 1)] <- ELL * pL + ELR * pR } else { t <- ThresFun(inp, z) ELoss[round(z * resS + 1)] <- LossZ(inp, t, z) } } ELoss } # # Example of use: # # LossCurveZ(inp,ProbThresFunZ) # LossCurveZ(inp,NatThresFunZ) ############ Calculate Loss (for skews) given a classifier and a threshold LossZ <- function(inp, t, z) { lo0 <- 0 lo1 <- 0 n0n1 <- nrow(inp) pi0 <- sum(inp[,1] == 0) / n0n1 pi1 <- sum(inp[,1] == 1) / n0n1 for (i in 1:n0n1) { if (inp[i,2] < t) { # if (inp[i,2] <= t) { pred <- 0 } else pred <- 1 if (pred != inp[i,1]) { if (pred == 1) { l0 <- z # 1 for 0-1 loss lo0 <- lo0 + l0 } else { l1 <- (1 - z) # 1 for 0-1 loss lo1 <- lo1 + l1 } } } lo0 <- bfactor * 0.5 * lo0 / pi0 lo1 <- bfactor * 0.5 * lo1 / pi1 (lo0 + lo1) / n0n1 } ### La béstia!! 
############ Calculate an Empirical Loss Curve (for costs) MinCost <- function(inp,title = "") { ############################################### #### SOME GENERAL VARIABLES USED ELSEWHERE #### n0n1 <- nrow(inp) x <- t(inp) zord <- order(x[2,]) sc <- x[,zord] n1 <- sum(sc[1,]) n0 <- n0n1 - n1 pi0 <- n0 / n0n1 pi1 <- n1 / n0n1 # We modify inp to be ordered by scores (this is needed for several plots) # order data into increasing scores #######zord <- order(x[2,]) zordrev <- rev(zord) # zordrev <- zord # NO REVERSE screv <- x[,zordrev] inp <- t(screv) #Decreasing order if (n0 == 0) print("No elements of class 0") if (n1 == 0) print("No elements of class 1") # We calculate means class0 <- x[,x[1,] == 0] class1 <- x[,x[1,] == 1] if (n0 > 1) { s0 <- mean(class0[2,]) } else s0 <- class0[2] if (n1 > 1) { s1 <- mean(class1[2,]) } else s1 <- class1[2] calpi1 <- pi0 * s0 + pi1 * s1 # alpha and betad are the parameters in the beta # cost distribution ~ c^alpha * (1-c)^betad alpha <- 2 betad <- 2 ############################################### ##### ACCUMULATED EMPIRICAL DIST. F0, F1 ###### # Calculate the raw ROC, replacing any tied # sequences by a ‘diagonal’ in the ROC curve. # The raw ROC starts at F0[1]=0, F1[1]=0, and ends at # F0[K1]=n0, F1[K1]=n1. # F0 and F1 are counts, G0 and G1 are normalised in (0,1) # F0 and F1 eliminate merge repeated values. # F0rep and F1rep keep the repeated values as different elements in the vector. # K1 is the number of non-repeated scores. It will be used everywhere sc <- cbind(sc,sc[,n0n1]) # Adds the last example. Now sc is one element longer. 
  F0 <- c(0:n0n1)
  F1 <- c(0:n0n1)
  K1 <- 1
  k <- 2
  for (i in 1:n0n1) {
    F0[k] <- F0[K1] + (1 - sc[1,i])
    F1[k] <- F1[K1] + sc[1,i]
    K1 <- k
    k <- if (sc[2,i+1] == sc[2,i]) (k) else (k + 1)
  }
  F0 <- F0[1:K1]
  F1 <- F1[1:K1]
  G0nomin <- F0 / n0
  G1nomin <- F1 / n1

  scshared <- sc[1,]
  j <- 1
  Ac <- sc[1,1]
  for (i in 2:n0n1) {
    if (sc[2,i] == sc[2,i-1]) {
      j <- j + 1
      Ac <- Ac + sc[1,i]
    } else {
      for (k in (i-1):(i-j)) scshared[k] <- Ac / j
      j <- 1
      Ac <- sc[1,i]
    }
  }
  F0rep <- c(0:n0n1)
  F1rep <- c(0:n0n1)
  K1rep <- 1
  krep <- 2
  for (i in 1:n0n1) {
    F0rep[krep] <- F0rep[K1rep] + (1 - scshared[i])
    F1rep[krep] <- F1rep[K1rep] + scshared[i]
    K1rep <- krep
    krep <- krep + 1
  }
  F0rep <- F0rep[1:K1rep]
  F1rep <- F1rep[1:K1rep]
  G0nominrep <- F0rep / n0
  G1nominrep <- F1rep / n1

  ###############################################
  ######## CONVEX HULL - MINIMUM COSTS ##########
  # Find the upper concave hull
  # G0 and G1 are normalised in (0,1) and optimal
  # Repeated values are merged
  # hc will be the number of segments in the convex hull
  G0 <- c(0:(K1-1))
  G1 <- c(0:(K1-1))
  i <- 1
  hc <- 1
  while (i < K1) {
    c1 <- c((i+1):K1)
    for (j in (i+1):K1) {
      u1 <- (F1[j] - F1[i])
      u0 <- (F0[j] - F0[i])
      c1[j] <- u1 / (u1 + u0)
    }
    argmin <- i + 1
    c1min <- c1[i+1]
    for (k in (i+1):K1) {
      argmin <- if (c1[k] <= c1min) (k) else (argmin)
      c1min <- c1[argmin]
    }
    hc <- hc + 1
    G0[hc] <- F0[argmin]
    G1[hc] <- F1[argmin]
    i <- argmin
  }
  G0 <- G0[1:hc] / n0
  G1 <- G1[1:hc] / n1

  # x-axis using the uniform of the convex hull for cost ratios
  cost <- c(1:(hc+1))
  cost[1] <- 0
  cost[hc+1] <- 1
  for (i in 2:hc) {
    cost[i] <- 1 * pi1 * (G1[i] - G1[i-1]) /
      (pi0 * (G0[i] - G0[i-1]) + pi1 * (G1[i] - G1[i-1]))
  }

  ###############################################
  ###### Expected cost Q (minimal) ##############
  ############## for cost ratio #################
  # CALCULATE THE MINIMUM LOSS VS c CURVE (UNNORMALISED, HAND)
  Q <- c(1:(hc+1))
  for (i in 1:hc) {
    Q[i] <- bfactor * (cost[i] * pi0 * (1 - G0[i]) + (1 - cost[i]) * pi1 * G1[i])
  }
  Q[hc+1] <- 0

  if (DRAW_COSTRATIO_PLOTS) {
    # only plots the space
    if (PLOT_SPACE) {
      tit <- title
      if (NOTITLE) {
        par(mar = MARGINSNOTITLE)
        tit <- ""
      }
      plot(c(0,1), c(0,0), ylim = c(0, YLIM),
           main = tit, xlab = COSTRATIO, ylab = LOSS, col = "white")
      if (LEGEND) {
        for (i in 1:nrow(inp)) {
          text(0.2 + i * 0.08, 0.9, inp[i,1])
          text(0.2 + i * 0.08, 0.8, inp[i,2])
        }
      }
    }
    if (DRAW_OPTIMAL_CURVES) {
      if (OVERLAP_PROB_OPTIMAL) {
        points(cost, Q, type = "o",
               lwd = LWD_OPTIMAL_CURVES, pch = PCH_OPTIMAL_CURVES,
               col = COLOUR_OPTIMAL_CURVES, lty = LTY_OPTIMAL_CURVES)
      } else {
        tit <- paste(EXPECTEDLOSS, BYCOST)
        if (NOTITLE) {
          par(mar = MARGINSNOTITLE)
          tit <- ""
        }
        plot(cost, Q, type = "l",
             lwd = LWD_OPTIMAL_CURVES, pch = PCH_OPTIMAL_CURVES,
             main = tit, xlab = COSTRATIO, ylab = LOSS,
             lty = LTY_OPTIMAL_CURVES, col = COLOUR_OPTIMAL_CURVES,
             ylim = c(0, YLIM))
      }
    }
  }

  # AUCC
  AUCC <- 0
  for (i in 2:(hc+1)) {
    AUCC <- AUCC + (cost[i] - cost[i-1]) * (Q[i] + Q[i-1]) / 2
  }

  ###############################################
  ###### Expected cost Qnomintots (ALL) #########
  ############## for cost ratio #################
  if (!COSTLINESREP) {
    # Reduces the number of lines (eliminates repeated ones), but it computes
    # the red line in a wrong way if there are ties.
    # AUCCnomintots
    xnomintots <- rep(0, K1 * 4)
    ynomintots <- rep(0, K1 * 4)
    Leftacc <- 0
    Rightacc <- 0
    Leftmax <- 0
    Rightmax <- 0
    AUCCnomintots <- 0
    for (i in 1:K1) {  # for each cost line
      # extremes of the cost line
      Leftv <- G1nomin[i] * pi1 * bfactor
      Rightv <- (1 - G0nomin[i]) * pi0 * bfactor
      # weight by the length of the X segments, taking the threshold
      # choice method from the Brier curves
      if (WEIGHTED_LOSSLINE) {
        wpm <- 0
        if (i == 1) wpm <- inp[,2][K1rep - 1]
        else if (i == K1rep) wpm <- 1 - inp[,2][1]
        else wpm <- inp[,2][(K1rep + 1 - i) - 1] - inp[,2][(K1rep + 1 - i)]
        w <- wpm * K1rep  # weight
        print(paste("line", i))
        print(w / K1)
      } else {
        w <- 1  # weight
      }
      Leftacc <- Leftacc + w * Leftv
      Rightacc <- Rightacc + w * Rightv
      Leftmax <- max(Leftmax, Leftv)
      Rightmax <- max(Rightmax, Rightv)
      xnomintots[(i-1)*4 + 1] <- 0
      xnomintots[(i-1)*4 + 2] <- 1
      xnomintots[(i-1)*4 + 3] <- 1
      xnomintots[(i-1)*4 + 4] <- 0
      ynomintots[(i-1)*4 + 1] <- Leftv
      ynomintots[(i-1)*4 + 2] <- Rightv
      ynomintots[(i-1)*4 + 3] <- 0
      ynomintots[(i-1)*4 + 4] <- 0
      linia <- (Rightv + Leftv) / 2
      AUCCnomintots <- AUCCnomintots + linia
    }
    AUCCnomintots <- AUCCnomintots / K1
    Leftacc <- Leftacc / K1
    Rightacc <- Rightacc / K1

    if (DRAW_COSTRATIO_PLOTS) {
      if (DRAW_LOSSLINE) {
        points(c(0,1), c(Leftacc, Rightacc), type = "l",
               lty = LTY_LOSSLINE, col = COLOUR_LOSSLINE, lwd = LWD_LOSSLINE)
      }
      if (DRAW_COST_LINES) {
        if (!OVERLAP_NONOPTIMAL) {
          tit <- paste(EXPECTEDLOSS, BYCOST)
          if (NOTITLE) {
            par(mar = MARGINSNOTITLE)
            tit <- ""
          }
          plot(xnomintots, ynomintots, type = "l",
               main = tit, xlab = COSTRATIO, ylab = LOSS,
               lty = 2, col = COLOUR_COST_LINES)
        } else {
          points(xnomintots, ynomintots, type = "l",
                 lty = 2, col = COLOUR_COST_LINES)
        }
        if (DRAW_LOSSLINE)
          points(c(0,1), c(Leftacc, Rightacc), type = "l",
                 lty = LTY_LOSSLINE, col = COLOUR_LOSSLINE, lwd = LWD_LOSSLINE)
        if (DRAW_COSTSQUARE) {
          points(c(0,1), c(0,0), type = "l", col = "grey", lwd = 1)          # horizontal
          points(c(0,0), c(0, Leftmax), type = "l", col = "grey", lwd = 1)   # left vertical
          points(c(1,1), c(0, Rightmax), type = "l", col = "grey", lwd = 1)  # right vertical
        }
      }
      if (DRAW_OPTIMAL_CURVES_AGAIN) {
        points(cost, Q, type = "o",
               lwd = LWD_OPTIMAL_CURVE_AGAIN, col = COLOUR_OPTIMAL_CURVE_AGAIN,
               lty = LTY_OPTIMAL_CURVES, pch = PCH_OPTIMAL_CURVES)
      }
      if (DRAW_PROB_CURVES_AGAIN) {
        lines(costprobnew, Qprobnew, type = TYPE_PROB_CURVES,
              lwd = LWD_PROB_CURVE_AGAIN, col = COLOUR_PROB_CURVE_AGAIN,
              pch = PCH_PROB_CURVES)
      }
    }

    ## xnomintots and ynomintots have elements that are not relevant
    hh <- length(xnomintots)
    xf <- c(1:(hh/2))
    yf <- c(1:(hh/2))
    kk <- 1
    for (i in 1:hh) {
      if (((i-1) %% 4) < 2) {
        xf[kk] <- xnomintots[i]
        yf[kk] <- ynomintots[i]
        kk <- kk + 1
      }
    }
  }

  resx <- list("res" = AUCC, "yf" = yf)  # returns a list with the value and the cost lines
  MinCost <- resx
}

############ Calculate MSE (Brier score)
MSE <- function(inp) {
  MSE <- 0
  for (i in 1:nrow(inp)) {
    MSE <- MSE + (inp[i,1] - inp[i,2])^2
  }
  MSE / nrow(inp)
}

############ Calculate F-measure
fmeas <- function(inp) {
  fm <- 0
  thr <- 0.5
  mat <- matrix(0, 2, 2)
  for (i in 1:nrow(inp)) {
    if (inp[i,2] > thr) {
      if (inp[i,1] == 1) mat[1,1] <- mat[1,1] + 1
      else mat[1,2] <- mat[1,2] + 1
    } else {
      if (inp[i,1] == 0) mat[2,2] <- mat[2,2] + 1
      else mat[2,1] <- mat[2,1] + 1
    }
  }
  pr <- mat[1,1] / (mat[1,1] + mat[1,2])
  rec <- mat[1,1] / (mat[1,1] + mat[2,1])
  fm <- 2 * pr * rec / (pr + rec)
  fm
}

############ Calculate Refinement and Calibration loss
# (following the Peter & Katsuma approach, i.e., ROC curve segments)
MSEdecomp <- function(inp) {
  # BS = CAL + REF
  # We have T examples; we split the data according to segments in ROC space.
  # For each segment i we have ni examples, pi the proportion of positive
  # examples and p^i the average probability of positive examples in the segment.
  # CAL: 1/T * sum over segments i of ni * (pi - p^i)^2
  # REF: 1/T * sum over segments i of ni * pi * (1 - pi)

  # find segments
  nseg <- rep(0, nrow(inp))  # a vector indexing the segment of each example
  k <- 1
  nseg[1] <- 1
  for (i in 2:nrow(inp)) {
    if (inp[i,2] != inp[i-1,2]) {
      k <- k + 1
      nseg[i] <- k
    } else nseg[i] <- k
  }
  # inp  0.8 0.8 0.5 0.4 0.4 0.4
  # nseg 1   1   2   3   3   3
  numsegs <- nseg[nrow(inp)]

  # calibration and refinement
  cal <- 0
  ref <- 0
  for (i in 1:numsegs) {
    segini <- 0
    segfin <- 0
    for (j in 1:nrow(inp)) {
      if (nseg[j] > i) break
      if (nseg[j] == i && segini == 0) segini <- j
      segfin <- j
    }
    tamseg <- segfin - segini + 1
    phati <- inp[segini,2]
    psi <- sum(inp[,1][segini:segfin]) / tamseg
    cal <- cal + tamseg * (phati - psi)^2
    ref <- ref + tamseg * psi * (1 - psi)
  }
  res <- list("cal" = cal / nrow(inp), "ref" = ref / nrow(inp))
  res
}

############ Calculate AUC (area under the ROC curve, by pairwise comparison)
AUC <- function(inp) {
  AUC <- 0
  np <- 0
  nn <- 0
  for (i in 1:nrow(inp)) {
    if (inp[i,1] == 1) {
      np <- np + 1
      for (j in 1:nrow(inp)) {
        if (inp[j,1] == 0) {
          if (inp[i,2] > inp[j,2]) AUC <- AUC + 1
          if (inp[i,2] == inp[j,2]) AUC <- AUC + 0.5
        }
      }
    } else nn <- nn + 1
  }
  AUC <- AUC / (nn * np)
  AUC
}

######## compute the cost
comp_cost <- function(classes, probs, threds) {
  tam <- length(classes)
  # km is the resolution (global)
  res <- c(1:km)
  for (i in 1:km) {
    cost <- 0
    acc <- 0
    sk <- i / km
    for (j in 1:tam) {
      if (probs[j] > threds[i]) classe <- 1 else classe <- 0
      if (classes[j] == classe) acc <- acc + 1
      if (classes[j] != classe)
        if (classe == 0) cost <- cost + (1 - sk) else cost <- cost + sk
    }
    res[i] <- 2 * cost / tam
  }
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km-1)) fff <- fff + (1/km) * ((res[i] + res[i+1]) / 2)
  comp_cost <- list("points" = res, "area" = fff)
}

######## compute the cost with noise
comp_cost_noise <- function(classes, probs, threds) {
  tam <- length(classes)
  alfa <- 0.2
  # km is the resolution (global)
  res <- c(1:km)
  for (i in 1:km) {
    cost <- 0
    acc <- 0
    sk <- i / km
    sk <- sk + runif(1, -alfa, alfa)
    for (j in 1:tam) {
      if (probs[j] > threds[i]) classe <- 1 else classe <- 0
      if (classes[j] == classe) acc <- acc + 1
      if (classes[j] != classe)
        if (classe == 0) cost <- cost + (1 - sk) else cost <- cost + sk
    }
    res[i] <- 2 * cost / tam
  }
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km-1)) fff <- fff + (1/km) * ((res[i] + res[i+1]) / 2)
  comp_cost_noise <- list("points" = res, "area" = fff)
}

# sample classifier, no ties, scores > 0 and < 1, relatively good:
# cls <- c(1,1,1,1,1,0,0,0,0,0)
# prbs <- c(1,0.9,0.8,0.7,0.6,0.5,0.4,0.3,0.2,0.1)
# thrs <- rep(0.44,100)
# print(comp_cost(cls,prbs,0.44))

######## compute the cost with noise using a beta function
# Uses C to compute the threshold and Chat for the costs
# chat-to-c model
comp_cost_beta_f <- function(classes, probs, threds, alphabeta = 10, mode = TRUE) {
  tam <- length(classes)
  # km is the resolution (global)
  res <- c(1:km)
  vnoise <- c(1:km)
  ka <- 1   # number of different values for the beta dist
  kp <- 10  # points per ka
  costtotal <- 0
  for (i in 1:km) {
    ct <- 0
    cr <- i / km
    if (mode == TRUE) {
      alpha <- cr * alphabeta + 1
      beta <- alphabeta - alpha + 2  # beta dist with mode at cr
    } else {
      alpha <- (cr - 0.0000001) * (alphabeta + 2)
      beta <- alphabeta - alpha + 2  # beta dist with mean at cr
    }
    ca <- 0
    ask <- 0
    for (z in 1:kp) {
      cost <- 0
      acc <- 0
      if (alphabeta != Inf) sk <- rbeta(1, alpha, beta) else sk <- i / km
      ask <- ask + sk
      for (j in 1:tam) {
        if (probs[j] > threds[i]) classe <- 1 else classe <- 0
        if (classes[j] == classe) acc <- acc + 1
        if (classes[j] != classe)
          if (classe == 0) cost <- cost + (1 - sk) else cost <- cost + sk
      }
      ca <- ca + cost
    }
    vnoise[i] <- ask / kp
    ct <- ct + (ca / kp)
    costtotal <- (ct / ka)
    res[i] <- 2 * costtotal / tam
  }
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km-1)) fff <- fff + (1/km) * ((res[i] + res[i+1]) / 2)
  comp_cost_beta_f <- list("points" = res, "area" = fff, "noise" = vnoise)
}

######## compute the cost with noise using a beta function
# Uses Chat to compute the threshold and C for the costs
# c-to-chat model
comp_cost_beta <- function(classes, probs, threds, alphabeta = 10, mode = TRUE) {
  tam <- length(classes)
  # km is the resolution (global)
  res <- c(1:km)
  vnoise <- c(1:km)
  ka <- 1                      # number of different values for the beta dist
  kp <- ITERATIONS_BETA_NOISE  # points per ka
  costtotal <- 0
  for (i in 1:km) {
    ct <- 0
    cr <- i / km
    if (mode == TRUE) {
      alpha <- cr * alphabeta + 1
      beta <- alphabeta - alpha + 2  # beta dist with mode at cr
    } else {
      alpha <- (cr - 0.0000001) * (alphabeta + 2)
      beta <- alphabeta - alpha + 2  # beta dist with mean at cr
    }
    ca <- 0
    ask <- 0
    for (z in 1:kp) {
      cost <- 0
      acc <- 0
      if (alphabeta != Inf) sk <- rbeta(1, alpha, beta) else sk <- i / km
      ask <- ask + sk
      tt <- 1 + round(sk * (km - 1))  # thresholds go from 1 to km, sk from 0 to 1
      for (j in 1:tam) {
        if (probs[j] > threds[tt]) classe <- 1 else classe <- 0
        if (classes[j] == classe) acc <- acc + 1
        if (classes[j] != classe)
          if (classe == 0) cost <- cost + (1 - cr) else cost <- cost + cr
      }
      ca <- ca + cost
    }
    vnoise[i] <- ask / kp
    ct <- ct + (ca / kp)
    costtotal <- (ct / ka)
    res[i] <- 2 * costtotal / tam
  }
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km-1)) fff <- fff + (1/km) * ((res[i] + res[i+1]) / 2)
  comp_cost_beta <- list("points" = res, "area" = fff, "noise" = vnoise)
}

######## compute the cost with noise using a beta function for the rate-driven threshold method
# Uses Chat to compute the threshold and C for the costs
# c-to-chat model
comp_cost_beta_rd <- function(classes, probs, threds1, threds2, pes, alphabeta = 10, mode = TRUE) {
  tam <- length(classes)
  # km is the resolution (global)
  res <- c(1:km)
  vnoise <- c(1:km)
  ka <- 1                      # number of different values for the beta dist
  kp <- ITERATIONS_BETA_NOISE  # points per ka
  costtotal <- 0
  for (i in 1:km) {
    ct <- 0
    cr <- i / km
    if (mode == TRUE) {
      alpha <- cr * alphabeta + 1
      beta <- alphabeta - alpha + 2  # beta dist with mode at cr
    } else {
      alpha <- (cr - 0.0000001) * (alphabeta + 2)
      beta <- alphabeta - alpha + 2  # beta dist with mean at cr
    }
    ca <- 0
    ask <- 0
    for (z in 1:kp) {
      cost1 <- 0
      cost2 <- 0
      acc <- 0
      if (alphabeta != Inf) sk <- rbeta(1, alpha, beta) else sk <- i / km
      ask <- ask + sk
      tt <- 1 + round(sk * (km - 1))  # thresholds go from 1 to km, sk from 0 to 1
      for (j in 1:tam) {
        # first threshold vector
        if (probs[j] > threds1[tt]) classe <- 1 else classe <- 0
        if (classes[j] == classe) acc <- acc + 1
        if (classes[j] != classe)
          if (classe == 0) cost1 <- cost1 + (1 - cr) else cost1 <- cost1 + cr
        # second threshold vector
        if (probs[j] > threds2[tt]) classe <- 1 else classe <- 0
        if (classes[j] == classe) acc <- acc + 1
        if (classes[j] != classe)
          if (classe == 0) cost2 <- cost2 + (1 - cr) else cost2 <- cost2 + cr
      }
      cost <- (pes[tt] * cost1) + ((1 - pes[tt]) * cost2)  # weighted average according to pes
      ca <- ca + cost
    }
    vnoise[i] <- ask / kp
    ct <- ct + (ca / kp)
    costtotal <- (ct / ka)
    res[i] <- 2 * costtotal / tam
  }
  # sum the cost under the lines
  fff <- 0
  for (i in 1:(km-1)) fff <- fff + (1/km) * ((res[i] + res[i+1]) / 2)
  comp_cost_beta_rd <- list("points" = res, "area" = fff, "noise" = vnoise)
}

#######################
# computes the cost given a threshold and a cost ratio cr
comp_cost_single <- function(classes, probs, thred, cr) {
  tam <- length(classes)
  cost <- 0
  for (j in 1:tam) {
    if (probs[j] > thred) classe <- 1 else classe <- 0
    if (classes[j] != classe)
      if (classe == 0) cost <- cost + (1 - cr) else cost <- cost + cr
  }
  comp_cost_single <- 2 * cost / tam
}

#######################
# computes the cost given a threshold and a cr for the rate-driven case (interpolates)
comp_cost_single_rd <- function(classes, probs, threds1, threds2, pes, cr) {
  tam <- length(classes)
  cost1 <- 0
  cost2 <- 0
  for (j in 1:tam) {
    # first threshold
    if (probs[j] > threds1) classe <- 1 else classe <- 0
    if (classes[j] != classe)
      if (classe == 0) cost1 <- cost1 + (1 - cr) else cost1 <- cost1 + cr
    # second threshold
    if (probs[j] > threds2) classe <- 1 else classe <- 0
    if (classes[j] != classe)
      if (classe == 0) cost2 <- cost2 + (1 - cr) else cost2 <- cost2 + cr
  }
  cost <- (pes * cost1) + ((1 - pes) * cost2)  # weighted average according to pes
  comp_cost_single_rd <- 2 * cost / tam
}

###################
# From a list of cost lines, a selected point and a cost ratio,
# returns which cost line is selected
whichLC <- function(costlines, selpoint, cr) {
  nlin <- length(costlines) / 2
  pos <- c(1:nlin)
  k <- 1
  for (j in 1:nlin) {
    pos[j] <- cr * (costlines[k+1] - costlines[k]) + costlines[k]
    k <- k + 2
  }
  pos <- round(pos, digits = 6)            # rounding needed so the equality match below works
  selpoint <- round(selpoint, digits = 6)
  whichLC <- (which(pos == selpoint)[1])
}
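# The metric helpers above take a two-column matrix: column 1 the true class
# (0/1), column 2 the score. A minimal standalone usage sketch follows; the two
# functions are re-stated compactly here so the snippet runs on its own, and
# the toy data is illustrative only.

```r
# Mean squared error (Brier score) of scores against 0/1 labels.
MSE <- function(inp) {
  s <- 0
  for (i in 1:nrow(inp)) s <- s + (inp[i, 1] - inp[i, 2])^2
  s / nrow(inp)
}

# ROC AUC by pairwise comparison: ties between a positive and a negative count 0.5.
AUC <- function(inp) {
  a <- 0; np <- 0; nn <- 0
  for (i in 1:nrow(inp)) {
    if (inp[i, 1] == 1) {
      np <- np + 1
      for (j in 1:nrow(inp)) {
        if (inp[j, 1] == 0) {
          if (inp[i, 2] > inp[j, 2]) a <- a + 1
          if (inp[i, 2] == inp[j, 2]) a <- a + 0.5
        }
      }
    } else nn <- nn + 1
  }
  a / (nn * np)
}

inp <- rbind(c(1, 0.9), c(1, 0.7), c(0, 0.7), c(0, 0.2))
MSE(inp)  # (0.01 + 0.09 + 0.49 + 0.04) / 4 = 0.1575
AUC(inp)  # 3.5 correctly ranked pairs out of 4 -> 0.875
```

# Note the tied pair (positive 0.7 vs negative 0.7) contributes 0.5 to the AUC count.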
# NOTE: getValidMove() and hasWonBoard() are assumed to be defined elsewhere in the repo.

# convert board to a vector for the state vectors - mostly for keeping track of games
# not usually used for any RL methods
boardToVector <- function(board){
  # take elements in board and list them from left to right, top to bottom as a vector
  vec <- c()
  for(i in 1:3){
    for(j in 1:3){
      vec <- c(vec, board[i,1,j,], board[i,2,j,], board[i,3,j,])
    }
  }
  return(vec)
}

# convert a sub-board to a vector for the state vectors - in this approximation 0's and 2's map to 0
suboardToVector <- function(subBoard){
  vec <- c()
  for(i in 1:3){
    for(j in 1:3){
      if(subBoard[i,j] == 0 || subBoard[i,j] == 2) vec <- c(vec, 0)
      else vec <- c(vec, 1)
    }
  }
  return(vec)
}

actionToVecMapping <- function(actionNumerical){
  return(switch(actionNumerical,
                c(1,1), c(1,2), c(1,3),
                c(2,1), c(2,2), c(2,3),
                c(3,1), c(3,2), c(3,3)))
}

vecToActionMapping <- function(actionVec){
  if(actionVec[1] == 1){
    if(actionVec[2] == 1) return(1)
    else if(actionVec[2] == 2) return(2)
    else return(3)
  } else if(actionVec[1] == 2){
    if(actionVec[2] == 1) return(4)
    else if(actionVec[2] == 2) return(5)
    else return(6)
  } else if(actionVec[1] == 3){
    if(actionVec[2] == 1) return(7)
    else if(actionVec[2] == 2) return(8)
    else return(9)
  } else return(-1)
}

# take a state which looks like a binary representation and convert it to decimal
stateToDec <- function(boardVec){
  dec <- 0
  for(i in 1:length(boardVec)){
    dec <- dec + boardVec[i] * 2^(i-1)
  }
  return(dec)
}

decToState <- function(numState){
  bin <- decToBin(numState)
  # pad with 0's on the high-order side (stateToDec is little-endian, so the
  # padding must come after the bits for the round trip to hold)
  lenPad <- 9 - length(bin)
  padZeros <- vector("numeric", length = lenPad)
  state <- c(bin, padZeros)
  return(state)
}

decToBin <- function(decimal){
  if(decimal == 0) return(NULL)
  bin <- c(decimal %% 2, decToBin(decimal %/% 2))
  return(bin)
}

# reward function
reward <- function(winner, playingAs){
  # returns a reward based on the winner + who you are playing as
  if(winner == 0) return(-100)  # a tie; we want the agent to win
  else if(winner == 1 && playingAs == 1) return(100)
  else if(winner == 1 && playingAs == 2) return(-100)
  else if(winner == 2 && playingAs == 1) return(-100)
  else return(100)
}

# simulate with state approximations
simulUTTT <- function(theseed){
  # To reproduce the experiment
  set.seed(theseed)
  # Start game
  masterBoard <- array(0, dim = c(3,3,3,3))
  statusBoard <- matrix(0, nrow = 3, ncol = 3, byrow = T)
  winner <- 0  # Initially a tie
  # track the states
  boardStates <- matrix(0, ncol = 81)
  currentBoardState <- matrix(0, ncol = 9)  # for Q learning
  statusBoardState <- matrix(0, ncol = 9)   # for Q learning
  actionList <- list()                      # for Q learning
  currAndStatus <- matrix(0, ncol = 18)
  # First move
  player <- 1
  # random method
  move <- sample(c(1,2,3), size = 4, replace = T)
  masterBoard[move[1], move[2], move[3], move[4]] <- player
  forcedMove <- c(move[3], move[4])
  player <- player %% 2 + 1
  # add state to tracker
  boardStates <- rbind(boardStates, boardToVector(masterBoard))
  # add action to tracker
  actionList <- append(actionList, vecToActionMapping(forcedMove))
  # The game continues normally
  while (T) {
    validMoves <- getValidMove(masterBoard, forcedMove, statusBoard)
    if (length(validMoves) == 0) break
    move <- sample(validMoves, size = 1)
    move <- move[[1]]
    # add current board to the state tracker
    currentBoardState <- rbind(currentBoardState,
                               suboardToVector(masterBoard[move[1],move[2],,]))
    # add current board status to the state tracker
    statusBoardState <- rbind(statusBoardState, suboardToVector(statusBoard))
    # make the action
    masterBoard[move[1], move[2], move[3], move[4]] <- player
    forcedMove <- c(move[3], move[4])
    # add action to the state tracker
    actionList <- append(actionList, vecToActionMapping(forcedMove))
    # Update tracked states
    boardStates <- rbind(boardStates, boardToVector(masterBoard))
    # Check for win
    if(hasWonBoard(masterBoard[move[1], move[2],,], player))
      statusBoard[move[1], move[2]] <- player  # Won subBoard
    if(hasWonBoard(statusBoard, player)){
      winner <- player  # Won game
      break
    }
    player <- player %% 2 + 1  # Change player
  }
  return(list(winner, boardStates, currentBoardState, statusBoardState, actionList))
}

# create a dataset that contains runs of random plays (states) + of the winner (rewards)
# requires more formatting than just one board
nepis <- 1
StateActionReward <- vector("list", nepis)
# playing as player 1
for(i in 1:nepis){
  run <- simulUTTT(i)
  rewardR <- reward(run[[1]], playingAs = 1)
  # state convention: current board in binary + status board in binary
  # (cbind, not matrix(cbind(...)), so each row keeps its 18 columns)
  states <- cbind(run[[3]], run[[4]])
  StateActionReward[[i]] <- list(rewardR, states, run[[5]])
}

# use Watkins's Q-learning technique - input all the simulations as a list of
# rewards, states and actions
ApplyQLearning <- function(qInit, episodeSimu, stepSize){
  qEstim <- qInit
  for(episode in episodeSimu){
    reward <- episode[[1]]
    states <- episode[[2]]
    actions <- episode[[3]]
    Tt <- length(actions)
    for(t in 1:(Tt-1)){
      S_t <- states[t,]
      A_t <- actions[[t]][1]
      R_tplus1 <- 0
      if(t == Tt-1)  # at the beginning this will do a lot of 0 updates
        R_tplus1 <- reward
      S_tplus1 <- states[t+1,]
      # convert states to decimal representation
      S_t <- stateToDec(S_t)
      S_tplus1 <- stateToDec(S_tplus1)
      # add 1 to every S because R indexing starts at 1
      # undiscounted rewards
      qEstim[S_t+1, A_t] <- qEstim[S_t+1, A_t] +
        stepSize * (R_tplus1 + max(qEstim[S_tplus1+1,]) - qEstim[S_t+1, A_t])
    }
  }
  return(qEstim)
}

# TODO: Use qEstim to find the new win ratio
# apply q learning
stepsize1 <- 0.01
stepsize2 <- 0.1
stepsize3 <- 0.2
qEstim1 <- matrix(0, nrow = 2^18, ncol = 9)
qEstim2 <- matrix(0, nrow = 2^18, ncol = 9)
qEstim3 <- matrix(0, nrow = 2^18, ncol = 9)
qEstim1 <- ApplyQLearning(qEstim1, StateActionReward, stepsize1)
qEstim2 <- ApplyQLearning(qEstim2, StateActionReward, stepsize2)
qEstim3 <- ApplyQLearning(qEstim3, StateActionReward, stepsize3)

# this agent uses qEstim from Q learning to play against a random bot
simulUTTTQLearning <- function(theseed, qEstim){
  # To reproduce the experiment
  set.seed(theseed)
  # Start game
  masterBoard <- array(0, dim = c(3,3,3,3))
  statusBoard <- matrix(0, nrow = 3, ncol = 3, byrow = T)
  winner <- 0  # Initially a tie
  # track the states
  boardStates <- matrix(0, ncol = 81)  # more for debugging
  # First move
  player <- 1
  # random method
  move <- sample(c(1,2,3), size = 4, replace = T)
  masterBoard[move[1], move[2], move[3], move[4]] <- player
  forcedMove <- c(move[3], move[4])
  player <- player %% 2 + 1
  # The game continues normally
  while (T) {
    validMoves <- getValidMove(masterBoard, forcedMove, statusBoard)
    if (length(validMoves) == 0) break
    # random move for the opponent, or for player 1 in the special case
    if(player == 2){
      move <- sample(validMoves, size = 1)
      move <- move[[1]]
    } else {
      if(validMoves[[1]][1] != forcedMove[1] || validMoves[[1]][2] != forcedMove[2]){
        # the case where the move isn't in the forced-move subboard
        randomSuboard <- sample(validMoves, size = 1)  # pick a subboard randomly
        randomSuboard <- randomSuboard[[1]]
        forcedMove <- c(randomSuboard[1], randomSuboard[2])
        # in this case the valid moves should be restricted to the chosen subboard
        temp <- list()
        for(move in validMoves){
          if(move[1] == forcedMove[1] && move[2] == forcedMove[2])
            temp <- append(temp, list(move))
        }
      }
      # otherwise player 1 can pick only within the sub-board
      # in this case we can use qEstim
      stateDecimal <- stateToDec(c(suboardToVector(masterBoard[forcedMove[1],forcedMove[2],,]),
                                   suboardToVector(statusBoard)))
      validActionsDecimal <- c()  # vector of action numbers
      # go through the list of moves and convert them to decimal
      for(move in validMoves){
        validActionsDecimal <- c(validActionsDecimal,
                                 vecToActionMapping(c(move[3], move[4])))
      }
      # find the max action based on values in qEstim
      maxValue <- -1000000000
      maxIndex <- -1
      for(actionIndex in validActionsDecimal){
        if(qEstim[stateDecimal+1, actionIndex] > maxValue){
          maxValue <- qEstim[stateDecimal+1, actionIndex]
          maxIndex <- actionIndex
        }
      }
      # so the move will be
      actionVector <- actionToVecMapping(maxIndex)
      move <- c(forcedMove[1], forcedMove[2], actionVector[1], actionVector[2])
    }
    # Play the move
    masterBoard[move[1], move[2], move[3], move[4]] <- player
    forcedMove <- c(move[3], move[4])
    # add state to tracker
    boardStates <- rbind(boardStates, boardToVector(masterBoard))
    # Check for win
    if(hasWonBoard(masterBoard[move[1], move[2],,], player))
      statusBoard[move[1], move[2]] <- player  # Won subBoard
    if(hasWonBoard(statusBoard, player)){
      winner <- player  # Won game
      break
    }
    player <- player %% 2 + 1  # Change player
  }
  return(list(winner, boardStates))
}

# run and find the win ratio
# playing as player 1 - with the Q-learning technique
wins1 <- 0
wins2 <- 0
wins3 <- 0
nepis <- 10000
winRatio1 <- vector("numeric", length = nepis)
winRatio2 <- vector("numeric", length = nepis)
winRatio3 <- vector("numeric", length = nepis)
for(i in 1:nepis){
  winner1 <- simulUTTTQLearning(i, qEstim1)[[1]]
  winner2 <- simulUTTTQLearning(i, qEstim2)[[1]]
  winner3 <- simulUTTTQLearning(i, qEstim3)[[1]]
  if (winner1 == 1) wins1 <- 1 + wins1
  winRatio1[i] <- wins1/i
  if (winner2 == 1) wins2 <- 1 + wins2
  winRatio2[i] <- wins2/i
  if (winner3 == 1) wins3 <- 1 + wins3
  winRatio3[i] <- wins3/i
}
# worse than the random policy
plot(x = 1:nepis, winRatio1, type = "l")
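# A quick sanity check on the state encoding used above: stateToDec is
# little-endian (element 1 is the least significant bit), so the zero padding
# must go after the bits for the round trip to hold. The conversions are
# re-stated compactly here, with a hypothetical nbits argument, so the snippet
# runs on its own.

```r
# Little-endian decimal encoding of a binary state vector.
stateToDec <- function(boardVec) {
  dec <- 0
  for (i in 1:length(boardVec)) dec <- dec + boardVec[i] * 2^(i - 1)
  dec
}

# Recursive binary expansion, least significant bit first.
decToBin <- function(decimal) {
  if (decimal == 0) return(NULL)
  c(decimal %% 2, decToBin(decimal %/% 2))
}

# Inverse of stateToDec: pad zeros on the high-order side.
decToState <- function(numState, nbits = 9) {
  bin <- decToBin(numState)
  c(bin, vector("numeric", length = nbits - length(bin)))
}

s <- c(1, 0, 1, rep(0, 6))  # bits 1 and 3 set: 1 + 4
stateToDec(s)               # 5
stateToDec(decToState(321)) # 321, round trip holds
```

# With 18-bit states (current sub-board + status board) the same encoding
# indexes the 2^18-row Q table used above.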
# Source file: /src/QLearningMethodTwoBoard.R (repo: cesare-spinoso/UltimateTTT, language: R, no license, 10,783 bytes)
# convert board to a vector for the state vectors - mostly for keeping track of games # not usually used for any RL methods boardToVector <- function(board){ # take elments in board an lists them from left to right, top to bottom as a vector vec <- c() for(i in 1:3){ for(j in 1:3){ vec <- c(vec, board[i,1,j,], board[i,2,j,], board[i,3,j,]) } } return(vec) } # convert board to a vector for the state vectors - in this approximation 0's and 2's map to 1 suboardToVector <- function(subBoard){ vec <- c() for(i in 1:3){ for(j in 1:3){ if(subBoard[i,j] == 0 || subBoard[i,j] == 2) vec <- c(vec,0) else vec <- c(vec,1) } } return(vec) } actionToVecMapping <- function(actionNumerical){ return(switch(actionNumerical, c(1,1), c(1,2), c(1,3), c(2,1), c(2,2), c(2,3), c(3,1), c(3,2), c(3,3) ) ) } vecToActionMapping <- function(actionVec){ if(actionVec[1] == 1){ if(actionVec[2] == 1) return(1) else if(actionVec[2] == 2) return(2) else return(3) } else if(actionVec[1] == 2){ if(actionVec[2] == 1) return(4) else if(actionVec[2] == 2) return(5) else return(6) } else if(actionVec[1] == 3){ if(actionVec[2] == 1) return(7) else if(actionVec[2] == 2) return(8) else return(9) } else return -1 } # take a state which looks like a binary representation and convert it to decimal stateToDec <- function(boardVec){ dec <- 0 for(i in 1:length(boardVec)){ dec <- dec + boardVec[i]*2^(i-1) } return(dec) } decToState <- function(numState){ bin <- decToBin(numState) # pad left with 0's lenPad <- 9 - length(bin) padZeros <- vector("numeric",length=lenPad) state <- c(padZeros, bin) return(state) } decToBin <- function(decimal){ if(decimal == 0) return(NULL) bin <- c(decimal %% 2, decToBin(decimal %/% 2)) return(bin) } # reward function reward <- function(winner, playingAs){ # returns a reward based on the winner + who you are playing as if(winner == 0) return(-100) # want agent to win else if(winner == 1 && playingAs == 1) return(100) else if(winner == 1 && playingAs == 2) return(-100) else if(winner == 
2 && playingAs == 1) return(-100) else return(100) } # simulate with state approximations simulUTTT <- function(theseed){ # To reproduce experiment set.seed(theseed) # Start game masterBoard <- array(0, dim = c(3,3,3,3)) statusBoard <- matrix(0, nrow = 3, ncol = 3, byrow = T) winner <- 0 # Initially a tie # track the states boardStates <- matrix(0, ncol = 81) currentBoardState <- matrix(0, ncol = 9) # for Q learning statusBoardState <- matrix(0,ncol = 9) # for Q learning actionList <- list() # for Q learning currAndStatus <- matrix(0, ncol = 18) # First move player <- 1 # random method move <- sample(c(1,2,3), size = 4, replace = T) masterBoard[move[1], move[2], move[3], move[4]] <- player forcedMove <- c(move[3], move[4]) player <- player %% 2 + 1 # add state to tracker boardStates <- rbind(boardStates, boardToVector(masterBoard)) # add action to tracker actionList <- append(actionList, vecToActionMapping(forcedMove)) # The game continue normally while (T) { validMoves <- getValidMove(masterBoard, forcedMove, statusBoard) if (length(validMoves) == 0) break move <- sample(validMoves, size = 1) move <- move[[1]] # add current board to state tracker currentBoardState <- rbind(currentBoardState, suboardToVector(masterBoard[move[1],move[2],,])) # add current board status to the state tracker statusBoardState <- rbind(statusBoardState, suboardToVector(statusBoard)) # make the action masterBoard[move[1], move[2], move[3], move[4]] <- player forcedMove <- c(move[3], move[4]) # add action to state tracker actionList <- append(actionList, vecToActionMapping(forcedMove)) # Update tracked states boardStates <- rbind(boardStates, boardToVector(masterBoard)) # Check for win if(hasWonBoard(masterBoard[move[1], move[2],,], player)) statusBoard[move[1], move[2]] <- player # Won subBoard if(hasWonBoard(statusBoard, player)){ winner <- player # Won game break } player <- player %% 2 + 1 # Change player } return(list(winner,boardStates,currentBoardState,statusBoardState,actionList)) 
} # create a dataset that contains runs of random plays (states) + of the winner (rewards) # requires more formatting than just one board nepis <- 1 StateActionReward <- rep(NULL, nepis) # playing as player 1 for(i in 1:nepis){ run <- simulUTTT(i) rewardR <- reward(run[[1]],playingAs = 1) states <- matrix(cbind(run[[3]],run[[4]])) # state convention curr board in binary + status board in binary StateActionReward[[i]] <- list(rewardR,states,run[[5]]) } # TODO: Use Watkin's QLearning # use Watkin's Q Learning Techinque - Input all the simulations as a list of rewards, states and actions ApplyQLearning <- function(qInit,episodeSimu,stepSize){ qEstim <- qInit for(episode in episodeSimu){ reward <- episode[[1]] states <- episode[[2]] actions <- episode[[3]] # print(reward) # print(states) # print(actions) Tt <- length(actions) # print(Tt) # cat(dim(states)[1],"-----\n") for(t in 1:(Tt-1)){ S_t <- states[t,] A_t <- actions[[t]][1] R_tplus1 <- 0 if(t == Tt-1) # at the beginning will do a lot of of 0 updates R_tplus1 <- reward S_tplus1 <- states[t+1,] # convert states to decimal representation S_t <- stateToDec(S_t) S_tplus1 <- stateToDec(S_tplus1) # add 1 to every S because r indexing starts at 0 # undiscounted rewards qEstim[S_t+1,A_t] <- qEstim[S_t+1,A_t] + stepSize*(R_tplus1 + max(qEstim[S_tplus1+1,]) - qEstim[S_t+1,A_t]) } } return(qEstim) } # TODO: USe qEstim to find new winRAtio # apply q learning stepsize1 <- 0.01 stepsize2 <- 0.1 stepsize3 <- 0.2 qEstim1 <- matrix(0,nrow = 2^18, ncol = 9) qEstim2 <- matrix(0,nrow = 2^18, ncol = 9) qEstim3 <- matrix(0,nrow = 2^18, ncol = 9) qEstim1 <- ApplyQLearning(qEstim1,StateActionReward,stepsize1) qEstim2 <- ApplyQLearning(qEstim2,StateActionReward,stepsize2) qEstim3 <- ApplyQLearning(qEstim3,StateActionReward,stepsize3) # this agent uses qEstim from Q learning to play against a random bot simulUTTTQLearning <- function(theseed,qEstim){ # To reproduce experiment set.seed(theseed) # Start game masterBoard <- array(0, dim = 
c(3,3,3,3))
  statusBoard <- matrix(0, nrow = 3, ncol = 3, byrow = T)
  winner <- 0 # Initially a tie
  # track the states
  boardStates <- matrix(0, ncol = 81) # more for debugging
  # First move
  player <- 1
  # random method
  move <- sample(c(1,2,3), size = 4, replace = T)
  masterBoard[move[1], move[2], move[3], move[4]] <- player
  forcedMove <- c(move[3], move[4])
  player <- player %% 2 + 1
  # The game continues normally
  while (T) {
    validMoves <- getValidMove(masterBoard, forcedMove, statusBoard)
    if (length(validMoves) == 0) break
    # random move for opponent or for player 1 in special case
    if (player == 2) {
      move <- sample(validMoves, size = 1)
      move <- move[[1]]
    } else {
      if (validMoves[[1]][1] != forcedMove[1] || validMoves[[1]][2] != forcedMove[2]) {
        # in the case where the move isn't in the forced-move sub-board
        randomSuboard <- sample(validMoves, size = 1) # pick a sub-board randomly
        randomSuboard <- randomSuboard[[1]]
        forcedMove <- c(randomSuboard[1], randomSuboard[2])
        # in this case the valid moves should be restricted to the chosen sub-board
        temp <- list()
        for (move in validMoves) {
          if (move[1] == forcedMove[1] && move[2] == forcedMove[2])
            temp <- append(temp, list(move))
        }
        validMoves <- temp # keep only the moves inside the chosen sub-board (temp was built but unused)
      }
      # otherwise player 1 can pick only within the sub-board
      # in this case we can use qEstim
      stateDecimal <- stateToDec(c(suboardToVector(masterBoard[forcedMove[1],forcedMove[2],,]), suboardToVector(statusBoard)))
      validActionsDecimal <- c() # list of numbers
      # print("I <3 QLearning")
      # go through the list of moves and convert them to decimal
      for (move in validMoves) {
        validActionsDecimal <- c(validActionsDecimal, vecToActionMapping(c(move[3], move[4])))
      }
      # find the max action based on values in qEstim
      maxValue <- -1000000000
      maxIndex <- -1
      for (actionIndex in validActionsDecimal) {
        if (qEstim[stateDecimal+1, actionIndex] > maxValue) {
          maxValue <- qEstim[stateDecimal+1, actionIndex] # update the running maximum, otherwise the last action always wins
          maxIndex <- actionIndex
        }
      }
      # so the move will be
      actionVector <- actionToVecMapping(maxIndex)
      move <- c(forcedMove[1], forcedMove[2], actionVector[1], actionVector[2])
    }
    # Play the move
    masterBoard[move[1], move[2],
move[3], move[4]] <- player
    forcedMove <- c(move[3], move[4])
    # add state to tracker
    boardStates <- rbind(boardStates, boardToVector(masterBoard))
    # Check for win
    if (hasWonBoard(masterBoard[move[1], move[2],,], player))
      statusBoard[move[1], move[2]] <- player # Won subBoard
    if (hasWonBoard(statusBoard, player)) {
      winner <- player # Won game
      break
    }
    player <- player %% 2 + 1 # Change player
  }
  return(list(winner, boardStates))
}

# run and find the win ratio
# playing as player 1 - with the Q-learning technique
wins1 <- 0
wins2 <- 0
wins3 <- 0
nepis <- 10000
winRatio1 <- vector("numeric", length = nepis)
winRatio2 <- vector("numeric", length = nepis)
winRatio3 <- vector("numeric", length = nepis)
for (i in 1:nepis) {
  winner1 <- simulUTTTQLearning(i, qEstim1)[[1]]
  winner2 <- simulUTTTQLearning(i, qEstim2)[[1]]
  winner3 <- simulUTTTQLearning(i, qEstim3)[[1]]
  if (winner1 == 1) wins1 <- 1 + wins1
  winRatio1[i] <- wins1/i
  if (winner2 == 1) wins2 <- 1 + wins2
  winRatio2[i] <- wins2/i
  if (winner3 == 1) wins3 <- 1 + wins3
  winRatio3[i] <- wins3/i
}
# worse than the random policy
plot(x = 1:nepis, winRatio1, type = "l")
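The undiscounted update inside ApplyQLearning can be checked in isolation on a toy problem. The sketch below is illustrative only (the 2-state, 2-action table and the observed transition are made up, not part of the game code); it applies the same update rule once:

```r
# Minimal sketch of the tabular Q-learning update used above, on a toy
# 2-state, 2-action table (hypothetical values for illustration).
qToy <- matrix(0, nrow = 2, ncol = 2)
stepSize <- 0.5
# one observed transition: state 1, action 2, reward 1, next state 2
S_t <- 1; A_t <- 2; R_tplus1 <- 1; S_tplus1 <- 2
qToy[S_t, A_t] <- qToy[S_t, A_t] +
  stepSize * (R_tplus1 + max(qToy[S_tplus1, ]) - qToy[S_t, A_t])
qToy[1, 2]  # 0 + 0.5 * (1 + 0 - 0) = 0.5
```

With all Q-values initialized to zero, the bootstrap term max(qToy[S_tplus1, ]) contributes nothing, so the first update moves the entry half-way toward the reward.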
# January 2018
# Functions to compute the one-shot and averaged-across-folds AUROC and AUPRC
library(precrec);

# function to compute the AUROC one-shot and averaged across folds
# Input:
# labels : a vector or factor with the labels (values in 0,1)
# pred   : a vector of the predictions (same dimension as labels)
# folds  : vector with the number of the fold (same dimension as labels).
#          If NULL no predefined folds are given and only the one-shot AUROC is computed
# digits : number of rounding digits
# It returns a numeric vector with three elements:
# - the one-shot AUROC
# - the AUROC averaged across folds
# - the stdev of the AUROC averaged across folds.
compute.AUROC <- function (labels, pred, folds=NULL, digits=4) {
  y <- ifelse(labels==1, 1, 0);
  if (!is.null(folds)){
    # compute AUROC averaged across folds
    k <- max(folds);
    AUROC <- numeric(k+1);
    for(j in 0:k){
      test.indices <- which(folds==j);
      if (sum(y[test.indices]) > 0){ # if there is at least 1 positive in the jth fold
        sscurves <- evalmod(scores = pred[test.indices], labels = y[test.indices]);
        m <- attr(sscurves,"auc",exact=FALSE);
        AUROC[j+1] <- round(m[1,"aucs"],digits);
      }else{
        AUROC[j+1] <- 0.5;
      }
    }
  }
  ## to handle the case in which a fold does not contain positive examples
  if(sum(y)==0){
    AUROC.oneshot <- 0.5;
  }else{
    sscurves <- evalmod(scores=pred, labels=y);
    m <- attr(sscurves,"auc",exact=FALSE);
    AUROC.oneshot <- round(m[1,"aucs"],digits);
  }
  if(!is.null(folds)){
    res <- c(AUROC.oneshot, round(mean(AUROC), digits), round(sd(AUROC),digits));
  }else{
    res <- c(AUROC.oneshot, 0, 0);
  }
  names(res) <- c("one.shot.AUROC","av.AUROC", "stdev"); ## name the result in both branches
  return(res);
}

# function to compute the AUPRC one-shot and averaged across folds
# Input:
# labels : a vector or factor with the labels (values in 0,1)
# pred   : a vector of the predictions (same dimension as labels)
# folds  : vector with the number of the fold (same dimension as labels).
#          If NULL no predefined folds are given and only the one-shot AUPRC is computed
# digits : number of rounding digits
# It returns a numeric vector with three elements:
# - the one-shot AUPRC
# - the AUPRC averaged across folds
# - the stdev of the AUPRC averaged across folds.
compute.AUPRC <- function (labels, pred, folds=NULL, digits=4) {
  y <- ifelse(labels==1, 1, 0);
  if (!is.null(folds)){
    # compute AUPRC averaged across folds
    k <- max(folds);
    AUPRC <- numeric(k+1);
    for(j in 0:k){
      test.indices <- which(folds==j);
      if (sum(y[test.indices]) > 0) { # if there is at least 1 positive in the jth fold
        sscurves <- evalmod(scores = pred[test.indices], labels = y[test.indices]);
        m <- attr(sscurves,"auc",exact=FALSE);
        AUPRC[j+1] <- round(m[2,"aucs"],digits);
      }else{
        AUPRC[j+1] <- 0;
      }
    }
  }
  ## to handle the case in which a fold does not contain positive examples
  if(sum(y)==0){
    AUPRC.oneshot <- 0;
  }else{
    sscurves <- evalmod(scores=pred, labels=y);
    m <- attr(sscurves, "auc", exact=FALSE);
    AUPRC.oneshot <- round(m[2,"aucs"], digits);
  }
  if(!is.null(folds)) {
    res <- c(AUPRC.oneshot, round(mean(AUPRC), digits), round(sd(AUPRC),digits));
  }else{
    res <- c(AUPRC.oneshot, 0, 0);
  }
  names(res) <- c("one.shot.AUPRC","av.AUPRC", "stdev"); ## name the result in both branches
  return(res);
}

########################################################################
# function to be used by the caret package for the AUPRC metric
AUPRCSummary <- function(data, lev=NULL, model=NULL){
  labels <- ifelse(data$obs == lev[1], 1, 0);
  pred <- as.numeric(data[,lev[1]]);
  out <- compute.AUPRC(labels, pred=pred, folds=NULL, digits=6)[1];
  names(out) <- "AUC"; ## named the same as caret::prSummary
  return(out);
}

# function to be used by the caret package for the AUROC metric
AUROCSummary <- function(data, lev=NULL, model=NULL){
  labels <- ifelse(data$obs == lev[1], 1, 0);
  pred <- as.numeric(data[,lev[1]]);
  out <- compute.AUROC(labels, pred=pred, folds=NULL, digits=6)[1];
  names(out) <- "ROC"; ## named the same as caret::twoClassSummary
  return(out);
}
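As a sanity check on the one-shot AUROC value, the same quantity can be computed directly from ranks via the Mann-Whitney identity, without precrec. The labels and scores below are made up for illustration:

```r
# AUROC via the rank / Mann-Whitney identity, on toy data.
y    <- c(1, 1, 0, 0, 0)          # two positives, three negatives
pred <- c(0.9, 0.4, 0.6, 0.2, 0.1)
r <- rank(pred)
n.pos <- sum(y == 1); n.neg <- sum(y == 0)
auroc <- (sum(r[y == 1]) - n.pos * (n.pos + 1) / 2) / (n.pos * n.neg)
auroc  # one positive (0.4) is out-ranked by one negative (0.6): 5/6
```

Of the 2 x 3 positive-negative pairs, five are ranked correctly, hence 5/6; evalmod on the same data should agree up to rounding.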
# source: /MLFinalEvaluation/lib/metrics.R (repo: AikawaKai/BioinformaticThesis, no_license, R, 4,015 bytes)
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")

SCC_coal_comb <- SCC[grepl("comb", SCC$EI.Sector, ignore.case=TRUE) &
                     grepl("coal", SCC$EI.Sector, ignore.case=TRUE),]
NEI_coal_comb <- subset(NEI, SCC %in% SCC_coal_comb$SCC)
coalComb_total <- aggregate(Emissions ~ year, sum, data = NEI_coal_comb)

png(filename = "plot4.png", width = 600, height = 600)
with(coalComb_total, plot(year, Emissions, type = "l", xlab = "Year", xaxt = "n",
                          col = "red", lwd = 1, lty = 2))
with(coalComb_total, points(year, Emissions, col = "red", pch = 19))
axis(1, at = seq(1999, 2008, by = 3))
title("Emissions from coal combustion-related sources per year in U.S.")
dev.off()
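The aggregate() call above sums emissions per year. The same formula pattern on a tiny made-up data frame (toy values, not the NEI data):

```r
# aggregate(formula, FUN, data = ...) groups the left-hand variable by the
# right-hand one and applies FUN within each group.
toy <- data.frame(year = c(1999, 1999, 2002), Emissions = c(10, 5, 7))
totals <- aggregate(Emissions ~ year, sum, data = toy)
totals
#   year Emissions
# 1 1999        15
# 2 2002         7
```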
# source: /plot4.R (repo: ArthurNing/EDA_Project2, no_license, R, 699 bytes)
# load the sqldf library
options(gsubfn.engine = "R")
library(sqldf)

# read data from the dates 2007-02-01 and 2007-02-02
data <- read.csv.sql("household_power_consumption.txt",
                     sql = "SELECT * from file WHERE Date in ('1/2/2007','2/2/2007') ",
                     sep = ";", header = TRUE)

# remove the rows where Global_active_power is NA
data <- data[data$Global_active_power != "?",]

# combine date and time values
data$datetime <- strptime(paste(data$Date, data$Time), "%d/%m/%Y %H:%M:%S")

# open the png device; create "plot2.png" in the working directory
png(file = "plot2.png")

## create the plot and send it to the file
plot(data$datetime, data$Global_active_power, type = "l",
     xlab = "", ylab = "Global Active Power (kilowatts)")

## close the PNG file device
dev.off()
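The strptime() format string used above parses day-first dates plus times into POSIXlt; a single made-up timestamp (illustrative value only) shows the round-trip:

```r
# parse a day-first date plus time with the same format as the script
dt <- strptime(paste("1/2/2007", "00:10:00"), "%d/%m/%Y %H:%M:%S")
format(dt, "%Y-%m-%d %H:%M:%S")  # "2007-02-01 00:10:00"
```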
# source: /plot2.R (repo: DavidHHShao/ExData_Plotting1, no_license, R, 754 bytes)
# registerDoMC(cores = 4)

# packages used below (not loaded in this script; assumed to be attached
# elsewhere, e.g. via a ProjectTemplate config): readr, tibble, dplyr,
# reshape2, stringr, magrittr
training_data <- read_delim(
  file = 'data/TrainingData.txt',
  delim = '~',
  col_names = c('doc_id', 'text')
)
training_labels <- read_csv(
  file = 'data/TrainCategoryMatrix.csv',
  col_names = paste0('cat_', letters[1:22]),
  col_types = paste0(rep('i', 22), collapse = '')
) %>%
  rownames_to_column('doc_id') %>%
  mutate(doc_id = as.integer(doc_id)) %>%
  melt(id.vars = 'doc_id') %>%
  mutate(value = ifelse(value == -1, F, T)) %>%
  dcast(doc_id ~ variable) %>%
  as.tbl()

dim(training_data)
dim(training_labels)

attr(training_data, 'state') <- 'raw'
attr(training_labels, 'state') <- 'raw'

# cache() comes from ProjectTemplate (this file lives under munge/)
cache('training_data')
cache('training_labels')

airport_iata_codes <- readr::read_csv('data/airports.csv', na = '', trim_ws = T,
                                      locale = locale(encoding = "UTF-8")) %>%
  select(iata_code) %>%
  na.omit %>%
  mutate(iata_code = str_to_lower(iata_code)) %>%
  distinct() %>%
  pull()
airport_iata_codes %<>% sort() %>% tail(-2)
airport_iata_codes
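The mutate(value = ifelse(value == -1, F, T)) step recodes the -1/+1 label matrix entries to logical before the reshape. The same recode in base R on a toy vector (values made up for illustration):

```r
# recode a -1/+1 indicator vector to logical: -1 -> FALSE, everything else -> TRUE
v <- c(-1L, 1L, -1L)
ifelse(v == -1, FALSE, TRUE)  # FALSE TRUE FALSE
```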
# source: /munge/01-A-ReadData.R (repo: rsangole/453_TextAnalysis_FinalProject, no_license, R, 1,119 bytes)
library(wasabi)

args = commandArgs(trailingOnly=TRUE)
pathoutput = args[1]
listesample = args[4]
print(listesample)
if (args[3]=="mm10" | args[3]=="mm9"){
  specie_dataset = "mmusculus_gene_ensembl"
} else {
  specie_dataset = "hsapiens_gene_ensembl"
}
a = (as.list(strsplit(listesample, ","))[[1]])
print(a)
data <- pathoutput
sfdirs <- file.path(data, c(a))
prepare_fish_for_sleuth(sfdirs)

library("sleuth")
base_dir <- pathoutput
sample_id <- dir(file.path(base_dir, ""))
s2c <- read.table(file.path(base_dir, "/DiffExpressSalmon/serie.txt"), header = TRUE, stringsAsFactors = FALSE)
s2c <- dplyr::select(s2c, sample = sample, condition)
path = getwd()
s2c$path = paste0(path, "/", args[1], "/", s2c$sample, "/", s2c$sample, "_transcripts_quant/")
setwd(pathoutput)
#sf_dirs <- dir()
#s2c <- dplyr::mutate(s2c, path = sf_dirs)
model <- "~ condition"

# load data and fit the model
print(s2c)
so <- sleuth_prep(s2c, ~condition)
print("ok")
so <- sleuth_prep(s2c, as.formula(model)) %>% sleuth_fit()
models(so)
print("ouioi")
mart <- biomaRt::useMart(biomart = "ENSEMBL_MART_ENSEMBL",
                         dataset = specie_dataset,
                         host = 'ensembl.org')
t2g <- biomaRt::getBM(attributes = c("ensembl_transcript_id", "ensembl_gene_id", "external_gene_name"), mart = mart)
t2g <- dplyr::rename(t2g, target_id = ensembl_transcript_id,
                     ens_gene = ensembl_gene_id, ext_gene = external_gene_name)
so <- sleuth_prep(s2c, ~ condition, target_mapping = t2g)
so <- sleuth_fit(so)
so <- sleuth_fit(so, ~1, 'reduced')
so <- sleuth_lrt(so, 'reduced', 'full')
results_table <- sleuth_results(so, 'reduced:full', test_type = 'lrt')
sleuth_live(so)
so <- sleuth_wt(so, 'control')
# source: /sleuth.R (repo: TessierFlorent/PipelineNGS, no_license, R, 1,753 bytes)
palette(rainbow(10))
data("mfeat_pix")
trainlabels <- rep(1:10, rep(200, 10))
colors = rainbow(length(unique(trainlabels)))
names(colors) = unique(trainlabels)
ecb = function(x, y){
  plot(x, t='n')
  text(x, labels=trainlabels, col=colors[as.integer(unlist(trainlabels))])
}

# qkern
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "rbfbase", qpar = list(sigma = 10, q = 0.99), epoch_callback = ecb, max_iter = 1000, perplexity=35)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "nonlbase", qpar = list(alpha=0.01, q=0.8), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "laplbase", qpar = list(sigma = 112, q = 0.9), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "ratibase", qpar = list(c = .1, q = 0.9), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "multbase", qpar = list(c = 1, q = 0.9), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "invbase", qpar = list(c = 1, q = 0.9), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "wavbase", qpar = list(theta=10, q=0.8), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "powbase", qpar = list(d = .5, q = 0.9), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "logbase", qpar = list(d = 2, q = 0.9), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "caubase", qpar = list(sigma = 100, q = 0.9), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "chibase", qpar = list(gamma=.01, q = 0.8), epoch_callback = ecb, max_iter = 1000, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "studbase", qpar = list(d = .5, q = 0.9), epoch_callback = ecb, max_iter = 1000, perplexity=30)

# cndkern
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "nonlcnd", qpar = list(alpha=.00001), epoch_callback = ecb, perplexity=15)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "polycnd", qpar = list(d = 2, alpha = 1, c=1), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "rbfcnd", qpar = list(gamma = 1e5), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "laplcnd", qpar = list(gamma = 100000), epoch_callback = ecb, perplexity=40)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "anocnd", qpar = list(d = 1, sigma = 1e9), epoch_callback = ecb, perplexity=40)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "raticnd", qpar = list(c = 1000), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "multcnd", qpar = list(c= 10), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "invcnd", qpar = list(c = 10), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "wavcnd", qpar = list(theta = 1000), epoch_callback = ecb, perplexity=30)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "powcnd", qpar = list(d = 2), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "logcnd", qpar = list(d = 2), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "caucnd", qpar = list(gamma = 1000), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "chicnd", qpar = list(), epoch_callback = ecb, perplexity=10)
kpc2 <- qtSNE(as.matrix(mfeat_pix), kernel = "studcnd", qpar = list(d = 0.16), epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc2), col=trainlabels)

## S4 method for signature 'qkernmatrix'
qkfunc <- rbfbase(sigma = 10, q = 0.99)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- nonlbase(alpha=0.01, q=0.8)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- laplbase(sigma = 112, q = 0.9)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- ratibase(c = .1, q = 0.9)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- multbase(c = 1, q = 0.9)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- invbase(c = 1, q = 0.9)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- wavbase(theta=10, q=0.8)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- powbase(d = .5, q = 0.9)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- logbase(d = 2, q = 0.9)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- caubase(sigma = 100, q = 0.9)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- chibase(gamma=.01, q = 0.8)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

qkfunc <- studbase(d = .5, q = 0.9)
qKtrain <- qkernmatrix(qkfunc, as.matrix(mfeat_pix))
kpc3 <- qtSNE(qKtrain, max_iter = 1000, epoch_callback = ecb, perplexity=30)
plot(dimRed(kpc3), col=trainlabels)

# cndkfun
cndkfunc <- nonlcnd(alpha=.00001)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- polycnd(d = 2, alpha = 1, c=1)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- rbfcnd(gamma = 1e5)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- laplcnd(gamma = 100000)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- raticnd(c = 1000)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- anocnd(d = 1, sigma = 1e9)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- multcnd(c = 10)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- invcnd(c = 10)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- wavcnd(theta = 1000)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- powcnd(d = 2)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- logcnd(d = 2)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- caucnd(gamma = 1000)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- chicnd()
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)

cndkfunc <- studcnd(d = 0.16)
cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix))
kpc4 <- qtSNE(cndKtrain, max_iter = 1300, epoch_callback = ecb, perplexity=10)
plot(dimRed(kpc4), col=trainlabels)
# source: /data/genthat_extracted_code/qkerntool/tests/testqtsne_mfeat-pix.R (repo: surayaaramli/typeRrh, no_license, R, 9,178 bytes)
powcnd(d = 2) cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix)) kpc4 <- qtSNE(cndKtrain,max_iter = 1300,epoch_callback = ecb, perplexity=10) plot(dimRed(kpc4),col=trainlabels) cndkfunc <- logcnd(d = 2) cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix)) kpc4 <- qtSNE(cndKtrain,max_iter = 1300,epoch_callback = ecb, perplexity=10) plot(dimRed(kpc4),col=trainlabels) cndkfunc <- caucnd(gamma = 1000) cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix)) kpc4 <- qtSNE(cndKtrain,max_iter = 1300,epoch_callback = ecb, perplexity=10) plot(dimRed(kpc4),col=trainlabels) cndkfunc <- chicnd() cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix)) kpc4 <- qtSNE(cndKtrain,max_iter = 1300,epoch_callback = ecb, perplexity=10) plot(dimRed(kpc4),col=trainlabels) cndkfunc <- studcnd(d = 0.16) cndKtrain <- cndkernmatrix(cndkfunc, as.matrix(mfeat_pix)) kpc4 <- qtSNE(cndKtrain,max_iter = 1300,epoch_callback = ecb, perplexity=10) plot(dimRed(kpc4),col=trainlabels)
shinyUI(pageWithSidebar(
  headerPanel("Random Distribution Explorer"),
  sidebarPanel(
    textInput('n', 'Select size of distribution', value = 1000),
    h2(' '),
    h3(textOutput('mdizzle')),
    h3(textOutput('devizzle'))),
  mainPanel(
    h3("Generates specified random distribution"),
    h5("Calculates Standard Deviation and Mean of distribution and displays histogram"),
    p("This is a simple way to generate and then view descriptive statistics for several basic random distribution types. You can adjust the size of the distribution with the text box on the left and choose among the three distribution types via the radio buttons on the right. The random uniforms are generated in a range from 1 to 200. The normals are generated with mean 100 and standard deviation 100. The exponential distribution is generated with rate .05. A histogram will be plotted of the resulting distribution and the mean and standard deviation of the distribution will be displayed on the left."),
    radioButtons("dist", "Distribution type:",
                 c("Normal" = "norm",
                   "Uniform" = "unif",
                   "Exponential" = "exp")),
    plotOutput('newHist')
  )
))
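The server file behind this ui.r is not part of this excerpt, but the UI text fully describes the generation logic (uniforms on [1, 200], normals with mean 100 and sd 100, exponentials with rate .05). A minimal sketch of that logic, with the helper name `make_dist` being an assumption of mine rather than the app's actual code:

```r
# Hypothetical helper mirroring the behaviour described in the UI text above.
make_dist <- function(dist = c("norm", "unif", "exp"), n = 1000) {
  dist <- match.arg(dist)
  switch(dist,
         norm = rnorm(n, mean = 100, sd = 100),   # normals: mean 100, sd 100
         unif = runif(n, min = 1, max = 200),     # uniforms on [1, 200]
         exp  = rexp(n, rate = 0.05))             # exponentials with rate .05
}

x <- make_dist("unif", n = 500)
c(mean = mean(x), sd = sd(x))  # the summary statistics shown in the sidebar
hist(x, main = "Generated distribution")
```

In the real app this generation would sit inside a reactive expression keyed on `input$dist` and `input$n`.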
/ui.r
no_license
Robmattles/Data_Products
R
false
false
1,198
r
library(shiny)
library(markdown)
shinyUI(
  navbarPage(
    "Language Identification",
    tabPanel("Intro",
             includeMarkdown("intro.md")
    ),
    tabPanel("No. of Examples",
             sidebarLayout(
               sidebarPanel(
                 sliderInput(inputId = "noFeatures", label = "Features",
                             min = 200, max = 2000, value = 200, step = 200,
                             round = FALSE, ticks = TRUE, animate = FALSE,
                             width = "100%", sep = ",", pre = "", post = ""),
                 selectInput(inputId = "plotMType", label = "Plot type",
                             choices = c("Points plot" = "p", "Line plot" = "l"),
                             selected = "l", multiple = FALSE)
               ),
               mainPanel(
                 h2("Accuracy vs Number of Training Examples"),
                 p("The plot below shows the accuracy (both in-sample and out-of-sample) vs the number of training examples for a fixed number of features. You can use the slider on the left to set the number of features."),
                 plotOutput("plotMvsAcc"),
                 p("The plot below shows in-sample accuracy only, to better see how it changes with the varying number of training examples."),
                 plotOutput("plotMvsAccInSample")
               )
             )
    ),
    tabPanel("No. of Features",
             sidebarLayout(
               sidebarPanel(
                 sliderInput(inputId = "noTrainExamples", label = "Number of training examples",
                             min = 1000, max = 25000, value = 1000, step = 1000,
                             round = FALSE, ticks = TRUE, animate = FALSE,
                             width = "100%", sep = ",", pre = "", post = ""),
                 selectInput(inputId = "plotNType", label = "Plot type",
                             choices = c("Points plot" = "p", "Line plot" = "l"),
                             selected = "l", multiple = FALSE)
               ),
               mainPanel(
                 h2("Accuracy vs Number of Features"),
                 p("The plot below shows the accuracy (both in-sample and out-of-sample) vs the number of features for a fixed number of training examples. You can use the slider on the left to set the number of training examples."),
                 plotOutput("plotNvsAcc"),
                 p("The plot below shows in-sample accuracy only, to better see how it changes with the varying number of features."),
                 plotOutput("plotNvsAccInSample")
               )
             )
    ),
    tabPanel(
      "Error Analysis",
      p("The tables below show in-sample and out-of-sample errors respectively. Both are created using the model consisting of 2000 features trained on 25,000 example sentences. Note that the in-sample errors are the English sentence 'Where is the beef?' which appeared in the training data for the non-English languages. The classifier correctly predicts this as an English sentence but it counts as an error because the sentences are not labeled as English."),
      h3("In-Sample Errors"),
      dataTableOutput(outputId = "tableInSampleErrors"),
      h3("Out-of-Sample Errors"),
      dataTableOutput(outputId = "tableOutOfSampleErrors"),
      h3("Histogram of Sentence Length for Wrong Predictions"),
      plotOutput("errorHistogram")
    )
  )
)
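The panels above plot in-sample against out-of-sample accuracy; the server code computing those curves is not part of this excerpt. The metric itself is simply the fraction of predicted labels matching the true labels, sketched below (the `accuracy` helper name is an assumption, not the app's actual code):

```r
# Hypothetical accuracy helper of the kind the server behind this UI would
# use: the fraction of predictions that match the true labels.
accuracy <- function(predicted, actual) {
  mean(predicted == actual)
}

pred <- c("en", "da", "en", "sv", "en")
true <- c("en", "da", "sv", "sv", "en")
accuracy(pred, true)  # 4 of 5 correct -> 0.8
```

Computed once on the training sentences and once on held-out sentences, this yields the in-sample and out-of-sample curves respectively.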
/webapp/ui.R
no_license
sorenlind/DataProductsLanguageIdentification
R
false
false
4,068
r
## ----setupQualityMeasures, message = FALSE, echo = FALSE, warning = FALSE---- library(knitr) library(dplyr) knitr::opts_chunk$set( echo = TRUE, fig.height = 6, fig.width = 7, message = FALSE, warning = FALSE, results = "asis" ) # Define colors constants---- COL.OVD <- "#66C2A5" COL.OVO <- "#A6D854" COL.OVCL <- "#FC8D62" COL.HLD <- "#8DA0CB" COL.HLO <- "#E78AC3" getNum <- function(str.vect) { sapply(strsplit(str.vect, "[_]"), "[[", 2) } ## ----NanoStringQC, message=TRUE, echo=TRUE------------------------------- library(nanostringr) expOVD <- NanoStringQC(ovd.r, subset(expQC, OVD == "Yes")) expOVO <- NanoStringQC(ovo.r, subset(expQC, OVO == "Yes")) expOVCL <- NanoStringQC(ovc.r, subset(expQC, OVCL == "Yes")) expHLD <- NanoStringQC(hld.r, subset(expQC, HLD == "Yes")) expHLO <- NanoStringQC(hlo.r, subset(expQC, HLO == "Yes")) expQC <- rbind(expHLD, expOVD, expHLO, expOVO, expOVCL) expQC$cohort <- factor(c(rep("HLD", nrow(expHLD)), rep("OVD", nrow(expOVD)), rep("HLO", nrow(expHLO)), rep("OVO", nrow(expOVO)), rep("OVCL", nrow(expOVCL)))) expQC <- expQC %>% mutate(cohort = factor(stringr::str_replace_all(cohort, c("HLD" = "HL", "OVD" = "OC")), levels = c("HL", "OC", "OVCL", "HLO", "OVO"))) ## ----perFOVPlot, fig.cap="Samples that failed imaging QC based on percent fields of view (FOV) counted across cohorts."---- boxplot(perFOV ~ cohort, ylab = "% FOV", main = "% FOV by Cohort", data = expQC, pch = 20, col = c(COL.HLD, COL.OVD, COL.OVCL, COL.HLO, COL.OVO)) abline(h = 75, lty = 2, col = "red") grid(NULL, NULL, lwd = 1) ## ----linPCPlot, fig.cap="Plot of $R^2$ of positive control probes from samples across all cohorts."---- boxplot(linPC ~ cohort, ylab = expression(R ^ 2), main = "Linearity of Positive Controls by Cohort", data = expQC, pch = 20, col = c(COL.HLD, COL.OVD, COL.OVCL, COL.HLO, COL.OVO), ylim = c(0, 1)) abline(h = 0.95, lty = 2, col = "red") grid(NULL, NULL, lwd = 1) ## ----averageHKPlot, fig.cap="Average log expression of Housekeeping genes by Cohort."---- 
boxplot(averageHK ~ cohort, ylab = "Average log HK expression", main = "Average log expression of Housekeeping genes by Cohort", data = expQC, pch = 20, col = c(COL.HLD, COL.OVD, COL.OVCL, COL.HLO, COL.OVO)) abline(h = 50, lty = 2, col = "red") grid(NULL, NULL, lwd = 1) ## ----lodPlot, fig.cap="Limit of detection by cohort."-------------------- boxplot(lod ~ cohort, ylab = "LOD", main = "Limit of detection (LOD) by Cohort", data = expQC, pch = 20, col = c(COL.HLD, COL.OVD, COL.OVCL, COL.HLO, COL.OVO)) abline(h = 50, lty = 2, col = "red") grid(NULL, NULL, lwd = 1) ## ----pergdPlot, fig.cap="Percent genes of total (excluding controls) detected above the limit of detection."---- boxplot(pergd ~ cohort, data = expQC, border = "white", ylab = "% Genes Detected", main = "Percent of Genes Detected Above \n the Limit of Detection", pch = 20, col = c(COL.HLD, COL.OVD, COL.OVCL, COL.HLO, COL.OVO)) abline(h = 50, lty = 2, col = "red") grid(NULL, NULL, lwd = 1) stripchart(pergd ~ cohort, data = expQC, vertical = TRUE, method = "jitter", pch = 20, cex = 0.4 , col = "#3A6EE3", add = TRUE) ## ----snPlot, fig.cap="Signal to Noise versus % Gene Detected by cohort."---- sn <- 100 detect <- 60 plot(expOVD$sn, expOVD$pergd, pch = 20, col = COL.OVD, xaxt = "n", ylim = c(0, 100), xlim = range(expOVD$sn), xlab = "Signal to Noise Ratio", ylab = "% Genes Detected") points(expOVO$sn, expOVO$pergd, pch = 20, col = COL.OVO) points(expOVCL$sn, expOVCL$pergd, pch = 20, col = COL.OVCL) points(expHLD$sn, expHLD$pergd, pch = 20, col = COL.HLD) points(expHLO$sn, expHLO$pergd, pch = 20, col = COL.HLO) axis(1, at = seq(0, max(expQC$sn) + 1, 300)) abline(v = sn, col = "red", lwd = 2) abline(h = detect, lty = 2) title("Signal to Noise vs \n Ratio of Genes Detected") legend("bottomright", c("HL", "OC", "OVCL", "HLO", "OVO"), pch = 20, bty = 'n', col = c(COL.HLD, COL.OVD, COL.OVCL, COL.HLO, COL.OVO)) ## ----snZoom, fig.cap="Signal to Noise versus % Gene Detected by cohort zoomed in to the area of 
possible failures."---- plot(expOVD$sn, expOVD$pergd, pch = 20, col = COL.OVD, xaxt = "n", ylim = c(0, 100), xlim = c(0, 6000), xlab = "Signal to Noise Ratio ", ylab = "Ratio of Genes Detected") points(expOVO$sn, expOVO$pergd, pch = 20, col = COL.OVO) points(expOVCL$sn, expOVCL$pergd, pch = 20, col = COL.OVCL) points(expHLD$sn, expHLD$pergd, pch = 20, col = COL.HLD) points(expHLO$sn, expHLO$pergd, pch = 20, col = COL.HLO) axis(1, at = seq(0, max(expQC$sn) + 1, 300)) abline(v = sn, col = "red", lwd = 2) abline(h = detect, lty = 2) title("Signal to Noise vs \n Ratio of Genes Detected (Zooming-in)") legend("bottomright", c("HL", "OC", "OVCL", "HLO", "OVO"), pch = 20, bty = 'n', col = c(COL.HLD, COL.OVD, COL.OVCL, COL.HLO, COL.OVO)) ## ----HKnorm-------------------------------------------------------------- expHLD0 <- expHLD any(expHLD0$QCFlag == "Failed") expHLD0$sampleID[which(expHLD0$QCFlag == "Failed")] ## ----remove_samples------------------------------------------------------ expHLD <- filter(expHLD0, sampleID != "HL1_18" & sampleID != "HL2_18") hld <- hld.r[, !colnames(hld.r) %in% c("HL1_18", "HL2_18")] ## ----normalize_HK-------------------------------------------------------- # If data already log normalized hld.n <- HKnorm(hld, is.logged = TRUE) # Otherwise, normalize to HK hld.n <- HKnorm(hld) hld1 <- hld.n[, grep("HL1", colnames(hld.n))] exp.hld1 <- subset(expHLD, geneRLF == "HL1") hld2 <- hld.n[, grep("HL2", colnames(hld.n))] exp.hld2 <- subset(expHLD, geneRLF == "HL2") ## ----refMethod----------------------------------------------------------- r <- 3 # The number of references to use choice.refs <- exp.hld1$sampleID[sample((1:dim(exp.hld1)[1]), r, replace = F)] # select reference samples randomly R1 <- t(hld1[, choice.refs]) R2 <- t(hld2[, paste("HL2", getNum(choice.refs), sep = "_")]) Y <- t(hld2[, !colnames(hld2) %in% paste("HL2", getNum(choice.refs), sep = "_")]) S2.r <- t(refMethod(Y, R1, R2)) # Data from CodeSet 2 now calibrated for CodeSet 1 ## 
----plot_gene----------------------------------------------------------- set.seed(2016) gene <- sample(1:nrow(hld1), 1) par(mfrow = c(1, 2)) plot(t(hld1[gene, ]), t(hld2[gene, ]), xlab = "HL1", ylab = "HL2", main = "No Correction") abline(0, 1) plot(t(hld1[gene, !(colnames(hld1) %in% choice.refs)]), t(S2.r[gene, ]), xlab = "HL1", ylab = "HL2", main = "Corrected") abline(0, 1)
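The `refMethod` call above calibrates CodeSet 2 expression to CodeSet 1 using the samples run on both CodeSets. Its actual computation lives inside nanostringr; the sketch below only illustrates the general idea of reference-based calibration, under my assumption that a per-gene shift estimated from the shared references is applied:

```r
# Minimal sketch of reference-based calibration (an assumption about the idea,
# not nanostringr's actual refMethod implementation): shift every CodeSet-2
# sample by the per-gene mean difference observed on the shared references.
calibrate_ref <- function(Y, R1, R2) {
  # Y, R1, R2: samples x genes matrices; R1 and R2 are the same reference
  # samples measured on CodeSet 1 and CodeSet 2 respectively.
  shift <- colMeans(R1) - colMeans(R2)  # per-gene offset between CodeSets
  sweep(Y, 2, shift, "+")
}

set.seed(1)
R1 <- matrix(rnorm(9, mean = 5), nrow = 3)   # 3 references x 3 genes
R2 <- R1 - 2                                 # CodeSet 2 reads 2 units lower
Y  <- matrix(rnorm(15, mean = 3), nrow = 5)  # 5 new CodeSet-2 samples
Y.cal <- calibrate_ref(Y, R1, R2)            # shifted up by 2 per gene
```

Plotting a gene's calibrated values against its CodeSet-1 values, as the `plot_gene` chunk does, is the natural check that such a correction worked.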
/data/genthat_extracted_code/nanostringr/vignettes/Overview.R
no_license
surayaaramli/typeRrh
R
false
false
6,690
r
# tg 2018 code # rm(list=ls(all=T)) setwd("H:\\kukrebh\\gola-done\\tg18\\DS") # ------------step 1 read all training csv files------------# DEO_B_DATA<-read.csv("Deodorant-B.CSV") DEO_F_DATA<-read.csv("Deodorant-F.CSV") DEO_G_DATA<-read.csv("Deodorant-G.CSV") DEO_H_DATA<-read.csv("Deodorant-H.CSV") DEO_J_DATA<-read.csv("Deodorant-J.CSV") NAMES<-as.data.frame(names(DEO_B_DATA)) NAMES$F_NAMES<-names(DEO_F_DATA) NAMES$G_NAMES<-names(DEO_G_DATA) NAMES$H_NAMES<-names(DEO_H_DATA) NAMES$J_NAMES<-names(DEO_J_DATA) # ------------step 2 join them all in 1 file------------# # before binding deal with q8 columns # 1 2 5 6 7 8 9 10 11 12 13 17 18 19 20 DEO_B_DATA$q8.7 <- '0' DEO_B_DATA$q8.9 <- '0' DEO_B_DATA$q8.10 <- '0' DEO_B_DATA$q8.17 <- '0' DEO_B_DATA$q8.18 <- '0' DEO_F_DATA$q8.2 <- '0' DEO_F_DATA$q8.8 <- '0' DEO_F_DATA$q8.10 <- '0' DEO_F_DATA$q8.12 <- '0' DEO_F_DATA$q8.17 <- '0' DEO_G_DATA$q8.8 <- '0' DEO_G_DATA$q8.9 <- '0' DEO_G_DATA$q8.10 <- '0' DEO_G_DATA$q8.18 <- '0' DEO_G_DATA$q8.20 <- '0' DEO_H_DATA$q8.8 <- '0' DEO_H_DATA$q8.9 <- '0' DEO_H_DATA$q8.10 <- '0' DEO_H_DATA$q8.17 <- '0' DEO_H_DATA$q8.18 <- '0' DEO_J_DATA$q8.2 <- '0' DEO_J_DATA$q8.8 <- '0' DEO_J_DATA$q8.9 <- '0' DEO_J_DATA$q8.17 <- '0' DEO_J_DATA$q8.20 <- '0' # deal with s13.2,6,8,10 colnames(DEO_B_DATA)[colnames(DEO_B_DATA)=="s13.2"] <- "frag_usd_past_6_mths" colnames(DEO_F_DATA)[colnames(DEO_F_DATA)=="s13.6"] <- "frag_usd_past_6_mths" colnames(DEO_G_DATA)[colnames(DEO_G_DATA)=="s13.7"] <- "frag_usd_past_6_mths" colnames(DEO_H_DATA)[colnames(DEO_H_DATA)=="s13.8"] <- "frag_usd_past_6_mths" colnames(DEO_J_DATA)[colnames(DEO_J_DATA)=="s13.10"] <- "frag_usd_past_6_mths" # deal with s13a.b.most.often colnames(DEO_B_DATA)[colnames(DEO_B_DATA)=="s13a.b.most.often"] <- "most_often" colnames(DEO_F_DATA)[colnames(DEO_F_DATA)=="s13a.f.most.often"] <- "most_often" colnames(DEO_G_DATA)[colnames(DEO_G_DATA)=="s13a.g.most.often"] <- "most_often" colnames(DEO_H_DATA)[colnames(DEO_H_DATA)=="s13a.h.most.often"] <- 
"most_often" colnames(DEO_J_DATA)[colnames(DEO_J_DATA)=="s13a.j.most.often"] <- "most_often" # combine Complete_DATA<- rbind(DEO_B_DATA,DEO_F_DATA) Complete_DATA_2<-rbind(Complete_DATA,DEO_G_DATA) Complete_DATA_3<-rbind(Complete_DATA_2,DEO_H_DATA) Complete_DATA_4<-rbind(Complete_DATA_3,DEO_J_DATA) names(Complete_DATA_4) #------------combine test data in training------------# Test_Data<-read.csv("test-data.CSV") Test_Data$Instant.Liking <- NA Complete_DATA_4[,'q8.7']<-NULL Complete_DATA_4[,'q8.9']<-NULL Complete_DATA_4[,'q8.10']<-NULL Complete_DATA_4[,'q8.17']<-NULL Complete_DATA_4[,'q8.18']<-NULL colnames(Test_Data)[colnames(Test_Data)=="s13a.b.most.often"] <- "most_often" colnames(Test_Data)[colnames(Test_Data)=="s13.2"] <- "frag_usd_past_6_mths" names(Complete_DATA_4) names(Test_Data) nrow(Test_Data) # rearrange columns to combine train and test TRAIN_DATA<- Complete_DATA_4[, c("Respondent.ID","Product.ID","Product","q1_1.personal.opinion.of.this.Deodorant","q2_all.words", "q3_1.strength.of.the.Deodorant","q4_1.artificial.chemical","q4_2.attractive","q4_3.bold","q4_4.boring", "q4_5.casual" ,"q4_6.cheap","q4_7.clean","q4_8.easy.to.wear","q4_9.elegant","q4_10.feminine","q4_11.for.someone.like.me", "q4_12.heavy","q4_13.high.quality","q4_14.long.lasting","q4_15.masculine","q4_16.memorable","q4_17.natural","q4_18.old.fashioned", "q4_19.ordinary","q4_20.overpowering","q4_21.sharp","q4_22.sophisticated","q4_23.upscale","q4_24.well.rounded","q5_1.Deodorant.is.addictive", "q7","q8.1","q8.2","q8.5","q8.6","q8.8","q8.11","q8.12" ,"q8.13","q8.19","q8.20","q9.how.likely.would.you.be.to.purchase.this.Deodorant", "q10.prefer.this.Deodorant.or.your.usual.Deodorant","q11.time.of.day.would.this.Deodorant.be.appropriate","q12.which.occasions.would.this.Deodorant.be.appropriate", "Q13_Liking.after.30.minutes","q14.Deodorant.overall.on.a.scale.from.1.to.10","ValSegb","s7.involved.in.the.selection.of.the.cosmetic.products", 
"s8.ethnic.background","s9.education","s10.income","s11.marital.status","s12.working.status","frag_usd_past_6_mths","most_often", "s13b.bottles.of.Deodorant.do.you.currently.own","Instant.Liking" )] NAMES2<-as.data.frame(names(Complete_DATA_4)) NAMES2$test_names<-names(Test_Data) NAMES2$train_names<-names(TRAIN_DATA) nrow(Test_Data) # 5105 nrow(TRAIN_DATA) # 2500 total_data<-rbind(TRAIN_DATA,Test_Data) nrow(total_data) #------------step 3 clean and rename factors------------# total_data$Respondent_id_1<-as.factor(total_data$Respondent.ID) total_data$Product_id_2<-as.factor(total_data$Product.ID) total_data$Instant_liking_target <-as.factor(total_data$Instant.Liking) total_data$personal_opinion_3<- as.ordered(total_data$q1_1.personal.opinion.of.this.Deodorant) total_data$all_words_4<-as.factor(total_data$q2_all.words) total_data$strength_of_deo_5<- as.ordered(total_data$q3_1.strength.of.the.Deodorant) total_data$artifical_chemical_6<-as.factor(total_data$q4_1.artificial.chemical) total_data$attractive_6<-as.factor(total_data$q4_2.attractive) total_data$bold_6<-as.factor(total_data$q4_3.bold) total_data$boring_6<-as.factor(total_data$q4_4.boring) total_data$casual_6<-as.factor(total_data$q4_5.casual) total_data$cheap_6<-as.factor(total_data$q4_6.cheap) total_data$clean_6<-as.factor(total_data$q4_7.clean) total_data$easy_to_wear_6<-as.factor(total_data$q4_8.easy.to.wear) total_data$elegant_6<-as.factor(total_data$q4_9.elegant) total_data$feminine_6<-as.factor(total_data$q4_10.feminine) total_data$forsomeonelikeme_6<-as.factor(total_data$q4_11.for.someone.like.me) total_data$heavy_6<-as.factor(total_data$q4_12.heavy) total_data$high_quality_6<-as.factor(total_data$q4_13.high.quality) total_data$long_lasting_6<-as.factor(total_data$q4_14.long.lasting) total_data$masculine_6<-as.factor(total_data$q4_15.masculine) total_data$memorable_6<-as.factor(total_data$q4_16.memorable) total_data$natural_6<-as.factor(total_data$q4_17.natural) 
total_data$old_fashioned_6<-as.factor(total_data$q4_18.old.fashioned)
total_data$ordinary_6<-as.factor(total_data$q4_19.ordinary)
total_data$overpowering_6<-as.factor(total_data$q4_20.overpowering)
total_data$sharp_6<-as.factor(total_data$q4_21.sharp)
total_data$sophisticated_6<-as.factor(total_data$q4_22.sophisticated)
total_data$upscale_6<-as.factor(total_data$q4_23.upscale)
total_data$well_rounded_6<-as.factor(total_data$q4_24.well.rounded)
total_data$Deo_is_addictive_7<- as.ordered(total_data$q5_1.Deodorant.is.addictive)
total_data$q7_8<- (total_data$q7)
total_data$q81_9<-as.factor(total_data$q8.1)
total_data$q82_9<-as.factor(total_data$q8.2)
total_data$q85_9<-as.factor(total_data$q8.5)
total_data$q86_9<-as.factor(total_data$q8.6)
total_data$q88_9<-as.factor(total_data$q8.8)
total_data$q811_9<-as.factor(total_data$q8.11)
total_data$q812_9<-as.factor(total_data$q8.12)
total_data$q813_9<-as.factor(total_data$q8.13)
total_data$q819_9<-as.factor(total_data$q8.19)
total_data$q820_9<-as.factor(total_data$q8.20)
total_data$how_likely_purchase_10 <-as.ordered(total_data$q9.how.likely.would.you.be.to.purchase.this.Deodorant)
total_data$prefer_this_or_usual_11 <-as.ordered(total_data$q10.prefer.this.Deodorant.or.your.usual.Deodorant)
total_data$time_of_day_12 <-as.factor(total_data$q11.time.of.day.would.this.Deodorant.be.appropriate)
total_data$occasions_13 <-as.factor(total_data$q12.which.occasions.would.this.Deodorant.be.appropriate)
total_data$liking_after30min_14 <-as.ordered(total_data$Q13_Liking.after.30.minutes)
total_data$overall_15 <-as.ordered(total_data$q14.Deodorant.overall.on.a.scale.from.1.to.10)
total_data$ValSegb_16 <-as.factor(total_data$ValSegb)
total_data$involvement_17 <-as.ordered(total_data$s7.involved.in.the.selection.of.the.cosmetic.products)
total_data$ethnic_18 <-as.factor(total_data$s8.ethnic.background)
total_data$education_19 <-as.factor(total_data$s9.education)
total_data$income_20 <-as.ordered(total_data$s10.income)
total_data$m_status_21 <-as.factor(total_data$s11.marital.status)
total_data$w_status_22 <-as.factor(total_data$s12.working.status)
total_data$frag_usd_past_6_mths_23 <-(total_data$frag_usd_past_6_mths)
total_data$most_often_24 <-as.factor(total_data$most_often)
total_data$bottles_own_25 <-as.ordered(total_data$s13b.bottles.of.Deodorant.do.you.currently.own)
total_data$Product_26 <-(total_data$Product)
#------------step 4 remove old factor columns and filter out training data again------------#
names(total_data)
drops <- c(
  "Respondent.ID","Product.ID","Product","Instant.Liking","q1_1.personal.opinion.of.this.Deodorant","q2_all.words",
  "q3_1.strength.of.the.Deodorant","q4_1.artificial.chemical","q4_2.attractive","q4_3.bold","q4_4.boring",
  "q4_5.casual","q4_6.cheap","q4_7.clean","q4_8.easy.to.wear","q4_9.elegant","q4_10.feminine","q4_11.for.someone.like.me",
  "q4_12.heavy","q4_13.high.quality","q4_14.long.lasting","q4_15.masculine","q4_16.memorable","q4_17.natural","q4_18.old.fashioned",
  "q4_19.ordinary","q4_20.overpowering","q4_21.sharp","q4_22.sophisticated","q4_23.upscale","q4_24.well.rounded","q5_1.Deodorant.is.addictive",
  "q7","q8.1","q8.2","q8.5","q8.6","q8.8","q8.11","q8.12","q8.13","q8.19","q8.20","q9.how.likely.would.you.be.to.purchase.this.Deodorant",
  "q10.prefer.this.Deodorant.or.your.usual.Deodorant","q11.time.of.day.would.this.Deodorant.be.appropriate","q12.which.occasions.would.this.Deodorant.be.appropriate",
  "Q13_Liking.after.30.minutes","q14.Deodorant.overall.on.a.scale.from.1.to.10","ValSegb","s7.involved.in.the.selection.of.the.cosmetic.products",
  "s8.ethnic.background","s9.education","s10.income","s11.marital.status","s12.working.status","frag_usd_past_6_mths","most_often",
  "s13b.bottles.of.Deodorant.do.you.currently.own"
)
Comp_Data_filtered <- total_data[, !(names(total_data) %in% drops)]
names(Comp_Data_filtered)
test_backup <- subset(Comp_Data_filtered, is.na(Instant_liking_target))
str(Comp_Data_filtered)
Comp_Data_filtered[,'involvement_17'] <- NULL
Comp_Data_filtered[,'Respondent_id_1'] <- NULL
Comp_Data_filtered[,'Product_id_2'] <- NULL
# -----------------------train and test data split--------------------#
Test_filtered <- subset(Comp_Data_filtered, is.na(Instant_liking_target))
Train_filtered <- subset(Comp_Data_filtered, !is.na(Instant_liking_target))
nrow(Test_filtered)
nrow(Train_filtered)
str(Train_filtered)
str(Test_filtered)
##--------------------test correlations--------------------#
# install.packages("H:/kukrebh/d backup/R Packages/R Packages/tidyr_0.4.1.zip", repos = NULL, type = "win.binary")
# library(tidyr)
# Train_filtered %>% gather() %>% head()
# Train_filtered %>%
#   gather(-Instant_liking_target, key = "var", value = "value") %>%
#   ggplot(aes(x = value, y = Instant_liking_target)) +
#   geom_point() +
#   facet_wrap(~ var, scales = "free") +
#   theme_bw()
# method 2: chi-square test
test_backup[,'involvement_17'] <- NULL
test_backup[,'Product_id_2'] <- NULL
test_result <- read.csv("results_1.CSV")
names(test_result)
colnames(test_result)[1] <- "Respondent_id_1"
test_result[,'Product'] <- NULL
total <- merge(test_backup, test_result, by = "Respondent_id_1")
names(total)
total[,'Respondent_id_1'] <- NULL
total$Instant_liking_target <- as.factor(total$Instant.Liking)
total[,'Instant.Liking'] <- NULL
names(total)
# move the target to the first column, by name rather than by position
total_use <- total[, c("Instant_liking_target", setdiff(names(total), "Instant_liking_target"))]
names(total_use)
names(Train_filtered)
combined_analyse <- rbind(Train_filtered, total_use)
library(MASS)
tbl_personal_op <- table(Train_filtered$Instant_liking_target, Train_filtered$personal_opinion_3)
chisq_res <- chisq.test(tbl_personal_op)
chisq_res
# data: tbl_personal_op
# X-squared = 2500, df = 6, p-value < 2.2e-16
# As the p-value 2.2e-16 is much less than the .05 significance level,
# we reject the null hypothesis that liking is independent of personal opinion
#plot(Train_filtered$personal_opinion_3, Train_filtered$Instant_liking_target) names(Train_filtered) survey_total<-Train_filtered surveys<-survey_total[1:2500, -c(25,48)] s_train_test<-combined_analyse nrow(s_train_test) surveys_tt<-s_train_test[1:7605, -c(25,48)] with(surveys, summary(Instant_liking_target)) # 0 1 #1882 618 with(surveys, table(Instant_liking_target, Product_26)) # Product_26 # Instant_liking_target Deodorant B Deodorant F Deodorant G Deodorant H Deodorant J # 0 373 373 378 387 371 # 1 127 127 122 113 129 with(surveys, table(personal_opinion_3,Instant_liking_target)) # Instant_liking_target # personal_opinion_3 0 1 # 1 0 105 # 2 0 91 # 3 0 107 # 4 0 315 # 5 707 0 # 6 804 0 # 7 371 0 library(ggplot2) #1 #install.packages("H:/kukrebh/SF code migration/packages/plyr_1.8.4.zip", repos = NULL, type = "win.binary") ggplot(surveys, aes(x = Product_26, fill = Instant_liking_target)) + geom_bar(position = "dodge") #for both #ggplot(surveys_tt, aes(x = Product_26, fill = Instant_liking_target)) + geom_bar() #2 ggplot(surveys, aes(x = personal_opinion_3, fill = Instant_liking_target)) + geom_bar(show.legend = TRUE) ?geom_bar #3 tb<-with(surveys, table(strength_of_deo_5,Instant_liking_target)) tb<-as.data.frame(tb) tb ggplot(surveys, aes(x = strength_of_deo_5, fill = Instant_liking_target)) + geom_bar(position = "dodge") #3a strength win surveys_b<-surveys[surveys$Instant_liking_target==1,] summary(surveys_b) ggplot(surveys_b, aes(x = strength_of_deo_5, fill = Instant_liking_target)) + geom_bar(position = "dodge") #4 ggplot(surveys, aes(x = cheap_6, fill = Instant_liking_target)) + geom_bar(position = "dodge") tb2<-with(surveys, table(cheap_6,Instant_liking_target)) tb2<-as.data.frame(tb2) tb2 #4b ggplot(surveys_b, aes(x = cheap_6, fill = Instant_liking_target)) + geom_bar(position = "dodge") #5 ggplot(surveys, aes(x = all_words_4, fill = Instant_liking_target)) + geom_bar() tb3<-with(surveys, table(all_words_4,Instant_liking_target)) tb3<-as.data.frame(tb3) 
tb3 #5a ggplot(surveys_b, aes(x = all_words_4, fill = Instant_liking_target)) + geom_bar() #6 ggplot(surveys, aes(x = most_often_24, fill = Instant_liking_target)) + geom_bar() tb4<-with(surveys, table(Instant_liking_target,most_often_24)) tb4<-as.data.frame(tb4) tb4 #7 ggplot(surveys, aes(x = Deo_is_addictive_7, fill = Instant_liking_target)) + geom_bar() tb5<-with(surveys, table(Deo_is_addictive_7,Instant_liking_target)) tb5<-as.data.frame(tb5) tb5 #8 w_status_22 ggplot(surveys, aes(x = w_status_22, fill = Instant_liking_target)) + geom_bar() tb6<-with(surveys, table(w_status_22,Instant_liking_target)) tb6<-as.data.frame(tb6) tb6 #9 m_status_21 ggplot(surveys, aes(x = m_status_21, fill = Instant_liking_target)) + geom_bar() tb7<-with(surveys, table(m_status_21,Instant_liking_target)) tb7<-as.data.frame(tb7) tb7 names(surveys) #--------------var importance library(Boruta) #install.packages("H:/kukrebh/SF code migration/packages/Rcpp_0.12.10.zip", repos = NULL, type = "win.binary") #str(combined_analyse) #Train_filtered BroutaModel <- Boruta( Instant_liking_target ~ . , data = Train_filtered, doTrace = 2) print(BroutaModel) plot(BroutaModel, xlab = "", xaxt = "n") lz<-lapply(1:ncol(BroutaModel$ImpHistory),function(i) BroutaModel$ImpHistory[is.finite(BroutaModel$ImpHistory[,i]),i]) names(lz) <- colnames(BroutaModel$ImpHistory) Labels <- sort(sapply(lz,median)) axis(side = 1,las=2,labels = names(Labels), at = 1:ncol(BroutaModel$ImpHistory), cex.axis = 0.7) final.boruta <- TentativeRoughFix(BroutaModel) print(final.boruta) #--------------------modelling------------------------------#?glm model_1<-glm(Instant_liking_target ~ . 
, family=binomial(link = "logit"), data=Train_filtered,control = list(maxit = 100)) model_1 predictions_1<-predict(model_1,Test_filtered) predictions_1 Test_filtered_save<-Test_filtered Test_filtered_save$Respondent.ID<-test_backup$Respondent_id_1 # remove waste columns names(Test_filtered_save) Test_filtered_save<-Test_filtered_save[c(51,52)] Test_filtered_save$fitted_values_1<-predictions_1 head(Test_filtered_save) #Warning message: glm.fit: fitted probabilities numerically 0 or 1 occurred # trying glmnet bcoz glm.fit: fitted probabilities numerically 0 or 1 occurred # load package glmnet in R glmnet_2.0-13 library(glmnet) xfactors <- model.matrix( ~Train_filtered$personal_opinion_3 + Train_filtered$all_words_4 + Train_filtered$strength_of_deo_5 + Train_filtered$artifical_chemical_6 + Train_filtered$attractive_6 + Train_filtered$bold_6 + Train_filtered$boring_6 + Train_filtered$casual_6 + Train_filtered$cheap_6 + Train_filtered$clean_6 + Train_filtered$easy_to_wear_6 + Train_filtered$elegant_6 + Train_filtered$feminine_6 + Train_filtered$forsomeonelikeme_6 + Train_filtered$heavy_6 + Train_filtered$high_quality_6 + Train_filtered$long_lasting_6 + Train_filtered$masculine_6 + Train_filtered$memorable_6 + Train_filtered$natural_6 + Train_filtered$old_fashioned_6 + Train_filtered$ordinary_6 + Train_filtered$Deo_is_addictive_7 + Train_filtered$q7_8 + Train_filtered$q81_9 + Train_filtered$q82_9 + Train_filtered$q85_9+ Train_filtered$q86_9+ Train_filtered$q88_9+ Train_filtered$q811_9+ Train_filtered$q812_9 + Train_filtered$q813_9 + Train_filtered$q819_9 + Train_filtered$q820_9 + Train_filtered$how_likely_purchase_10 + Train_filtered$prefer_this_or_usual_11 + Train_filtered$time_of_day_12 + Train_filtered$occasions_13 + Train_filtered$liking_after30min_14 + Train_filtered$overall_15 + Train_filtered$ValSegb_16 + Train_filtered$ethnic_18 + Train_filtered$education_19 + Train_filtered$income_20 + Train_filtered$m_status_21 + Train_filtered$w_status_22 + 
Train_filtered$most_often_24 + Train_filtered$bottles_own_25 + Train_filtered$Product_26) x <- as.matrix(data.frame(Train_filtered$frag_usd_past_6_mths_23, xfactors)) y <-as.double(Train_filtered$Instant_liking_target) GLMnet_model_1 <- glmnet(x, y, family="binomial", alpha=0.755, nlambda=1000, standardize=FALSE, maxit=100000) str(GLMnet_model_1) #get best lambada # Note alpha=1 for lasso only and can blend with ridge penalty down to # # alpha=0 ridge only. cv.glmmod <- cv.glmnet(x, y=as.double(Train_filtered$Instant_liking_target), alpha=1) plot(cv.glmmod) (best.lambda <- cv.glmmod$lambda.min) #test x_testfactors <- model.matrix( ~Test_filtered$personal_opinion_3 + Test_filtered$all_words_4 + Test_filtered$strength_of_deo_5 + Test_filtered$artifical_chemical_6 + Test_filtered$attractive_6 + Test_filtered$bold_6 + Test_filtered$boring_6 + Test_filtered$casual_6 + Test_filtered$cheap_6 + Test_filtered$clean_6 + Test_filtered$easy_to_wear_6 + Test_filtered$elegant_6 + Test_filtered$feminine_6 + Test_filtered$forsomeonelikeme_6 + Test_filtered$heavy_6 + Test_filtered$high_quality_6 + Test_filtered$long_lasting_6 + Test_filtered$masculine_6 + Test_filtered$memorable_6 + Test_filtered$natural_6 + Test_filtered$old_fashioned_6 + Test_filtered$ordinary_6 + Test_filtered$Deo_is_addictive_7 + Test_filtered$q7_8 + Test_filtered$q81_9 + Test_filtered$q82_9 + Test_filtered$q85_9+ Test_filtered$q86_9+ Test_filtered$q88_9+ Test_filtered$q811_9+ Test_filtered$q812_9 + Test_filtered$q813_9 + Test_filtered$q819_9 + Test_filtered$q820_9 + Test_filtered$how_likely_purchase_10 + Test_filtered$prefer_this_or_usual_11 + Test_filtered$time_of_day_12 + Test_filtered$occasions_13 + Test_filtered$liking_after30min_14 + Test_filtered$overall_15 + Test_filtered$ValSegb_16 + Test_filtered$ethnic_18 + Test_filtered$education_19 + Test_filtered$income_20 + Test_filtered$m_status_21 + Test_filtered$w_status_22 + Test_filtered$most_often_24 + Test_filtered$bottles_own_25 + 
Test_filtered$Product_26) x_test <- as.matrix(data.frame(Test_filtered$frag_usd_past_6_mths_23, x_testfactors)) nrow(x_test) str(x_test) lasso_pred = predict(GLMnet_model_1, s = best.lambda, newx = x_test, type="response") # Use best lambda to predict test data lasso_pred head(x_test) Test_filtered_save$lasso_fitted_values_2<-lasso_pred # set threshold use if else n save result_2 column in this Test_filtered_save$result_lasso_2 <-ifelse(Test_filtered_save$lasso_fitted_values_2< 0.5,0, ifelse(Test_filtered_save$lasso_fitted_values_2 >=0.5,1,NA )) Test_filtered_save$result_lasso_2 <-as.factor(Test_filtered_save$result_lasso_2 ) #View(Test_filtered_save) names(Test_filtered_save) # save the file submission<-read.csv("sample-submission.CSV") str(submission) result_1<-submission[c(1,2)] result_1_val<-Test_filtered_save[c(2,5)] names(result_1_val) total_1 <- merge(result_1,result_1_val,by="Respondent.ID") colnames(total_1)[3] <- "Instant.Liking" nrow(total_1) write.csv(total_1,file = "results_1.csv") #library(glmnet) #https://stats.stackexchange.com/questions/11109/how-to-deal-with-perfect-separation-in-logistic-regression ################################################################################################################################ nrow(Train_filtered)
# ---- source file metadata ----
# path:         /tg-ds-hack-coor.R
# license:      no_license
# repo:         RYTHAM/Backup
# language:     R
# is_vendor:    false
# is_generated: false
# length_bytes: 22,779
# extension:    r
# tg 2018 code
# rm(list = ls(all = TRUE))
setwd("H:\\kukrebh\\gola-done\\tg18\\DS")

# ------------ step 1: read all training csv files ------------#
DEO_B_DATA <- read.csv("Deodorant-B.CSV")
DEO_F_DATA <- read.csv("Deodorant-F.CSV")
DEO_G_DATA <- read.csv("Deodorant-G.CSV")
DEO_H_DATA <- read.csv("Deodorant-H.CSV")
DEO_J_DATA <- read.csv("Deodorant-J.CSV")

# side-by-side view of the column names of the five files
NAMES <- as.data.frame(names(DEO_B_DATA))
NAMES$F_NAMES <- names(DEO_F_DATA)
NAMES$G_NAMES <- names(DEO_G_DATA)
NAMES$H_NAMES <- names(DEO_H_DATA)
NAMES$J_NAMES <- names(DEO_J_DATA)

# ------------ step 2: join them all in one file ------------#
# before binding, align the q8 columns; the shared set is
# 1 2 5 6 7 8 9 10 11 12 13 17 18 19 20
DEO_B_DATA$q8.7  <- '0'
DEO_B_DATA$q8.9  <- '0'
DEO_B_DATA$q8.10 <- '0'
DEO_B_DATA$q8.17 <- '0'
DEO_B_DATA$q8.18 <- '0'

DEO_F_DATA$q8.2  <- '0'
DEO_F_DATA$q8.8  <- '0'
DEO_F_DATA$q8.10 <- '0'
DEO_F_DATA$q8.12 <- '0'
DEO_F_DATA$q8.17 <- '0'

DEO_G_DATA$q8.8  <- '0'
DEO_G_DATA$q8.9  <- '0'
DEO_G_DATA$q8.10 <- '0'
DEO_G_DATA$q8.18 <- '0'
DEO_G_DATA$q8.20 <- '0'

DEO_H_DATA$q8.8  <- '0'
DEO_H_DATA$q8.9  <- '0'
DEO_H_DATA$q8.10 <- '0'
DEO_H_DATA$q8.17 <- '0'
DEO_H_DATA$q8.18 <- '0'

DEO_J_DATA$q8.2  <- '0'
DEO_J_DATA$q8.8  <- '0'
DEO_J_DATA$q8.9  <- '0'
DEO_J_DATA$q8.17 <- '0'
DEO_J_DATA$q8.20 <- '0'

# harmonise the per-product spend columns s13.2/6/7/8/10
colnames(DEO_B_DATA)[colnames(DEO_B_DATA) == "s13.2"]  <- "frag_usd_past_6_mths"
colnames(DEO_F_DATA)[colnames(DEO_F_DATA) == "s13.6"]  <- "frag_usd_past_6_mths"
colnames(DEO_G_DATA)[colnames(DEO_G_DATA) == "s13.7"]  <- "frag_usd_past_6_mths"
colnames(DEO_H_DATA)[colnames(DEO_H_DATA) == "s13.8"]  <- "frag_usd_past_6_mths"
colnames(DEO_J_DATA)[colnames(DEO_J_DATA) == "s13.10"] <- "frag_usd_past_6_mths"

# harmonise the per-product s13a.*.most.often columns
colnames(DEO_B_DATA)[colnames(DEO_B_DATA) == "s13a.b.most.often"] <- "most_often"
colnames(DEO_F_DATA)[colnames(DEO_F_DATA) == "s13a.f.most.often"] <- "most_often"
colnames(DEO_G_DATA)[colnames(DEO_G_DATA) == "s13a.g.most.often"] <- "most_often"
colnames(DEO_H_DATA)[colnames(DEO_H_DATA) == "s13a.h.most.often"] <- "most_often"
colnames(DEO_J_DATA)[colnames(DEO_J_DATA) == "s13a.j.most.often"] <- "most_often"

# combine the five product files
Complete_DATA   <- rbind(DEO_B_DATA, DEO_F_DATA)
Complete_DATA_2 <- rbind(Complete_DATA, DEO_G_DATA)
Complete_DATA_3 <- rbind(Complete_DATA_2, DEO_H_DATA)
Complete_DATA_4 <- rbind(Complete_DATA_3, DEO_J_DATA)
names(Complete_DATA_4)

# ------------ combine test data with training ------------#
Test_Data <- read.csv("test-data.CSV")
Test_Data$Instant.Liking <- NA

# drop the q8 columns that are absent from the test data
Complete_DATA_4[, 'q8.7']  <- NULL
Complete_DATA_4[, 'q8.9']  <- NULL
Complete_DATA_4[, 'q8.10'] <- NULL
Complete_DATA_4[, 'q8.17'] <- NULL
Complete_DATA_4[, 'q8.18'] <- NULL

colnames(Test_Data)[colnames(Test_Data) == "s13a.b.most.often"] <- "most_often"
colnames(Test_Data)[colnames(Test_Data) == "s13.2"] <- "frag_usd_past_6_mths"
names(Complete_DATA_4)
names(Test_Data)
nrow(Test_Data)

# rearrange columns so train and test can be combined
TRAIN_DATA <- Complete_DATA_4[, c(
  "Respondent.ID","Product.ID","Product","q1_1.personal.opinion.of.this.Deodorant","q2_all.words",
  "q3_1.strength.of.the.Deodorant","q4_1.artificial.chemical","q4_2.attractive","q4_3.bold","q4_4.boring",
  "q4_5.casual","q4_6.cheap","q4_7.clean","q4_8.easy.to.wear","q4_9.elegant","q4_10.feminine","q4_11.for.someone.like.me",
  "q4_12.heavy","q4_13.high.quality","q4_14.long.lasting","q4_15.masculine","q4_16.memorable","q4_17.natural","q4_18.old.fashioned",
  "q4_19.ordinary","q4_20.overpowering","q4_21.sharp","q4_22.sophisticated","q4_23.upscale","q4_24.well.rounded","q5_1.Deodorant.is.addictive",
  "q7","q8.1","q8.2","q8.5","q8.6","q8.8","q8.11","q8.12","q8.13","q8.19","q8.20","q9.how.likely.would.you.be.to.purchase.this.Deodorant",
  "q10.prefer.this.Deodorant.or.your.usual.Deodorant","q11.time.of.day.would.this.Deodorant.be.appropriate","q12.which.occasions.would.this.Deodorant.be.appropriate",
  "Q13_Liking.after.30.minutes","q14.Deodorant.overall.on.a.scale.from.1.to.10","ValSegb","s7.involved.in.the.selection.of.the.cosmetic.products",
  "s8.ethnic.background","s9.education","s10.income","s11.marital.status","s12.working.status","frag_usd_past_6_mths","most_often",
  "s13b.bottles.of.Deodorant.do.you.currently.own","Instant.Liking")]

NAMES2 <- as.data.frame(names(Complete_DATA_4))
NAMES2$test_names  <- names(Test_Data)
NAMES2$train_names <- names(TRAIN_DATA)

nrow(Test_Data)   # 5105
nrow(TRAIN_DATA)  # 2500
total_data <- rbind(TRAIN_DATA, Test_Data)
nrow(total_data)

# ------------ step 3: clean and rename factors ------------#
total_data$Respondent_id_1 <- as.factor(total_data$Respondent.ID)
total_data$Product_id_2 <- as.factor(total_data$Product.ID)
total_data$Instant_liking_target <- as.factor(total_data$Instant.Liking)
total_data$personal_opinion_3 <- as.ordered(total_data$q1_1.personal.opinion.of.this.Deodorant)
total_data$all_words_4 <- as.factor(total_data$q2_all.words)
total_data$strength_of_deo_5 <- as.ordered(total_data$q3_1.strength.of.the.Deodorant)
total_data$artifical_chemical_6 <- as.factor(total_data$q4_1.artificial.chemical)
total_data$attractive_6 <- as.factor(total_data$q4_2.attractive)
total_data$bold_6 <- as.factor(total_data$q4_3.bold)
total_data$boring_6 <- as.factor(total_data$q4_4.boring)
total_data$casual_6 <- as.factor(total_data$q4_5.casual)
total_data$cheap_6 <- as.factor(total_data$q4_6.cheap)
total_data$clean_6 <- as.factor(total_data$q4_7.clean)
total_data$easy_to_wear_6 <- as.factor(total_data$q4_8.easy.to.wear)
total_data$elegant_6 <- as.factor(total_data$q4_9.elegant)
total_data$feminine_6 <- as.factor(total_data$q4_10.feminine)
total_data$forsomeonelikeme_6 <- as.factor(total_data$q4_11.for.someone.like.me)
total_data$heavy_6 <- as.factor(total_data$q4_12.heavy)
total_data$high_quality_6 <- as.factor(total_data$q4_13.high.quality)
total_data$long_lasting_6 <- as.factor(total_data$q4_14.long.lasting)
total_data$masculine_6 <- as.factor(total_data$q4_15.masculine)
total_data$memorable_6 <- as.factor(total_data$q4_16.memorable)
total_data$natural_6 <- as.factor(total_data$q4_17.natural)
total_data$old_fashioned_6 <- as.factor(total_data$q4_18.old.fashioned)
total_data$ordinary_6 <- as.factor(total_data$q4_19.ordinary)
# q4_20-q4_24 get their own variables; they must not overwrite the
# q4_11-q4_15 recodes (forsomeonelikeme_6 ... masculine_6) created above
total_data$overpowering_6 <- as.factor(total_data$q4_20.overpowering)
total_data$sharp_6 <- as.factor(total_data$q4_21.sharp)
total_data$sophisticated_6 <- as.factor(total_data$q4_22.sophisticated)
total_data$upscale_6 <- as.factor(total_data$q4_23.upscale)
total_data$well_rounded_6 <- as.factor(total_data$q4_24.well.rounded)
total_data$Deo_is_addictive_7 <- as.ordered(total_data$q5_1.Deodorant.is.addictive)
total_data$q7_8 <- total_data$q7  # numeric, left untransformed
total_data$q81_9 <- as.factor(total_data$q8.1)
total_data$q82_9 <- as.factor(total_data$q8.2)
total_data$q85_9 <- as.factor(total_data$q8.5)
total_data$q86_9 <- as.factor(total_data$q8.6)
total_data$q88_9 <- as.factor(total_data$q8.8)
total_data$q811_9 <- as.factor(total_data$q8.11)
total_data$q812_9 <- as.factor(total_data$q8.12)
total_data$q813_9 <- as.factor(total_data$q8.13)
total_data$q819_9 <- as.factor(total_data$q8.19)
total_data$q820_9 <- as.factor(total_data$q8.20)
total_data$how_likely_purchase_10 <- as.ordered(total_data$q9.how.likely.would.you.be.to.purchase.this.Deodorant)
total_data$prefer_this_or_usual_11 <- as.ordered(total_data$q10.prefer.this.Deodorant.or.your.usual.Deodorant)
total_data$time_of_day_12 <- as.factor(total_data$q11.time.of.day.would.this.Deodorant.be.appropriate)
total_data$occasions_13 <- as.factor(total_data$q12.which.occasions.would.this.Deodorant.be.appropriate)
total_data$liking_after30min_14 <- as.ordered(total_data$Q13_Liking.after.30.minutes)
total_data$overall_15 <- as.ordered(total_data$q14.Deodorant.overall.on.a.scale.from.1.to.10)
total_data$ValSegb_16 <- as.factor(total_data$ValSegb)
total_data$involvement_17 <- as.ordered(total_data$s7.involved.in.the.selection.of.the.cosmetic.products)
total_data$ethnic_18 <- as.factor(total_data$s8.ethnic.background)
total_data$education_19 <- as.factor(total_data$s9.education)
total_data$income_20 <- as.ordered(total_data$s10.income)
total_data$m_status_21 <- as.factor(total_data$s11.marital.status)
total_data$w_status_22 <- as.factor(total_data$s12.working.status)
total_data$frag_usd_past_6_mths_23 <- total_data$frag_usd_past_6_mths
total_data$most_often_24 <- as.factor(total_data$most_often)
total_data$bottles_own_25 <- as.ordered(total_data$s13b.bottles.of.Deodorant.do.you.currently.own)
total_data$Product_26 <- total_data$Product

# ------------ step 4: remove the old raw columns and re-split train/test ------------#
names(total_data)
drops <- c(
  "Respondent.ID","Product.ID","Product","Instant.Liking","q1_1.personal.opinion.of.this.Deodorant","q2_all.words",
  "q3_1.strength.of.the.Deodorant","q4_1.artificial.chemical","q4_2.attractive","q4_3.bold","q4_4.boring",
  "q4_5.casual","q4_6.cheap","q4_7.clean","q4_8.easy.to.wear","q4_9.elegant","q4_10.feminine","q4_11.for.someone.like.me",
  "q4_12.heavy","q4_13.high.quality","q4_14.long.lasting","q4_15.masculine","q4_16.memorable","q4_17.natural","q4_18.old.fashioned",
  "q4_19.ordinary","q4_20.overpowering","q4_21.sharp","q4_22.sophisticated","q4_23.upscale","q4_24.well.rounded","q5_1.Deodorant.is.addictive",
  "q7","q8.1","q8.2","q8.5","q8.6","q8.8","q8.11","q8.12","q8.13","q8.19","q8.20","q9.how.likely.would.you.be.to.purchase.this.Deodorant",
  "q10.prefer.this.Deodorant.or.your.usual.Deodorant","q11.time.of.day.would.this.Deodorant.be.appropriate","q12.which.occasions.would.this.Deodorant.be.appropriate",
  "Q13_Liking.after.30.minutes","q14.Deodorant.overall.on.a.scale.from.1.to.10","ValSegb","s7.involved.in.the.selection.of.the.cosmetic.products",
  "s8.ethnic.background","s9.education","s10.income","s11.marital.status","s12.working.status","frag_usd_past_6_mths","most_often",
  "s13b.bottles.of.Deodorant.do.you.currently.own"
)
Comp_Data_filtered <- total_data[, !(names(total_data) %in% drops)]
names(Comp_Data_filtered)
test_backup <- subset(Comp_Data_filtered, is.na(Instant_liking_target))
str(Comp_Data_filtered)
Comp_Data_filtered[, 'involvement_17'] <- NULL
Comp_Data_filtered[, 'Respondent_id_1'] <- NULL
Comp_Data_filtered[, 'Product_id_2'] <- NULL

# ----------------------- train and test data split --------------------#
Test_filtered  <- subset(Comp_Data_filtered, is.na(Instant_liking_target))
Train_filtered <- subset(Comp_Data_filtered, !is.na(Instant_liking_target))
nrow(Test_filtered)
nrow(Train_filtered)
str(Train_filtered)
str(Test_filtered)

## -------------------- #3 test correlations --------------------#
# install.packages("H:/kukrebh/d backup/R Packages/R Packages/tidyr_0.4.1.zip", repos = NULL, type = "win.binary")
# library(tidyr)
# Train_filtered %>% gather() %>% head()
# Train_filtered %>%
#   gather(-Instant_liking_target, key = "var", value = "value") %>%
#   ggplot(aes(x = value, y = Instant_liking_target)) +
#   geom_point() +
#   facet_wrap(~ var, scales = "free") +
#   theme_bw()

# method 2: chi-square test
test_backup[, 'involvement_17'] <- NULL
test_backup[, 'Product_id_2'] <- NULL

# pull in the earlier submission's predictions for the test rows
test_result <- read.csv("results_1.CSV")
names(test_result)
colnames(test_result)[1] <- "Respondent_id_1"
test_result[, 'Product'] <- NULL
total <- merge(test_backup, test_result, by = "Respondent_id_1")
names(total)
total[, 'Respondent_id_1'] <- NULL
total$Instant_liking_target <- as.factor(total$Instant.Liking)
total[, 'Instant.Liking'] <- NULL
names(total)
# move the target to the front so the columns line up with Train_filtered
total_use <- total[, c("Instant_liking_target", setdiff(names(total), "Instant_liking_target"))]
names(total_use)
names(Train_filtered)
combined_analyse <- rbind(Train_filtered, total_use)

library(MASS)
tbl_personal_op <- table(Train_filtered$Instant_liking_target, Train_filtered$personal_opinion_3)
chi_personal_op <- chisq.test(tbl_personal_op)  # avoid shadowing base::c
chi_personal_op
# data: tbl_personal_op
# X-squared = 2500, df = 6, p-value < 2.2e-16
# As the p-value (< 2.2e-16) is far below the .05 significance level,
# we reject the null hypothesis that instant liking is independent of personal opinion.
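# --- sketch: extend the chi-square check to every categorical predictor ---
# (illustrative addition, not part of the original submission; it assumes
# Train_filtered as built above and ranks each factor column by its
# chi-square p-value against the target)
fac_vars <- setdiff(names(Train_filtered)[sapply(Train_filtered, is.factor)],
                    "Instant_liking_target")
chisq_p <- sapply(fac_vars, function(v) {
  suppressWarnings(
    chisq.test(table(Train_filtered$Instant_liking_target,
                     Train_filtered[[v]]))$p.value)
})
sort(chisq_p)  # smallest p-values = strongest association with the target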
# plot(Train_filtered$personal_opinion_3, Train_filtered$Instant_liking_target)
names(Train_filtered)
survey_total <- Train_filtered
# drop the two numeric columns (q7_8, frag_usd_past_6_mths_23) for the factor plots
surveys <- survey_total[1:2500, !(names(survey_total) %in% c("q7_8", "frag_usd_past_6_mths_23"))]
s_train_test <- combined_analyse
nrow(s_train_test)
surveys_tt <- s_train_test[1:7605, !(names(s_train_test) %in% c("q7_8", "frag_usd_past_6_mths_23"))]

with(surveys, summary(Instant_liking_target))
#    0    1
# 1882  618

with(surveys, table(Instant_liking_target, Product_26))
#                      Product_26
# Instant_liking_target Deodorant B Deodorant F Deodorant G Deodorant H Deodorant J
#                     0         373         373         378         387         371
#                     1         127         127         122         113         129

with(surveys, table(personal_opinion_3, Instant_liking_target))
#                   Instant_liking_target
# personal_opinion_3   0   1
#                  1   0 105
#                  2   0  91
#                  3   0 107
#                  4   0 315
#                  5 707   0
#                  6 804   0
#                  7 371   0

library(ggplot2)

# 1: product vs liking
# install.packages("H:/kukrebh/SF code migration/packages/plyr_1.8.4.zip", repos = NULL, type = "win.binary")
ggplot(surveys, aes(x = Product_26, fill = Instant_liking_target)) + geom_bar(position = "dodge")
# for both train and test
# ggplot(surveys_tt, aes(x = Product_26, fill = Instant_liking_target)) + geom_bar()

# 2: personal opinion
ggplot(surveys, aes(x = personal_opinion_3, fill = Instant_liking_target)) + geom_bar(show.legend = TRUE)
?geom_bar

# 3: strength
tb <- with(surveys, table(strength_of_deo_5, Instant_liking_target))
tb <- as.data.frame(tb)
tb
ggplot(surveys, aes(x = strength_of_deo_5, fill = Instant_liking_target)) + geom_bar(position = "dodge")

# 3a: strength among likers only
surveys_b <- surveys[surveys$Instant_liking_target == 1, ]
summary(surveys_b)
ggplot(surveys_b, aes(x = strength_of_deo_5, fill = Instant_liking_target)) + geom_bar(position = "dodge")

# 4: cheap
ggplot(surveys, aes(x = cheap_6, fill = Instant_liking_target)) + geom_bar(position = "dodge")
tb2 <- with(surveys, table(cheap_6, Instant_liking_target))
tb2 <- as.data.frame(tb2)
tb2
# 4b
ggplot(surveys_b, aes(x = cheap_6, fill = Instant_liking_target)) + geom_bar(position = "dodge")

# 5: all words
ggplot(surveys, aes(x = all_words_4, fill = Instant_liking_target)) + geom_bar()
tb3 <- with(surveys, table(all_words_4, Instant_liking_target))
tb3 <- as.data.frame(tb3)
tb3 #5a ggplot(surveys_b, aes(x = all_words_4, fill = Instant_liking_target)) + geom_bar() #6 ggplot(surveys, aes(x = most_often_24, fill = Instant_liking_target)) + geom_bar() tb4<-with(surveys, table(Instant_liking_target,most_often_24)) tb4<-as.data.frame(tb4) tb4 #7 ggplot(surveys, aes(x = Deo_is_addictive_7, fill = Instant_liking_target)) + geom_bar() tb5<-with(surveys, table(Deo_is_addictive_7,Instant_liking_target)) tb5<-as.data.frame(tb5) tb5 #8 w_status_22 ggplot(surveys, aes(x = w_status_22, fill = Instant_liking_target)) + geom_bar() tb6<-with(surveys, table(w_status_22,Instant_liking_target)) tb6<-as.data.frame(tb6) tb6 #9 m_status_21 ggplot(surveys, aes(x = m_status_21, fill = Instant_liking_target)) + geom_bar() tb7<-with(surveys, table(m_status_21,Instant_liking_target)) tb7<-as.data.frame(tb7) tb7 names(surveys) #--------------var importance library(Boruta) #install.packages("H:/kukrebh/SF code migration/packages/Rcpp_0.12.10.zip", repos = NULL, type = "win.binary") #str(combined_analyse) #Train_filtered BroutaModel <- Boruta( Instant_liking_target ~ . , data = Train_filtered, doTrace = 2) print(BroutaModel) plot(BroutaModel, xlab = "", xaxt = "n") lz<-lapply(1:ncol(BroutaModel$ImpHistory),function(i) BroutaModel$ImpHistory[is.finite(BroutaModel$ImpHistory[,i]),i]) names(lz) <- colnames(BroutaModel$ImpHistory) Labels <- sort(sapply(lz,median)) axis(side = 1,las=2,labels = names(Labels), at = 1:ncol(BroutaModel$ImpHistory), cex.axis = 0.7) final.boruta <- TentativeRoughFix(BroutaModel) print(final.boruta) #--------------------modelling------------------------------#?glm model_1<-glm(Instant_liking_target ~ . 
                 , family = binomial(link = "logit"), data = Train_filtered,
                 control = list(maxit = 100))
model_1
predictions_1 <- predict(model_1, Test_filtered)
predictions_1

Test_filtered_save <- Test_filtered
Test_filtered_save$Respondent.ID <- test_backup$Respondent_id_1

# drop unneeded columns
names(Test_filtered_save)
Test_filtered_save <- Test_filtered_save[c(51, 52)]
Test_filtered_save$fitted_values_1 <- predictions_1
head(Test_filtered_save)

# Warning message: glm.fit: fitted probabilities numerically 0 or 1 occurred
# Switching to glmnet because glm.fit reported perfect separation
# ("fitted probabilities numerically 0 or 1 occurred")
library(glmnet)  # glmnet_2.0-13

xfactors <- model.matrix(
  ~ Train_filtered$personal_opinion_3 + Train_filtered$all_words_4 +
    Train_filtered$strength_of_deo_5 + Train_filtered$artifical_chemical_6 +
    Train_filtered$attractive_6 + Train_filtered$bold_6 + Train_filtered$boring_6 +
    Train_filtered$casual_6 + Train_filtered$cheap_6 + Train_filtered$clean_6 +
    Train_filtered$easy_to_wear_6 + Train_filtered$elegant_6 + Train_filtered$feminine_6 +
    Train_filtered$forsomeonelikeme_6 + Train_filtered$heavy_6 + Train_filtered$high_quality_6 +
    Train_filtered$long_lasting_6 + Train_filtered$masculine_6 + Train_filtered$memorable_6 +
    Train_filtered$natural_6 + Train_filtered$old_fashioned_6 + Train_filtered$ordinary_6 +
    Train_filtered$Deo_is_addictive_7 + Train_filtered$q7_8 + Train_filtered$q81_9 +
    Train_filtered$q82_9 + Train_filtered$q85_9 + Train_filtered$q86_9 + Train_filtered$q88_9 +
    Train_filtered$q811_9 + Train_filtered$q812_9 + Train_filtered$q813_9 +
    Train_filtered$q819_9 + Train_filtered$q820_9 + Train_filtered$how_likely_purchase_10 +
    Train_filtered$prefer_this_or_usual_11 + Train_filtered$time_of_day_12 +
    Train_filtered$occasions_13 + Train_filtered$liking_after30min_14 +
    Train_filtered$overall_15 + Train_filtered$ValSegb_16 + Train_filtered$ethnic_18 +
    Train_filtered$education_19 + Train_filtered$income_20 + Train_filtered$m_status_21 +
    Train_filtered$w_status_22 + Train_filtered$most_often_24 + Train_filtered$bottles_own_25 +
    Train_filtered$Product_26)
x <- as.matrix(data.frame(Train_filtered$frag_usd_past_6_mths_23, xfactors))
y <- as.double(Train_filtered$Instant_liking_target)

GLMnet_model_1 <- glmnet(x, y, family = "binomial", alpha = 0.755,
                         nlambda = 1000, standardize = FALSE, maxit = 100000)
str(GLMnet_model_1)

# get best lambda
# Note: alpha = 1 is pure lasso; values down to alpha = 0 (pure ridge)
# blend in the ridge penalty.
cv.glmmod <- cv.glmnet(x, y = as.double(Train_filtered$Instant_liking_target), alpha = 1)
plot(cv.glmmod)
(best.lambda <- cv.glmmod$lambda.min)

# test
x_testfactors <- model.matrix(
  ~ Test_filtered$personal_opinion_3 + Test_filtered$all_words_4 +
    Test_filtered$strength_of_deo_5 + Test_filtered$artifical_chemical_6 +
    Test_filtered$attractive_6 + Test_filtered$bold_6 + Test_filtered$boring_6 +
    Test_filtered$casual_6 + Test_filtered$cheap_6 + Test_filtered$clean_6 +
    Test_filtered$easy_to_wear_6 + Test_filtered$elegant_6 + Test_filtered$feminine_6 +
    Test_filtered$forsomeonelikeme_6 + Test_filtered$heavy_6 + Test_filtered$high_quality_6 +
    Test_filtered$long_lasting_6 + Test_filtered$masculine_6 + Test_filtered$memorable_6 +
    Test_filtered$natural_6 + Test_filtered$old_fashioned_6 + Test_filtered$ordinary_6 +
    Test_filtered$Deo_is_addictive_7 + Test_filtered$q7_8 + Test_filtered$q81_9 +
    Test_filtered$q82_9 + Test_filtered$q85_9 + Test_filtered$q86_9 + Test_filtered$q88_9 +
    Test_filtered$q811_9 + Test_filtered$q812_9 + Test_filtered$q813_9 +
    Test_filtered$q819_9 + Test_filtered$q820_9 + Test_filtered$how_likely_purchase_10 +
    Test_filtered$prefer_this_or_usual_11 + Test_filtered$time_of_day_12 +
    Test_filtered$occasions_13 + Test_filtered$liking_after30min_14 +
    Test_filtered$overall_15 + Test_filtered$ValSegb_16 + Test_filtered$ethnic_18 +
    Test_filtered$education_19 + Test_filtered$income_20 + Test_filtered$m_status_21 +
    Test_filtered$w_status_22 + Test_filtered$most_often_24 + Test_filtered$bottles_own_25 +
    Test_filtered$Product_26)
x_test <- as.matrix(data.frame(Test_filtered$frag_usd_past_6_mths_23, x_testfactors))
nrow(x_test)
str(x_test)

# use best lambda to predict test data
lasso_pred <- predict(GLMnet_model_1, s = best.lambda, newx = x_test, type = "response")
lasso_pred
head(x_test)
Test_filtered_save$lasso_fitted_values_2 <- lasso_pred

# set a 0.5 threshold with ifelse and save the result in a new column
Test_filtered_save$result_lasso_2 <- ifelse(Test_filtered_save$lasso_fitted_values_2 < 0.5, 0,
                                     ifelse(Test_filtered_save$lasso_fitted_values_2 >= 0.5, 1, NA))
Test_filtered_save$result_lasso_2 <- as.factor(Test_filtered_save$result_lasso_2)
# View(Test_filtered_save)
names(Test_filtered_save)

# save the file
submission <- read.csv("sample-submission.CSV")
str(submission)
result_1 <- submission[c(1, 2)]
result_1_val <- Test_filtered_save[c(2, 5)]
names(result_1_val)
total_1 <- merge(result_1, result_1_val, by = "Respondent.ID")
colnames(total_1)[3] <- "Instant.Liking"
nrow(total_1)
write.csv(total_1, file = "results_1.csv")

# library(glmnet)
# https://stats.stackexchange.com/questions/11109/how-to-deal-with-perfect-separation-in-logistic-regression
################################################################################
nrow(Train_filtered)
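The switch from glm to glmnet above is the standard workaround for perfect separation: the penalty keeps the coefficients finite where plain maximum likelihood diverges. A minimal, self-contained sketch of the same fit-then-threshold pattern on synthetic data (all variable names here are illustrative, not from the survey dataset):

```r
library(glmnet)

set.seed(1)
n <- 200
x <- matrix(rnorm(n * 5), ncol = 5)
# outcome depends strongly on the first column -> near-separable for plain glm
y <- as.integer(x[, 1] + 0.1 * rnorm(n) > 0)

# cross-validated lasso-penalised logistic regression
cv_fit <- cv.glmnet(x, y, family = "binomial", alpha = 1)

# predict probabilities at the lambda minimising CV deviance, then threshold at 0.5
p_hat <- predict(cv_fit, newx = x, s = "lambda.min", type = "response")
y_hat <- ifelse(p_hat >= 0.5, 1, 0)
mean(y_hat == y)
```

Unlike `glm`, the penalised fit emits no separation warning here, and the 0.5 cut-off mirrors the `ifelse` thresholding used on the survey predictions.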
del <- read.csv("C:/Users/india/Desktop/data science/assignments/simple linear regression/delivery_time.csv")
del
summary(del)
plot(del$Delivery.Time, del$Sorting.Time)
attach(del)

install.packages("lattice")
library(lattice)
dotplot(del$Delivery.Time, main = "Dot plot of Delivery time")
dotplot(del$Sorting.Time, main = "Dot plot of Sorting time")
boxplot(del$Delivery.Time, col = "red", main = "Box plot of Delivery time")
boxplot(del$Sorting.Time, col = "blue", main = "Box plot of Sort time")
hist(del$Delivery.Time)
hist(del$Sorting.Time)
qqnorm(del$Delivery.Time)
qqline(del$Delivery.Time)
qqnorm(del$Sorting.Time)
qqline(del$Sorting.Time)
hist(del$Delivery.Time, prob = TRUE)
lines(density(del$Delivery.Time))
lines(density(del$Delivery.Time, adjust = 2), lty = "dotted")
hist(del$Sorting.Time, prob = TRUE)
lines(density(del$Sorting.Time))
lines(density(del$Sorting.Time, adjust = 2), lty = "dotted")

plot(del$Delivery.Time, del$Sorting.Time, main = "Scatter Plot", col = "red",
     xlab = "Delivery time", ylab = "Sorting time", pch = 20)
cor(Delivery.Time, Sorting.Time)

reg <- lm(Sorting.Time ~ Delivery.Time)
summary(reg)
confint(reg, level = 0.95)
pred <- predict(reg, interval = "predict")
pred <- as.data.frame(pred)
pred
cor(pred$fit, del$Sorting.Time)

plot(sqrt(Delivery.Time), Sorting.Time)
reg_sqrt <- lm(Sorting.Time ~ sqrt(Delivery.Time), data = del)
summary(reg_sqrt)
confint(reg_sqrt, level = 0.95)
predict(reg_sqrt, interval = "predict")

plot(log(Delivery.Time), Sorting.Time)
reg_log <- lm(Sorting.Time ~ log(Delivery.Time))
summary(reg_log)
confint(reg_log, level = 0.95)
predict(reg_log, interval = "predict")

# best model is reg_log
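The model-selection step above — fitting the identity, square-root, and log transforms of the predictor and keeping the best — can be sketched on synthetic data; all names below are illustrative, not the delivery-time dataset:

```r
set.seed(42)
x <- runif(100, 1, 30)
y <- 2 * log(x) + rnorm(100, sd = 0.3)  # ground truth is logarithmic

# fit the three candidate transforms
fits <- list(
  linear = lm(y ~ x),
  sqrt   = lm(y ~ sqrt(x)),
  log    = lm(y ~ log(x))
)

# compare by adjusted R-squared; the log model should win on this data
r2s <- sapply(fits, function(m) summary(m)$adj.r.squared)
r2s
```

Comparing adjusted R-squared (rather than raw R-squared) is a small improvement over eyeballing `summary()` output, since all three models here have the same number of parameters anyway.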
# File: /delivery_time.R
# Repo: Gitanjali98/data-science-assignment (R, no_license)
plotmap3 <- function(data, file = NULL, title = "World map", legend_range = NULL,
                     legendname = "Cell share", lowcol = "grey95", midcol = "orange",
                     highcol = "darkred", midpoint = 0.5, facet_grid = "Year~Data1",
                     nrow = NULL, ncol = NULL, scale = 2, breaks = TRUE, labs = TRUE,
                     borders = TRUE, MAgPIE_regions = FALSE, axis_text_col = "black",
                     legend_discrete = FALSE, legend_breaks = NULL, show_percent = FALSE,
                     sea = TRUE, land_colour = "white", legend_height = 2, legend_width = NULL,
                     text_size = 12, legend_position = "right", facet_style = "default",
                     plot_height = 10, plot_width = 20) {
  # require("ggplot2", quietly = TRUE)
  # require("RColorBrewer", quietly = TRUE)
  wrld_simpl_df <- NULL
  data("world", envir = environment(), package = "luplot")

  if (MAgPIE_regions) {
    facet_grid <- NULL
    map <- ggplot(wrld_simpl_df, aes_(~long, ~lat)) +
      geom_polygon(aes_(group = ~group, fill = ~magpie)) +
      # scale_fill_manual("MAgPIE\nregions", values = brewer.pal(11, "Paired")[2:11], na.value = "white")
      scale_fill_manual("MAgPIE\nregion",
                        values = c("purple3", "red3", "hotpink", "cyan3", "goldenrod1",
                                   "gray44", "#8C5400FF", "darkorange2", "royalblue3", "green4"),
                        na.value = "white")
      # scale_fill_manual("MAgPIE\nregions", values = c(nice_colors(style = "contrast_area", saturation = 1)[1:10]), na.value = "white")
      # scale_fill_brewer(palette = "Set3", na.value = "white")
  } else {
    if (!is.list(data)) {
      temp <- data
      data <- list()
      data[["default"]] <- temp
    }
    if (is.null(legend_range)) {
      midpoint <- max(unlist(lapply(data, max, na.rm = TRUE))) * midpoint
    } else {
      data <- lapply(data, function(x) {
        x[which(x < legend_range[1])] <- legend_range[1]
        x[which(x > legend_range[2])] <- legend_range[2]
        return(x)
      })
    }
    if (any(unlist(lapply(data, function(x) return(is.null(attr(x, "coordinates"))))))) {
      data <- lapply(data, function(x) {
        attr(x, "coordinates") <- getCoordinates(degree = TRUE)
        return(x)
      })
      warning("Missing coordinates in attributes for at least one MAgPIE object. Added coordinates in default MAgPIE cell order.")
    }
    data <- as.ggplot(data, asDate = FALSE)

    if (legend_discrete) {
      if (is.null(legend_breaks)) {
        data$Breaks <- as.character(data$Value)
        # replace NAs with a value that is not contained in the data yet
        if (any(is.na(data$Breaks))) {
          data$Breaks[is.na(data$Breaks)] <- "No data"
        }
        data$Breaks <- as.factor(data$Breaks)
        legend_labels <- levels(data$Breaks)
      } else {
        tmp <- as.vector(data$Value)
        tmp[] <- length(legend_breaks) + 1
        legend_labels <- rep("", length(legend_breaks) + 1)
        legend_labels[length(legend_breaks) + 1] <- paste(">", legend_breaks[length(legend_breaks)])
        for (i in length(legend_breaks):2) {
          tmp[which(as.vector(data$Value) <= legend_breaks[i])] <- i
          legend_labels[i] <- paste(legend_breaks[i - 1], "-", legend_breaks[i])
        }
        tmp[which(as.vector(data$Value) < legend_breaks[1])] <- 1
        tmp[which(is.na(as.vector(data$Value)))] <- NA
        legend_labels[1] <- paste("<", legend_breaks[1])
        legend_labels <- legend_labels[as.numeric(rev(levels(as.factor(tmp))))]
        tmpchar <- as.character(tmp)
        tmpchar[is.na(tmpchar)] <- "No data"
        levels <- rev(levels(as.factor(tmp)))
        if ("No data" %in% tmpchar) {
          legend_labels <- c(legend_labels, "No data")
          levels <- c(levels, "No data")
        }
        data$Breaks <- factor(tmpchar, levels = levels)
      }
      if ("No data" %in% legend_labels) {
        colours <- c(colorRampPalette(c(highcol, midcol, lowcol))(length(legend_labels) - 1), "grey")
      } else {
        colours <- colorRampPalette(c(highcol, midcol, lowcol))(length(legend_labels))
      }
      if (show_percent) {
        tmp <- table(data$Breaks)
        if ("No data" %in% names(tmp)) tmp <- tmp[-which(names(tmp) == "No data")]
        percent <- round(tmp / sum(tmp) * 100, 1)
        percent <- paste("(", percent, "%)", sep = "")
        legend_labels[which(legend_labels != "No data")] <-
          paste(legend_labels[which(legend_labels != "No data")], percent, sep = " ")
      }
    }
    if (!is.null(legend_breaks)) {
      labels <- c(bquote("" <= .(head(legend_breaks, 1))),
                  legend_breaks[2:(length(legend_breaks) - 1)],
                  bquote("" >= .(tail(legend_breaks, 1))))
    } else {
      legend_breaks <- waiver()
      labels <- waiver()
    }

    map <- ggplot(data, aes_(~x, ~y)) +
      geom_polygon(data = wrld_simpl_df, aes_(~long, ~lat, group = ~group, fill = ~hole),
                   fill = land_colour)
    if (is.null(data$Breaks)) {
      if (!is.null(midcol)) {
        map <- map + geom_raster(aes_(fill = ~Value)) +
          scale_fill_gradient2(name = legendname, low = lowcol, mid = midcol, high = highcol,
                               midpoint = midpoint, limits = legend_range,
                               breaks = legend_breaks, labels = labels, na.value = "grey")
      } else {
        map <- map + geom_raster(aes_(fill = ~Value)) +
          scale_fill_gradient(name = legendname, low = lowcol, high = highcol,
                              limits = legend_range, breaks = legend_breaks,
                              labels = labels, na.value = "grey")
      }
    } else {
      map <- map + geom_raster(aes_(fill = ~Breaks)) +
        scale_fill_manual(name = legendname, values = colours, labels = legend_labels,
                          na.value = "yellow")
    }
    if (!is.null(legend_height)) map <- map + theme(legend.key.height = unit(legend_height, "cm"))
    if (!is.null(legend_width)) map <- map + theme(legend.key.width = unit(legend_width, "cm"))
  }

  map <- map +
    # coord_cartesian(xlim = c(-180, 180), ylim = c(-58, 86)) +
    coord_map(projection = "mercator", xlim = c(-180, 180), ylim = c(-58, 86), clip = "on") +
    theme(aspect.ratio = 0.5) +
    ggtitle(title)

  if (sea) {
    map <- map + theme(panel.background = element_rect(fill = "lightsteelblue2"))
  } else {
    map <- map + theme(panel.background = element_rect(fill = "white", colour = "black")) +
      theme(panel.grid.major = element_line(colour = "grey80"),
            panel.grid.minor = element_line(colour = "grey90"))
  }
  if (!is.null(facet_grid)) {
    if (substr(facet_grid, 1, 1) == "~") {
      map <- map + facet_wrap(facet_grid, nrow = nrow, ncol = ncol)
    } else {
      map <- map + facet_grid(facet_grid)
    }
  }
  if (breaks) {
    map <- map + scale_x_continuous(breaks = c(-90, 0, 90)) +
      scale_y_continuous(breaks = c(-66, -38, -23, 0, 23, 38, 66))
  } else {
    map <- map + scale_x_continuous(breaks = NULL) + scale_y_continuous(breaks = NULL)
  }
  if (labs) {
    map <- map + labs(x = "Longitude", y = "Latitude")
  } else {
    map <- map + labs(y = NULL, x = NULL)
  }
  if (borders) {
    map <- map + geom_path(data = wrld_simpl_df, aes_(~long, ~lat, group = ~group, fill = NULL),
                           color = "grey10", size = 0.1)
  }
  if (!is.null(axis_text_col)) {
    map <- map + theme(axis.text = element_text(colour = axis_text_col),
                       axis.ticks = element_line(colour = axis_text_col))
  }
  map <- map + theme(panel.grid.minor = element_line(colour = "white"),
                     plot.title = element_text(size = text_size + 4, face = "bold", vjust = 1.5),
                     legend.position = legend_position,
                     legend.title = element_text(size = text_size, face = "bold"),
                     legend.text = element_text(size = text_size),
                     axis.title.y = element_text(angle = 90, size = text_size, vjust = 0.3),
                     axis.text.y = element_text(size = text_size - 2),
                     axis.title.x = element_text(size = text_size, vjust = -0.3),
                     axis.text.x = element_text(size = text_size - 2, vjust = 0.5))
  if (facet_style == "default") {
    map <- map + theme(strip.text.x = element_text(size = text_size - 1),
                       strip.text.y = element_text(size = text_size - 1))
  } else if (facet_style == "paper") {
    map <- map + theme(strip.text.x = element_text(size = text_size, face = "bold"),
                       strip.text.y = element_text(size = text_size, face = "bold")) +
      theme(strip.background = element_blank())
  }
  if (legend_position %in% c("top", "bottom")) {
    map <- map + guides(fill = guide_colorbar(title.position = "top")) +
      theme(legend.box.just = "left")
  }
  if (!is.null(file)) {
    ggsave(file, map, scale = scale, limitsize = FALSE, units = "cm",
           height = plot_height, width = plot_width)
  } else {
    return(map)
  }
}
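The hand-rolled binning loop in the `legend_discrete` branch (assigning each value to a "< first", "a - b", or "> last" bucket and mapping NAs to "No data") is essentially what base R's `cut()` does with infinite guard breaks. A minimal sketch of the same bucketing on a toy vector — the values and labels here are illustrative:

```r
values <- c(0.02, 0.3, 0.7, 1.5, NA)
legend_breaks <- c(0.1, 0.5, 1)

# cut() with -Inf/Inf guards reproduces the "< first", "a - b", "> last" buckets;
# right-closed intervals match the loop's `<=` comparisons
bins <- cut(values,
            breaks = c(-Inf, legend_breaks, Inf),
            labels = c("< 0.1", "0.1 - 0.5", "0.5 - 1", "> 1"))
bins <- as.character(bins)
bins[is.na(bins)] <- "No data"  # mirror the function's NA handling
bins
```

`cut()` would shorten the branch considerably, though the original loop also builds the label vector dynamically from `legend_breaks`.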
# File: /R/plotmap3.R
# Repo: FelicitasBeier/mrwaterPlots (R, no_license)
# ------------------------------------------------------------------
# NESTED CV WITH UNMATCHED REGRESSION DATA
# ------------------------------------------------------------------
library(mlr)
library(readr)
library(parallelMap)
library(ggplot2)

# load subgroup data, exclude ID from data
# input <- "data/subgroup_data_cleared.csv"
# target <- "Diff_12monVA_IdxVA"
# data <- readr::read_csv(input, guess_max=8000)
# data <- data[-"ID",]
# define regression task from our dataset
# regr.task <- makeRegrTask(id="subgroup", data=data, target=target)

# load Boston Housing dataset from mlbench
data(BostonHousing, package="mlbench")
target <- "medv"
regr.task <- makeRegrTask(id="bh", data=BostonHousing, target=target)

# we'll fit an elastic-net regression.
# to find out about the params we can tune do:
# makeLearner("regr.glmnet")$par.set

# define the parameter grid we want to search over
ps <- makeParamSet(
  # alpha=0 is ridge, alpha=1 is lasso
  makeDiscreteParam("alpha", values=seq(0, 1, by=.25)),
  # lambda is the penalty on the blend of the L1 and L2 norms
  makeDiscreteParam("lambda", values=10^(-3:3))
)

# define the random-search strategy; for grid search use makeTuneControlGrid().
# there are very good practical (comp cost) and theoretical reasons to
# prefer RS: http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf
ctrl <- makeTuneControlRandom()

# define inner CV strategy, for resampling replace "CV" with "Subsample"
inner <- makeResampleDesc("CV", iters=5)

# define the measures we want to collect (mean test mse and its sd), for a full
# list see:
# https://mlr-org.github.io/mlr-tutorial/release/html/measures/index.html
m1 <- mse
# we calculate the SD of the mse by setting up an aggregation, for a full list
# see: https://rdrr.io/cran/mlr/man/aggregations.html
m2 <- setAggregation(mse, test.sd)
m_all <- list(m1, m2)

# define the wrapped learner (the mlr way of saying nested-CV learner). In this
# example we tune an elastic net (regr.glmnet); for all learners see:
# https://mlr-org.github.io/mlr-tutorial/release/html/integrated_learners/index.html
lrn <- makeTuneWrapper("regr.glmnet", resampling=inner, par.set=ps, control=ctrl,
                       show.info=FALSE, measures=m_all)

# define outer CV strategy
outer <- makeResampleDesc("CV", iters=3)

# make a parallel environment for the parameter search
parallelStartSocket(8, level="mlr.tuneParams")
# run nested CV
r <- resample(lrn, regr.task, resampling=outer, models=TRUE,
              extract=getTuneResult, show.info=FALSE)
parallelStop()

# print mse on the outer test folds
r$measures.test
# print mean mse on the inner folds for the best params
r$extract

# for each outer fold show all parameter combinations with mean
# and sd of mse over the inner folds
opt_paths <- getNestedTuneResultsOptPathDf(r)

# visualise the search paths
g <- ggplot(opt_paths, aes(x=alpha, y=lambda, fill=mse.test.mean))
g + geom_tile() + facet_wrap(~ iter)

# restrict the same plot to the lowest 50% of the values
low_c <- as.list(quantile(opt_paths$mse.test.mean, probs=c(0, .5)))[[1]]
high_c <- as.list(quantile(opt_paths$mse.test.mean, probs=c(0, .5)))[[2]]
g <- ggplot(opt_paths, aes(x=alpha, y=lambda, fill=mse.test.mean))
g + geom_tile() + facet_wrap(~ iter) +
  scale_fill_gradient(limits=c(low_c, high_c), low="red", high="grey")

# get the best parameters for each outer fold
getNestedTuneResultsX(r)
# get predicted scores
pred_scores <- as.data.frame(r$pred)

# predict with one of the returned models
# mlr::predictLearner(lrn, r$models[[1]], data[,-which(colnames(data)==target)])
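Independent of mlr's wrappers, the nested-CV idea above can be sketched in a few lines of base R plus glmnet: an inner loop (here delegated to `cv.glmnet`) selects the penalty on the training part only, and an outer loop estimates generalisation error with data the tuner never saw. All names below are illustrative:

```r
library(glmnet)

set.seed(7)
x <- matrix(rnorm(120 * 10), ncol = 10)
y <- as.vector(x %*% rnorm(10) + rnorm(120))

# three outer folds
outer_folds <- sample(rep(1:3, length.out = nrow(x)))
outer_mse <- numeric(3)
for (k in 1:3) {
  train <- outer_folds != k
  # inner CV (performed by cv.glmnet) picks lambda on the training part only
  fit <- cv.glmnet(x[train, ], y[train], alpha = 1, nfolds = 5)
  pred <- predict(fit, newx = x[!train, ], s = "lambda.min")
  outer_mse[k] <- mean((y[!train] - pred)^2)
}
outer_mse  # honest out-of-sample error, one value per outer fold
```

This is exactly what `makeTuneWrapper` plus the outer `resample` automate, with the added bookkeeping (tuning paths, per-fold best parameters) that the mlr script inspects afterwards.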
# File: /dev/nested_cv_unmatched_regr.R
# Repo: jzhao0802/PA_UK (R, no_license)
\name{cp85}
\alias{cp85}
\docType{data}
\title{Wages for males and females}
\description{
  Data of wages with education, gender, and experience.
}
\usage{data(cp85)}
\format{
  A data frame with 533 observations on the following 6 variables.
  \describe{
    \item{\code{EDUC}}{a numeric vector}
    \item{\code{FEMALE}}{a numeric vector}
    \item{\code{EXPER}}{a numeric vector}
    \item{\code{WAGE}}{a numeric vector}
    \item{\code{AGE}}{a numeric vector}
    \item{\code{LNWAGE}}{a numeric vector}
  }
}
\references{
  http://lib.stat.cmu.edu/datasets/CPS_85_Wages
}
\keyword{datasets}
# File: /man/cp85.Rd
# Repo: cran/mrt (R, no_license)
#' pcRegression
#'
#' @description pcRegression does a linear model fit of principal components
#' and a batch (categorical) variable
#' @param pca.data a list as created by 'prcomp'; pcRegression needs
#' \itemize{
#' \item \code{x} - the principal components (PCs, correctly: the rotated data) and
#' \item \code{sdev} - the standard deviations of the PCs
#' }
#' @param batch vector with the batch covariate (for each cell)
#' @param n_top the number of PCs to consider at maximum
#' @param tol truncation threshold for significance level, default: 1e-16
#' @return List summarising principal component regression
#' \itemize{
#' \item \code{maxVar} - the variance explained by the principal component(s)
#' that correlate(s) most with the batch effect
#' \item \code{PmaxVar} - p-value (returned by linear model) for the
#' respective principal components (related to \code{maxVar})
#' \item \code{pcNfrac} - fraction of significant PCs among the \code{n_top} PCs
#' \item \code{pcRegscale} - 'scaled PC regression', i.e. total variance of PCs
#' which correlate significantly with the batch covariate (FDR < 0.05), scaled
#' by the total variance of the \code{n_top} PCs
#' \item \code{maxCorr} - maximal correlation of the \code{n_top} PCs with the
#' batch covariate
#' \item \code{maxR2} - maximal coefficient of determination of the \code{n_top}
#' PCs with the batch covariate
#' \item \code{msigPC} - scaled index of the smallest PC that correlates
#' significantly with the batch covariate (FDR < 0.05), i.e. \code{msigPC=1} if
#' PC_1 is significantly correlated with the batch covariate and \code{msigPC=0}
#' if none of the \code{n_top} PCs is significantly correlated
#' \item \code{maxsigPC} - similar to \code{msigPC}, scaled index of the PC with
#' maximal correlation of the \code{n_top} PCs with the batch covariate
#' \item \code{R2Var} - sum over Var(PC_i)*r2(PC_i, batch) for all i
#' \item \code{ExplainedVar} - explained variance for each PC
#' \item \code{r2} - detailed results of the correlation (R-squared) analysis
#' }
#' @examples
#' testdata <- create_testset_multibatch(n.genes=1000, n.batch=3, plattform='any')
#' pca.data <- prcomp(testdata$data, center=TRUE)
#' pc.reg.result <- pcRegression(pca.data, testdata$batch)
#' @importFrom stats t.test lm aov p.adjust
#' @export
pcRegression <- function(pca.data, batch, n_top=50, tol=1e-16){
    batch.levels <- unique(batch)

    # make sure you do not try to assess more PCs than actually computed
    pca_rank <- ncol(pca.data$x)
    max_comps <- min(pca_rank, n_top)
    if (length(pca.data$sdev) > pca_rank) {
        pca.data$sdev <- pca.data$sdev[1:pca_rank]
    }

    if (length(batch.levels) == 2) {
        # for-loop replaced by correlate.fun and apply
        r2.batch <- apply(pca.data$x, 2, correlate.fun_two, batch, batch.levels)
        r2.batch <- t(r2.batch)
        colnames(r2.batch) <- c('R.squared', 'p.value.lm', 'p.value.t.test')
        r2.batch[r2.batch[,2] < tol, 2] <- tol
        r2.batch[r2.batch[,3] < tol, 3] <- tol
    } else {
        r2.batch <- apply(pca.data$x, 2, correlate.fun_gen, batch)
        r2.batch <- t(r2.batch)
        colnames(r2.batch) <- c('R.squared', 'p.value.lm', 'p.value.F.test')
        # for-loop replaced by correlate.fun and apply
        #for (k in 1:dim(r2.batch)[1]){
        #    a <- lm(pca.data$x[,k] ~ batch)
        #    r2.batch[k,1] <- summary(a)$r.squared  # coefficient of determination
        #    r2.batch[k,2] <- summary(a)$coefficients['batch',4]  # p-value (significance level)
        #}
        r2.batch[r2.batch[,2] < tol, 2] <- tol
        r2.batch[r2.batch[,3] < tol, 3] <- tol
    }

    argmin <- which(r2.batch[, 2] == min(r2.batch[, 2]))
    normal <- sum(pca.data$sdev^2)
    var <- round((pca.data$sdev)^2 / normal * 100, 1)
    batch.var <- sum(r2.batch[,1] * var) / 100
    setsignif <- p.adjust(r2.batch[1:max_comps, 2], method = 'BH') < 0.05
    pcCorr <- sqrt(r2.batch[1:max_comps, 1])

    result <- list()
    result$maxVar <- var[argmin]
    result$PmaxVar <- r2.batch[argmin, 2]
    result$pcNfrac <- mean(setsignif)
    result$pcRegscale <- sum(var[1:max_comps][setsignif]) / sum(var[1:max_comps])
    result$maxCorr <- max(pcCorr)
    result$maxR2 <- max(r2.batch[1:max_comps, 1])
    result$msigPC <- 1 - (min(c(which(setsignif), max_comps + 1)) - 1) / max_comps
    result$maxsigPC <- 1 - (min(c(which(pcCorr == max(pcCorr[setsignif])), max_comps + 1)) - 1) / max_comps
    result$R2Var <- batch.var
    result$ExplainedVar <- var
    result$r2 <- r2.batch
    result
}
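The core computation behind pcRegression — regress each PC on the batch covariate and record the R-squared — can be sketched in a few lines. This is an illustrative stand-in, not kBET's internal `correlate.fun_*` helpers:

```r
set.seed(3)
# two batches with a mean shift in the first five of twenty "genes"
batch <- factor(rep(c("A", "B"), each = 30))
mat <- matrix(rnorm(60 * 20), nrow = 60)
mat[batch == "B", 1:5] <- mat[batch == "B", 1:5] + 2

pca <- prcomp(mat, center = TRUE)

# per-PC linear model against the batch factor -> coefficient of determination
r2 <- apply(pca$x, 2, function(pc) summary(lm(pc ~ batch))$r.squared)
which.max(r2)  # index of the PC most correlated with the batch covariate
```

With a strong batch effect like this, one of the leading PCs carries most of the batch signal, which is exactly what `maxR2` and `maxsigPC` summarise.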
# File: /R/pcRegression.R
# Repo: theislab/kBET (R, permissive license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/pgVirtual.R
\docType{class}
\name{pgVirtual-class}
\alias{[,pgVirtual,character,ANY,ANY-method}
\alias{[,pgVirtual,integer,ANY,ANY-method}
\alias{[,pgVirtual,logical,ANY,ANY-method}
\alias{[,pgVirtual,numeric,ANY,ANY-method}
\alias{[[,pgVirtual,ANY,ANY-method}
\alias{as}
\alias{length,pgVirtual-method}
\alias{pgVirtual-class}
\alias{show,pgVirtual-method}
\title{Base class for pangenomic data}
\usage{
\S4method{length}{pgVirtual}(x)

\S4method{show}{pgVirtual}(object)

\S4method{[}{pgVirtual,integer,ANY,ANY}(x, i)

\S4method{[}{pgVirtual,numeric,ANY,ANY}(x, i)

\S4method{[}{pgVirtual,character,ANY,ANY}(x, i)

\S4method{[}{pgVirtual,logical,ANY,ANY}(x, i)

\S4method{[[}{pgVirtual,ANY,ANY}(x, i)

as(object, Class='ExpressionSet')

as(object, Class='matrix')
}
\arguments{
\item{x}{A pgVirtual subclass object}

\item{object}{A pgVirtual subclass object}

\item{i}{indices specifying genomes, either integer, numeric, character or
logical, following the normal rules for indexing objects in R}

\item{Class}{The class to coerce pgVirtual subclasses to. Outside of the
FindMyFriends class tree only 'ExpressionSet' and 'matrix' are implemented.}
}
\value{
Length returns an integer giving the number of organisms
}
\description{
This virtual class is the superclass of all other pangenome classes in
FindMyFriends. It is an empty shell that is mainly used for dispatch and
checking that the promises of subclasses are held.
}
\details{
Subclasses of pgVirtual must implement the following methods in order for
them to plug into FindMyFriends algorithms:
\describe{
\item{seqToOrg(object)}{Returns the mapping from genes to organisms as an
integer vector with position mapped to gene and integer mapped to organism.}
\item{seqToGeneGroup(object)}{As seqToOrg but mapped to gene group instead of
organism. If gene groups are yet to be defined, return an empty vector.}
\item{genes(object, split, subset)}{Return the underlying sequences. If split
is missing, return an XStringSet; otherwise return an XStringSetList. split
can be either 'group', 'organism' or 'paralogue' and should group the
sequences accordingly. subset should behave as if it was added as '[]' to the
results, but allow you to avoid reading everything into memory if not
needed.}
\item{geneNames(object)}{Return a character vector with the name of each
gene.}
\item{geneNames<-(object, value)}{Set the name of each gene.}
\item{geneWidth(object)}{Return an integer vector with the length (in
residues) of each gene.}
\item{removeGene(object, name, organism, group, ind)}{\strong{Should only be
implemented for signature: c(\emph{yourClass}, 'missing', 'missing',
'missing', 'integer')} Remove the genes at the given indexes and return the
object.}
\item{orgNames(object)}{Return a character vector of organism names.}
\item{orgNames<-(object, value)}{Set the names of the organisms.}
\item{groupNames(object)}{Return a character vector of gene group names.}
\item{groupNames<-(object, value)}{Set the names of the gene groups.}
\item{orgInfo(object)}{Return a data.frame with metadata about each
organism.}
\item{orgInfo<-(object, value)}{Set a data.frame to be metadata about each
organism.}
\item{setOrgInfo(object, name, info, key)}{Set the metadata 'name', for the
organisms corresponding to 'key', to 'info'.}
\item{groupInfo(object)}{Return a data.frame with metadata about each gene
group.}
\item{groupInfo<-(object, value)}{Set a data.frame to be metadata about each
gene group.}
\item{setGroupInfo(object, name, info, key)}{Set the metadata 'name', for the
gene groups corresponding to 'key', to 'info'.}
\item{groupGenes(object, seqToGeneGroup)}{Sets the gene grouping of the
pangenome. 'seqToGeneGroup' should correspond to the output of the
seqToGeneGroup method (i.e. an integer vector with each element giving the
group of the corresponding gene). This method \strong{must} include a
\code{callNextMethod(object)} as the last line.}
\item{mergePangenomes(pg1, pg2, geneGrouping, groupInfo)}{Merge pg2 into pg1,
preserving the indexing in pg1 and appending and modifying the indexing of
pg2. The geneGrouping argument is the new grouping of genes and groupInfo the
new group info for the groups.}
}

Additionally, subclasses can override the following methods for performance
gains; otherwise they will be derived from the above methods.
\describe{
\item{length(object)}{Return the number of organisms in the object.}
\item{nOrganisms(object)}{As length.}
\item{nGenes(object)}{Return the number of genes in the object.}
\item{nGeneGroups(object)}{Return the number of gene groups.}
\item{hasGeneGroups}{Returns TRUE if gene groups have been defined.}
\item{pgMatrix}{Returns an integer matrix with organisms as columns and gene
groups as rows, with the corresponding number of genes in each element.}
}

Developers are encouraged to consult the implementation of FindMyFriends'
own classes when trying to implement new ones.
}
\section{Methods (by generic)}{
\itemize{
\item \code{length}: Length of a pangenome, defined as the number of
organisms it contains
\item \code{show}: Basic information about the pangenome
\item \code{[}: Create subsets of pangenomes based on index
\item \code{[}: Create subsets of pangenomes based on index
\item \code{[}: Create subsets of pangenomes based on organism name
\item \code{[}: Create subsets of pangenomes based on logical vector
\item \code{[[}: Extract sequences from a single organism
}}

\section{Slots}{
\describe{
\item{\code{.settings}}{A list containing settings pertaining to the object}
}}

\seealso{
Other Pangenome_classes: \code{\link{pgFull-class}},
\code{\link{pgFullLoc-class}}, \code{\link{pgInMem-class}},
\code{\link{pgInMemLoc-class}}, \code{\link{pgLM-class}},
\code{\link{pgLMLoc-class}}, \code{\link{pgSlim-class}},
\code{\link{pgSlimLoc-class}}, \code{\link{pgVirtualLoc-class}}
}
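The subclass contract described above — implement the core accessors, derive the counts from them — can be illustrated with a toy S4 class. This is a hedged sketch using only the base methods package; the class name, slots, and data are hypothetical and are not FindMyFriends internals.

```r
# Illustrative only: a toy class showing the accessor contract the
# pgVirtual documentation describes (hypothetical names, not real API).
library(methods)

setClass("toyPg", representation(
  seqToOrg = "integer",   # gene index -> organism index
  orgNames = "character"
))

setGeneric("seqToOrg", function(object) standardGeneric("seqToOrg"))
setMethod("seqToOrg", "toyPg", function(object) object@seqToOrg)

# Derived method: the organism count falls out of the seqToOrg mapping,
# mirroring how pgVirtual can derive length() from the core accessors
setGeneric("nOrganisms", function(object) standardGeneric("nOrganisms"))
setMethod("nOrganisms", "toyPg", function(object) {
  length(unique(seqToOrg(object)))
})

pg <- new("toyPg",
          seqToOrg = c(1L, 1L, 2L, 2L, 2L, 3L),
          orgNames = c("orgA", "orgB", "orgC"))
```

Overriding `nOrganisms` directly in a subclass would then be purely a performance optimisation, as the details section notes.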
/man/pgVirtual-class.Rd
no_license
ericjcgalvez/FindMyFriends
R
false
true
5,986
rd
#
# This is a Shiny web application. You can run the application by clicking
# the 'Run App' button above.
#
# Find out more about building applications with Shiny here:
#
#    http://shiny.rstudio.com/
#

if (interactive()) {
  library(shiny)
  library(leaflet)
  library(sf)
  library(dataRetrieval)
  library(dplyr)
  library(tidyr)
  library(shinyWidgets)
  library(leafem)
  library(raster)

  r_colors <- rgb(t(col2rgb(colors()) / 255))
  names(r_colors) <- colors()

  # Define UI
  ui <- fluidPage(
    # Application title
    titlePanel("TEST Map"),
    fluidRow(
      column(
        width = 5, offset = 1,
        pickerInput(
          inputId = "p1",
          label = "Choose Parameter(s) of Interest",
          choices = c("First, select a watershed by clicking on the map"),
          multiple = TRUE,
          options = pickerOptions(actionsBox = TRUE, liveSearch = TRUE,
                                  showTick = TRUE, virtualScroll = TRUE)
        )
      ),
      fluidRow(
        # column(
        #   width = 10, offset = 1,
        #   sliderInput(
        #     inputId = "up",
        #     label = "Activity Start Date",
        #     width = "50%",
        #     min = 1980, max = 2020, value = 1980, step = 0.1
        #   )
        # ),
        column(
          width = 10, offset = 1,
          dateRangeInput("dates", h3("Activity Start Date"))
        )
      ),
      # airYearpickerInput(
      #   inputId = "multiple",
      #   label = "Activity Start Date",
      #   placeholder = "Please select a date range",
      #   multiple = FALSE, minDate = 1980, maxDate = 2020,
      #   range = TRUE, clearButton = TRUE
      # ),
      # verbatimTextOutput("res"),
      # sliderTextInput(
      #   inputId = "Id096",
      #   label = "Activity Start Date",
      #   choices = month.abb,
      #   selected = month.abb[c(4, 8)]
      # ),
      leafletOutput("mymap")
    )
  )

  # Define server logic
  server <- function(input, output, session) {
    # Shared state: the selected HUC12 polygon, its sites and its data must
    # outlive the click handler so the parameter-picker handler can reuse them
    rv <- reactiveValues(H = NULL, Sites = NULL, WQPData = NULL)

    # points <- eventReactive(input$recalc, {
    #   cbind(rnorm(40) * 2 + 13, rnorm(40) + 48)
    # }, ignoreNULL = FALSE)

    latInput <- reactive({
      switch(input$lat,
             '36.0356035' = 36.0356035,
             '40.6972313874088' = 40.6972313874088)
    })

    longInput <- reactive({
      switch(input$long,
             '-78.9001728' = -78.9001728,
             '-73.99565830140675' = -73.99565830140675)
    })

    output$mymap <- renderLeaflet({
      leaflet() %>%
        addProviderTiles(providers$Esri.WorldTopoMap) %>%
        setView(lng = -79.8373764, lat = 35.5465094, zoom = 7)
    })

    observeEvent(input$mymap_click, {
      click <- input$mymap_click

      # Get the HUC12 polygon under the click from the ESRI NHDPlus HR API
      H <- read_sf(paste0(
        "https://hydro.nationalmap.gov/arcgis/rest/services/NHDPlus_HR/MapServer/11/query?where=&text=&objectIds=&time=&geometry=",
        click$lng, ",", click$lat,
        "&geometryType=esriGeometryPoint&inSR=4326&spatialRel=esriSpatialRelIntersects&relationParam=&outFields=&returnGeometry=true&returnTrueCurves=false&maxAllowableOffset=&geometryPrecision=&outSR=&having=&returnIdsOnly=false&returnCountOnly=false&orderByFields=&groupByFieldsForStatistics=&outStatistics=&returnZ=false&returnM=false&gdbVersion=&historicMoment=&returnDistinctValues=false&resultOffset=&resultRecordCount=&queryByDistance=&returnExtentOnly=false&datumTransformation=&parameterValues=&rangeValues=&quantizationParameters=&featureEncoding=esriDefault&f=geojson"))

      bbox <- sf::st_bbox(H)
      H <- st_transform(H, 4326)

      # Bounding circle: derive a search radius (miles) from the polygon
      circle <- lwgeom::st_minimum_bounding_circle(H)
      area <- st_area(circle)
      area <- units::set_units(area, mi^2)
      radius <- as.numeric(sqrt(area / pi))

      # Comment below to not rely on the server
      # WQPSites <- read.csv("Outputs/WQPSites_NHD.csv")
      WQPSites <- whatWQPsites(lat = click$lat, long = click$lng,
                               within = radius) %>%
        dplyr::select(OrganizationIdentifier, MonitoringLocationIdentifier,
                      MonitoringLocationTypeName, HUCEightDigitCode,
                      LatitudeMeasure, LongitudeMeasure, ProviderName)

      Sites <- WQPSites %>%
        as.data.frame() %>%
        st_as_sf(coords = c("LongitudeMeasure", "LatitudeMeasure"),
                 crs = 4269, dim = "XY") %>%
        st_transform(4326) %>%
        st_filter(H)

      # Comment below to not rely on the server
      # WQPData <- read.csv("Outputs/WQPData_NHD.csv")
      WQPData <- readWQPdata(lat = click$lat, long = click$lng,
                             within = radius) %>%
        dplyr::select(MonitoringLocationIdentifier, ActivityTypeCode,
                      ActivityStartDate, CharacteristicName, ProviderName)

      WQP_split <- split(WQPData, f = WQPData$CharacteristicName)

      # Update the parameter picker with the characteristics found
      updatePickerInput(session = session, inputId = "p1",
                        choices = names(WQP_split))

      # WQPData$ActivityStartDate <- renderPrint(input$multiple)
      # (disabled: renderPrint returns a render function, not data)

      # Stash results for the other observers
      rv$H <- H
      rv$Sites <- Sites
      rv$WQPData <- WQPData

      leafletProxy("mymap") %>%
        clearMarkers() %>%
        clearShapes() %>%
        removeHomeButton() %>%
        addPolygons(data = H, popup = H$NAME) %>%
        addHomeButton(ext = extent(Sites), group = "Selected HUC12") %>%
        addMarkers(data = Sites, popup = Sites$MonitoringLocationIdentifier)
    }, ignoreInit = TRUE)

    observeEvent(input$p1, {
      req(rv$WQPData)

      # Keep only the characteristics selected in the picker
      WQPDataFiltered <- rv$WQPData %>%
        filter(CharacteristicName %in% input$p1)

      FilteredSites <- rv$Sites %>%
        filter(MonitoringLocationIdentifier %in%
                 WQPDataFiltered$MonitoringLocationIdentifier)

      leafletProxy("mymap") %>%
        clearMarkers() %>%
        clearShapes() %>%
        removeHomeButton() %>%
        addPolygons(data = rv$H, popup = rv$H$NAME) %>%
        addHomeButton(ext = extent(rv$Sites), group = "Selected HUC12") %>%
        addMarkers(data = FilteredSites,
                   popup = FilteredSites$MonitoringLocationIdentifier)
    }, ignoreInit = TRUE)

    # Save a per-watershed CSV to the local drive
    saveData <- function(data) {
      data <- t(data)
      # Create a unique file name
      fileName <- sprintf("%s.csv", rv$H$NAME)
      # Write the file to the local system
      write.csv(x = data, file = file.path(fileName),
                row.names = FALSE, quote = TRUE)
    }
  }

  # Run the application
  shinyApp(ui = ui, server = server)
}
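The click handler turns the watershed's minimum bounding circle into the `within` radius for the WQP queries via radius = sqrt(area / pi). That unit arithmetic can be checked in isolation without sf or units; the area value below is hypothetical.

```r
# Sketch of the radius derivation used for the WQP 'within' argument:
# given a bounding circle's area in square miles, recover its radius in miles.
area_sq_mi <- 100 * pi              # hypothetical bounding-circle area
radius_mi <- sqrt(area_sq_mi / pi)  # inverts area = pi * r^2
```

For a circle of area 100π square miles this recovers a 10-mile radius, which is then passed straight to `whatWQPsites()` and `readWQPdata()`.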
/ShinyApp/TESTMap2/app.R
permissive
internetofwater/WQP-Mapper
R
false
false
8,665
r
#--------------------------------------
# This script sets out to produce a
# simulation function that can
# produce the values that are required
#
# NOTE: setup.R must have been run
# first
#--------------------------------------

#--------------------------------------
# Author: Trent Henderson, 8 April 2021
#--------------------------------------

#' Function to simulate statistical processes needed to calculate time-series features on
#'
#' @param min_length the minimum time-series length to produce
#' @param max_length the maximum time-series length to produce
#' @param num_ts the number of time series between min and max length to simulate at random
#' @param process the statistical process to simulate. Defaults to 'Gaussian'
#' @return a tidy dataframe with the simulated time-series process values
#' @author Trent Henderson
#'

simulation_engine <- function(min_length = 1000, max_length = 10000, num_ts = 100,
                              process = c("Gaussian", "Sinusoidal", "ARIMA", "CumSum")){

  # Make Gaussian the default

  if(missing(process)){
    process <- "Gaussian"
  } else{
    process <- match.arg(process)
  }

  #---------- Argument checks ----------

  # Numeric parameters

  if(!is.numeric(min_length) || !is.numeric(max_length)){
    stop("min_length and max_length arguments should each be numerical scalar values.")
  }

  # Process parameter

  '%ni%' <- Negate('%in%')
  the_processes <- c("Gaussian", "Sinusoidal", "ARIMA", "CumSum")

  if(length(process) > 1){
    stop("process argument should be a single string specification of either 'Gaussian', 'Sinusoidal', 'ARIMA', or 'CumSum'.")
  }

  if(process %ni% the_processes){
    stop("process argument should be a single string specification of either 'Gaussian', 'Sinusoidal', 'ARIMA', or 'CumSum'.")
  }

  #---------- Main calcs ---------------

  message("Simulating time-series processes...")

  set.seed(123) # Fix random number generator for reproducibility

  # Sample random N lengths between min and max lengths with no replacement

  all_lengths <- seq(from = min_length, to = max_length, by = 1)
  use_lengths <- sample(all_lengths, size = num_ts, replace = FALSE)

  # List to store outputs

  storage <- list()

  #---------
  # Gaussian
  #---------

  if(process == "Gaussian"){

    for(i in use_lengths){

      tmp <- data.frame(values = rnorm(i, mean = 0, sd = 1)) %>%
        mutate(timepoint = row_number()) %>%
        mutate(ts_length = i) %>%
        mutate(process = "Gaussian")

      storage[[i]] <- tmp
    }

    outData <- data.table::rbindlist(storage, use.names = TRUE)
  }

  #-----------
  # Sinusoidal
  #-----------

  if(process == "Sinusoidal"){

    a <- 3
    b <- 2
    amp <- 2

    for(i in use_lengths){

      n <- i
      # Time grid of length n per series ('by = 100' produced a length-one vector)
      t <- seq(from = 0, to = 4*pi, length.out = n)
      noise <- runif(n)

      tmp <- data.frame(values = c(a*sin(b*t) + noise*amp)) %>%
        mutate(timepoint = row_number()) %>%
        mutate(ts_length = i) %>%
        mutate(process = "Sinusoidal")

      storage[[i]] <- tmp
    }

    outData <- data.table::rbindlist(storage, use.names = TRUE)
  }

  #------
  # ARIMA
  #------

  if(process == "ARIMA"){

    for(i in use_lengths){

      tmp <- data.frame(values = c(1 + 0.5 * 1:i + arima.sim(list(ma = 0.5), n = i))) %>%
        mutate(timepoint = row_number()) %>%
        mutate(ts_length = i) %>%
        mutate(process = "ARIMA")

      storage[[i]] <- tmp
    }

    outData <- data.table::rbindlist(storage, use.names = TRUE)
  }

  #---------
  # CumSum
  #---------

  if(process == "CumSum"){

    for(i in use_lengths){

      tmp <- data.frame(values = cumsum(rnorm(i, mean = 0, sd = 1))) %>%
        mutate(timepoint = row_number()) %>%
        mutate(ts_length = i) %>%
        mutate(process = "CumSum")

      storage[[i]] <- tmp
    }

    outData <- data.table::rbindlist(storage, use.names = TRUE)
  }

  return(outData)
}
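The length-sampling step draws `num_ts` distinct series lengths uniformly from `[min_length, max_length]` without replacement. A standalone sketch of that step, using the function's defaults:

```r
# Sample 100 distinct time-series lengths between 1000 and 10000,
# exactly as simulation_engine() does before simulating each process
set.seed(123)
min_length <- 1000
max_length <- 10000
num_ts <- 100

all_lengths <- seq(from = min_length, to = max_length, by = 1)
use_lengths <- sample(all_lengths, size = num_ts, replace = FALSE)
```

Because the draw is without replacement, every simulated series ends up with a unique length, which is what makes length itself a controlled variable in the feature comparison.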
/R/simulation_engine.R
no_license
hendersontrent/feature-sims
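The core of `simulation_engine()` is: sample distinct series lengths without replacement, generate one series per length, and stack the results. A minimal base-R sketch of that logic for the `CumSum` (random-walk) process, with no dplyr/data.table dependency (the small length range here is invented for illustration):

```r
# Sample 5 distinct series lengths between 50 and 100, then build one
# cumulative-sum series per length and row-bind them into a tidy frame.
set.seed(123)
all_lengths <- seq(from = 50, to = 100, by = 1)
use_lengths <- sample(all_lengths, size = 5, replace = FALSE)

storage <- lapply(use_lengths, function(i) {
  data.frame(values = cumsum(rnorm(i)),  # random walk of length i
             timepoint = seq_len(i),
             ts_length = i,
             process = "CumSum")
})
outData <- do.call(rbind, storage)       # stand-in for data.table::rbindlist()
```

Each row carries its series' total length, so downstream feature calculations can group by `ts_length`.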
#' @export
mutate_.tbl_svy <- function(.data, ..., .dots) {
  dots <- lazyeval::all_dots(.dots, ..., all_named = FALSE)

  if (any(names2(dots) %in% as.character(survey_vars(.data)))) {
    stop("Cannot modify survey variable")
  }

  .data$variables <- mutate_(.data$variables, .dots = dots)
  .data
}

#' @export
select_.tbl_svy <- function(.data, ..., .dots) {
  dots <- lazyeval::all_dots(.dots, ...)
  vars <- dplyr::select_vars_(names(.data$variables), dots,
                              include = attr(.data, "group_vars"))
  .data$variables <- select_(.data$variables, .dots = vars)

  # Also rename survey_vars, group_vars, and the structures in the
  # svydesign2 object
  .data <- rename_special_vars(.data, vars)

  .data
}

#' @export
rename_.tbl_svy <- function(.data, ..., .dots) {
  dots <- lazyeval::all_dots(.dots, ...)
  vars <- dplyr::rename_vars_(names(.data$variables), dots)
  .data$variables <- rename_(.data$variables, .dots = vars)

  # Also rename survey_vars, group_vars, and the structures in the
  # svydesign2 object
  .data <- rename_special_vars(.data, vars)

  .data
}

#' @export
filter_.tbl_svy <- function(.data, ..., .dots) {
  dots <- lazyeval::all_dots(.dots, ...)

  # There's probably a better way to do this... But I need to use
  # survey::subset because I want to make sure that I recalculate the
  # survey_vars correctly. Create a variable with the row numbers, run dplyr
  # on the variables data.frame and then pass the row_numbers that are kept
  # into survey::svydesign2 `[`
  row_numbers <- .data$variables
  row_numbers <- dplyr::mutate_(row_numbers, "`___row_nums` = dplyr::row_number(1)")
  row_numbers <- dplyr::filter_(row_numbers, .dots = dots)
  row_numbers <- row_numbers$`___row_nums`

  .data[row_numbers, ]
}

# Helper to rename variables stored in survey_vars and svydesign2 structure
rename_special_vars <- function(svy, var_list) {
  renamed_vars <- var_list[var_list != names(var_list)]
  svars <- as.character(survey_vars(svy))

  for (iii in seq_along(renamed_vars)) {
    this_var <- renamed_vars[iii]

    # Make changes in the survey_vars structure
    svars <- lapply(svars, function(x) {
      x[x == this_var] <- names(this_var)
      x
    })

    # Make changes in actual svydesign2 object's structures
    names(svy$cluster)[names(svy$cluster) == this_var] <- names(this_var)
    if(svy$has.strata) {
      names(svy$strata)[names(svy$strata) == this_var] <- names(this_var)
    }
    names(svy$allprob)[names(svy$allprob) == this_var] <- names(this_var)
    attr(svy$fpc$popsize, "dimnames")[[2]][
      attr(svy$fpc$popsize, "dimnames")[[2]] == this_var
    ] <- names(this_var)
  }

  survey_vars(svy) <- svars
  svy
}

# Import + export generics from dplyr

#' Single table verbs from dplyr
#'
#' These are data manipulation functions designed to work on \code{tbl_svy} objects.
#'
#' \code{mutate} and \code{transmute} can add or modify variables. See
#' \code{\link[dplyr]{mutate}} for more details.
#'
#' \code{select} and \code{rename} keep or rename variables. See
#' \code{\link[dplyr]{select}} for more details.
#'
#' \code{filter} keeps certain observations. See \code{\link[dplyr]{filter}}
#' for more details.
#'
#' \code{arrange} is not implemented for \code{tbl_svy} objects. Nor are any
#' two table verbs such as \code{bind_rows}, \code{bind_cols} or any of the
#' joins (\code{full_join}, \code{left_join}, etc.). These data manipulations
#' may require modifications to the survey variable specifications and so
#' cannot be done automatically. Instead, use dplyr to perform them while the
#' data is still stored in data.frames.
#' @name dplyr_single
NULL

#' @name mutate
#' @rdname dplyr_single
#' @export
#' @importFrom dplyr mutate
NULL

#' @name mutate_
#' @export
#' @importFrom dplyr mutate_
#' @rdname dplyr_single
NULL

#' @name transmute
#' @rdname dplyr_single
#' @export
#' @importFrom dplyr transmute
NULL

#' @name transmute_
#' @export
#' @importFrom dplyr transmute_
#' @rdname dplyr_single
NULL

#' @name select
#' @rdname dplyr_single
#' @export
#' @importFrom dplyr select
NULL

#' @name select_
#' @export
#' @importFrom dplyr select_
#' @rdname dplyr_single
NULL

#' @name rename
#' @rdname dplyr_single
#' @export
#' @importFrom dplyr rename
NULL

#' @name rename_
#' @export
#' @importFrom dplyr rename_
#' @rdname dplyr_single
NULL

#' @name filter
#' @export
#' @importFrom dplyr filter
#' @rdname dplyr_single
NULL

#' @name filter_
#' @export
#' @importFrom dplyr filter_
#' @rdname dplyr_single
NULL

#' Summarise and mutate multiple columns.
#'
#' See \code{\link[dplyr]{mutate_each}} for more details.
#'
#' @name mutate_each
#' @export
#' @importFrom dplyr mutate_each
NULL

#' @name mutate_each_
#' @export
#' @importFrom dplyr mutate_each_
#' @rdname mutate_each
NULL

#' @name summarise_each
#' @export
#' @importFrom dplyr summarise_each
#' @rdname mutate_each
NULL

#' @name summarise_each_
#' @export
#' @importFrom dplyr summarise_each_
#' @rdname mutate_each
NULL

#' @name summarize_each
#' @export
#' @importFrom dplyr summarize_each
#' @rdname mutate_each
NULL

#' @name summarize_each_
#' @export
#' @importFrom dplyr summarize_each_
#' @rdname mutate_each
NULL

#' @name funs
#' @export
#' @importFrom dplyr funs
#' @rdname mutate_each
NULL

#' @name funs_
#' @export
#' @importFrom dplyr funs_
#' @rdname mutate_each
NULL
/R/manip.r
no_license
carlganz/srvyr
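The renaming pattern in `rename_special_vars()` — a named vector maps new names to old values, and each internal structure is walked replacing old names with new — can be illustrated with a toy base-R example (this is not srvyr code; the variable names are invented):

```r
# `vars` maps new name -> old value, as produced by dplyr::rename_vars_():
# only entries whose value differs from their name represent actual renames.
vars <- c(new_weight = "weight", strata = "strata")
renamed <- vars[vars != names(vars)]   # keeps only c(new_weight = "weight")

# Apply the rename to a stand-in for the survey_vars character vector
svars <- c("weight", "psu")
for (i in seq_along(renamed)) {
  old <- renamed[i]
  svars[svars == old] <- names(old)    # replace old value with new name
}
```

The real helper repeats this replacement across `svy$cluster`, `svy$strata`, `svy$allprob`, and the `fpc$popsize` dimnames so every structure stays consistent.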
# Draw KM plots and obtain p-values from the log-rank (chi-square) test.
#install.packages("survival")
#install.packages("survminer")
library(survival)
library(survminer)

rt=read.table("expTime.txt",header=T,sep="\t",check.names=F,row.names=1)    # Read the input file
rt$futime=rt$futime/365
gene=colnames(rt)[3]
pFilter=0.05    # p-value filter threshold for the KM method

# Loop over cancer types
for(i in levels(rt[,"CancerType"])){
  rt1=rt[(rt[,"CancerType"]==i),]
  group=ifelse(rt1[,gene]>median(rt1[,gene]),"high","low")
  diff=survdiff(Surv(futime, fustat) ~group,data = rt1)
  pValue=1-pchisq(diff$chisq,df=1)

  # Only draw a plot when the p-value is significant
  if(pValue<pFilter){
    # (Arguably the p-value formatting could be skipped)
    if(pValue<0.001){
      pValue="p<0.001"
    }else{
      pValue=paste0("p=",sprintf("%.03f",pValue))
    }

    fit <- survfit(Surv(futime, fustat) ~ group, data = rt1)

    # Draw the survival curve
    surPlot=ggsurvplot(fit,
               data=rt1,
               title=paste0("Cancer: ",i),
               pval=pValue,
               pval.size=6,
               legend.labs=c("high","low"),
               legend.title=paste0(gene," levels"),
               font.legend=12,
               xlab="Time(years)",
               ylab="Overall survival",
               break.time.by = 1,
               palette=c("red","blue"),
               conf.int=F,
               fontsize=4,
               risk.table=TRUE,
               risk.table.title="",
               risk.table.height=.25)

    # Save the plot as a PDF
    pdf(file=paste0("survival.",i,".pdf"),onefile = FALSE,
        width = 6,    # plot width
        height =5)    # plot height
    print(surPlot)
    dev.off()
  }
}
/R_scripts/survival/TCGA_kmplot/panCancer08.survival.R
no_license
DawnEve/bioToolKit
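The script's high/low grouping is a simple median split, and its p-value comes from `1 - pchisq(chisq, df = 1)`. Both pieces can be checked in isolation with base R (the expression values below are invented for illustration):

```r
# Median split: values strictly above the median are "high", the rest "low"
expr <- c(1.2, 3.4, 2.2, 5.1, 0.7)   # toy expression values; median is 2.2
group <- ifelse(expr > median(expr), "high", "low")

# Log-rank chi-square to p-value: a statistic of ~3.84 on 1 df sits at the
# classic 0.05 boundary
p <- 1 - pchisq(3.84, df = 1)
```

Note the median itself lands in the "low" group because the comparison is strict (`>`), which is why the two groups can be unequal in size.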
test = list(
  name = "q8",
  cases = list(
    ottr::TestCase$new(
      hidden = FALSE,
      name = NA,
      points = 0.5,
      code = {
        testthat::expect_equal(length(z), 10)
      }
    ),
    ottr::TestCase$new(
      hidden = TRUE,
      name = NA,
      points = 0.5,
      code = {
        actual <- c(
          6.74191689429334, 2.87060365720782, 4.72625682267468,
          5.26572520992208, 4.808536646282, 3.78775096781703,
          7.02304399487788, 3.8106819231738, 8.03684742775408,
          3.87457180189516
        )
        testthat::expect_equal(actual, z)
      }
    )
  )
)
/test/test_assign/files/rmd-correct/autograder/tests/q8.R
permissive
ucbds-infra/otter-grader
#' Calculate graph small-worldness
#'
#' This function will calculate the characteristic path length and clustering
#' coefficient, which are used to calculate small-worldness.
#'
#' @param g The graph (or list of graphs) of interest
#' @param rand List of (lists of) equivalent random graphs (output from
#' \code{\link{sim.rand.graph.par}})
#' @export
#'
#' @return A data table with the following components:
#' \item{density}{The range of density thresholds used.}
#' \item{N}{The number of random graphs that were generated.}
#' \item{Lp}{The characteristic path length.}
#' \item{Cp}{The clustering coefficient.}
#' \item{Lp.rand}{The mean characteristic path length of the random graphs with
#' the same degree distribution as g.}
#' \item{Cp.rand}{The mean clustering coefficient of the random graphs with
#' the same degree distribution as g.}
#' \item{Lp.norm}{The normalized characteristic path length.}
#' \item{Cp.norm}{The normalized clustering coefficient.}
#' \item{sigma}{The small-world measure of the graph.}
#'
#' @author Christopher G. Watson, \email{cgwatson@@bu.edu}
#' @references Watts D.J., Strogatz S.H. (1998) \emph{Collective dynamics of
#' 'small-world' networks}. Nature, 393:440-442.

small.world <- function(g, rand) {
  if (is.igraph(g)) g <- list(g)  # Single graph at a single density

  Lp <- vapply(g, function(x) graph_attr(x, 'Lp'), numeric(1))
  Cp <- vapply(g, function(x) graph_attr(x, 'Cp'), numeric(1))
  densities <- vapply(g, function(x) graph_attr(x, 'density'), numeric(1))

  if (is.igraph(rand[[1]])) {
    Lp.rand <- mean(vapply(rand, function(x) graph_attr(x, 'Lp'), numeric(1)))
    Cp.rand <- mean(vapply(rand, function(x) graph_attr(x, 'Cp'), numeric(1)))
    N <- length(rand)
  } else {
    if (length(rand[[1]]) == 1) {  # If there's 1 rand graph for each density
      Lp.rand <- vapply(rand, function(x)
                        vapply(x, function(y) graph_attr(y, 'Lp'), numeric(1)),
                        numeric(1))
      Cp.rand <- vapply(rand, function(x)
                        vapply(x, function(y) graph_attr(y, 'Cp'), numeric(1)),
                        numeric(1))
      N <- 1
    } else {
      Lp.rand <- colMeans(vapply(rand, function(x)
                          vapply(x, function(y) graph_attr(y, 'Lp'), numeric(1)),
                          numeric(length(rand[[1]]))))
      Cp.rand <- colMeans(vapply(rand, function(x)
                          vapply(x, function(y) graph_attr(y, 'Cp'), numeric(1)),
                          numeric(length(rand[[1]]))))
      N <- vapply(rand, length, numeric(1))
    }
  }

  Cp.norm <- Cp / Cp.rand
  Lp.norm <- Lp / Lp.rand
  sigma <- Cp.norm / Lp.norm

  Lp <- round(Lp, 4)
  Lp.rand <- round(Lp.rand, 4)
  Lp.norm <- round(Lp.norm, 4)
  Cp <- round(Cp, 4)
  Cp.rand <- round(Cp.rand, 4)
  Cp.norm <- round(Cp.norm, 4)
  sigma <- round(sigma, 4)

  return(data.table(density=densities, N, Lp, Cp, Lp.rand, Cp.rand,
                    Lp.norm, Cp.norm, sigma))
}
/R/small_world.R
no_license
luojiahuli/brainGraph
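The small-world measure reduces to two ratios and a quotient, which a worked example makes concrete (the metric values below are invented for illustration, not taken from a real graph):

```r
# Observed graph metrics
Lp <- 2.5          # characteristic path length
Cp <- 0.45         # clustering coefficient

# Means over equivalent random graphs
Lp.rand <- 2.4
Cp.rand <- 0.15

Lp.norm <- Lp / Lp.rand   # ~1.04: path length close to random
Cp.norm <- Cp / Cp.rand   # 3.0: much more clustered than random
sigma   <- Cp.norm / Lp.norm
```

A sigma well above 1 — here 2.88 — is the Watts-Strogatz signature of small-world organization: near-random path lengths combined with far-above-random clustering.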
#' A call from Fannie Mae's public API given the API's path.
#'
#' @param path This is the API's path after the origin URL.
#'
#' @return A list with the following elements:
#'   * content -- The parsed JSON returned from the API call.
#'     Note that the object is returned as a nested list.
#'   * path -- The path used to make the call
#'   * response -- The response object created by [httr::GET()]
#' @export
#'
#' @examples
#' get_url("/v1/mortgage-lender-sentiment/results")
get_url <- function(path) {
  if (Sys.getenv("fannieapi_key") == "") {
    stop("Please set an API key with `fannieapi::set_api_key()`", call. = FALSE)
  }
  url <- httr::modify_url(api_url, path = path)
  resp <- httr::GET(url,
                    httr::add_headers(Authorization = Sys.getenv("fannieapi_key"),
                                      accept = "application/json"))
  if (httr::http_type(resp) != "application/json") {
    stop("API did not return JSON", call. = FALSE)
  }
  parsed <- jsonlite::fromJSON(suppressMessages(httr::content(resp, "text")),
                               simplifyVector = FALSE)
  if (httr::http_error(resp)) {
    stop(sprintf("API request failed [%s]\n %s\n See https://developer.theexchange.fanniemae.com/assets/pdf/FAQ.pdf for details.",
                 httr::status_code(resp), parsed$message),
         call. = FALSE)
  }
  list(content = parsed, path = path, response = resp)
}
/R/get_url.R
permissive
saadaslam/fannieapi
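The guard at the top of `get_url()` follows a common fail-early pattern: read a key from an environment variable and stop before making any network call if it is unset. A base-R sketch of just that check (the variable name `demo_api_key` is made up for illustration; the real function reads `fannieapi_key`):

```r
# Unset (empty) key: the guard condition is TRUE, so we'd stop() early
Sys.setenv(demo_api_key = "")
key_missing <- Sys.getenv("demo_api_key") == ""

# Set key: the guard passes and the request would proceed
Sys.setenv(demo_api_key = "abc123")
key_present <- Sys.getenv("demo_api_key") != ""
```

Checking the key before `httr::GET()` gives the user an actionable error message instead of a confusing 401 response from the server.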
## R code for Plot 4
setwd("~/RDataScience/Plotsdata")

dataFile <- "~/RDataScience/Plotsdata/household_power_consumption.txt"
data <- read.table(dataFile, header=TRUE, sep=";", stringsAsFactors=FALSE, dec=".")
subData <- data[data$Date %in% c("1/2/2007","2/2/2007"),]

## png and PLOT 4
datetime <- strptime(paste(subData$Date, subData$Time, sep=" "), "%d/%m/%Y %H:%M:%S")
globalActivePower <- as.numeric(subData$Global_active_power)
globalReactivePower <- as.numeric(subData$Global_reactive_power)
voltage <- as.numeric(subData$Voltage)
subMeteringI <- as.numeric(subData$Sub_metering_1)
subMeteringII <- as.numeric(subData$Sub_metering_2)
subMeteringIII <- as.numeric(subData$Sub_metering_3)

png("plot4.png", width=480, height=480)
par(mfrow = c(2, 2))
plot(datetime, globalActivePower, type="l", xlab="", ylab="Global Active Power", cex=0.2)
plot(datetime, voltage, type="l", xlab="datetime", ylab="Voltage")
plot(datetime, subMeteringI, type="l", ylab="Energy Submetering", xlab="")
lines(datetime, subMeteringII, type="l", col="red")
lines(datetime, subMeteringIII, type="l", col="blue")
legend("topright", c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"),
       lty=1, lwd=2.5, col=c("black", "red", "blue"), bty="o")
plot(datetime, globalReactivePower, type="l", xlab="datetime", ylab="Global_reactive_power")
dev.off()
/Plotsdata/R Code for Plot 4.R
no_license
drallayeh/RDataScience
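The script's date-time handling pastes the `Date` and `Time` columns together before parsing with `strptime()`. A self-contained check of the format string, using one of the dates the script filters on (the time of day is made up):

```r
# %d/%m/%Y reads day first, so "1/2/2007" is 1 February 2007, not January 2
d <- "1/2/2007"
t <- "18:30:00"
dt <- strptime(paste(d, t, sep = " "), "%d/%m/%Y %H:%M:%S")
```

Getting the day/month order right here matters: the same string parsed as `%m/%d/%Y` would silently select the wrong two days of data.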
#' Check whether input is a valid instructor ID
#'
#' \code{is_instructor_id} checks to see if its input
#' is a valid instructor ID. Instructor IDs should
#' be alphabetic strings.
#'
#' @param instructor_id A vector of strings
#'
#' @param strict A logical scalar (currently ignored).
#'
#' @return A logical vector, each element of which is TRUE
#' if the corresponding element of \code{instructor_id} is
#' a valid instructor ID.
#'
#' @export
#'
#' @examples
#' is_instructor_id("JoeSmith")
is_instructor_id <- function(instructor_id, strict = FALSE) {
  # Return one logical per element, as documented, and require the whole
  # string to be alphabetic. (The original `all(str_detect(..., "[:alpha:]"))`
  # collapsed the result to a scalar and matched any string containing a
  # letter anywhere.)
  stringr::str_detect(instructor_id, "^[:alpha:]+$")
}
/R/is_instructor_id.R
permissive
bvkrauth/setc
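The same whole-string alphabetic check can be expressed with base `grepl()` and a POSIX character class, so the behavior can be tried without stringr (the helper name `is_alpha` is invented for illustration):

```r
# TRUE only when the entire string consists of letters; empty strings and
# strings with digits or punctuation fail the anchored pattern
is_alpha <- function(x) grepl("^[[:alpha:]]+$", x)
```

Anchoring with `^` and `$` is what distinguishes "is entirely alphabetic" from "contains at least one letter".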
## File Name: gdina_proc_sequential_items.R
## File Version: 0.08

gdina_proc_sequential_items <- function( data, q.matrix )
{
    maxK <- max( data, na.rm=TRUE )
    sequential <- FALSE
    if ( maxK > 1){
        res0 <- sequential.items( data=data )
        data <- res0$dat.expand
        sequential <- TRUE
        q.matrix <- q.matrix[,-c(1:2)]
    }
    res <- list( data=data, sequential=sequential, q.matrix=q.matrix )
    return(res)
}
/R/gdina_proc_sequential_items.R
no_license
Janehappiest/CDM
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

#' @include arrow-package.R

`parquet::arrow::FileReader` <- R6Class("parquet::arrow::FileReader",
  inherit = `arrow::Object`,
  public = list(
    ReadTable = function(col_select = NULL) {
      col_select <- enquo(col_select)
      if (quo_is_null(col_select)) {
        shared_ptr(`arrow::Table`, parquet___arrow___FileReader__ReadTable1(self))
      } else {
        all_vars <- shared_ptr(`arrow::Schema`, parquet___arrow___FileReader__GetSchema(self))$names
        indices <- match(vars_select(all_vars, !!col_select), all_vars) - 1L
        shared_ptr(`arrow::Table`, parquet___arrow___FileReader__ReadTable2(self, indices))
      }
    },
    GetSchema = function() {
      shared_ptr(`arrow::Schema`, parquet___arrow___FileReader__GetSchema(self))
    }
  )
)

`parquet::arrow::ArrowReaderProperties` <- R6Class("parquet::arrow::ArrowReaderProperties",
  inherit = `arrow::Object`,
  public = list(
    read_dictionary = function(column_index) {
      parquet___arrow___ArrowReaderProperties__get_read_dictionary(self, column_index)
    },
    set_read_dictionary = function(column_index, read_dict) {
      parquet___arrow___ArrowReaderProperties__set_read_dictionary(self, column_index, read_dict)
    }
  ),
  active = list(
    use_threads = function(use_threads) {
      if (missing(use_threads)) {
        parquet___arrow___ArrowReaderProperties__get_use_threads(self)
      } else {
        parquet___arrow___ArrowReaderProperties__set_use_threads(self, use_threads)
      }
    }
  )
)

#' Create a new ArrowReaderProperties instance
#'
#' @param use_threads use threads?
#'
#' @export
#' @keywords internal
parquet_arrow_reader_properties <- function(use_threads = option_use_threads()) {
  shared_ptr(`parquet::arrow::ArrowReaderProperties`,
    parquet___arrow___ArrowReaderProperties__Make(isTRUE(use_threads)))
}

#' Parquet file reader
#'
#' @inheritParams read_delim_arrow
#' @param props reader file properties, as created by [parquet_arrow_reader_properties()]
#'
#' @param ... additional parameters
#'
#' @export
parquet_file_reader <- function(file, props = parquet_arrow_reader_properties(), ...) {
  UseMethod("parquet_file_reader")
}

#' @export
`parquet_file_reader.arrow::io::RandomAccessFile` <- function(file, props = parquet_arrow_reader_properties(), ...) {
  unique_ptr(`parquet::arrow::FileReader`, parquet___arrow___FileReader__OpenFile(file, props))
}

#' @export
parquet_file_reader.character <- function(file, props = parquet_arrow_reader_properties(), memory_map = TRUE, ...) {
  file <- normalizePath(file)
  if (isTRUE(memory_map)) {
    parquet_file_reader(mmap_open(file), props = props, ...)
  } else {
    parquet_file_reader(ReadableFile(file), props = props, ...)
  }
}

#' @export
parquet_file_reader.raw <- function(file, props = parquet_arrow_reader_properties(), ...) {
  parquet_file_reader(BufferReader(file), props = props, ...)
}

#' Read a Parquet file
#'
#' [Parquet](https://parquet.apache.org/) is a columnar storage file format.
#' This function enables you to read Parquet files into R.
#'
#' @inheritParams read_delim_arrow
#' @inheritParams parquet_file_reader
#'
#' @return A [arrow::Table][arrow__Table], or a `data.frame` if `as_tibble` is
#' `TRUE`.
#' @examples
#' \donttest{
#' try({
#'   df <- read_parquet(system.file("v0.7.1.parquet", package="arrow"))
#' })
#' }
#' @export
read_parquet <- function(file,
                         col_select = NULL,
                         as_tibble = TRUE,
                         props = parquet_arrow_reader_properties(),
                         ...) {
  reader <- parquet_file_reader(file, props = props, ...)
  tab <- reader$ReadTable(!!enquo(col_select))
  if (as_tibble) {
    tab <- as.data.frame(tab)
  }
  tab
}

#' Write Parquet file to disk
#'
#' [Parquet](https://parquet.apache.org/) is a columnar storage file format.
#' This function enables you to write Parquet files from R.
#'
#' @param table An [arrow::Table][arrow__Table], or an object convertible to it
#' @param file a file path
#'
#' @examples
#' \donttest{
#' try({
#'   tf <- tempfile(fileext = ".parquet")
#'   on.exit(unlink(tf))
#'   write_parquet(tibble::tibble(x = 1:5), tf)
#' })
#' }
#' @export
write_parquet <- function(table, file) {
  write_parquet_file(to_arrow(table), file)
}
path: /r/R/parquet.R
license_type: permissive
repo_name: Elbehery/arrow
language: R
is_vendor: false
is_generated: false
length_bytes: 5,091
extension: r
\name{fitSpBk}
\alias{fitSpBk}
\docType{data}
\title{
Fitted glmssn object for example data set MiddleFork.ssn
}
\description{
The MiddleFork04.ssn data folder contains the spatial, attribute, and
topological information needed to construct a spatial stream network
object using the SSN package. This is a fitted model using the
\code{\link{glmssn}} function. It is used for the block prediction
example.
}
\details{
See the help for \code{\link{glmssn}} for how the model was created, and
\code{\link{BlockPredict}} for usage in block prediction.
}
\examples{
library(SSN)
data(modelFits)
ls()
}
path: /man/fitSpBk.Rd
license_type: no_license
repo_name: jayverhoef/SSN
language: R
is_vendor: false
is_generated: false
length_bytes: 611
extension: rd
library(DESeq2)
library(BiocParallel)
# register the number of cores available on this machine
register(MulticoreParam(multicoreWorkers()))

load(file = '/export/home/chche/WTC_Project/Data/MDD_int.Rdata')  # MDD_integer
tech_cov <- read.table('/export/home/pfkuan/WTCproject/Epigenetics/Data/levinsonNIMH_MDD_EQTL_Covariates/covariates/Technical_factors.txt',
                       header = TRUE, sep = '\t')
bio_cov <- read.table('/export/home/pfkuan/WTCproject/Epigenetics/Data/levinsonNIMH_MDD_EQTL_Covariates/covariates/Biological_and_hidden_factors.txt',
                      header = TRUE, sep = '\t')
MDDstat <- read.table('/export/home/pfkuan/WTCproject/Epigenetics/Data/levinsonNIMH_MDD_EQTL_Covariates/data_used_for_MDD_analysis/Dx_Case_status.txt',
                      header = TRUE, sep = '\t')

MDD <- as.factor(MDDstat$MDD.status)
all_cov <- cbind(MDD, tech_cov[, -1], bio_cov[, -1])

# convert the selected covariate columns to factors;
# `factor.choice` (the columns to treat as factors) must be defined
# before this point. The original looped over `all_cov_factor`, which
# is never created; `all_cov` is the object used downstream.
for (i in factor.choice) {
  all_cov[, i] <- factor(all_cov[, i])
}

# build the design formula from all covariate column names
x <- paste(colnames(all_cov), collapse = '+')
fomu <- as.formula(paste('~', x))

x <- DESeqDataSetFromMatrix(countData = t(MDD_inte), colData = all_cov, design = fomu)
MDD922 <- DESeq(x, parallel = TRUE)
save(MDD922, file = '/export/home/chche/WTC_Project/Data/MDD922_all_cov.Rdata')

res922MDD <- results(MDD922, contrast = c("MDD", "1", "2"))
save(res922MDD, file = '/export/home/chche/WTC_Project/Data/res922M_all_cov.Rdata')
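The design formula in the script above is assembled from the covariate column names with `paste()`/`as.formula()`. A minimal base-R sketch of that step, using made-up covariate names in place of the real technical and biological factors:

```r
# Sketch: building a one-sided design formula from the columns of a
# covariate data frame (covariate names here are illustrative only).
all_cov <- data.frame(
  MDD   = factor(c(1, 2, 1, 2)),
  age   = c(30, 45, 52, 38),
  batch = factor(c("a", "a", "b", "b"))
)
fomu <- as.formula(paste("~", paste(colnames(all_cov), collapse = "+")))
fomu  # ~MDD + age + batch
```

Passing such a formula as `design =` to `DESeqDataSetFromMatrix()` adjusts the differential-expression test for every covariate column at once, so adding or dropping covariates only changes `all_cov`, not the formula code.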
path: /April_14th.R
license_type: no_license
repo_name: chang-che/Work
language: R
is_vendor: false
is_generated: false
length_bytes: 1,331
extension: r
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/build_survey.R
\name{build_survey}
\alias{build_survey}
\title{A function to build a LaTeX survey tool for RICH economic games or related social network data collection}
\usage{
build_survey(
  path,
  pattern = ".jpg",
  start = 1,
  stop = 3,
  n_panels = 4,
  n_rows = 5,
  n_cols = 8,
  seed = 1,
  ordered = NULL
)
}
\arguments{
\item{path}{Full path to the main folder.}

\item{pattern}{File extension of photos. Should be ".jpg" or ".JPG".}

\item{start}{Location of the start of the PID in the file name. If files are saved as "XXX.jpg", for example, this is 1.}

\item{stop}{Location of the end of the PID in the file name. If files are saved as "XXX.jpg", for example, this is 3.}

\item{n_panels}{Number of frames/panels/blocks of photos to be output. I use four big panels and randomize their order at each game.}

\item{n_rows}{Number of rows per panel. With 7cm x 10cm photos, I use five rows of photos per panel.}

\item{n_cols}{Number of columns per panel. With 7cm x 10cm photos, I use six to eight columns of photos per panel.}

\item{seed}{A seed for the random number generator used to sort the order of photos in the array.}

\item{ordered}{A list of IDs in explicit order. Can be used to override random sorting.}
}
\description{
This function allows you to speed up data collection and photo randomization. Simply set a path to the main folder, set the number of panels and the number of rows and columns per panel, and run the function. This function relies on 'xtable' to create a survey tool, compiled to PDF using LaTeX. The user must have a LaTeX build on the system path.
}
\examples{
\dontrun{
build_survey(path=path, pattern=".jpg", start=1, stop=3,
             n_panels=2, n_rows=4, n_cols=5, seed=1, ordered = sorted_ids)
}
}
path: /man/build_survey.Rd
license_type: no_license
repo_name: ctross/DieTryin
language: R
is_vendor: false
is_generated: true
length_bytes: 1,875
extension: rd