content large_stringlengths 0 6.46M | path large_stringlengths 3 331 | license_type large_stringclasses 2 values | repo_name large_stringlengths 5 125 | language large_stringclasses 1 value | is_vendor bool 2 classes | is_generated bool 2 classes | length_bytes int64 4 6.46M | extension large_stringclasses 75 values | text stringlengths 0 6.46M |
|---|---|---|---|---|---|---|---|---|---|
cor2 <- function(df)
{
cor_matrix = cor(df, use="complete-pairs")
rounded_percent_matrix = round(100*cor_matrix)
return(rounded_percent_matrix)
} | /Week1/cor2.R | no_license | RokoMijic/Signal | R | false | false | 151 | r | cor2 <- function(df)
{
cor_matrix = cor(df, use="complete-pairs")
rounded_percent_matrix = round(100*cor_matrix)
return(rounded_percent_matrix)
} |
% File radiometric.Rd
\name{radiometric}
\title{convert a colorSpec object from actinometric to radiometric}
\alias{radiometric}
\alias{radiometric.colorSpec}
\alias{is.radiometric}
\alias{is.radiometric.colorSpec}
\description{
Convert a \bold{colorSpec} object to have quantity that is radiometric (energy of photons) - to prepare it for colorimetric calculations.
Test an object for whether it is radiometric.
}
\usage{
\S3method{radiometric}{colorSpec}( x, multiplier=1, warn=FALSE )
\S3method{is.radiometric}{colorSpec}( x )
}
\arguments{
\item{x}{a \bold{colorSpec} object}
\item{multiplier}{a scalar which is multiplied by the output, and intended for unit conversion}
\item{warn}{if \code{TRUE} and a conversion actually takes place, the a \code{WARN} message is issued.
This makes the user aware of the conversion, so units can be verified. This can be useful when \code{radiometric()} is called from another \bold{colorSpec} function.}
}
\value{
\code{radiometric()} returns a \bold{colorSpec} object with
\code{\link{quantity}} that is
radiometric (energy-based) and not actinometric (photon-based).
If \code{type(x)} is a material type
(\code{'material'} or \code{'responsivity.material'})
then \code{x} is returned unchanged.
If \code{quantity(x)} starts with \code{'energy'},
then \code{is.radiometric()} returns \code{TRUE}, and otherwise \code{FALSE}.
}
\details{
If the \code{\link{quantity}} of \code{x} does not start with \code{'photons'}
then the quantity is not actinometric
and so \code{x} is returned unchanged.
Otherwise \code{x} is actinometric (photon-based).
If \code{\link{type}(x)} is \code{'light'} then
the most common actinometric unit of photon count is
(\eqn{\mu}mole of photons) = (\eqn{6.02214 x 10^{17}} photons).
The conversion equation is:
\deqn{ E = Q * 10^{-6} * N_A * h * c / \lambda }
where \eqn{E} is the energy of the photons,
\eqn{Q} is the photon count,
\eqn{N_A} is Avogadro's constant,
\eqn{h} is Planck's constant, \eqn{c} is the speed of light,
and \eqn{\lambda} is the wavelength in meters.
The output energy unit is joule.\cr
If the unit of \code{Q} is not (\eqn{\mu}mole of photons),
then the output should be scaled appropriately.
For example, if the unit of photon count is exaphotons,
then set \code{multiplier=1/0.602214}.
If the \code{\link{quantity}(x)} is \code{'photons->electrical'},
then the most common actinometric unit of responsivity to light is quantum efficiency (QE).
The conversion equation is:
\deqn{ R_e = QE * \lambda * e / (h * c) }
where \eqn{R_e} is the energy-based responsivity,
\eqn{QE} is the quantum efficiency,
and \eqn{e} is the charge of an electron (in C).
The output responsivity unit is coulombs/joule (C/J) or amps/watt (A/W).\cr
If the unit of \code{x} is not quantum efficiency,
then \code{multiplier} should be set appropriately.
If the \code{\link{quantity}(x)} is
\code{'photons->neural'} or \code{'photons->action'},
the most common actinometric unit of photon count is
(\eqn{\mu}mole of photons) = (\eqn{6.02214 x 10^{17}} photons).
The conversion equation is:
\deqn{ R_e = R_p * \lambda * 10^6 / ( N_A * h * c) }
where \eqn{R_e} is the energy-based responsivity,
\eqn{R_p} is the photon-based responsivity.
This essentially the reciprocal of the first conversion equation.
The argument \code{multiplier} is applied to the right side of all the above
conversion equations.
}
\note{
To log the executed conversion equation,
execute \code{cs.options(loglevel='INFO')}.
}
\source{
Wikipedia.
\bold{Photon counting}.
\url{https://en.wikipedia.org/wiki/Photon_counting}
}
\seealso{
\code{\link{quantity}},
\code{\link{type}},
\code{\link{F96T12}},
\code{\link{cs.options}},
\code{\link{actinometric}}
}
\examples{
sum( F96T12 ) # the step size is 1nm, from 300 to 900nm
# [1] 320.1132 photon irradiance, (micromoles of photons)*m^{-2}*sec^{-1}
sum( radiometric(F96T12) )
# [1] 68.91819 irradiance, watts*m^{-2}
}
\keyword{light}
| /man/radiometric.Rd | no_license | cran/colorSpec | R | false | false | 4,080 | rd | % File radiometric.Rd
\name{radiometric}
\title{convert a colorSpec object from actinometric to radiometric}
\alias{radiometric}
\alias{radiometric.colorSpec}
\alias{is.radiometric}
\alias{is.radiometric.colorSpec}
\description{
Convert a \bold{colorSpec} object to have quantity that is radiometric (energy of photons) - to prepare it for colorimetric calculations.
Test an object for whether it is radiometric.
}
\usage{
\S3method{radiometric}{colorSpec}( x, multiplier=1, warn=FALSE )
\S3method{is.radiometric}{colorSpec}( x )
}
\arguments{
\item{x}{a \bold{colorSpec} object}
\item{multiplier}{a scalar which is multiplied by the output, and intended for unit conversion}
\item{warn}{if \code{TRUE} and a conversion actually takes place, the a \code{WARN} message is issued.
This makes the user aware of the conversion, so units can be verified. This can be useful when \code{radiometric()} is called from another \bold{colorSpec} function.}
}
\value{
\code{radiometric()} returns a \bold{colorSpec} object with
\code{\link{quantity}} that is
radiometric (energy-based) and not actinometric (photon-based).
If \code{type(x)} is a material type
(\code{'material'} or \code{'responsivity.material'})
then \code{x} is returned unchanged.
If \code{quantity(x)} starts with \code{'energy'},
then \code{is.radiometric()} returns \code{TRUE}, and otherwise \code{FALSE}.
}
\details{
If the \code{\link{quantity}} of \code{x} does not start with \code{'photons'}
then the quantity is not actinometric
and so \code{x} is returned unchanged.
Otherwise \code{x} is actinometric (photon-based).
If \code{\link{type}(x)} is \code{'light'} then
the most common actinometric unit of photon count is
(\eqn{\mu}mole of photons) = (\eqn{6.02214 x 10^{17}} photons).
The conversion equation is:
\deqn{ E = Q * 10^{-6} * N_A * h * c / \lambda }
where \eqn{E} is the energy of the photons,
\eqn{Q} is the photon count,
\eqn{N_A} is Avogadro's constant,
\eqn{h} is Planck's constant, \eqn{c} is the speed of light,
and \eqn{\lambda} is the wavelength in meters.
The output energy unit is joule.\cr
If the unit of \code{Q} is not (\eqn{\mu}mole of photons),
then the output should be scaled appropriately.
For example, if the unit of photon count is exaphotons,
then set \code{multiplier=1/0.602214}.
If the \code{\link{quantity}(x)} is \code{'photons->electrical'},
then the most common actinometric unit of responsivity to light is quantum efficiency (QE).
The conversion equation is:
\deqn{ R_e = QE * \lambda * e / (h * c) }
where \eqn{R_e} is the energy-based responsivity,
\eqn{QE} is the quantum efficiency,
and \eqn{e} is the charge of an electron (in C).
The output responsivity unit is coulombs/joule (C/J) or amps/watt (A/W).\cr
If the unit of \code{x} is not quantum efficiency,
then \code{multiplier} should be set appropriately.
If the \code{\link{quantity}(x)} is
\code{'photons->neural'} or \code{'photons->action'},
the most common actinometric unit of photon count is
(\eqn{\mu}mole of photons) = (\eqn{6.02214 x 10^{17}} photons).
The conversion equation is:
\deqn{ R_e = R_p * \lambda * 10^6 / ( N_A * h * c) }
where \eqn{R_e} is the energy-based responsivity,
\eqn{R_p} is the photon-based responsivity.
This essentially the reciprocal of the first conversion equation.
The argument \code{multiplier} is applied to the right side of all the above
conversion equations.
}
\note{
To log the executed conversion equation,
execute \code{cs.options(loglevel='INFO')}.
}
\source{
Wikipedia.
\bold{Photon counting}.
\url{https://en.wikipedia.org/wiki/Photon_counting}
}
\seealso{
\code{\link{quantity}},
\code{\link{type}},
\code{\link{F96T12}},
\code{\link{cs.options}},
\code{\link{actinometric}}
}
\examples{
sum( F96T12 ) # the step size is 1nm, from 300 to 900nm
# [1] 320.1132 photon irradiance, (micromoles of photons)*m^{-2}*sec^{-1}
sum( radiometric(F96T12) )
# [1] 68.91819 irradiance, watts*m^{-2}
}
\keyword{light}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mtg_pals.R
\docType{data}
\name{mtg_colors}
\alias{mtg_colors}
\title{this code is heavily based on
https://drsimonj.svbtle.com/creating-corporate-colour-palettes-for-ggplot2}
\format{An object of class \code{character} of length 7.}
\usage{
mtg_colors
}
\description{
this code is heavily based on
https://drsimonj.svbtle.com/creating-corporate-colour-palettes-for-ggplot2
}
\keyword{datasets}
| /man/mtg_colors.Rd | permissive | khailper/mtggplot | R | false | true | 473 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mtg_pals.R
\docType{data}
\name{mtg_colors}
\alias{mtg_colors}
\title{this code is heavily based on
https://drsimonj.svbtle.com/creating-corporate-colour-palettes-for-ggplot2}
\format{An object of class \code{character} of length 7.}
\usage{
mtg_colors
}
\description{
this code is heavily based on
https://drsimonj.svbtle.com/creating-corporate-colour-palettes-for-ggplot2
}
\keyword{datasets}
|
#Plots 3 line charts
plot4<- function()
{
#load the data
myData <- read.table("household_power_consumption.txt", head=TRUE, sep = ";")
#Combine Date and Time
myData$Time<-paste(myData$Date,myData$Time, sep = " ")
#convert to date the Date column
myData$Date <- as.Date(myData$Date, format="%d/%m/%Y")
myData$Time <- strptime(myData$Time, format="%d/%m/%Y %H:%M:%S")
#convert Sub_metering_1,Sub_metering_2,Sub_metering_3 to numeric
myData$Global_active_power <- as.numeric(as.character(myData$Global_active_power))
myData$Sub_metering_1 <- as.numeric(as.character(myData$Sub_metering_1))
myData$Sub_metering_2 <- as.numeric(as.character(myData$Sub_metering_2))
myData$Sub_metering_3 <- as.numeric(as.character(myData$Sub_metering_3))
myData$Voltage <- as.numeric(as.character(myData$Voltage))
myData$Global_reactive_power <- as.numeric(as.character(myData$Global_reactive_power))
#subset data
startDay<-as.Date(c("2007-02-01"), format="%Y-%m-%d")
endDay<-as.Date(c("2007-02-02"), format="%Y-%m-%d")
subsetOfData<-subset(myData,myData$Date>=startDay & myData$Date<=endDay)
#create the histogram
#PNG image
png("plot4.png",width = 480, height = 480, bg="transparent")
attach(mtcars)
par(mfrow=c(2,2))
plot(subsetOfData$Time,subsetOfData$Global_active_power, type='l',
ylab = "Global Active Power (Kilowatts)", xlab = "")
plot(subsetOfData$Time,subsetOfData$Voltage, type='l',
ylab = "Voltage", xlab = "datetime")
plot(subsetOfData$Time, subsetOfData$Sub_metering_1, type='l',col="black",
xlab = "", ylab="Energy Sub metering")
lines(subsetOfData$Time, subsetOfData$Sub_metering_2, type='l',col="red",
xlab = "", ylab="")
lines(subsetOfData$Time, subsetOfData$Sub_metering_3, type='l',col="blue",
xlab = "", ylab="")
legend("topright",pch=NA, col=c("black","blue","red"), legend=c("Sub_metering_1",
"Sub_metering_2",
"Sub_metering_3"),
lwd=2, xpd = TRUE, bty ="n")
plot(subsetOfData$Time,subsetOfData$Global_reactive_power, type='l',
ylab = "Voltage", xlab = "datetime")
#close
dev.off()
} | /plot4.R | no_license | bilklo/ExData_Plotting1 | R | false | false | 2,280 | r | #Plots 3 line charts
plot4<- function()
{
#load the data
myData <- read.table("household_power_consumption.txt", head=TRUE, sep = ";")
#Combine Date and Time
myData$Time<-paste(myData$Date,myData$Time, sep = " ")
#convert to date the Date column
myData$Date <- as.Date(myData$Date, format="%d/%m/%Y")
myData$Time <- strptime(myData$Time, format="%d/%m/%Y %H:%M:%S")
#convert Sub_metering_1,Sub_metering_2,Sub_metering_3 to numeric
myData$Global_active_power <- as.numeric(as.character(myData$Global_active_power))
myData$Sub_metering_1 <- as.numeric(as.character(myData$Sub_metering_1))
myData$Sub_metering_2 <- as.numeric(as.character(myData$Sub_metering_2))
myData$Sub_metering_3 <- as.numeric(as.character(myData$Sub_metering_3))
myData$Voltage <- as.numeric(as.character(myData$Voltage))
myData$Global_reactive_power <- as.numeric(as.character(myData$Global_reactive_power))
#subset data
startDay<-as.Date(c("2007-02-01"), format="%Y-%m-%d")
endDay<-as.Date(c("2007-02-02"), format="%Y-%m-%d")
subsetOfData<-subset(myData,myData$Date>=startDay & myData$Date<=endDay)
#create the histogram
#PNG image
png("plot4.png",width = 480, height = 480, bg="transparent")
attach(mtcars)
par(mfrow=c(2,2))
plot(subsetOfData$Time,subsetOfData$Global_active_power, type='l',
ylab = "Global Active Power (Kilowatts)", xlab = "")
plot(subsetOfData$Time,subsetOfData$Voltage, type='l',
ylab = "Voltage", xlab = "datetime")
plot(subsetOfData$Time, subsetOfData$Sub_metering_1, type='l',col="black",
xlab = "", ylab="Energy Sub metering")
lines(subsetOfData$Time, subsetOfData$Sub_metering_2, type='l',col="red",
xlab = "", ylab="")
lines(subsetOfData$Time, subsetOfData$Sub_metering_3, type='l',col="blue",
xlab = "", ylab="")
legend("topright",pch=NA, col=c("black","blue","red"), legend=c("Sub_metering_1",
"Sub_metering_2",
"Sub_metering_3"),
lwd=2, xpd = TRUE, bty ="n")
plot(subsetOfData$Time,subsetOfData$Global_reactive_power, type='l',
ylab = "Voltage", xlab = "datetime")
#close
dev.off()
} |
######################################
#
# COVID Monitoring
# Case Reporting w/ Delay
#
# County Model
#
# 07/28/21
#
######################################
library(nimble)
library(coda)
set.seed(576476)
#####################################
#Functions
#####################################
# Beta-Binomial distribution functions.
dbetabin=nimbleFunction(run=function(x=double(0),mu=double(0),phi=double(0),size=double(0),log=integer(0)){
returnType(double(0))
if(x>=0&x<=size){
return(lgamma(size+1)+lgamma(x+mu*phi)+lgamma(size-x+(1-mu)*phi)+lgamma(phi)-
lgamma(size+phi)-lgamma(mu*phi)-lgamma((1-mu)*phi)-lgamma(size-x+1)-lgamma(x+1))
}else{
return(-Inf)
}
})
rbetabin=nimbleFunction(run=function(n=integer(0),mu=double(0),phi=double(0),size=double(0)){
pi=rbeta(1,mu*phi,(1-mu)*phi)
returnType(double(0))
return(rbinom(1,size,pi))
})
#Replace MONTH with desired month
load('data_MONTH.Rda')
#Rda file contains all files needed to execute MCMC in nimble
#Model Code contains the model specification
#Constants contains:
#N - 91 (90 day window + 1 for next day forecast
#C - 89 (89 days with partial data, data for day 90 ignored and forecasted)
#D - 30 (maximum reporting delay)
#L - 88 (number of counties)
#num - number of neighbors for each county
#adj - vector defining adjacency
#Model data contains:
#z - reporting matrix (dimensions - onset date, reporting delay, county)
#y - case time series (dimensions - onset date, county)
#off - log(population) for each county
#X - design matrix for day of the week
#Inits contains initial values for the MCMC
#Build the model.
model=nimbleModel(model_code,constants,model_data,inits)
#Compile the model
compiled_model=compileNimble(model,resetFunctions = TRUE)
#Set monitors
mcmc_conf=configureMCMC(model,monitors=c('lambda','alpha','delta','y','tau.dc','d0','d.c'),useConjugacy = TRUE)
#Build MCMC
mcmc<-buildMCMC(mcmc_conf)
#Compile MCMC
compiled_mcmc<-compileNimble(mcmc, project = model)
#Run the model
samples=runMCMC(compiled_mcmc,inits=inits,
nchains = 1, nburnin=15000,niter = 30000,samplesAsCodaMCMC = TRUE,thin=10,
summary = FALSE, WAIC = FALSE,progressBar=TRUE)
| /model_code.R | no_license | kline273/OH-COVID-nowcast | R | false | false | 2,304 | r | ######################################
#
# COVID Monitoring
# Case Reporting w/ Delay
#
# County Model
#
# 07/28/21
#
######################################
library(nimble)
library(coda)
set.seed(576476)
#####################################
#Functions
#####################################
# Beta-Binomial distribution functions.
dbetabin=nimbleFunction(run=function(x=double(0),mu=double(0),phi=double(0),size=double(0),log=integer(0)){
returnType(double(0))
if(x>=0&x<=size){
return(lgamma(size+1)+lgamma(x+mu*phi)+lgamma(size-x+(1-mu)*phi)+lgamma(phi)-
lgamma(size+phi)-lgamma(mu*phi)-lgamma((1-mu)*phi)-lgamma(size-x+1)-lgamma(x+1))
}else{
return(-Inf)
}
})
rbetabin=nimbleFunction(run=function(n=integer(0),mu=double(0),phi=double(0),size=double(0)){
pi=rbeta(1,mu*phi,(1-mu)*phi)
returnType(double(0))
return(rbinom(1,size,pi))
})
#Replace MONTH with desired month
load('data_MONTH.Rda')
#Rda file contains all files needed to execute MCMC in nimble
#Model Code contains the model specification
#Constants contains:
#N - 91 (90 day window + 1 for next day forecast
#C - 89 (89 days with partial data, data for day 90 ignored and forecasted)
#D - 30 (maximum reporting delay)
#L - 88 (number of counties)
#num - number of neighbors for each county
#adj - vector defining adjacency
#Model data contains:
#z - reporting matrix (dimensions - onset date, reporting delay, county)
#y - case time series (dimensions - onset date, county)
#off - log(population) for each county
#X - design matrix for day of the week
#Inits contains initial values for the MCMC
#Build the model.
model=nimbleModel(model_code,constants,model_data,inits)
#Compile the model
compiled_model=compileNimble(model,resetFunctions = TRUE)
#Set monitors
mcmc_conf=configureMCMC(model,monitors=c('lambda','alpha','delta','y','tau.dc','d0','d.c'),useConjugacy = TRUE)
#Build MCMC
mcmc<-buildMCMC(mcmc_conf)
#Compile MCMC
compiled_mcmc<-compileNimble(mcmc, project = model)
#Run the model
samples=runMCMC(compiled_mcmc,inits=inits,
nchains = 1, nburnin=15000,niter = 30000,samplesAsCodaMCMC = TRUE,thin=10,
summary = FALSE, WAIC = FALSE,progressBar=TRUE)
|
index_to_xy <- function(m, i) {
rows <- dim(m)[1]
cols <- dim(m)[2]
list(
x = ifelse(i %% rows == 0, rows, i %% rows),
y = ceiling(i / rows)
)
}
#' Compute the variance of the phase-type distributions
#' @examples
#' var_phtype(pi5, QL3(4 ,onlyTrans = TRUE))
var_phtype <- function(prob, rates) {
mphtype(2, prob, rates) - mphtype(1, prob, rates)^2
}
hazard_phtype <- function(t, prob, rates) {
dphtype(t, prob, rates) / (1 - pphtype(t, prob, rates))
}
harm <- function(n) {
if(length(n) > 1) {
sapply(n, harm)
} else {
sum(1/seq_len(n))
}
}
h <- function(n) {
if(length(n) > 1) {
# TODO: Need to do this smarter if n is a vector
# Can compute it smarter by computing first then adding/substracting
sapply(n, h)
} else {
is_even = n %% 2 == 0
k <- floor(n / 2)
if(is_even) harm(n) - harm(n - k)
else harm(n - 1) - harm(n - k - 1)
}
} | /interparsys/R/utils.R | no_license | stefaneng/IPS-Thesis | R | false | false | 904 | r | index_to_xy <- function(m, i) {
rows <- dim(m)[1]
cols <- dim(m)[2]
list(
x = ifelse(i %% rows == 0, rows, i %% rows),
y = ceiling(i / rows)
)
}
#' Compute the variance of the phase-type distributions
#' @examples
#' var_phtype(pi5, QL3(4 ,onlyTrans = TRUE))
var_phtype <- function(prob, rates) {
mphtype(2, prob, rates) - mphtype(1, prob, rates)^2
}
hazard_phtype <- function(t, prob, rates) {
dphtype(t, prob, rates) / (1 - pphtype(t, prob, rates))
}
harm <- function(n) {
if(length(n) > 1) {
sapply(n, harm)
} else {
sum(1/seq_len(n))
}
}
h <- function(n) {
if(length(n) > 1) {
# TODO: Need to do this smarter if n is a vector
# Can compute it smarter by computing first then adding/substracting
sapply(n, h)
} else {
is_even = n %% 2 == 0
k <- floor(n / 2)
if(is_even) harm(n) - harm(n - k)
else harm(n - 1) - harm(n - k - 1)
}
} |
library(tidyverse)
# read data
survey <- read_csv("./data/20180222_surveys.csv")
# remove records without weight or hindfoot length
survey <- survey %>%
filter(!is.na(weight) & !is.na(hindfoot_length) & !is.na(sex))
## CHALLENGE 1
# Plot the hindfoot_length as function of weight using points
# Plot the weight as function of species using boxplot
# Replace the box plot with a violin plot
# How many surveys per gender? Show it as bar plot
# How many surveys per year? Show it as bar plot
### CHALLENGE 2
# First plot
# - Use `sex` as color
# - Adjust the transparency (alpha) of the points to 0.5
# - Change the y label to "hindfoot length"
# - Add a title to the graph, e.g. "hindfoot length vs weight"
# - Use a logarithmic scale for the x-axis
# - Set points' colors to "red" for females and "yellow" for males
ggplot(data = survey, mapping = aes(x = weight,
y = hindfoot_length)) +
geom_point()
# Second plot
# - Split bars into `sex`
# - Arrange bars for `F` and `M` side by side
# - Adjust the transparency of the bar to 0.5
# - Change the y label to "number of surveys"
# - Add a title to the graph, e.g. "Number of surveys per year"
# - Flip x and y-axis
ggplot(data = survey, mapping = aes(x = year)) +
geom_bar()
## CHALLENGE 3
# Read iNaturalist obseration in and around Brussels from 2019
inat_bxl <- read_tsv("./data/20191126_BXL_iNaturalist_top20.csv",
na = "")
# Plot the number of observations per species and year.
# Make the best plot ever!
| /src/20191126_challenges.R | permissive | Yasmine-Verzelen/coding-club | R | false | false | 1,548 | r | library(tidyverse)
# read data
survey <- read_csv("./data/20180222_surveys.csv")
# remove records without weight or hindfoot length
survey <- survey %>%
filter(!is.na(weight) & !is.na(hindfoot_length) & !is.na(sex))
## CHALLENGE 1
# Plot the hindfoot_length as function of weight using points
# Plot the weight as function of species using boxplot
# Replace the box plot with a violin plot
# How many surveys per gender? Show it as bar plot
# How many surveys per year? Show it as bar plot
### CHALLENGE 2
# First plot
# - Use `sex` as color
# - Adjust the transparency (alpha) of the points to 0.5
# - Change the y label to "hindfoot length"
# - Add a title to the graph, e.g. "hindfoot length vs weight"
# - Use a logarithmic scale for the x-axis
# - Set points' colors to "red" for females and "yellow" for males
ggplot(data = survey, mapping = aes(x = weight,
y = hindfoot_length)) +
geom_point()
# Second plot
# - Split bars into `sex`
# - Arrange bars for `F` and `M` side by side
# - Adjust the transparency of the bar to 0.5
# - Change the y label to "number of surveys"
# - Add a title to the graph, e.g. "Number of surveys per year"
# - Flip x and y-axis
ggplot(data = survey, mapping = aes(x = year)) +
geom_bar()
## CHALLENGE 3
# Read iNaturalist obseration in and around Brussels from 2019
inat_bxl <- read_tsv("./data/20191126_BXL_iNaturalist_top20.csv",
na = "")
# Plot the number of observations per species and year.
# Make the best plot ever!
|
library(SpatialVS)
### Name: small.test.dat
### Title: A small dataset for fast testing of functions
### Aliases: small.test.dat
### Keywords: dataset
### ** Examples
data("small.test")
#Here is a toy example for creating a data object that can be used for
#generating dat.obj for SpatialVS function
n=20
#simulate counts data
y=rpois(n=n, lambda=1)
#simulate covariate matrix
x1=rnorm(n)
x2=rnorm(n)
X=cbind(1, x1, x2)
#compute distance matrix from some simulated locations
loc_x=runif(n)
loc_y=runif(n)
dist=matrix(0,n, n)
for(i in 1:n)
{
for(j in 1:n)
{
dist[i,j]=sqrt((loc_x[i]-loc_x[j])^2+(loc_y[i]-loc_y[j])^2)
}
}
#assume offset is all zero
offset=rep(0, n)
#assemble the data object for SpatialVS
dat.obj=list(y=y, X=X, dist=dist, offset=offset)
| /data/genthat_extracted_code/SpatialVS/examples/small.test.dat.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 777 | r | library(SpatialVS)
### Name: small.test.dat
### Title: A small dataset for fast testing of functions
### Aliases: small.test.dat
### Keywords: dataset
### ** Examples
data("small.test")
#Here is a toy example for creating a data object that can be used for
#generating dat.obj for SpatialVS function
n=20
#simulate counts data
y=rpois(n=n, lambda=1)
#simulate covariate matrix
x1=rnorm(n)
x2=rnorm(n)
X=cbind(1, x1, x2)
#compute distance matrix from some simulated locations
loc_x=runif(n)
loc_y=runif(n)
dist=matrix(0,n, n)
for(i in 1:n)
{
for(j in 1:n)
{
dist[i,j]=sqrt((loc_x[i]-loc_x[j])^2+(loc_y[i]-loc_y[j])^2)
}
}
#assume offset is all zero
offset=rep(0, n)
#assemble the data object for SpatialVS
dat.obj=list(y=y, X=X, dist=dist, offset=offset)
|
setMethod('isTerminator', 'Instruction',
function(x, ...)
.Call('R_Instruction_isTerminator', x, PACKAGE = 'Rllvm'))
setMethod('isBinaryOp', 'Instruction',
function(x, ...)
.Call('R_Instruction_isBinaryOp', x, PACKAGE = 'Rllvm'))
setMethod('isShift', 'Instruction',
function(x, ...)
.Call('R_Instruction_isShift', x, PACKAGE = 'Rllvm'))
setMethod('isCast', 'Instruction',
function(x, ...)
.Call('R_Instruction_isCast', x, PACKAGE = 'Rllvm'))
setMethod('isLogicalShift', 'Instruction',
function(x, ...)
.Call('R_Instruction_isLogicalShift', x, PACKAGE = 'Rllvm'))
setMethod('isArithmeticShift', 'Instruction',
function(x, ...)
.Call('R_Instruction_isArithmeticShift', x, PACKAGE = 'Rllvm'))
setMethod('hasMetadata', 'Instruction',
function(x, ...)
.Call('R_Instruction_hasMetadata', x, PACKAGE = 'Rllvm'))
setMethod('hasMetadataOtherThanDebugLoc', 'Instruction',
function(x, ...)
.Call('R_Instruction_hasMetadataOtherThanDebugLoc', x, PACKAGE = 'Rllvm'))
setMethod('isAssociative', 'Instruction',
function(x, ...)
.Call('R_Instruction_isAssociative', x, PACKAGE = 'Rllvm'))
setMethod('isCommutative', 'Instruction',
function(x, ...)
.Call('R_Instruction_isCommutative', x, PACKAGE = 'Rllvm'))
setMethod('mayWriteToMemory', 'Instruction',
function(x, ...)
.Call('R_Instruction_mayWriteToMemory', x, PACKAGE = 'Rllvm'))
setMethod('mayReadFromMemory', 'Instruction',
function(x, ...)
.Call('R_Instruction_mayReadFromMemory', x, PACKAGE = 'Rllvm'))
setMethod('mayThrow', 'Instruction',
function(x, ...)
.Call('R_Instruction_mayThrow', x, PACKAGE = 'Rllvm'))
setMethod('mayHaveSideEffects', 'Instruction',
function(x, ...)
.Call('R_Instruction_mayHaveSideEffects', x, PACKAGE = 'Rllvm'))
setMethod('isSafeToSpeculativelyExecute', 'Instruction',
function(x, ...)
.Call('R_Instruction_isSafeToSpeculativelyExecute', x, PACKAGE = 'Rllvm'))
insertBefore =
function(inst, to)
{
if(!isNativeNull(getParent(inst)))
eraseFromParent(inst, FALSE)
.Call("R_Instruction_insertBefore", as(inst, "Instruction"), as(to, "Instruction"))
}
insertAfter =
function(inst, to)
{
if(!isNativeNull(getParent(inst)))
eraseFromParent(inst, FALSE)
.Call("R_Instruction_insertAfter", as(inst, "Instruction"), as(to, "Instruction"))
}
insertAtEnd =
function(inst, block)
{
i = getBlockInstructions(block)
insertAfter(inst, i[[length(i)]])
}
moveBefore =
function(inst, to)
{
.Call("R_Instruction_moveBefore", as(inst, "Instruction"), as(to, "Instruction"))
}
newAllocaInst = function(type) {
.Call("R_AllocaInst_new", as(type, "Type"))
}
setGeneric("removeFromParent",
function(inst, ...)
standardGeneric("removeFromParent"))
setMethod("removeFromParent", "Instruction",
function(inst, ...)
.Call("R_Instruction_eraseFromParent", inst, FALSE))
setMethod("removeFromParent", "BasicBlock",
function(inst, ...)
.Call("R_BasicBlock_eraseFromParent", inst, FALSE))
setMethod("eraseFromParent", "Instruction",
function(x, delete = TRUE, ...)
.Call("R_Instruction_eraseFromParent", x, as(delete, "logical")))
| /R/instruction.R | no_license | doktorschiwago/Rllvm2 | R | false | false | 3,694 | r | setMethod('isTerminator', 'Instruction',
function(x, ...)
.Call('R_Instruction_isTerminator', x, PACKAGE = 'Rllvm'))
setMethod('isBinaryOp', 'Instruction',
function(x, ...)
.Call('R_Instruction_isBinaryOp', x, PACKAGE = 'Rllvm'))
setMethod('isShift', 'Instruction',
function(x, ...)
.Call('R_Instruction_isShift', x, PACKAGE = 'Rllvm'))
setMethod('isCast', 'Instruction',
function(x, ...)
.Call('R_Instruction_isCast', x, PACKAGE = 'Rllvm'))
setMethod('isLogicalShift', 'Instruction',
function(x, ...)
.Call('R_Instruction_isLogicalShift', x, PACKAGE = 'Rllvm'))
setMethod('isArithmeticShift', 'Instruction',
function(x, ...)
.Call('R_Instruction_isArithmeticShift', x, PACKAGE = 'Rllvm'))
setMethod('hasMetadata', 'Instruction',
function(x, ...)
.Call('R_Instruction_hasMetadata', x, PACKAGE = 'Rllvm'))
setMethod('hasMetadataOtherThanDebugLoc', 'Instruction',
function(x, ...)
.Call('R_Instruction_hasMetadataOtherThanDebugLoc', x, PACKAGE = 'Rllvm'))
setMethod('isAssociative', 'Instruction',
function(x, ...)
.Call('R_Instruction_isAssociative', x, PACKAGE = 'Rllvm'))
setMethod('isCommutative', 'Instruction',
function(x, ...)
.Call('R_Instruction_isCommutative', x, PACKAGE = 'Rllvm'))
setMethod('mayWriteToMemory', 'Instruction',
function(x, ...)
.Call('R_Instruction_mayWriteToMemory', x, PACKAGE = 'Rllvm'))
setMethod('mayReadFromMemory', 'Instruction',
function(x, ...)
.Call('R_Instruction_mayReadFromMemory', x, PACKAGE = 'Rllvm'))
setMethod('mayThrow', 'Instruction',
function(x, ...)
.Call('R_Instruction_mayThrow', x, PACKAGE = 'Rllvm'))
setMethod('mayHaveSideEffects', 'Instruction',
function(x, ...)
.Call('R_Instruction_mayHaveSideEffects', x, PACKAGE = 'Rllvm'))
setMethod('isSafeToSpeculativelyExecute', 'Instruction',
function(x, ...)
.Call('R_Instruction_isSafeToSpeculativelyExecute', x, PACKAGE = 'Rllvm'))
insertBefore =
function(inst, to)
{
if(!isNativeNull(getParent(inst)))
eraseFromParent(inst, FALSE)
.Call("R_Instruction_insertBefore", as(inst, "Instruction"), as(to, "Instruction"))
}
insertAfter =
function(inst, to)
{
if(!isNativeNull(getParent(inst)))
eraseFromParent(inst, FALSE)
.Call("R_Instruction_insertAfter", as(inst, "Instruction"), as(to, "Instruction"))
}
insertAtEnd =
function(inst, block)
{
i = getBlockInstructions(block)
insertAfter(inst, i[[length(i)]])
}
moveBefore =
function(inst, to)
{
.Call("R_Instruction_moveBefore", as(inst, "Instruction"), as(to, "Instruction"))
}
newAllocaInst = function(type) {
.Call("R_AllocaInst_new", as(type, "Type"))
}
setGeneric("removeFromParent",
function(inst, ...)
standardGeneric("removeFromParent"))
setMethod("removeFromParent", "Instruction",
function(inst, ...)
.Call("R_Instruction_eraseFromParent", inst, FALSE))
setMethod("removeFromParent", "BasicBlock",
function(inst, ...)
.Call("R_BasicBlock_eraseFromParent", inst, FALSE))
setMethod("eraseFromParent", "Instruction",
function(x, delete = TRUE, ...)
.Call("R_Instruction_eraseFromParent", x, as(delete, "logical")))
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lsei.R
\name{lsei}
\alias{lsei}
\alias{lsi}
\alias{ldp}
\alias{qp}
\title{Least Squares and Quadratic Programming under Equality and Inequality Constraints}
\usage{
lsei(a, b, c=NULL, d=NULL, e=NULL, f=NULL, lower=-Inf, upper=Inf)
lsi(a, b, e=NULL, f=NULL, lower=-Inf, upper=Inf)
ldp(e, f)
qp(q, p, c=NULL, d=NULL, e=NULL, f=NULL, lower=-Inf, upper=Inf, tol=1e-15)
}
\arguments{
\item{a}{Design matrix.}
\item{b}{Response vector.}
\item{c}{Matrix of numeric coefficients on the left-hand sides of equality
constraints. If it is NULL, \code{c} and \code{d} are ignored.}
\item{d}{Vector of numeric values on the right-hand sides of equality
constraints.}
\item{e}{Matrix of numeric coefficients on the left-hand sides of inequality
constraints. If it is NULL, \code{e} and \code{f} are ignored.}
\item{f}{Vector of numeric values on the right-hand sides of inequality
constraints.}
\item{lower, upper}{Bounds on the solutions, as a way to specify such simple
inequality constraints.}
\item{q}{Matrix of numeric values for the quadratic term of a quadratic
programming problem.}
\item{p}{Vector of numeric values for the linear term of a quadratic
programming problem.}
\item{tol}{Tolerance, for calculating pseudo-rank in \code{qp}.}
}
\value{
A vector of the solution values
}
\description{
These functions can be used for solving least squares or quadratic
programming problems under general equality and/or inequality
constraints.
}
\details{
The \code{lsei} function solves a least squares problem under both equality
and inequality constraints. It is an implementation of the LSEI algorithm
described in Lawson and Hanson (1974, 1995).
The \code{lsi} function solves a least squares problem under inequality
constraints. It is an implementation of the LSI algorithm described in
Lawson and Hanson (1974, 1995).
The \code{ldp} function solves a least distance programming problem under
inequality constraints. It is an R wrapper of the LDP function which is in
Fortran, as described in Lawson and Hanson (1974, 1995).
The \code{qp} function solves a quadratic programming problem, by
transforming the problem into a least squares one under the same equality
and inequality constraints, which is then solved by function \code{lsei}.
The NNLS and LDP Fortran implementations used internally is downloaded from
\url{http://www.netlib.org/lawson-hanson/}.
Given matrices \code{a}, \code{c} and \code{e}, and vectors \code{b},
\code{d} and \code{f}, function \code{lsei} solves the least squares problem
under both equality and inequality constraints:
\deqn{\mathrm{minimize\ \ } || a x - b ||,}{minimize || a x - b ||,}
\deqn{\mathrm{subject\ to\ \ } c x = d, e x \ge f.}{subject to c x = d, e x
>= f.}
Function \code{lsi} solves the least squares problem under inequality
constraints:
\deqn{\mathrm{minimize\ \ } || a x - b ||,}{minimize || a x - b ||,}
\deqn{\mathrm{\ \ \ subject\ to\ \ } e x \ge f.}{subject to e x >= f.}
Function \code{ldp} solves the least distance programming problem under
inequality constraints:
\deqn{\mathrm{minimize\ \ } || x ||,}{minimize || x ||,} \deqn{\mathrm{\ \ \
subject\ to\ \ } e x \ge f.}{subject to e x >= f.}
Function \code{qp} solves the quadratic programming problem:
\deqn{\mathrm{minimize\ \ } \frac12 x^T q x + p^T x,}{minimize 0.5 x^T q x +
p^T x,} \deqn{\mathrm{subject\ to\ \ } c x = d, e x \ge f.}{subject to c x =
d, e x >= f.}
}
\examples{
beta = c(rnorm(2), 1)
beta[beta<0] = 0
beta = beta / sum(beta)
a = matrix(rnorm(18), ncol=3)
b = a \%*\% beta + rnorm(3,sd=.1)
c = t(rep(1, 3))
d = 1
e = diag(1,3)
f = rep(0,3)
lsei(a, b) # under no constraint
lsei(a, b, c, d) # under eq. constraints
lsei(a, b, e=e, f=f) # under ineq. constraints
lsei(a, b, c, d, e, f) # under eq. and ineq. constraints
lsei(a, b, rep(1,3), 1, lower=0) # same solution
q = crossprod(a)
p = -drop(crossprod(b, a))
qp(q, p, rep(1,3), 1, lower=0) # same solution
## Example from Lawson and Hanson (1974), p.140
a = cbind(c(.4302,.6246), c(.3516,.3384))
b = c(.6593, .9666)
c = c(.4087, .1593)
d = .1376
lsei(a, b, c, d) # Solution: -1.177499 3.884770
## Example from Lawson and Hanson (1974), p.170
a = cbind(c(.25,.5,.5,.8),rep(1,4))
b = c(.5,.6,.7,1.2)
e = cbind(c(1,0,-1),c(0,1,-1))
f = c(0,0,-1)
lsi(a, b, e, f) # Solution: 0.6213152 0.3786848
## Example from Lawson and Hanson (1974), p.171:
e = cbind(c(-.207,-.392,.599), c(2.558, -1.351, -1.206))
f = c(-1.3,-.084,.384)
ldp(e, f) # Solution: 0.1268538 -0.2554018
}
\references{
Lawson and Hanson (1974, 1995). Solving least squares problems.
Englewood Cliffs, N.J., Prentice-Hall.
}
\seealso{
\code{\link{nnls}},\code{\link{hfti}}.
}
\author{
Yong Wang <yongwang@auckland.ac.nz>
}
\keyword{algebra}
\keyword{array}
| /man/lsei.Rd | no_license | cran/lsei | R | false | true | 4,887 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lsei.R
\name{lsei}
\alias{lsei}
\alias{lsi}
\alias{ldp}
\alias{qp}
\title{Least Squares and Quadratic Programming under Equality and Inequality Constraints}
\usage{
lsei(a, b, c=NULL, d=NULL, e=NULL, f=NULL, lower=-Inf, upper=Inf)
lsi(a, b, e=NULL, f=NULL, lower=-Inf, upper=Inf)
ldp(e, f)
qp(q, p, c=NULL, d=NULL, e=NULL, f=NULL, lower=-Inf, upper=Inf, tol=1e-15)
}
\arguments{
\item{a}{Design matrix.}
\item{b}{Response vector.}
\item{c}{Matrix of numeric coefficients on the left-hand sides of equality
constraints. If it is NULL, \code{c} and \code{d} are ignored.}
\item{d}{Vector of numeric values on the right-hand sides of equality
constraints.}
\item{e}{Matrix of numeric coefficients on the left-hand sides of inequality
constraints. If it is NULL, \code{e} and \code{f} are ignored.}
\item{f}{Vector of numeric values on the right-hand sides of inequality
constraints.}
\item{lower, upper}{Bounds on the solutions, as a way to specify such simple
inequality constraints.}
\item{q}{Matrix of numeric values for the quadratic term of a quadratic
programming problem.}
\item{p}{Vector of numeric values for the linear term of a quadratic
programming problem.}
\item{tol}{Tolerance, for calculating pseudo-rank in \code{qp}.}
}
\value{
A vector of the solution values
}
\description{
These functions can be used for solving least squares or quadratic
programming problems under general equality and/or inequality
constraints.
}
\details{
The \code{lsei} function solves a least squares problem under both equality
and inequality constraints. It is an implementation of the LSEI algorithm
described in Lawson and Hanson (1974, 1995).
The \code{lsi} function solves a least squares problem under inequality
constraints. It is an implementation of the LSI algorithm described in
Lawson and Hanson (1974, 1995).
The \code{ldp} function solves a least distance programming problem under
inequality constraints. It is an R wrapper of the LDP function which is in
Fortran, as described in Lawson and Hanson (1974, 1995).
The \code{qp} function solves a quadratic programming problem, by
transforming the problem into a least squares one under the same equality
and inequality constraints, which is then solved by function \code{lsei}.
The NNLS and LDP Fortran implementations used internally is downloaded from
\url{http://www.netlib.org/lawson-hanson/}.
Given matrices \code{a}, \code{c} and \code{e}, and vectors \code{b},
\code{d} and \code{f}, function \code{lsei} solves the least squares problem
under both equality and inequality constraints:
\deqn{\mathrm{minimize\ \ } || a x - b ||,}{minimize || a x - b ||,}
\deqn{\mathrm{subject\ to\ \ } c x = d, e x \ge f.}{subject to c x = d, e x
>= f.}
Function \code{lsi} solves the least squares problem under inequality
constraints:
\deqn{\mathrm{minimize\ \ } || a x - b ||,}{minimize || a x - b ||,}
\deqn{\mathrm{\ \ \ subject\ to\ \ } e x \ge f.}{subject to e x >= f.}
Function \code{ldp} solves the least distance programming problem under
inequality constraints:
\deqn{\mathrm{minimize\ \ } || x ||,}{minimize || x ||,} \deqn{\mathrm{\ \ \
subject\ to\ \ } e x \ge f.}{subject to e x >= f.}
Function \code{qp} solves the quadratic programming problem:
\deqn{\mathrm{minimize\ \ } \frac12 x^T q x + p^T x,}{minimize 0.5 x^T q x +
p^T x,} \deqn{\mathrm{subject\ to\ \ } c x = d, e x \ge f.}{subject to c x =
d, e x >= f.}
}
\examples{
beta = c(rnorm(2), 1)
beta[beta<0] = 0
beta = beta / sum(beta)
a = matrix(rnorm(18), ncol=3)
b = a \%*\% beta + rnorm(3,sd=.1)
c = t(rep(1, 3))
d = 1
e = diag(1,3)
f = rep(0,3)
lsei(a, b) # under no constraint
lsei(a, b, c, d) # under eq. constraints
lsei(a, b, e=e, f=f) # under ineq. constraints
lsei(a, b, c, d, e, f) # under eq. and ineq. constraints
lsei(a, b, rep(1,3), 1, lower=0) # same solution
q = crossprod(a)
p = -drop(crossprod(b, a))
qp(q, p, rep(1,3), 1, lower=0) # same solution
## Example from Lawson and Hanson (1974), p.140
a = cbind(c(.4302,.6246), c(.3516,.3384))
b = c(.6593, .9666)
c = c(.4087, .1593)
d = .1376
lsei(a, b, c, d) # Solution: -1.177499 3.884770
## Example from Lawson and Hanson (1974), p.170
a = cbind(c(.25,.5,.5,.8),rep(1,4))
b = c(.5,.6,.7,1.2)
e = cbind(c(1,0,-1),c(0,1,-1))
f = c(0,0,-1)
lsi(a, b, e, f) # Solution: 0.6213152 0.3786848
## Example from Lawson and Hanson (1974), p.171:
e = cbind(c(-.207,-.392,.599), c(2.558, -1.351, -1.206))
f = c(-1.3,-.084,.384)
ldp(e, f) # Solution: 0.1268538 -0.2554018
}
\references{
Lawson and Hanson (1974, 1995). Solving least squares problems.
Englewood Cliffs, N.J., Prentice-Hall.
}
\seealso{
\code{\link{nnls}},\code{\link{hfti}}.
}
\author{
Yong Wang <yongwang@auckland.ac.nz>
}
\keyword{algebra}
\keyword{array}
|
Enviro = readRDS("Data/Environment/EnvironmentEstimates.rds")
Climate = readRDS("Data/Climate/CorrectedClimateEstimates.rds")
Governance = readRDS("Data/WG/GovernanceFormatted.rds")
Traits = readRDS("Data/Traits/TraitsFormatted.rds")[[1]] #Using best estimate of phylogeny
PA = readRDS("Data/ProtectedAreas/ProtectedAreaEstimates.rds")
LUH = Enviro[["LUH"]]
LUH[LUH == "NaN"] = NA
PD = Enviro[["PD"]]
Trends$Quantitative_method = as.character(Trends$Quantitative_method)
Trends$Quantitative_method = ifelse(is.na(Trends$Quantitative_method), "Manual calculation required",Trends$Quantitative_method)
Climate$PET_tho[is.infinite(Climate$PET_tho)] = NA
Lags = c(0,5,10)
TrendsList = list()
for(b in c(1:3)){
DataFrameComb = NULL
for(a in 1:nrow(Trends)){
print(a)
StudyStart = Trends$Study_year_start[a]
StudyEnd = Trends$Study_year_end[a]
Country = Trends$alpha.3[a]
Spec = Trends$Species[a]
ID_ = Trends$ID[a]
UID = Trends$UniqueID[a]
PopLat = Trends$Latitude[a]
Lag = Lags[b]
#Assign envioirnmental rasters
#Population density
PDtmp = PD[which(
(PD$Year <= (StudyEnd)) &
(PD$Year >= (StudyStart - Lag)) &
(PD$ID == ID_)),]
PDChange = unname((exp(coef(lm(Value ~ Year, data = PDtmp))[2]) - 1)*100)
PDEnd = PD[which(
(PD$Year == (StudyEnd)) &
(PD$ID == ID_)),3]
DF = subset(LUH, ID == ID_)
PrimEnd = DF[which(
(DF$Year == (StudyEnd)) &
(DF$ID == ID_)),3]
Primtmp = DF[which(
(DF$Year <= (StudyEnd)) &
(DF$Year >= (StudyStart - Lag)) &
(DF$ID == ID_)),]
Primtmp = Primtmp[complete.cases(Primtmp),]
if(nrow(Primtmp) < 2){
PriC= NA
NatC = NA
AgC = NA
HumC = NA
} else {
PriC = unname((exp(coef(lm(log(Primary + 0.01) ~ Year, data = Primtmp))[2]) - 1)*100)
NatC = unname((exp(coef(lm(log(Nature + 0.01) ~ Year, data = Primtmp))[2]) - 1)*100)
AgC = unname((exp(coef(lm(log(Ag + 0.01) ~ Year, data = Primtmp))[2]) - 1)*100)
HumC = unname((exp(coef(lm(log(Human + 0.01) ~ Year, data = Primtmp))[2]) - 1)*100)
}
#Frequency of extreme-highs
PreIndTemp_mx_mean = mean(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_)),]$CRUTS_max, na.rm = T)
PreIndTemp_mx_sd = sd(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_)),]$CRUTS_max, na.rm = T)
PreIndTemp_mx_threshold = PreIndTemp_mx_mean + PreIndTemp_mx_sd*2
PreIndTemp_mx_freq = length(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_) &
Climate$CRUTS_max > PreIndTemp_mx_threshold),]$CRUTS_max)
PreIndTemp_mx_freq = ifelse(length(PreIndTemp_mx_freq) == 0, 0, PreIndTemp_mx_freq)
PreIndTemp_mx_freq = PreIndTemp_mx_freq/20
StudyTemp_mx_freq = length(Climate[which(
(Climate$Year >= (StudyStart - Lag) & Climate$Year <= StudyEnd) &
(Climate$ID == ID_) &
Climate$CRUTS_max > PreIndTemp_mx_threshold),]$CRUTS_max)
StudyTemp_mx_freq = ifelse(length(StudyTemp_mx_freq) == 0, 0, StudyTemp_mx_freq)
StudyTemp_mx_freq = StudyTemp_mx_freq/(StudyEnd - (StudyStart - Lag))
ExHeat = StudyTemp_mx_freq - PreIndTemp_mx_freq
StudySpei_fl_mean = mean(Climate[which(
(Climate$Year >= (StudyStart - Lag) & Climate$Year <= StudyEnd) &
(Climate$ID == ID_)),]$PET_tho, na.rm = T)
#Frequency of extreme-drought
PreIndSpei_dr_mean = mean(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_)),]$PET_tho, na.rm = T)
PreIndSpei_dr_sd = sd(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_)),]$PET_tho, na.rm = T)
PreIndSpei_dr_threshold = PreIndSpei_dr_mean - PreIndSpei_dr_sd*2
PreIndSpei_dr_freq = length(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_) &
Climate$MPET_tho < PreIndSpei_dr_threshold),]$PET_tho)
PreIndSpei_dr_freq = ifelse(length(PreIndSpei_dr_freq) == 0, 0, PreIndSpei_dr_freq)
PreIndSpei_dr_freq = PreIndSpei_dr_freq/20
StudySpei_dr_freq = length(Climate[which(
(Climate$Year >= (StudyStart - Lag) & Climate$Year <= StudyEnd) &
(Climate$ID == ID_) &
Climate$Mean < PreIndSpei_dr_threshold),]$PET_tho)
StudySpei_dr_freq = ifelse(length(StudySpei_dr_freq) == 0, 0, StudySpei_dr_freq)
StudySpei_dr_freq = StudySpei_dr_freq/(1+ StudyEnd - (StudyStart - Lag))
DroughtChange = StudySpei_dr_freq - PreIndSpei_dr_freq
#Assign governance
#HDI
HDI = Governance[which(
Governance$Year == StudyStart &
Governance$Code == Country),]$HDI_mean
HDI_var = Governance[which(
Governance$Year == StudyStart &
Governance$Code == Country),]$HDI_var
#Governance
Gov = Governance[which(
Governance$Year == StudyStart &
Governance$Code == Country),]$Gov_mean
Gov_var = Governance[which(
Governance$Year == StudyStart &
Governance$Code == Country),]$Gov_var
Govtmp = Governance[which(
(Governance$Year <= (StudyEnd)) &
(Governance$Year >= (StudyStart - Lag)) &
(Governance$Code == Country)),]
if(min(Govtmp$Gov_mean) < 0){
Govtmp$Gov_mean = Govtmp$Gov_mean + abs(min(Govtmp$Gov_mean))
} else {
}
HDI_c = unname((exp(coef(lm(log(HDI_mean + 0.01) ~ Year, data = Govtmp))[2]) - 1)*100)
Gov_c = unname((exp(coef(lm(log(Gov_mean + 0.01) ~ Year, data = Govtmp))[2]) - 1)*100)
#Conflict present
Conf = Governance[which(
Governance$Year > (StudyStart - Lag) &
Governance$Year < StudyEnd &
Governance$Code == Country),]
Conf = if(any(Conf$Conflicts == "Conflict")){
"Conflict"
} else {
"No conlict"
}
#Assign traits
#Longevity
MaxLon = Traits[which(
Traits$Species == Spec),]$Longevity_log10
MaxLon_var = Traits[which(
Traits$Species == Spec),]$Longevity_log10_Var
#Body mass
BodyMass = Traits[which(
Traits$Species == Spec),]$BodyMass_log10
BodyMass_var = Traits[which(
Traits$Species == Spec),]$BodyMass_log10_Var
#Reproduction rate
Reprod = Traits[which(
Traits$Species == Spec),]$ReprodRate_mean
Reprod_var = Traits[which(
Traits$Species == Spec),]$ReprodRate_var
#Reproduction rate
Gen = Traits[which(
Traits$Species == Spec),]$Gen_mean
Gen_var = Traits[which(
Traits$Species == Spec),]$Gen_var
#Reproduction rate
Gen2 = Traits[which(
Traits$Species == Spec),]$clim_mn_sd
#Protected areas
ProArea_Size = PA[which(
PA$ID == ID_),]$N
ProArea_Count = PA[which(
PA$ID == ID_),]$ProtectedCells
ProArea = (ProArea_Count/ProArea_Size)*100
DataFrame = data.frame(
Row = a,
Start = StudyStart,
End = StudyEnd,
PDC = PDChange,
PD = PDEnd,
PriC = PriC,
Pri = PrimEnd,
NatC = NatC,
AgC = AgC,
HumC = HumC,
ExHeatC = ExHeat,
DroughtC = DroughtChange,
Drought = StudySpei_fl_mean,
HDI = HDI,
HDI_var = HDI_var,
HDI_c = HDI_c,
Gov = Gov,
Gov_var = Gov_var,
Gov_c = Gov_c,
Conf = Conf,
ProArea = ProArea,
MaxLon = MaxLon,
MaxLon_var = MaxLon_var,
BodyMass = BodyMass,
BodyMass_var = BodyMass_var,
Reprod = Reprod,
Reprod_var = Reprod_var,
Gen = Gen,
Gen_var = Gen_var,
Gen2 = Gen2)
DataFrameComb = rbind(DataFrameComb, DataFrame)
rm(StudyStart,
StudyEnd,
PDChange,
PDEnd,
PriC,
PrimEnd,
NatC,
AgC,
HumC,
ExheatC,
ExHeat,
DroughtC,
Drought,
HDI,
HDI_var,
HDI_c,
Gov,
Gov_var,
Gov_c,
Conf,
ProArea,
MaxLon,
MaxLon_var,
BodyMass,
BodyMass_var,
Reprod,
Reprod_var,
Gen,
Gen_var,
Gen2)
}
TrendsJoin = cbind(Trends, DataFrameComb)
TrendsJoin[TrendsJoin == "NaN"] = NA
TrendsList[[b]] = TrendsJoin
}
saveRDS(TrendsList, "Data/Analysis/DataToModel3.rds")
| /code/16_add_covariates_v0.3.R | permissive | GitTFJ/carnivore_trends | R | false | false | 8,223 | r | Enviro = readRDS("Data/Environment/EnvironmentEstimates.rds")
Climate = readRDS("Data/Climate/CorrectedClimateEstimates.rds")
Governance = readRDS("Data/WG/GovernanceFormatted.rds")
Traits = readRDS("Data/Traits/TraitsFormatted.rds")[[1]] #Using best estimate of phylogeny
PA = readRDS("Data/ProtectedAreas/ProtectedAreaEstimates.rds")
LUH = Enviro[["LUH"]]
LUH[LUH == "NaN"] = NA
PD = Enviro[["PD"]]
Trends$Quantitative_method = as.character(Trends$Quantitative_method)
Trends$Quantitative_method = ifelse(is.na(Trends$Quantitative_method), "Manual calculation required",Trends$Quantitative_method)
Climate$PET_tho[is.infinite(Climate$PET_tho)] = NA
Lags = c(0,5,10)
TrendsList = list()
for(b in c(1:3)){
DataFrameComb = NULL
for(a in 1:nrow(Trends)){
print(a)
StudyStart = Trends$Study_year_start[a]
StudyEnd = Trends$Study_year_end[a]
Country = Trends$alpha.3[a]
Spec = Trends$Species[a]
ID_ = Trends$ID[a]
UID = Trends$UniqueID[a]
PopLat = Trends$Latitude[a]
Lag = Lags[b]
#Assign envioirnmental rasters
#Population density
PDtmp = PD[which(
(PD$Year <= (StudyEnd)) &
(PD$Year >= (StudyStart - Lag)) &
(PD$ID == ID_)),]
PDChange = unname((exp(coef(lm(Value ~ Year, data = PDtmp))[2]) - 1)*100)
PDEnd = PD[which(
(PD$Year == (StudyEnd)) &
(PD$ID == ID_)),3]
DF = subset(LUH, ID == ID_)
PrimEnd = DF[which(
(DF$Year == (StudyEnd)) &
(DF$ID == ID_)),3]
Primtmp = DF[which(
(DF$Year <= (StudyEnd)) &
(DF$Year >= (StudyStart - Lag)) &
(DF$ID == ID_)),]
Primtmp = Primtmp[complete.cases(Primtmp),]
if(nrow(Primtmp) < 2){
PriC= NA
NatC = NA
AgC = NA
HumC = NA
} else {
PriC = unname((exp(coef(lm(log(Primary + 0.01) ~ Year, data = Primtmp))[2]) - 1)*100)
NatC = unname((exp(coef(lm(log(Nature + 0.01) ~ Year, data = Primtmp))[2]) - 1)*100)
AgC = unname((exp(coef(lm(log(Ag + 0.01) ~ Year, data = Primtmp))[2]) - 1)*100)
HumC = unname((exp(coef(lm(log(Human + 0.01) ~ Year, data = Primtmp))[2]) - 1)*100)
}
#Frequency of extreme-highs
PreIndTemp_mx_mean = mean(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_)),]$CRUTS_max, na.rm = T)
PreIndTemp_mx_sd = sd(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_)),]$CRUTS_max, na.rm = T)
PreIndTemp_mx_threshold = PreIndTemp_mx_mean + PreIndTemp_mx_sd*2
PreIndTemp_mx_freq = length(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_) &
Climate$CRUTS_max > PreIndTemp_mx_threshold),]$CRUTS_max)
PreIndTemp_mx_freq = ifelse(length(PreIndTemp_mx_freq) == 0, 0, PreIndTemp_mx_freq)
PreIndTemp_mx_freq = PreIndTemp_mx_freq/20
StudyTemp_mx_freq = length(Climate[which(
(Climate$Year >= (StudyStart - Lag) & Climate$Year <= StudyEnd) &
(Climate$ID == ID_) &
Climate$CRUTS_max > PreIndTemp_mx_threshold),]$CRUTS_max)
StudyTemp_mx_freq = ifelse(length(StudyTemp_mx_freq) == 0, 0, StudyTemp_mx_freq)
StudyTemp_mx_freq = StudyTemp_mx_freq/(StudyEnd - (StudyStart - Lag))
ExHeat = StudyTemp_mx_freq - PreIndTemp_mx_freq
StudySpei_fl_mean = mean(Climate[which(
(Climate$Year >= (StudyStart - Lag) & Climate$Year <= StudyEnd) &
(Climate$ID == ID_)),]$PET_tho, na.rm = T)
#Frequency of extreme-drought
PreIndSpei_dr_mean = mean(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_)),]$PET_tho, na.rm = T)
PreIndSpei_dr_sd = sd(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_)),]$PET_tho, na.rm = T)
PreIndSpei_dr_threshold = PreIndSpei_dr_mean - PreIndSpei_dr_sd*2
PreIndSpei_dr_freq = length(Climate[which(
(Climate$Year < 1921) &
(Climate$ID == ID_) &
Climate$MPET_tho < PreIndSpei_dr_threshold),]$PET_tho)
PreIndSpei_dr_freq = ifelse(length(PreIndSpei_dr_freq) == 0, 0, PreIndSpei_dr_freq)
PreIndSpei_dr_freq = PreIndSpei_dr_freq/20
StudySpei_dr_freq = length(Climate[which(
(Climate$Year >= (StudyStart - Lag) & Climate$Year <= StudyEnd) &
(Climate$ID == ID_) &
Climate$Mean < PreIndSpei_dr_threshold),]$PET_tho)
StudySpei_dr_freq = ifelse(length(StudySpei_dr_freq) == 0, 0, StudySpei_dr_freq)
StudySpei_dr_freq = StudySpei_dr_freq/(1+ StudyEnd - (StudyStart - Lag))
DroughtChange = StudySpei_dr_freq - PreIndSpei_dr_freq
#Assign governance
#HDI
HDI = Governance[which(
Governance$Year == StudyStart &
Governance$Code == Country),]$HDI_mean
HDI_var = Governance[which(
Governance$Year == StudyStart &
Governance$Code == Country),]$HDI_var
#Governance
Gov = Governance[which(
Governance$Year == StudyStart &
Governance$Code == Country),]$Gov_mean
Gov_var = Governance[which(
Governance$Year == StudyStart &
Governance$Code == Country),]$Gov_var
Govtmp = Governance[which(
(Governance$Year <= (StudyEnd)) &
(Governance$Year >= (StudyStart - Lag)) &
(Governance$Code == Country)),]
if(min(Govtmp$Gov_mean) < 0){
Govtmp$Gov_mean = Govtmp$Gov_mean + abs(min(Govtmp$Gov_mean))
} else {
}
HDI_c = unname((exp(coef(lm(log(HDI_mean + 0.01) ~ Year, data = Govtmp))[2]) - 1)*100)
Gov_c = unname((exp(coef(lm(log(Gov_mean + 0.01) ~ Year, data = Govtmp))[2]) - 1)*100)
#Conflict present
Conf = Governance[which(
Governance$Year > (StudyStart - Lag) &
Governance$Year < StudyEnd &
Governance$Code == Country),]
Conf = if(any(Conf$Conflicts == "Conflict")){
"Conflict"
} else {
"No conlict"
}
#Assign traits
#Longevity
MaxLon = Traits[which(
Traits$Species == Spec),]$Longevity_log10
MaxLon_var = Traits[which(
Traits$Species == Spec),]$Longevity_log10_Var
#Body mass
BodyMass = Traits[which(
Traits$Species == Spec),]$BodyMass_log10
BodyMass_var = Traits[which(
Traits$Species == Spec),]$BodyMass_log10_Var
#Reproduction rate
Reprod = Traits[which(
Traits$Species == Spec),]$ReprodRate_mean
Reprod_var = Traits[which(
Traits$Species == Spec),]$ReprodRate_var
#Reproduction rate
Gen = Traits[which(
Traits$Species == Spec),]$Gen_mean
Gen_var = Traits[which(
Traits$Species == Spec),]$Gen_var
#Reproduction rate
Gen2 = Traits[which(
Traits$Species == Spec),]$clim_mn_sd
#Protected areas
ProArea_Size = PA[which(
PA$ID == ID_),]$N
ProArea_Count = PA[which(
PA$ID == ID_),]$ProtectedCells
ProArea = (ProArea_Count/ProArea_Size)*100
DataFrame = data.frame(
Row = a,
Start = StudyStart,
End = StudyEnd,
PDC = PDChange,
PD = PDEnd,
PriC = PriC,
Pri = PrimEnd,
NatC = NatC,
AgC = AgC,
HumC = HumC,
ExHeatC = ExHeat,
DroughtC = DroughtChange,
Drought = StudySpei_fl_mean,
HDI = HDI,
HDI_var = HDI_var,
HDI_c = HDI_c,
Gov = Gov,
Gov_var = Gov_var,
Gov_c = Gov_c,
Conf = Conf,
ProArea = ProArea,
MaxLon = MaxLon,
MaxLon_var = MaxLon_var,
BodyMass = BodyMass,
BodyMass_var = BodyMass_var,
Reprod = Reprod,
Reprod_var = Reprod_var,
Gen = Gen,
Gen_var = Gen_var,
Gen2 = Gen2)
DataFrameComb = rbind(DataFrameComb, DataFrame)
rm(StudyStart,
StudyEnd,
PDChange,
PDEnd,
PriC,
PrimEnd,
NatC,
AgC,
HumC,
ExheatC,
ExHeat,
DroughtC,
Drought,
HDI,
HDI_var,
HDI_c,
Gov,
Gov_var,
Gov_c,
Conf,
ProArea,
MaxLon,
MaxLon_var,
BodyMass,
BodyMass_var,
Reprod,
Reprod_var,
Gen,
Gen_var,
Gen2)
}
TrendsJoin = cbind(Trends, DataFrameComb)
TrendsJoin[TrendsJoin == "NaN"] = NA
TrendsList[[b]] = TrendsJoin
}
saveRDS(TrendsList, "Data/Analysis/DataToModel3.rds")
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ume.network.R
\name{summary.ume.network.result}
\alias{summary.ume.network.result}
\title{Summarize result run by \code{\link{ume.network.run}}}
\usage{
\method{summary}{ume.network.result}(object, ...)
}
\arguments{
\item{object}{Result object created by \code{\link{ume.network.run}} function}
\item{...}{Additional arguments affecting the summary produced}
}
\value{
Returns summary of the ume network model result
}
\description{
This function uses summary function in coda package to summarize mcmc.list object. Monte carlo error (Time-series SE) is also obtained using the coda package and is printed in the summary as a default.
}
\examples{
network <- with(smoking, {
ume.network.data(Outcomes, Study, Treat, N = N, response = "binomial", type = "random")
})
\donttest{
result <- ume.network.run(network)
summary(result)
}
}
| /man/summary.ume.network.result.Rd | no_license | cran/bnma | R | false | true | 942 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ume.network.R
\name{summary.ume.network.result}
\alias{summary.ume.network.result}
\title{Summarize result run by \code{\link{ume.network.run}}}
\usage{
\method{summary}{ume.network.result}(object, ...)
}
\arguments{
\item{object}{Result object created by \code{\link{ume.network.run}} function}
\item{...}{Additional arguments affecting the summary produced}
}
\value{
Returns summary of the ume network model result
}
\description{
This function uses summary function in coda package to summarize mcmc.list object. Monte carlo error (Time-series SE) is also obtained using the coda package and is printed in the summary as a default.
}
\examples{
network <- with(smoking, {
ume.network.data(Outcomes, Study, Treat, N = N, response = "binomial", type = "random")
})
\donttest{
result <- ume.network.run(network)
summary(result)
}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/declare_ra.R
\name{declare_ra}
\alias{declare_ra}
\title{Declare a random assignment procedure.}
\usage{
declare_ra(N = NULL, block_var = NULL, clust_var = NULL, m = NULL,
m_each = NULL, prob = NULL, prob_each = NULL, block_m = NULL,
block_m_each = NULL, block_prob = NULL, block_prob_each = NULL,
num_arms = NULL, condition_names = NULL, simple = FALSE,
balance_load = FALSE)
}
\arguments{
\item{N}{The number of units. N must be a positive integer. (required)}
\item{block_var}{A vector of length N that indicates which block each unit belongs to.}
\item{clust_var}{A vector of length N that indicates which cluster each unit belongs to.}
\item{m}{Use for a two-arm design in which m units (or clusters) are assigned to treatment and N-m units (or clusters) are assigned to control. (optional)}
\item{m_each}{Use for a multi-arm design in which the values of m_each determine the number of units (or clusters) assigned to each condition. m_each must be a numeric vector in which each entry is a nonnegative integer that describes how many units (or clusters) should be assigned to the 1st, 2nd, 3rd... treatment condition. m_each must sum to N. (optional)}
\item{prob}{Use for a two-arm design in which either floor(N*prob) or ceiling(N*prob) units (or clusters) are assigned to treatment. The probability of assignment to treatment is exactly prob because with probability 1-prob, floor(N*prob) units (or clusters) will be assigned to treatment and with probability prob, ceiling(N*prob) units (or clusters) will be assigned to treatment. prob must be a real number between 0 and 1 inclusive. (optional)}
\item{prob_each}{Use for a multi-arm design in which the values of prob_each determine the probabilties of assignment to each treatment condition. prob_each must be a numeric vector giving the probability of assignment to each condition. All entries must be nonnegative real numbers between 0 and 1 inclusive and the total must sum to 1. Because of integer issues, the exact number of units assigned to each condition may differ (slightly) from assignment to assignment, but the overall probability of assignment is exactly prob_each. (optional)}
\item{block_m}{Use for a two-arm design in which block_m describes the number of units to assign to treatment within each block. Note that in previous versions of randomizr, block_m behaved like block_m_each.}
\item{block_m_each}{Use for a multi-arm design in which the values of block_m_each determine the number of units (or clusters) assigned to each condition. block_m_each must be a matrix with the same number of rows as blocks and the same number of columns as treatment arms. Cell entries are the number of units (or clusters) to be assigned to each treatment arm within each block. The rows should respect the ordering of the blocks as determined by sort(unique(block_var)). The columns should be in the order of condition_names, if specified.}
\item{block_prob}{Use for a two-arm design in which block_prob describes the probability of assignment to treatment within each block. Differs from prob in that the probability of assignment can vary across blocks.}
\item{block_prob_each}{Use for a multi-arm design in which the values of block_prob_each determine the probabilties of assignment to each treatment condition. block_prob_each must be a matrix with the same number of rows as blocks and the same number of columns as treatment arms. Cell entries are the probabilites of assignment to treatment within each block. The rows should respect the ordering of the blocks as determined by sort(unique(block_var)). Use only if the probabilities of assignment should vary by block, otherwise use prob_each. Each row of block_prob_each must sum to 1.}
\item{num_arms}{The number of treatment arms. If unspecified, num_arms will be determined from the other arguments. (optional)}
\item{condition_names}{A character vector giving the names of the treatment groups. If unspecified, the treatment groups will be named 0 (for control) and 1 (for treatment) in a two-arm trial and T1, T2, T3, in a multi-arm trial. An execption is a two-group design in which num_arms is set to 2, in which case the condition names are T1 and T2, as in a multi-arm trial with two arms. (optional)}
\item{simple}{logical, defaults to FALSE. If TRUE, simple random assignment is used. When simple = TRUE, please do not specify m, m_each, block_m, or block_m_each.}
\item{balance_load}{logical, defaults to FALSE. This feature is experimental. If set to TRUE, the function will resolve rounding problems by randomly assigning "remainder" units to each possible treatment condition with equal probability, while ensuring that the total number of units assigned to each condition does not vary greatly from assignment to assignment. However, the true probabiltiies of assignment may be different from the nominal probabilities specified in prob_each or block_prob_each. Please use with caution and perform many tests before using in a real research scenario.}
}
\value{
A list of class "ra_declaration". The list has five entries:
$ra_function, a function that generates random assignments accroding to the declaration.
$ra_type, a string indicating the type of random assignment used
$probabilities_matrix, a matrix with N rows and num_arms columns, describing each unit's probabilities of assignment to conditions.
$block_var, the blocking variable.
$clust_var, the clustering variable.
}
\description{
Declare a random assignment procedure.
}
\examples{
# The declare_ra function is used in three ways:
# 1. To obtain some basic facts about a randomization:
declaration <- declare_ra(N=100, m_each=c(30, 30, 40))
declaration
# 2. To conduct a random assignment:
Z <- conduct_ra(declaration)
table(Z)
# 3. To obtain observed condition probabilities
probs <- obtain_condition_probabilities(declaration, Z)
table(probs, Z)
# Simple Random Assignment Declarations
declare_ra(N=100, simple = TRUE)
declare_ra(N=100, prob = .4, simple = TRUE)
declare_ra(N=100, prob_each=c(0.3, 0.3, 0.4),
condition_names=c("control", "placebo", "treatment"), simple=TRUE)
# Complete Random Assignment Declarations
declare_ra(N=100)
declare_ra(N=100, m_each = c(30, 70),
condition_names = c("control", "treatment"))
declare_ra(N=100, m_each=c(30, 30, 40))
# Block Random Assignment Declarations
block_var <- rep(c("A", "B","C"), times=c(50, 100, 200))
block_m_each <- rbind(c(10, 40),
c(30, 70),
c(50, 150))
declare_ra(block_var=block_var, block_m_each=block_m_each)
# Cluster Random Assignment Declarations
clust_var <- rep(letters, times=1:26)
declare_ra(clust_var=clust_var)
declare_ra(clust_var=clust_var, m_each=c(7, 7, 12))
# Blocked and Clustered Random Assignment Declarations
clust_var <- rep(letters, times=1:26)
block_var <- rep(NA, length(clust_var))
block_var[clust_var \%in\% letters[1:5]] <- "block_1"
block_var[clust_var \%in\% letters[6:10]] <- "block_2"
block_var[clust_var \%in\% letters[11:15]] <- "block_3"
block_var[clust_var \%in\% letters[16:20]] <- "block_4"
block_var[clust_var \%in\% letters[21:26]] <- "block_5"
table(block_var, clust_var)
declare_ra(clust_var = clust_var, block_var = block_var)
declare_ra(clust_var = clust_var, block_var = block_var, prob_each = c(.2, .5, .3))
}
| /man/declare_ra.Rd | no_license | anniejw6/randomizr | R | false | true | 7,415 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/declare_ra.R
\name{declare_ra}
\alias{declare_ra}
\title{Declare a random assignment procedure.}
\usage{
declare_ra(N = NULL, block_var = NULL, clust_var = NULL, m = NULL,
m_each = NULL, prob = NULL, prob_each = NULL, block_m = NULL,
block_m_each = NULL, block_prob = NULL, block_prob_each = NULL,
num_arms = NULL, condition_names = NULL, simple = FALSE,
balance_load = FALSE)
}
\arguments{
\item{N}{The number of units. N must be a positive integer. (required)}
\item{block_var}{A vector of length N that indicates which block each unit belongs to.}
\item{clust_var}{A vector of length N that indicates which cluster each unit belongs to.}
\item{m}{Use for a two-arm design in which m units (or clusters) are assigned to treatment and N-m units (or clusters) are assigned to control. (optional)}
\item{m_each}{Use for a multi-arm design in which the values of m_each determine the number of units (or clusters) assigned to each condition. m_each must be a numeric vector in which each entry is a nonnegative integer that describes how many units (or clusters) should be assigned to the 1st, 2nd, 3rd... treatment condition. m_each must sum to N. (optional)}
\item{prob}{Use for a two-arm design in which either floor(N*prob) or ceiling(N*prob) units (or clusters) are assigned to treatment. The probability of assignment to treatment is exactly prob because with probability 1-prob, floor(N*prob) units (or clusters) will be assigned to treatment and with probability prob, ceiling(N*prob) units (or clusters) will be assigned to treatment. prob must be a real number between 0 and 1 inclusive. (optional)}
\item{prob_each}{Use for a multi-arm design in which the values of prob_each determine the probabilties of assignment to each treatment condition. prob_each must be a numeric vector giving the probability of assignment to each condition. All entries must be nonnegative real numbers between 0 and 1 inclusive and the total must sum to 1. Because of integer issues, the exact number of units assigned to each condition may differ (slightly) from assignment to assignment, but the overall probability of assignment is exactly prob_each. (optional)}
\item{block_m}{Use for a two-arm design in which block_m describes the number of units to assign to treatment within each block. Note that in previous versions of randomizr, block_m behaved like block_m_each.}
\item{block_m_each}{Use for a multi-arm design in which the values of block_m_each determine the number of units (or clusters) assigned to each condition. block_m_each must be a matrix with the same number of rows as blocks and the same number of columns as treatment arms. Cell entries are the number of units (or clusters) to be assigned to each treatment arm within each block. The rows should respect the ordering of the blocks as determined by sort(unique(block_var)). The columns should be in the order of condition_names, if specified.}
\item{block_prob}{Use for a two-arm design in which block_prob describes the probability of assignment to treatment within each block. Differs from prob in that the probability of assignment can vary across blocks.}
\item{block_prob_each}{Use for a multi-arm design in which the values of block_prob_each determine the probabilties of assignment to each treatment condition. block_prob_each must be a matrix with the same number of rows as blocks and the same number of columns as treatment arms. Cell entries are the probabilites of assignment to treatment within each block. The rows should respect the ordering of the blocks as determined by sort(unique(block_var)). Use only if the probabilities of assignment should vary by block, otherwise use prob_each. Each row of block_prob_each must sum to 1.}
\item{num_arms}{The number of treatment arms. If unspecified, num_arms will be determined from the other arguments. (optional)}
\item{condition_names}{A character vector giving the names of the treatment groups. If unspecified, the treatment groups will be named 0 (for control) and 1 (for treatment) in a two-arm trial and T1, T2, T3, in a multi-arm trial. An execption is a two-group design in which num_arms is set to 2, in which case the condition names are T1 and T2, as in a multi-arm trial with two arms. (optional)}
\item{simple}{logical, defaults to FALSE. If TRUE, simple random assignment is used. When simple = TRUE, please do not specify m, m_each, block_m, or block_m_each.}
\item{balance_load}{logical, defaults to FALSE. This feature is experimental. If set to TRUE, the function will resolve rounding problems by randomly assigning "remainder" units to each possible treatment condition with equal probability, while ensuring that the total number of units assigned to each condition does not vary greatly from assignment to assignment. However, the true probabiltiies of assignment may be different from the nominal probabilities specified in prob_each or block_prob_each. Please use with caution and perform many tests before using in a real research scenario.}
}
\value{
A list of class "ra_declaration". The list has five entries:
$ra_function, a function that generates random assignments accroding to the declaration.
$ra_type, a string indicating the type of random assignment used
$probabilities_matrix, a matrix with N rows and num_arms columns, describing each unit's probabilities of assignment to conditions.
$block_var, the blocking variable.
$clust_var, the clustering variable.
}
\description{
Declare a random assignment procedure.
}
\examples{
# The declare_ra function is used in three ways:
# 1. To obtain some basic facts about a randomization:
declaration <- declare_ra(N=100, m_each=c(30, 30, 40))
declaration
# 2. To conduct a random assignment:
Z <- conduct_ra(declaration)
table(Z)
# 3. To obtain observed condition probabilities
probs <- obtain_condition_probabilities(declaration, Z)
table(probs, Z)
# Simple Random Assignment Declarations
declare_ra(N=100, simple = TRUE)
declare_ra(N=100, prob = .4, simple = TRUE)
declare_ra(N=100, prob_each=c(0.3, 0.3, 0.4),
condition_names=c("control", "placebo", "treatment"), simple=TRUE)
# Complete Random Assignment Declarations
declare_ra(N=100)
declare_ra(N=100, m_each = c(30, 70),
condition_names = c("control", "treatment"))
declare_ra(N=100, m_each=c(30, 30, 40))
# Block Random Assignment Declarations
block_var <- rep(c("A", "B","C"), times=c(50, 100, 200))
block_m_each <- rbind(c(10, 40),
c(30, 70),
c(50, 150))
declare_ra(block_var=block_var, block_m_each=block_m_each)
# Cluster Random Assignment Declarations
clust_var <- rep(letters, times=1:26)
declare_ra(clust_var=clust_var)
declare_ra(clust_var=clust_var, m_each=c(7, 7, 12))
# Blocked and Clustered Random Assignment Declarations
clust_var <- rep(letters, times=1:26)
block_var <- rep(NA, length(clust_var))
block_var[clust_var \%in\% letters[1:5]] <- "block_1"
block_var[clust_var \%in\% letters[6:10]] <- "block_2"
block_var[clust_var \%in\% letters[11:15]] <- "block_3"
block_var[clust_var \%in\% letters[16:20]] <- "block_4"
block_var[clust_var \%in\% letters[21:26]] <- "block_5"
table(block_var, clust_var)
declare_ra(clust_var = clust_var, block_var = block_var)
declare_ra(clust_var = clust_var, block_var = block_var, prob_each = c(.2, .5, .3))
}
|
################################## HTML Report Functions
amaretto_html_report <- function(AMARETTOinit,AMARETTOresults,CNV_matrix,MET_matrix,hyper_geo_test_bool=TRUE)
{
suppressMessages(suppressWarnings(library("AMARETTO")))
#file_wd=dirname(rstudioapi::getSourceEditorContext()$path)
#setwd(file_wd)
file_wd='./'
setwd(file_wd)
########################################################
# Evaluate AMARETTO Results
########################################################
# AMARETTOtestReport<-AMARETTO_EvaluateTestSet(AMARETTOresults,
# AMARETTOinit$MA_matrix_Var,AMARETTOinit$RegulatorData)
######################################################################################################################################################################################
######################################################################################################################################################################################
NrModules<-AMARETTOresults$NrModules
if (hyper_geo_test_bool)
{
saveRDS(AMARETTOresults, file = "hyper_geo_test/AMARETTOtestReport.RData")
########################################################
# Save AMARETTO results in different formats including .gmt
########################################################
suppressMessages(suppressWarnings(source("/usr/local/bin/amaretto/hyper_geo_test/ProcessTCGA_modules.R")))
rileen("/usr/local/bin/amaretto/hyper_geo_test/AMARETTOtestReport.RData", AMARETTOinit, AMARETTOresults)
}
######################################################################################################################################################################################
######################################################################################################################################################################################
######################################################################################################################################################################################
######################################################################################################################################################################################
##################################################################################################################################################################
#REPORT
##################################################################################################################################################################
unlink("report_htm/*")
unlink("report_html/htmls/*")
unlink("report_html/htmls/images/*")
unlink("report_html/htmls/data/*")
unlink("report_html/htmls/tables/*")
unlink("report_html/htmls/tables/module_hyper_geo_test/*")
dir.create("report_html")
dir.create("report_html/htmls")
dir.create("report_html/htmls/images")
dir.create("report_html/htmls/data")
dir.create("report_html/htmls/tables")
dir.create("report_html/htmls/tables/module_hyper_geo_test")
########################################################
# Save images of all the modules
########################################################
address1=paste("./","report_html",sep="")
address2=paste("./","htmls",sep="")
address3=paste("./","htmls/images",sep="")
for (ModuleNr in 1:NrModules )
{
html_address=paste("report_html","/htmls/images","/module",as.character(ModuleNr),".jpeg",sep="")
jpeg(file =html_address )
AMARETTO_VisualizeModule(AMARETTOinit, AMARETTOresults=AMARETTOresults, CNV_matrix, MET_matrix, ModuleNr=ModuleNr)
dev.off()
}
##############################################################################
# Create HTMLs for each module
##############################################################################
if (hyper_geo_test_bool)
{
###################################################
library("GSEABase")
library("rstudioapi")
suppressMessages(suppressWarnings(source("/usr/local/bin/amaretto/hyper_geo_test/HyperGTestGeneEnrichment.R")))
suppressMessages(suppressWarnings(source("/usr/local/bin/amaretto/hyper_geo_test/word_Cloud.R")))
b<- HyperGTestGeneEnrichment("/usr/local/bin/amaretto/hyper_geo_test/H.C2CP.genesets_forRileen.gmt", "/usr/local/bin/amaretto/hyper_geo_test/TCGA_modules_target_only.gmt", "hyper_geo_test/output.txt",show.overlapping.genes=TRUE)
df1<-read.table("hyper_geo_test/output.txt",sep="\t",header=TRUE, fill=TRUE)
#df2<-df1[order(-df1$p.value),]
df2=df1
df3<-read.table("hyper_geo_test/output.genes.txt",sep="\t",header=TRUE, fill=TRUE)
###################################################
print(head(df3))
}
library(R2HTML)
number_of_significant_gene_overlappings<-c()
for (ModuleNr in 1:NrModules )
{
module_name=paste("module",as.character(ModuleNr),sep="")
ModuleData=AMARETTOinit$MA_matrix_Var[AMARETTOresults$ModuleMembership==ModuleNr,]
currentRegulators = AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
RegulatorData=AMARETTOinit$RegulatorData[currentRegulators,]
module_regulators_weights=AMARETTOresults$RegulatoryPrograms[ModuleNr,][which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
module_regulators_weights<-data.frame(module_regulators_weights)
positiveRegulators=AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] > 0)]
negetiveRegulators=AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] < 0)]
ModuleGenes=rownames(ModuleData)
RegulatoryGenes=rownames(RegulatorData)
module_all_genes_data <- rbind(ModuleData, RegulatorData)
module_all_genes_data <-module_all_genes_data[order(rownames(module_all_genes_data)),]
module_all_genes_data <- unique(module_all_genes_data)
module_annotations<-create_gene_annotations(module_all_genes_data,ModuleGenes,module_regulators_weights)
if (hyper_geo_test_bool)
{
####################### Hyper Geometric Significance
module_name2=paste("Module_",as.character(ModuleNr),sep="")
print(module_name2)
filter_indexes<-(df3$Testset==module_name2) & (df3$p.value<0.05)
gene_descriptions<-df3$Description[filter_indexes]
gene_names<-df3$Geneset[filter_indexes]
print(length(gene_names))
print('hassan')
overlapping_gene_names<-df3$Overlapping.genes[filter_indexes]
number_overlappings<-df3$n.Overlapping[filter_indexes]
p_values<-df3$p.value[filter_indexes]
q_values<-df3$q.value[filter_indexes]
number_of_significant_gene_overlappings<-c(number_of_significant_gene_overlappings,length(gene_names))
print(head(df3))
print('hassan')
mmm<-gene_descriptions
mm<-as.vector(unique(mmm))
descriptions=""
for (var in mm)
{
descriptions = paste(descriptions,var,sep=" ")
descriptions =gsub(">",",",descriptions)
# descriptions<-substring(descriptions, 1)
# descriptions<-sub('.', '', descriptions)
}
if (nchar(descriptions)>0)
{
wordcloud_making(descriptions,module_name2)
}
#############################################
}
print(number_of_significant_gene_overlappings)
address=address2
fname=paste("module",as.character(ModuleNr),sep="")
tite_page=paste("module",as.character(ModuleNr),sep="")
graph1=paste("./images","/module",as.character(ModuleNr),".jpeg",sep = "")
tmpfic<-HTMLInitFile("./report_html/htmls/",filename=fname,Title = tite_page,CSSFile="http://www.stat.ucl.ac.be/R2HTML/Pastel.css")
####### CSS ####
bootstrap1='<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">'
bootstrap2='<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>'
bootstrap3='<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>'
HTML(bootstrap1,file=tmpfic)
HTML(bootstrap2,file=tmpfic)
HTML(bootstrap3,file=tmpfic)
HTML('<div class="container-fluid">',file=tmpfic)
################
HTML("<h1 class='text-center text-primary'> Module Results </h1>",file=tmpfic)
HTML('<br /><br />')
HTMLInsertGraph(graph1,file=tmpfic)
##################
if (hyper_geo_test_bool)
{
HTML('<hr class="col-xs-12">')
HTML("<h2 class='text-center text-primary'> Hyper Geometric Test </h2>",file=tmpfic)
HTML("<p> Conditioned on P-value <0.05 </p>",file=tmpfic)
HTML('<div class="row">',file=tmpfic)
if (nchar(descriptions)==0){
HTML("<h4 class='text-center text-danger'> Not enough for wordcloud </h4>",file=tmpfic)
}
if (nchar(descriptions)>0)
{
graph2=paste("./images","/",module_name2,"_WordCloud.png",sep = "")
HTMLInsertGraph(graph2,file=tmpfic)
}
if (length(gene_names)>0)
{
HTML('<div class="col-sm-1">',file=tmpfic)
HTML('</div>',file=tmpfic)
HTML('<div class="col-sm-10">',file=tmpfic)
table_command2=
'
<table class="table table-hover .table-striped table-bordered">
<thead>
<tr>
<th scope="col">Gene Names</th>
<th scope="col">Gene Description</th>
<th scope="col">Number of Overlapping Genes</th>
<th scope="col">Overlapping Genes Names</th>
<th scope="col">p-value</th>
<th scope="col">q-value</th>
</tr>
</thead>
<tbody>
'
module_hypo_table_header<-c('Gene-Names','Gene-Description','Number-of-Overlapping-Genes','Overlapping-Genes-Names','p-value','q-value')
module_hypo_table<-c()
##################
descriptions=strsplit(descriptions,",")[[1]]
HTML(table_command2,file=tmpfic)
for (kk in 1:length(gene_names))
{
link_command=paste("<a href=http://software.broadinstitute.org/gsea/msigdb/cards/",as.character(gene_names[kk]),".html>",gene_names[kk],'</a>',sep="")
HTML(paste('<tr>',
'<td valign="middle">',link_command,'</td>',
'<td valign="middle">',gsub(">"," ",gene_descriptions[kk]),'</td>',
'<td valign="middle">',number_overlappings[kk],'</td>',
'<td valign="middle">',gsub(" ",' ',as.character(overlapping_gene_names[kk])),'</td>',
'<td valign="middle">',round(p_values[kk],4),'</td>',
'<td valign="middle">',round(q_values[kk],4),'</td>',
'</tr>'),file=tmpfic)
rr<-c(as.character(gene_names[kk]),gsub(">"," ",gene_descriptions[kk]),number_overlappings[kk],gsub(" ",' ',as.character(overlapping_gene_names[kk])),round(p_values[kk],4),round(q_values[kk],4))
module_hypo_table<-rbind(module_hypo_table,rr)
}
HTML('</tbody></table>',file=tmpfic)
colnames(module_hypo_table)<-module_hypo_table_header
outfile=paste('./report_html/htmls/tables/module_hyper_geo_test/Module',ModuleNr,'_hypergeometric_test.tsv',sep='')
write.table(module_hypo_table,file=outfile,sep='\t',quote=F,col.names=T,row.names=F)
HTML('</div>',file=tmpfic)
HTML('<div class="col-sm-1">',file=tmpfic)
HTML('</div>',file=tmpfic)
##################
HTML('</div>',file=tmpfic)
}
}
HTML('<hr class="col-xs-12">')
ModuleData=AMARETTOinit$MA_matrix_Var[AMARETTOresults$ModuleMembership==ModuleNr,]
currentRegulators = AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
positiveRegulators=AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] > 0)]
negetiveRegulators=AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] < 0)]
module_regulators_data=AMARETTOresults$RegulatoryPrograms[ModuleNr,][which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
#colnames(module_regulators_data) <- c("Expression data")
HTML("<h2 class='text-center text-primary'>Module Genes Expression Data </h2>",file=tmpfic)
all_gene_expression_file_name_save=paste(module_name,"_","data",".csv",sep="")
all_gene_expression_file_address=paste("./report_html/htmls/data",'/',all_gene_expression_file_name_save,sep="")
write.csv(module_all_genes_data, file =all_gene_expression_file_address)
ModuleData<-round(ModuleData,2)
HTML(paste('<a href=', paste('./data','/',all_gene_expression_file_name_save,sep=""),' download>',' download all module gene data ','</a>',sep=""))
HTML(ModuleData,file=tmpfic)
HTML("<h2 class='text-center text-primary'>Regulators</h2>",file=tmpfic)
#############
annotations_file_name_save=paste(module_name,"_","annotations",".csv",sep="")
annotations_file_address=paste('./report_html/htmls/data','/',annotations_file_name_save,sep="")
write.csv(module_annotations, file =annotations_file_address)
HTML(paste('<a href=', paste('./data','/',annotations_file_name_save,sep=""),' download>',' download annotations data ','</a>',sep=""))
#############
# colnames(module_regulators_data)<-""
HTML(paste('<p class="text-success">',paste(as.character(positiveRegulators)),'</p>'),file=tmpfic)
HTML(paste('<p class="text-danger">',paste(as.character(negetiveRegulators)),'</p>'),file=tmpfic)
HTML('</div>',file=tmpfic)
}
##############################################################################
#Create the landing page
##############################################################################
tmpfic<-HTMLInitFile(address1,filename="index",Title = "Amartto Report",CSSFile="http://www.stat.ucl.ac.be/R2HTML/Pastel.css")
bootstrap1='<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">'
bootstrap2='<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>'
bootstrap3='<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>'
HTML(bootstrap1,file=tmpfic)
HTML(bootstrap2,file=tmpfic)
HTML(bootstrap3,file=tmpfic)
HTML('<div class="container-fluid">',file=tmpfic)
####################################### Create the TEXT #########################
HTML('<h1 class="text-primary text-center"> AMARETTO results </h1>',file=tmpfic)
HTML('<br /><br /><br /><br /><br />')
####################################### Create the table #########################
table_command5=
'
<table class="table table-hover.table-striped table-bordered">
<thead>
<tr>
<th scope="col"># of samples</th>
<th scope="col"># of modules</th>
<th scope="col"> Var-Percentage</th>
</tr>
</thead>
<tbody>
'
# HTML(table_command5,file=tmpfic)
# HTML('<tr>',file=tmpfic)
# HTML(paste('<td>',as.character(number_of_samples),'</td>'),file=tmpfic)
# HTML(paste('<td>',as.character(NrModules),'</td>'),file=tmpfic)
# HTML(paste('<td>',as.character(number_of_regulators),'</td>'),file=tmpfic)
# #HTML(paste('<td>',as.character(number_of_samples),'</td>'),file=tmpfic)
# HTML('</tr>',file=tmpfic)
table_command1=
'
<table class="table table-hover ">
<thead ">
<tr>
<th scope="col" class="align-middle">Module #</th>
<th scope="col" class="align-middle"># of target genes</th>
<th scope="col" class="align-middle"># of regulator genes</th>
<th scope="col" class="align-middle"># of significant gene overlappings</th>
</tr>
</thead>
<tbody>
'
ModuleNr<-1
ModuleData=AMARETTOinit$MA_matrix_Var[AMARETTOresults$ModuleMembership==ModuleNr,]
number_of_samples=length(colnames(ModuleData))
HTML('<div class="col-sm-3">',file=tmpfic)
HTML('</div>',file=tmpfic)
HTML('<div class="col-sm-6">',file=tmpfic)
HTML(paste('<p class=".text-success text-right"> # of samples = ',as.character(number_of_samples),'</p>'))
HTML('<br /><br />')
HTML(table_command1,file=tmpfic)
amaretto_result_table_header<-c('Module_No','number_of_target_genes','number_of_regulator_genes','number_of_significant_gene_overlappings')
amaretto_result_table<-c()
for (ModuleNr in 1:NrModules )
{
module_name=paste("module",as.character(ModuleNr),sep="")
###################### find module info
ModuleData=AMARETTOinit$MA_matrix_Var[AMARETTOresults$ModuleMembership==ModuleNr,]
currentRegulators = AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
RegulatorData=AMARETTOinit$RegulatorData[currentRegulators,]
module_regulators_weights=AMARETTOresults$RegulatoryPrograms[ModuleNr,][which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
ModuleGenes=rownames(ModuleData)
RegulatoryGenes=rownames(RegulatorData)
number_of_genes=length(rownames(ModuleData))
number_of_regulators=length(currentRegulators)
number_of_samples=length(colnames(ModuleData))
######################### creating Link for each module ####################
address='./htmls'
htmladdress=paste("'",address,"/module",as.character(ModuleNr),".html","'",sep="")
link_command=paste("<a href=",htmladdress,'>',module_name,'</a>',sep="")
#HTML(link_command,file=tmpfic)
###########################################################################
HTML('<tr>',file=tmpfic)
HTML(paste('<td class="align-middle">',link_command,'</td>'),file=tmpfic)
HTML(paste('<td class="align-middle">',as.character(number_of_genes),'</td>'),file=tmpfic)
HTML(paste('<td class="align-middle">',as.character(number_of_regulators),'</td>'),file=tmpfic)
HTML(paste('<td class="align-middle">',as.character( number_of_significant_gene_overlappings[ModuleNr]),'</td>'),file=tmpfic)
#HTML(paste('<td>',as.character(number_of_samples),'</td>'),file=tmpfic)
HTML('</tr>',file=tmpfic)
rr<-c(module_name,as.character(number_of_genes),as.character(number_of_regulators),as.character( number_of_significant_gene_overlappings[ModuleNr]))
amaretto_result_table<-rbind(amaretto_result_table,rr)
}
colnames(amaretto_result_table)<-amaretto_result_table_header
outfile=paste('./report_html/htmls/tables/amaretto','.tsv',sep='')
write.table(amaretto_result_table,file=outfile,sep='\t',quote=F,col.names=T,row.names=F)
HTML('</tbody></table>',file=tmpfic)
HTML('</div>',file=tmpfic)
HTML('<div class="col-sm-3">',file=tmpfic)
HTML('</div>',file=tmpfic)
####################################### #######################################
HTML('</div>',file=tmpfic)
# HTMLEndFile()
#################################################################################[####
zip(zipfile = 'reportZip', files = './report_html')
###############################
}
########################################################################################################################################
########################################################################################################################################
########################################################################################################################################
########################################################################################################################################
########################################################################################################################################
create_gene_annotations<-function(module_all_genes_data,Module_Genes_names,Module_regulators_weights)
{
all_genes_names=rownames(module_all_genes_data)
targets_bool<-c()
regulators_bool<-c()
regulators_weight<-c()
for (i in 1:length(all_genes_names))
{
gene_name=all_genes_names[i]
a=0
b=0
c=0
if (is.element(gene_name, Module_Genes_names))
{
a<-1
}
if (is.element(gene_name, rownames(Module_regulators_weights)))
{
b<-1
c<-Module_regulators_weights$module_regulators_weights[rownames(Module_regulators_weights)==gene_name]
}
targets_bool<-c(targets_bool,a)
regulators_bool<-c(regulators_bool,b)
regulators_weight<-c(regulators_weight,c)
}
df=data.frame(all_genes_names, targets_bool, regulators_bool,regulators_weight)
return(df)
}
| /src/mohsen_report_function.R | permissive | genepattern/docker-amaretto | R | false | false | 22,948 | r | ################################## HTML Report Functions
amaretto_html_report <- function(AMARETTOinit,AMARETTOresults,CNV_matrix,MET_matrix,hyper_geo_test_bool=TRUE)
{
suppressMessages(suppressWarnings(library("AMARETTO")))
#file_wd=dirname(rstudioapi::getSourceEditorContext()$path)
#setwd(file_wd)
file_wd='./'
setwd(file_wd)
########################################################
# Evaluate AMARETTO Results
########################################################
# AMARETTOtestReport<-AMARETTO_EvaluateTestSet(AMARETTOresults,
# AMARETTOinit$MA_matrix_Var,AMARETTOinit$RegulatorData)
######################################################################################################################################################################################
######################################################################################################################################################################################
NrModules<-AMARETTOresults$NrModules
if (hyper_geo_test_bool)
{
saveRDS(AMARETTOresults, file = "hyper_geo_test/AMARETTOtestReport.RData")
########################################################
# Save AMARETTO results in different formats including .gmt
########################################################
suppressMessages(suppressWarnings(source("/usr/local/bin/amaretto/hyper_geo_test/ProcessTCGA_modules.R")))
rileen("/usr/local/bin/amaretto/hyper_geo_test/AMARETTOtestReport.RData", AMARETTOinit, AMARETTOresults)
}
######################################################################################################################################################################################
######################################################################################################################################################################################
######################################################################################################################################################################################
######################################################################################################################################################################################
##################################################################################################################################################################
#REPORT
##################################################################################################################################################################
unlink("report_htm/*")
unlink("report_html/htmls/*")
unlink("report_html/htmls/images/*")
unlink("report_html/htmls/data/*")
unlink("report_html/htmls/tables/*")
unlink("report_html/htmls/tables/module_hyper_geo_test/*")
dir.create("report_html")
dir.create("report_html/htmls")
dir.create("report_html/htmls/images")
dir.create("report_html/htmls/data")
dir.create("report_html/htmls/tables")
dir.create("report_html/htmls/tables/module_hyper_geo_test")
########################################################
# Save images of all the modules
########################################################
address1=paste("./","report_html",sep="")
address2=paste("./","htmls",sep="")
address3=paste("./","htmls/images",sep="")
for (ModuleNr in 1:NrModules )
{
html_address=paste("report_html","/htmls/images","/module",as.character(ModuleNr),".jpeg",sep="")
jpeg(file =html_address )
AMARETTO_VisualizeModule(AMARETTOinit, AMARETTOresults=AMARETTOresults, CNV_matrix, MET_matrix, ModuleNr=ModuleNr)
dev.off()
}
##############################################################################
# Create HTMLs for each module
##############################################################################
if (hyper_geo_test_bool)
{
###################################################
library("GSEABase")
library("rstudioapi")
suppressMessages(suppressWarnings(source("/usr/local/bin/amaretto/hyper_geo_test/HyperGTestGeneEnrichment.R")))
suppressMessages(suppressWarnings(source("/usr/local/bin/amaretto/hyper_geo_test/word_Cloud.R")))
b<- HyperGTestGeneEnrichment("/usr/local/bin/amaretto/hyper_geo_test/H.C2CP.genesets_forRileen.gmt", "/usr/local/bin/amaretto/hyper_geo_test/TCGA_modules_target_only.gmt", "hyper_geo_test/output.txt",show.overlapping.genes=TRUE)
df1<-read.table("hyper_geo_test/output.txt",sep="\t",header=TRUE, fill=TRUE)
#df2<-df1[order(-df1$p.value),]
df2=df1
df3<-read.table("hyper_geo_test/output.genes.txt",sep="\t",header=TRUE, fill=TRUE)
###################################################
print(head(df3))
}
library(R2HTML)
number_of_significant_gene_overlappings<-c()
for (ModuleNr in 1:NrModules )
{
module_name=paste("module",as.character(ModuleNr),sep="")
ModuleData=AMARETTOinit$MA_matrix_Var[AMARETTOresults$ModuleMembership==ModuleNr,]
currentRegulators = AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
RegulatorData=AMARETTOinit$RegulatorData[currentRegulators,]
module_regulators_weights=AMARETTOresults$RegulatoryPrograms[ModuleNr,][which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
module_regulators_weights<-data.frame(module_regulators_weights)
positiveRegulators=AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] > 0)]
negetiveRegulators=AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] < 0)]
ModuleGenes=rownames(ModuleData)
RegulatoryGenes=rownames(RegulatorData)
module_all_genes_data <- rbind(ModuleData, RegulatorData)
module_all_genes_data <-module_all_genes_data[order(rownames(module_all_genes_data)),]
module_all_genes_data <- unique(module_all_genes_data)
module_annotations<-create_gene_annotations(module_all_genes_data,ModuleGenes,module_regulators_weights)
if (hyper_geo_test_bool)
{
####################### Hyper Geometric Significance
module_name2=paste("Module_",as.character(ModuleNr),sep="")
print(module_name2)
filter_indexes<-(df3$Testset==module_name2) & (df3$p.value<0.05)
gene_descriptions<-df3$Description[filter_indexes]
gene_names<-df3$Geneset[filter_indexes]
print(length(gene_names))
print('hassan')
overlapping_gene_names<-df3$Overlapping.genes[filter_indexes]
number_overlappings<-df3$n.Overlapping[filter_indexes]
p_values<-df3$p.value[filter_indexes]
q_values<-df3$q.value[filter_indexes]
number_of_significant_gene_overlappings<-c(number_of_significant_gene_overlappings,length(gene_names))
print(head(df3))
print('hassan')
mmm<-gene_descriptions
mm<-as.vector(unique(mmm))
descriptions=""
for (var in mm)
{
descriptions = paste(descriptions,var,sep=" ")
descriptions =gsub(">",",",descriptions)
# descriptions<-substring(descriptions, 1)
# descriptions<-sub('.', '', descriptions)
}
if (nchar(descriptions)>0)
{
wordcloud_making(descriptions,module_name2)
}
#############################################
}
print(number_of_significant_gene_overlappings)
address=address2
fname=paste("module",as.character(ModuleNr),sep="")
tite_page=paste("module",as.character(ModuleNr),sep="")
graph1=paste("./images","/module",as.character(ModuleNr),".jpeg",sep = "")
tmpfic<-HTMLInitFile("./report_html/htmls/",filename=fname,Title = tite_page,CSSFile="http://www.stat.ucl.ac.be/R2HTML/Pastel.css")
####### CSS ####
bootstrap1='<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">'
bootstrap2='<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>'
bootstrap3='<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>'
HTML(bootstrap1,file=tmpfic)
HTML(bootstrap2,file=tmpfic)
HTML(bootstrap3,file=tmpfic)
HTML('<div class="container-fluid">',file=tmpfic)
################
HTML("<h1 class='text-center text-primary'> Module Results </h1>",file=tmpfic)
HTML('<br /><br />')
HTMLInsertGraph(graph1,file=tmpfic)
##################
if (hyper_geo_test_bool)
{
HTML('<hr class="col-xs-12">')
HTML("<h2 class='text-center text-primary'> Hyper Geometric Test </h2>",file=tmpfic)
HTML("<p> Conditioned on P-value <0.05 </p>",file=tmpfic)
HTML('<div class="row">',file=tmpfic)
if (nchar(descriptions)==0){
HTML("<h4 class='text-center text-danger'> Not enough for wordcloud </h4>",file=tmpfic)
}
if (nchar(descriptions)>0)
{
graph2=paste("./images","/",module_name2,"_WordCloud.png",sep = "")
HTMLInsertGraph(graph2,file=tmpfic)
}
if (length(gene_names)>0)
{
HTML('<div class="col-sm-1">',file=tmpfic)
HTML('</div>',file=tmpfic)
HTML('<div class="col-sm-10">',file=tmpfic)
table_command2=
'
<table class="table table-hover .table-striped table-bordered">
<thead>
<tr>
<th scope="col">Gene Names</th>
<th scope="col">Gene Description</th>
<th scope="col">Number of Overlapping Genes</th>
<th scope="col">Overlapping Genes Names</th>
<th scope="col">p-value</th>
<th scope="col">q-value</th>
</tr>
</thead>
<tbody>
'
module_hypo_table_header<-c('Gene-Names','Gene-Description','Number-of-Overlapping-Genes','Overlapping-Genes-Names','p-value','q-value')
module_hypo_table<-c()
##################
descriptions=strsplit(descriptions,",")[[1]]
HTML(table_command2,file=tmpfic)
for (kk in 1:length(gene_names))
{
link_command=paste("<a href=http://software.broadinstitute.org/gsea/msigdb/cards/",as.character(gene_names[kk]),".html>",gene_names[kk],'</a>',sep="")
HTML(paste('<tr>',
'<td valign="middle">',link_command,'</td>',
'<td valign="middle">',gsub(">"," ",gene_descriptions[kk]),'</td>',
'<td valign="middle">',number_overlappings[kk],'</td>',
'<td valign="middle">',gsub(" ",' ',as.character(overlapping_gene_names[kk])),'</td>',
'<td valign="middle">',round(p_values[kk],4),'</td>',
'<td valign="middle">',round(q_values[kk],4),'</td>',
'</tr>'),file=tmpfic)
rr<-c(as.character(gene_names[kk]),gsub(">"," ",gene_descriptions[kk]),number_overlappings[kk],gsub(" ",' ',as.character(overlapping_gene_names[kk])),round(p_values[kk],4),round(q_values[kk],4))
module_hypo_table<-rbind(module_hypo_table,rr)
}
HTML('</tbody></table>',file=tmpfic)
colnames(module_hypo_table)<-module_hypo_table_header
outfile=paste('./report_html/htmls/tables/module_hyper_geo_test/Module',ModuleNr,'_hypergeometric_test.tsv',sep='')
write.table(module_hypo_table,file=outfile,sep='\t',quote=F,col.names=T,row.names=F)
HTML('</div>',file=tmpfic)
HTML('<div class="col-sm-1">',file=tmpfic)
HTML('</div>',file=tmpfic)
##################
HTML('</div>',file=tmpfic)
}
}
HTML('<hr class="col-xs-12">')
ModuleData=AMARETTOinit$MA_matrix_Var[AMARETTOresults$ModuleMembership==ModuleNr,]
currentRegulators = AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
positiveRegulators=AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] > 0)]
negetiveRegulators=AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] < 0)]
module_regulators_data=AMARETTOresults$RegulatoryPrograms[ModuleNr,][which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
#colnames(module_regulators_data) <- c("Expression data")
HTML("<h2 class='text-center text-primary'>Module Genes Expression Data </h2>",file=tmpfic)
all_gene_expression_file_name_save=paste(module_name,"_","data",".csv",sep="")
all_gene_expression_file_address=paste("./report_html/htmls/data",'/',all_gene_expression_file_name_save,sep="")
write.csv(module_all_genes_data, file =all_gene_expression_file_address)
ModuleData<-round(ModuleData,2)
HTML(paste('<a href=', paste('./data','/',all_gene_expression_file_name_save,sep=""),' download>',' download all module gene data ','</a>',sep=""))
HTML(ModuleData,file=tmpfic)
HTML("<h2 class='text-center text-primary'>Regulators</h2>",file=tmpfic)
#############
annotations_file_name_save=paste(module_name,"_","annotations",".csv",sep="")
annotations_file_address=paste('./report_html/htmls/data','/',annotations_file_name_save,sep="")
write.csv(module_annotations, file =annotations_file_address)
HTML(paste('<a href=', paste('./data','/',annotations_file_name_save,sep=""),' download>',' download annotations data ','</a>',sep=""))
#############
# colnames(module_regulators_data)<-""
HTML(paste('<p class="text-success">',paste(as.character(positiveRegulators)),'</p>'),file=tmpfic)
HTML(paste('<p class="text-danger">',paste(as.character(negetiveRegulators)),'</p>'),file=tmpfic)
HTML('</div>',file=tmpfic)
}
##############################################################################
#Create the landing page
##############################################################################
tmpfic<-HTMLInitFile(address1,filename="index",Title = "Amartto Report",CSSFile="http://www.stat.ucl.ac.be/R2HTML/Pastel.css")
bootstrap1='<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">'
bootstrap2='<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>'
bootstrap3='<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>'
HTML(bootstrap1,file=tmpfic)
HTML(bootstrap2,file=tmpfic)
HTML(bootstrap3,file=tmpfic)
HTML('<div class="container-fluid">',file=tmpfic)
####################################### Create the TEXT #########################
HTML('<h1 class="text-primary text-center"> AMARETTO results </h1>',file=tmpfic)
HTML('<br /><br /><br /><br /><br />')
####################################### Create the table #########################
table_command5=
'
<table class="table table-hover.table-striped table-bordered">
<thead>
<tr>
<th scope="col"># of samples</th>
<th scope="col"># of modules</th>
<th scope="col"> Var-Percentage</th>
</tr>
</thead>
<tbody>
'
# HTML(table_command5,file=tmpfic)
# HTML('<tr>',file=tmpfic)
# HTML(paste('<td>',as.character(number_of_samples),'</td>'),file=tmpfic)
# HTML(paste('<td>',as.character(NrModules),'</td>'),file=tmpfic)
# HTML(paste('<td>',as.character(number_of_regulators),'</td>'),file=tmpfic)
# #HTML(paste('<td>',as.character(number_of_samples),'</td>'),file=tmpfic)
# HTML('</tr>',file=tmpfic)
table_command1=
'
<table class="table table-hover ">
<thead ">
<tr>
<th scope="col" class="align-middle">Module #</th>
<th scope="col" class="align-middle"># of target genes</th>
<th scope="col" class="align-middle"># of regulator genes</th>
<th scope="col" class="align-middle"># of significant gene overlappings</th>
</tr>
</thead>
<tbody>
'
ModuleNr<-1
ModuleData=AMARETTOinit$MA_matrix_Var[AMARETTOresults$ModuleMembership==ModuleNr,]
number_of_samples=length(colnames(ModuleData))
HTML('<div class="col-sm-3">',file=tmpfic)
HTML('</div>',file=tmpfic)
HTML('<div class="col-sm-6">',file=tmpfic)
HTML(paste('<p class=".text-success text-right"> # of samples = ',as.character(number_of_samples),'</p>'))
HTML('<br /><br />')
HTML(table_command1,file=tmpfic)
amaretto_result_table_header<-c('Module_No','number_of_target_genes','number_of_regulator_genes','number_of_significant_gene_overlappings')
amaretto_result_table<-c()
for (ModuleNr in 1:NrModules )
{
module_name=paste("module",as.character(ModuleNr),sep="")
###################### find module info
ModuleData=AMARETTOinit$MA_matrix_Var[AMARETTOresults$ModuleMembership==ModuleNr,]
currentRegulators = AMARETTOresults$AllRegulators[which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
RegulatorData=AMARETTOinit$RegulatorData[currentRegulators,]
module_regulators_weights=AMARETTOresults$RegulatoryPrograms[ModuleNr,][which(AMARETTOresults$RegulatoryPrograms[ModuleNr,] != 0)]
ModuleGenes=rownames(ModuleData)
RegulatoryGenes=rownames(RegulatorData)
number_of_genes=length(rownames(ModuleData))
number_of_regulators=length(currentRegulators)
number_of_samples=length(colnames(ModuleData))
######################### creating Link for each module ####################
address='./htmls'
htmladdress=paste("'",address,"/module",as.character(ModuleNr),".html","'",sep="")
link_command=paste("<a href=",htmladdress,'>',module_name,'</a>',sep="")
#HTML(link_command,file=tmpfic)
###########################################################################
HTML('<tr>',file=tmpfic)
HTML(paste('<td class="align-middle">',link_command,'</td>'),file=tmpfic)
HTML(paste('<td class="align-middle">',as.character(number_of_genes),'</td>'),file=tmpfic)
HTML(paste('<td class="align-middle">',as.character(number_of_regulators),'</td>'),file=tmpfic)
HTML(paste('<td class="align-middle">',as.character( number_of_significant_gene_overlappings[ModuleNr]),'</td>'),file=tmpfic)
#HTML(paste('<td>',as.character(number_of_samples),'</td>'),file=tmpfic)
HTML('</tr>',file=tmpfic)
rr<-c(module_name,as.character(number_of_genes),as.character(number_of_regulators),as.character( number_of_significant_gene_overlappings[ModuleNr]))
amaretto_result_table<-rbind(amaretto_result_table,rr)
}
colnames(amaretto_result_table)<-amaretto_result_table_header
outfile=paste('./report_html/htmls/tables/amaretto','.tsv',sep='')
write.table(amaretto_result_table,file=outfile,sep='\t',quote=F,col.names=T,row.names=F)
HTML('</tbody></table>',file=tmpfic)
HTML('</div>',file=tmpfic)
HTML('<div class="col-sm-3">',file=tmpfic)
HTML('</div>',file=tmpfic)
####################################### #######################################
HTML('</div>',file=tmpfic)
# HTMLEndFile()
#################################################################################[####
zip(zipfile = 'reportZip', files = './report_html')
###############################
}
########################################################################################################################################
########################################################################################################################################
########################################################################################################################################
########################################################################################################################################
########################################################################################################################################
create_gene_annotations<-function(module_all_genes_data,Module_Genes_names,Module_regulators_weights)
{
all_genes_names=rownames(module_all_genes_data)
targets_bool<-c()
regulators_bool<-c()
regulators_weight<-c()
for (i in 1:length(all_genes_names))
{
gene_name=all_genes_names[i]
a=0
b=0
c=0
if (is.element(gene_name, Module_Genes_names))
{
a<-1
}
if (is.element(gene_name, rownames(Module_regulators_weights)))
{
b<-1
c<-Module_regulators_weights$module_regulators_weights[rownames(Module_regulators_weights)==gene_name]
}
targets_bool<-c(targets_bool,a)
regulators_bool<-c(regulators_bool,b)
regulators_weight<-c(regulators_weight,c)
}
df=data.frame(all_genes_names, targets_bool, regulators_bool,regulators_weight)
return(df)
}
|
#날짜 : 2020/08/04
#이름 : 이태훈
#내용 : 데이터시각화 - 막대그래프 교재 p141
install.packages('ggplot2')
library(ggplot2)
#기본 막대그래프
score <- c(80,72,60,78,82,94)
names(score) <- c('김유신','김춘추','장보고','강감찬','이순신','장약용')
barplot(score)
df_exam <- read.csv('../file/exam.csv')
barplot(df_exam$math)
#ggplot2 막대그래프
qplot(data = df_exam, x=id, y=math, geom = 'col')
| /Ch05/5-3.R | no_license | neogeolee/R | R | false | false | 446 | r | #날짜 : 2020/08/04
#이름 : 이태훈
#내용 : 데이터시각화 - 막대그래프 교재 p141
install.packages('ggplot2')
library(ggplot2)
#기본 막대그래프
score <- c(80,72,60,78,82,94)
names(score) <- c('김유신','김춘추','장보고','강감찬','이순신','장약용')
barplot(score)
df_exam <- read.csv('../file/exam.csv')
barplot(df_exam$math)
#ggplot2 막대그래프
qplot(data = df_exam, x=id, y=math, geom = 'col')
|
#__________________________________________________________________________________________________
# TO DO :
# merge write_genetic_map and write_blocs functions
# merge get_augmented_genetic_map and get_blocs functions
#__________________________________________________________________________________________________
.onLoad <- function(libname, pkgname) {
gwhapConfig = list()
# toy interpolated 1000 genomes
f1 = system.file("extdata", "chr1.interpolated_genetic_map.gz", package="gwhap", mustWork=TRUE)
f2 = system.file("extdata", "chr2.interpolated_genetic_map.gz", package="gwhap", mustWork=TRUE)
chr = list(1, 2)
names(chr) = c(f1, f2)
filepaths = c(f1, f2)
encodings = list("cM"="cM", "position"="bp","chr"=chr, "format"="table")
gwhapConfig[["genmap_toy_interpolated_1000"]] = list(filepaths=filepaths, encodings=encodings)
# toy reference 1000 genomes
f1 = system.file("extdata", "chr1_1000_Genome.txt",package="gwhap", mustWork=TRUE)
chr = list(1)
names(chr) = c(f1)
filepaths = c(f1)
encodings = list("cM"="Genetic_Map(cM)", "position"="position", "chr"=chr, "format"="table")
gwhapConfig[["genmap_toy_reference_1000"]] = list(filepaths=filepaths, encodings=encodings)
# toy rutger
f1 = system.file("extdata", "RUMapv3_B137_chr1.txt", package="gwhap", mustWork=TRUE)
chr=list(1)
names(chr) = f1
filepaths=c(f1)
encodings = list("cM"="Sex_averaged_start_map_position",
"position"="Build37_start_physical_position",
"chr"=chr,
"format"="bgen")
gwhapConfig[["genmap_toy_rutger"]] = list(filepaths=filepaths, encodings=encodings)
# toy flat(bim/plink) file to contain SNP physical position
filepaths = c(system.file("extdata", "example.bim", package="gwhap", mustWork=TRUE))
encodings = list('snp'=2, 'position'=3, 'chr'=1, "format"="table")
gwhapConfig[["snpbucket_toy_flat"]] = list(filepaths=filepaths, encodings=encodings)
# toy bgen file to contain SNP physical position
f1 = system.file("extdata", "ukb_chr1_v2.bgen.bgi", package="gwhap", mustWork=TRUE)
filepaths = c(f1)
chr = list(1)
names(chr) = c(f1)
encodings = list('snp'='snp', 'position'='position', 'chr'=chr, "format"="bgen")
gwhapConfig[["snpbucket_toy_bgen"]] = list(filepaths=filepaths, encodings=encodings)
assign("gwhapConfig", gwhapConfig, envir = parent.env(environment()))
}
# access this global varaible with
# gwhap:::gwhapConfig
#' Read the genetic map
#'
#' @description Read the genetic map
#'
#' @param genetic_map_dir A path to a dir containing the maps
#' @param chromosomes list of integer representing the chromosome that one want to read
#'
#' @return List representing the genetic map loaded.
#' @export
#'
get_genetic_map <- function(genetic_map_dir, chromosomes=1:23) {
gen_map = list()
for (chr in 1:23){
# read the rutgers map
chr_map = get_rutgers_map(sprintf('%s/RUMapv3_B137_chr%s.txt', genetic_map_dir, chr))
# append to gen_map list
gen_map[[chr]] = data.frame(cM=chr_map$cM, pos=chr_map$position, rsid=chr_map$rsid, chr=chr)
}
return(gen_map)
}
#' Get bim file
#'
#' @description Get bim file
#'
#' @param file_path A path to a file with tablulation as a delimiter
#'
#' @return The .bim file in a tibbles structure (data frame).
#' @import readr
#' @export
#'
get_bim_file <- function(file_path){
return(read_delim(file_path, delim='\t', col_names=c('chromosome', 'snp', 'bp')))
}
#' @export
#'
getAnnotVariants <- function(obj)
UseMethod("getAnnotVariants", obj)
#' @export
#'
getAnnotVariants.default <- function(obj) {
stop("getAnnotVariants not defined on this object")
}
#' @export
#'
getAnnotVariants.vcf <- function(obj) {
stop("getAnnotVariants not defined on this object")
}
#' @export
#'
getAnnotVariants.bgen <- function(obj) {
return(obj[["annot_variants"]])
}
#' @export
#'
getInternalIID <- function(obj)
UseMethod("getInternalIID", obj)
#' @export
#'
getInternalIID.default <- function(obj) {
stop("getInternalIID not defined on this object")
}
#' @export
#'
getInternalIID.vcf <- function(obj) {
stop("getInternalIID not defined on this object")
}
#' @export
#'
getInternalIID.bgen <- function(obj) {
return(obj[["annot_internalIID"]])
}
#' Get bgi file
#'
#' @description Get bgi file
#'
#' @param file_path A path to a .bgi file
#'
#' @return data frame with the following columns: chromosome, position, rsid, number_of_alleles, allele1, allele2, file_start_position size_in_bytes
#' @import DBI
#' @export
#'
get_bgi_file <- function(file_path){
# create a connection to the data base managment system
con = (dbConnect(RSQLite::SQLite(), file_path))
# read the variant table and store it as a data frame structure
bgi_dataframe = data.frame(dbReadTable(con, "Variant"))
# close the connection and frees ressources
dbDisconnect(con)
return(bgi_dataframe)
}
#' Get bgen file
#' @description Get bgen file
#'
#' @param file_path A path to a .bgen file
#' @param start the start genomic position
#' @param end the end genomic position
#' @param chromosome String. The chromosome code. '' by default
#' @param max_entries_per_sample An integer specifying the maximum number of probabilities expected per variant per sample.
#' This is used to set the third dimension of the data matrix returned. 4 by default.
#' @param samples A character vector specifying the IDs of samples to load data for.
#'
#' @return bgen file loaded in a bgen format
#' @import rbgen
#' @export
#'
get_bgen_file <- function(file_path, start, end, samples=samples, chromosome='', max_entries_per_sample=4){
return(bgen.load(filename = file_path,
data.frame(chromosome=chromosome, start=start, end=end),
samples = samples,
max_entries_per_sample = max_entries_per_sample))
}
#' Write genetic map
#'
#' @description Write genetic map
#'
#' @param output A dir path where the map is saved
#' @param dataframe dataframe representing the augmented genetic map for one chromosome
#' @import utils
#'
#' @export
#'
write_genetic_map <- function(dataframe, output){
write.table(dataframe, output, sep="\t", row.names=FALSE, quote=FALSE)
}
#' Get blocs
#'
#' @description Get blocs
#'
#' @param blocs_dir A path to the blocs dir
#' @param chromosomes A list of chromosomes that one want to read
#' @import readr
#'
#' @return the blocs concatenated into a data table structure
#' @export
#'
get_blocs <- function(blocs_dir, chromosomes=1:22){
blocs_df = c()
for (chr in chromosomes){
blocs_chr = sprintf('%s/blocs_chr%s.txt', blocs_dir, chr)
print(blocs_chr)
if(file.exists(blocs_chr)){blocs_df = rbind(blocs_df, read_delim(blocs_chr, delim='\t'))}
}
return(blocs_df)
}
#' Write blocs
#'
#' @description Write blocs
#'
#' @param dataframe dataframe representing the blocs created for one chromosome
#' @param output A dir path where the blocs are saved
#' @import utils
#'
#' @export
#'
write_blocs <- function(dataframe, output){
write.table(dataframe, output, sep="\t", row.names=FALSE, quote=FALSE)
}
#' Save haplotypes
#'
#' @description Save haplotypes per chromosome. Each rows represent the subject with their IID as index.
#' Each column represent the haplotypes name that basicaly contain the follow information chromosome code, bloc start bp, end bloc bp and the haplotypes code
#' @param haplotype_combined haplotype dataframe. The rows correspond to the subject while the column correspond to the haplotypes name
#' @param chromosome chromosome code
#' @param output A dir path where the haplotypes are saved
#'
#' @return None
#' @import utils
#' @export
#'
save_haplotypes <- function(haplotype_combined, chromosome, output){
# set the output path TOFIX
#haplotype_combined_path = sprintf('%s/haplotypes_%s.tsv', output, chromosome)
# remove NA in the column name added by cbind
#colnames(haplotype_combined) = vapply(strsplit(colnames(haplotype_combined),"[.]"), `[`, 2, FUN.VALUE=character(1))
# save the haplotype as tsv file
#write.table(haplotype_combined, haplotype_combined_path, sep="\t", row.names=TRUE, quote=FALSE)
# save the haplotypes as .RData
saveRDS(haplotype_combined, file=sprintf('%s/haplotypes_%s.RDS', output, chromosome), compress=T)
}
#' Load haplotypes
#'
#' @description Load haplotypes per chromosome.See save_haplotypes
#' @param output A dir path where the haplotypes are saved
#'
#' @return haplotype_combined haplotype dataframe
#' @import utils
#' @export
#'
load_haplotypes <- function(chromosome, dirpath){
return(readRDS(sprintf('%s/haplotypes_%s.RDS', dirpath, chromosome)))
}
#' Save tests
#'
#' @description Save haplotypes tests per chromosome. Each rows represent the subject with their IID as index.
#' Each column ...
#' @param haplotype_combined haplotype dataframe. The rows correspond to the subject while the column correspond to the haplotypes name
#' @param chromosome chromosome code
#' @param output A dir path where the haplotypes are saved
#'
#' @return None
#' @import utils
#' @export
#'
save_tests <- function(test, chromosome, output){
write.table(test,
file=file.path(
output,
sprintf('tests_results_chr%d.tsv', chromosome)),
sep="\t", quote=F, row.names=F)
}
#' Summary haplotypes test
#'
#' @description Filter on the results obtained and keep only the significant p values
#' @param test_path A dir path where the tests are saved
#' @param threshold threshold
#' @param verbose silent warning messages. FALSE by default.
#' @import utils
#'
#' @return None
#' @export
#'
summary_haplotypes_test <- function(test_path, threshold = 5e-6, verbose=FALSE){
# silent warning messages
if(verbose == TRUE){options(warn=0)} else{options(warn=-1)}
# init
test_possible = list('bloc_test_results', 'complete_test_results', 'single_test_results')
# init the outputs data frames
bloc_test_results = data.frame()
complete_test_results = data.frame()
single_test_results = data.frame()
# read each test and concatenate it into one dataframe for all blocs and chromosomes
chromosme_test_path = Sys.glob(file.path(test_path, '*'))
# create a summary dir
dir.create(sprintf('%s/summary', test_path))
for(chromosome_path in chromosme_test_path){
for(test in test_possible){
unit_test_path = Sys.glob(file.path(sprintf('%s/%s', chromosome_path, test), '*'))
for(unit_path in unit_test_path){
if(test=='bloc_test_results'){bloc_test_results <- rbind(bloc_test_results, data.frame(read_tsv(unit_path)))}
if(test=='complete_test_results'){complete_test_results <- rbind(complete_test_results, data.frame(read_tsv(unit_path)))}
if(test=='single_test_results'){single_test_results <- rbind(single_test_results, data.frame(read_tsv(unit_path)))}
}
}
}
# filtre on the significant p values
bloc_test_results = bloc_test_results[bloc_test_results$p_value < threshold, ]
complete_test_results = complete_test_results[complete_test_results$p_value < threshold, ]
single_test_results = single_test_results[single_test_results$p_value < threshold, ]
# write the summary
write.table(bloc_test_results, sprintf('%s/summary/bloc_test_results.tsv', test_path), sep="\t", row.names=FALSE, quote=FALSE)
write.table(complete_test_results, sprintf('%s/summary/complete_test_results.tsv', test_path), sep="\t", row.names=FALSE, quote=FALSE)
write.table(single_test_results, sprintf('%s/summary/single_test_results.tsv', test_path), sep="\t", row.names=FALSE, quote=FALSE)
}
#' Download rutgers maps
#'
#' @description download rutgers maps using the following url : http://compgen.rutgers.edu/downloads/rutgers_map_v3.zip
#'
#' @return None
#' @export
download_rutgers_map <- function(){
# dont use linux command
# use the native R cmd instead
# download the rutgers map
system('wget http://compgen.rutgers.edu/downloads/rutgers_map_v3.zip')
# unzip
system('unzip rutgers_map_v3.zip')
# remove the zip file
system('rm rutgers_map_v3.zip')
}
#' Create a S3 object ready to be queried from a haps file
#'
#' @param bgen_filename : full path name to the bgen file of the phased data
#' @return phased_data_loader : the genetetic mapin genMap format
#'
#' @import rbgen
#'
#' @export
phased_data_loader.haps <- function(haps_filename) {
# check the existence of haps_filename file
# TODO
# read 2 flavors of haps file with 1 or 2 cols describing the snps
sep = " "
hap_field_num = count.fields(haps_filename, sep=sep)[1]
phased_data = read_table(haps_filename, col_names=FALSE)
if ((hap_field_num%%2) == 0){
phased_data = phased_data[-2]
}
samples_num = (length(colnames(phased_data)) - 5)/2
tmp = sprintf("sample_%d", 0:(samples_num-1))
new_col_names = c(c('chrom', 'rsid', 'pos', 'allele_1', 'allele_2'),
unlist(lapply(tmp, function(s) sprintf("%s_strand%d", s, 1:2))))
colnames(phased_data) <- new_col_names
ret_obj <- list(phased_data=phased_data, is_phased=TRUE, full_fname_haps=haps_filename)
class(ret_obj) <- c(class(ret_obj), "phased", "haps")
return(ret_obj)
}
#' Create a S3 object ready to be queried from a bgen file
#'
#' @param bgen_filename : full path name to the bgen file of the phased data
#' @return phased_data_loader : the genetetic mapin genMap format
#'
#' @import rbgen
#'
#' @export
phased_data_loader.bgen <- function(bgen_filename) {
# silent warning messages
options(warn=-1)
# check the existence of bgen.bgi file
# TODO
# get the annotation
full_fname_bgi=sprintf("%s.bgi", bgen_filename)
annot_variants = get_bgi_file(full_fname_bgi)
# open and check that data are phased
data = get_bgen_file(file_path = bgen_filename,
start = annot_variants$position[1],
end = annot_variants$position[1],
samples = c(),
chromosome = '',
max_entries_per_sample = 4)
# print(str(data))
annot_internalIID <-data$samples
# In ukb chromosome names is not in the bgen/bgi :degenerated FLAG
chrom_name_degenerated = FALSE
if (unique(annot_variants$chromosome) == "") {
chrom_name_degenerated = TRUE
}
ret_obj = list(full_fname_bgen=bgen_filename,
is_phased=TRUE,
max_entries=4,
annot_internalIID=annot_internalIID,
annot_variants=annot_variants,
full_fname_bgi=full_fname_bgi,
chrom_name_degenerated=chrom_name_degenerated)
# create S3 object
class(ret_obj) <- c(class(ret_obj), "phased", "bgen")
return(ret_obj)
}
| /R/utils.R | no_license | yasmina-mekki/gwhap | R | false | false | 14,818 | r | #__________________________________________________________________________________________________
# TO DO :
# merge write_genetic_map and write_blocs functions
# merge get_augmented_genetic_map and get_blocs functions
#__________________________________________________________________________________________________
.onLoad <- function(libname, pkgname) {
gwhapConfig = list()
# toy interpolated 1000 genomes
f1 = system.file("extdata", "chr1.interpolated_genetic_map.gz", package="gwhap", mustWork=TRUE)
f2 = system.file("extdata", "chr2.interpolated_genetic_map.gz", package="gwhap", mustWork=TRUE)
chr = list(1, 2)
names(chr) = c(f1, f2)
filepaths = c(f1, f2)
encodings = list("cM"="cM", "position"="bp","chr"=chr, "format"="table")
gwhapConfig[["genmap_toy_interpolated_1000"]] = list(filepaths=filepaths, encodings=encodings)
# toy reference 1000 genomes
f1 = system.file("extdata", "chr1_1000_Genome.txt",package="gwhap", mustWork=TRUE)
chr = list(1)
names(chr) = c(f1)
filepaths = c(f1)
encodings = list("cM"="Genetic_Map(cM)", "position"="position", "chr"=chr, "format"="table")
gwhapConfig[["genmap_toy_reference_1000"]] = list(filepaths=filepaths, encodings=encodings)
# toy rutger
f1 = system.file("extdata", "RUMapv3_B137_chr1.txt", package="gwhap", mustWork=TRUE)
chr=list(1)
names(chr) = f1
filepaths=c(f1)
encodings = list("cM"="Sex_averaged_start_map_position",
"position"="Build37_start_physical_position",
"chr"=chr,
"format"="bgen")
gwhapConfig[["genmap_toy_rutger"]] = list(filepaths=filepaths, encodings=encodings)
# toy flat(bim/plink) file to contain SNP physical position
filepaths = c(system.file("extdata", "example.bim", package="gwhap", mustWork=TRUE))
encodings = list('snp'=2, 'position'=3, 'chr'=1, "format"="table")
gwhapConfig[["snpbucket_toy_flat"]] = list(filepaths=filepaths, encodings=encodings)
# toy bgen file to contain SNP physical position
f1 = system.file("extdata", "ukb_chr1_v2.bgen.bgi", package="gwhap", mustWork=TRUE)
filepaths = c(f1)
chr = list(1)
names(chr) = c(f1)
encodings = list('snp'='snp', 'position'='position', 'chr'=chr, "format"="bgen")
gwhapConfig[["snpbucket_toy_bgen"]] = list(filepaths=filepaths, encodings=encodings)
assign("gwhapConfig", gwhapConfig, envir = parent.env(environment()))
}
# access this global varaible with
# gwhap:::gwhapConfig
#' Read the genetic map
#'
#' @description Read the genetic map
#'
#' @param genetic_map_dir A path to a dir containing the maps
#' @param chromosomes list of integer representing the chromosome that one want to read
#'
#' @return List representing the genetic map loaded.
#' @export
#'
get_genetic_map <- function(genetic_map_dir, chromosomes=1:23) {
gen_map = list()
for (chr in 1:23){
# read the rutgers map
chr_map = get_rutgers_map(sprintf('%s/RUMapv3_B137_chr%s.txt', genetic_map_dir, chr))
# append to gen_map list
gen_map[[chr]] = data.frame(cM=chr_map$cM, pos=chr_map$position, rsid=chr_map$rsid, chr=chr)
}
return(gen_map)
}
#' Get bim file
#'
#' @description Get bim file
#'
#' @param file_path A path to a file with tablulation as a delimiter
#'
#' @return The .bim file in a tibbles structure (data frame).
#' @import readr
#' @export
#'
get_bim_file <- function(file_path){
return(read_delim(file_path, delim='\t', col_names=c('chromosome', 'snp', 'bp')))
}
#' @export
#'
getAnnotVariants <- function(obj)
UseMethod("getAnnotVariants", obj)
#' @export
#'
getAnnotVariants.default <- function(obj) {
stop("getAnnotVariants not defined on this object")
}
#' @export
#'
getAnnotVariants.vcf <- function(obj) {
stop("getAnnotVariants not defined on this object")
}
#' @export
#'
getAnnotVariants.bgen <- function(obj) {
return(obj[["annot_variants"]])
}
#' @export
#'
getInternalIID <- function(obj)
UseMethod("getInternalIID", obj)
#' @export
#'
getInternalIID.default <- function(obj) {
stop("getInternalIID not defined on this object")
}
#' @export
#'
getInternalIID.vcf <- function(obj) {
stop("getInternalIID not defined on this object")
}
#' @export
#'
getInternalIID.bgen <- function(obj) {
return(obj[["annot_internalIID"]])
}
#' Get bgi file
#'
#' @description Get bgi file
#'
#' @param file_path A path to a .bgi file
#'
#' @return data frame with the following columns: chromosome, position, rsid, number_of_alleles, allele1, allele2, file_start_position size_in_bytes
#' @import DBI
#' @export
#'
get_bgi_file <- function(file_path){
# create a connection to the data base managment system
con = (dbConnect(RSQLite::SQLite(), file_path))
# read the variant table and store it as a data frame structure
bgi_dataframe = data.frame(dbReadTable(con, "Variant"))
# close the connection and frees ressources
dbDisconnect(con)
return(bgi_dataframe)
}
#' Get bgen file
#' @description Get bgen file
#'
#' @param file_path A path to a .bgen file
#' @param start the start genomic position
#' @param end the end genomic position
#' @param chromosome String. The chromosome code. '' by default
#' @param max_entries_per_sample An integer specifying the maximum number of probabilities expected per variant per sample.
#' This is used to set the third dimension of the data matrix returned. 4 by default.
#' @param samples A character vector specifying the IDs of samples to load data for.
#'
#' @return bgen file loaded in a bgen format
#' @import rbgen
#' @export
#'
get_bgen_file <- function(file_path, start, end, samples=samples, chromosome='', max_entries_per_sample=4){
return(bgen.load(filename = file_path,
data.frame(chromosome=chromosome, start=start, end=end),
samples = samples,
max_entries_per_sample = max_entries_per_sample))
}
#' Write genetic map
#'
#' @description Write genetic map
#'
#' @param output A dir path where the map is saved
#' @param dataframe dataframe representing the augmented genetic map for one chromosome
#' @import utils
#'
#' @export
#'
write_genetic_map <- function(dataframe, output){
write.table(dataframe, output, sep="\t", row.names=FALSE, quote=FALSE)
}
#' Get blocs
#'
#' @description Get blocs
#'
#' @param blocs_dir A path to the blocs dir
#' @param chromosomes A list of chromosomes that one want to read
#' @import readr
#'
#' @return the blocs concatenated into a data table structure
#' @export
#'
get_blocs <- function(blocs_dir, chromosomes=1:22){
blocs_df = c()
for (chr in chromosomes){
blocs_chr = sprintf('%s/blocs_chr%s.txt', blocs_dir, chr)
print(blocs_chr)
if(file.exists(blocs_chr)){blocs_df = rbind(blocs_df, read_delim(blocs_chr, delim='\t'))}
}
return(blocs_df)
}
#' Write blocs
#'
#' @description Write blocs
#'
#' @param dataframe dataframe representing the blocs created for one chromosome
#' @param output A dir path where the blocs are saved
#' @import utils
#'
#' @export
#'
write_blocs <- function(dataframe, output){
write.table(dataframe, output, sep="\t", row.names=FALSE, quote=FALSE)
}
#' Save haplotypes
#'
#' @description Save haplotypes per chromosome. Each rows represent the subject with their IID as index.
#' Each column represent the haplotypes name that basicaly contain the follow information chromosome code, bloc start bp, end bloc bp and the haplotypes code
#' @param haplotype_combined haplotype dataframe. The rows correspond to the subject while the column correspond to the haplotypes name
#' @param chromosome chromosome code
#' @param output A dir path where the haplotypes are saved
#'
#' @return None
#' @import utils
#' @export
#'
save_haplotypes <- function(haplotype_combined, chromosome, output){
# set the output path TOFIX
#haplotype_combined_path = sprintf('%s/haplotypes_%s.tsv', output, chromosome)
# remove NA in the column name added by cbind
#colnames(haplotype_combined) = vapply(strsplit(colnames(haplotype_combined),"[.]"), `[`, 2, FUN.VALUE=character(1))
# save the haplotype as tsv file
#write.table(haplotype_combined, haplotype_combined_path, sep="\t", row.names=TRUE, quote=FALSE)
# save the haplotypes as .RData
saveRDS(haplotype_combined, file=sprintf('%s/haplotypes_%s.RDS', output, chromosome), compress=T)
}
#' Load haplotypes
#'
#' @description Load haplotypes per chromosome.See save_haplotypes
#' @param output A dir path where the haplotypes are saved
#'
#' @return haplotype_combined haplotype dataframe
#' @import utils
#' @export
#'
load_haplotypes <- function(chromosome, dirpath){
return(readRDS(sprintf('%s/haplotypes_%s.RDS', dirpath, chromosome)))
}
#' Save tests
#'
#' @description Save haplotypes tests per chromosome. Each rows represent the subject with their IID as index.
#' Each column ...
#' @param haplotype_combined haplotype dataframe. The rows correspond to the subject while the column correspond to the haplotypes name
#' @param chromosome chromosome code
#' @param output A dir path where the haplotypes are saved
#'
#' @return None
#' @import utils
#' @export
#'
save_tests <- function(test, chromosome, output){
write.table(test,
file=file.path(
output,
sprintf('tests_results_chr%d.tsv', chromosome)),
sep="\t", quote=F, row.names=F)
}
#' Summary haplotypes test
#'
#' @description Filter on the results obtained and keep only the significant p values
#' @param test_path A dir path where the tests are saved
#' @param threshold threshold
#' @param verbose silent warning messages. FALSE by default.
#' @import utils
#'
#' @return None
#' @export
#'
summary_haplotypes_test <- function(test_path, threshold = 5e-6, verbose=FALSE){
# silent warning messages
if(verbose == TRUE){options(warn=0)} else{options(warn=-1)}
# init
test_possible = list('bloc_test_results', 'complete_test_results', 'single_test_results')
# init the outputs data frames
bloc_test_results = data.frame()
complete_test_results = data.frame()
single_test_results = data.frame()
# read each test and concatenate it into one dataframe for all blocs and chromosomes
chromosme_test_path = Sys.glob(file.path(test_path, '*'))
# create a summary dir
dir.create(sprintf('%s/summary', test_path))
for(chromosome_path in chromosme_test_path){
for(test in test_possible){
unit_test_path = Sys.glob(file.path(sprintf('%s/%s', chromosome_path, test), '*'))
for(unit_path in unit_test_path){
if(test=='bloc_test_results'){bloc_test_results <- rbind(bloc_test_results, data.frame(read_tsv(unit_path)))}
if(test=='complete_test_results'){complete_test_results <- rbind(complete_test_results, data.frame(read_tsv(unit_path)))}
if(test=='single_test_results'){single_test_results <- rbind(single_test_results, data.frame(read_tsv(unit_path)))}
}
}
}
# filtre on the significant p values
bloc_test_results = bloc_test_results[bloc_test_results$p_value < threshold, ]
complete_test_results = complete_test_results[complete_test_results$p_value < threshold, ]
single_test_results = single_test_results[single_test_results$p_value < threshold, ]
# write the summary
write.table(bloc_test_results, sprintf('%s/summary/bloc_test_results.tsv', test_path), sep="\t", row.names=FALSE, quote=FALSE)
write.table(complete_test_results, sprintf('%s/summary/complete_test_results.tsv', test_path), sep="\t", row.names=FALSE, quote=FALSE)
write.table(single_test_results, sprintf('%s/summary/single_test_results.tsv', test_path), sep="\t", row.names=FALSE, quote=FALSE)
}
#' Download rutgers maps
#'
#' @description download rutgers maps using the following url : http://compgen.rutgers.edu/downloads/rutgers_map_v3.zip
#'
#' @return None
#' @export
download_rutgers_map <- function(){
# dont use linux command
# use the native R cmd instead
# download the rutgers map
system('wget http://compgen.rutgers.edu/downloads/rutgers_map_v3.zip')
# unzip
system('unzip rutgers_map_v3.zip')
# remove the zip file
system('rm rutgers_map_v3.zip')
}
#' Create a S3 object ready to be queried from a haps file
#'
#' @param bgen_filename : full path name to the bgen file of the phased data
#' @return phased_data_loader : the genetetic mapin genMap format
#'
#' @import rbgen
#'
#' @export
phased_data_loader.haps <- function(haps_filename) {
# check the existence of haps_filename file
# TODO
# read 2 flavors of haps file with 1 or 2 cols describing the snps
sep = " "
hap_field_num = count.fields(haps_filename, sep=sep)[1]
phased_data = read_table(haps_filename, col_names=FALSE)
if ((hap_field_num%%2) == 0){
phased_data = phased_data[-2]
}
samples_num = (length(colnames(phased_data)) - 5)/2
tmp = sprintf("sample_%d", 0:(samples_num-1))
new_col_names = c(c('chrom', 'rsid', 'pos', 'allele_1', 'allele_2'),
unlist(lapply(tmp, function(s) sprintf("%s_strand%d", s, 1:2))))
colnames(phased_data) <- new_col_names
ret_obj <- list(phased_data=phased_data, is_phased=TRUE, full_fname_haps=haps_filename)
class(ret_obj) <- c(class(ret_obj), "phased", "haps")
return(ret_obj)
}
#' Create a S3 object ready to be queried from a bgen file
#'
#' @param bgen_filename : full path name to the bgen file of the phased data
#' @return phased_data_loader : the genetetic mapin genMap format
#'
#' @import rbgen
#'
#' @export
phased_data_loader.bgen <- function(bgen_filename) {
# silent warning messages
options(warn=-1)
# check the existence of bgen.bgi file
# TODO
# get the annotation
full_fname_bgi=sprintf("%s.bgi", bgen_filename)
annot_variants = get_bgi_file(full_fname_bgi)
# open and check that data are phased
data = get_bgen_file(file_path = bgen_filename,
start = annot_variants$position[1],
end = annot_variants$position[1],
samples = c(),
chromosome = '',
max_entries_per_sample = 4)
# print(str(data))
annot_internalIID <-data$samples
# In ukb chromosome names is not in the bgen/bgi :degenerated FLAG
chrom_name_degenerated = FALSE
if (unique(annot_variants$chromosome) == "") {
chrom_name_degenerated = TRUE
}
ret_obj = list(full_fname_bgen=bgen_filename,
is_phased=TRUE,
max_entries=4,
annot_internalIID=annot_internalIID,
annot_variants=annot_variants,
full_fname_bgi=full_fname_bgi,
chrom_name_degenerated=chrom_name_degenerated)
# create S3 object
class(ret_obj) <- c(class(ret_obj), "phased", "bgen")
return(ret_obj)
}
|
library(tidyr)
library(dplyr)
SCR = "~/CS_SCR/"
DEPS = paste(SCR,"/deps/", sep="")
#DEPS = "/u/scr/mhahn/deps/"
data = read.csv(paste(DEPS, "DLM_MEMORY_OPTIMIZED/locality_optimized_dlm/manual_output_funchead_fine_depl", "/", "auto-summary-lstm_2.6.tsv", sep=""), sep="\t")
dataBackup = data
data = data %>% filter(HeadPOS == "VERB", DependentPOS == "NOUN") %>% select(-HeadPOS, -DependentPOS)
# OldEnglish: OSSameSide 0.769, OSSameSide_Real_Prob 0.49
#DLM_MEMORY_OPTIMIZED/locality_optimized_dlm/manual_output_funchead_fine_depl
#(base) mhahn@sc:~/scr/CODE/optimization-landscapes/optimizeDLM/OldEnglish$ ls output/
#ISWOC_Old_English_inferWeights_PerText.py_model_9104261.tsv
dataO = data %>% filter(CoarseDependency == "obj")
dataS = data %>% filter(CoarseDependency == "nsubj")
data = merge(dataO, dataS, by=c("Language", "FileName"))
data = data %>% mutate(OFartherThanS = (DistanceWeight.x > DistanceWeight.y))
data = data %>% mutate(OSSameSide = (sign(DH_Weight.x) == sign(DH_Weight.y)))
data = data %>% mutate(Order = ifelse(OSSameSide & OFartherThanS, "VSO", ifelse(OSSameSide, "SOV", "SVO")))
families = read.csv("families.tsv", sep="\t")
data = merge(data, families, by=c("Language"))
u = data %>% group_by(Language, Family) %>% summarise(OSSameSide = mean(OSSameSide))
print(u[order(u$OSSameSide),], n=60)
sigmoid = function(x) {
return(1/(1+exp(-x)))
}
real = read.csv(paste(SCR,"/deps/LANDSCAPE/mle-fine_selected/auto-summary-lstm_2.6.tsv", sep=""), sep="\t")
realO = real %>% filter(Dependency == "obj")
realS = real %>% filter(Dependency == "nsubj")
real = merge(realO, realS, by=c("Language", "FileName", "ModelName"))
real = real %>% mutate(OFartherThanS_Real = (Distance_Mean_NoPunct.x > Distance_Mean_NoPunct.y))
real = real %>% mutate(OSSameSide_Real = (sign(DH_Mean_NoPunct.x) == sign(DH_Mean_NoPunct.y)))
real = real %>% mutate(OSSameSide_Real_Prob = (sigmoid(DH_Mean_NoPunct.x) * sigmoid(DH_Mean_NoPunct.y)) + ((1-sigmoid(DH_Mean_NoPunct.x)) * (1-sigmoid(DH_Mean_NoPunct.y))))
real = real %>% mutate(Order_Real = ifelse(OSSameSide_Real & OFartherThanS_Real, "VSO", ifelse(OSSameSide_Real, "SOV", "SVO")))
u = merge(u, real %>% select(Language, OSSameSide_Real, OSSameSide_Real_Prob), by=c("Language"))
data = merge(data, real, by=c("Language"))
data$OSSameSide_Real_Prob_Log = log(data$OSSameSide_Real_Prob)
#########################
#########################
u = rbind(u, data.frame(Language=c("ISWOC_Old_English"), Family=c("Germanic"), OSSameSide = c(0.769), OSSameSide_Real = c(TRUE), OSSameSide_Real_Prob = c(0.49)))
u = rbind(u, data.frame(Language=c("Archaic_Greek"), Family=c("Germanic"), OSSameSide = c(0.8), OSSameSide_Real = c(TRUE), OSSameSide_Real_Prob = c(0.56)))
u = rbind(u, data.frame(Language=c("Classical_Greek"), Family=c("Germanic"), OSSameSide = c(0.53), OSSameSide_Real = c(TRUE), OSSameSide_Real_Prob = c(0.52)))
u = rbind(u, data.frame(Language=c("Koine_Greek"), Family=c("Germanic"), OSSameSide = c(0.67), OSSameSide_Real = c(TRUE), OSSameSide_Real_Prob = c(0.47)))
# OldEnglish: OSSameSide 0.769, OSSameSide_Real_Prob 0.49
u$Ancient = (u$Language %in% c("Classical_Chinese_2.6", "Latin_2.6", "Sanskrit_2.6", "Old_Church_Slavonic_2.6", "Old_Russian_2.6", "Ancient_Greek_2.6", "ISWOC_Old_English"))
u$Medieval = (u$Language %in% c("Old_French_2.6", "ISWOC_Spanish"))
u$Age = ifelse(u$Ancient, -1, ifelse(u$Medieval, 0, 1))
u = u[order(u$Age),]
u$Language = factor(u$Language, levels=u$Language)
uMandarin = u %>% filter(Language %in% c("Classical_Chinese_2.6", "Chinese_2.6")) %>% mutate(Group="Chinese (Mandarin)") %>% mutate(Time = ifelse(Age == -1, "-400", "+2000"))
u2Mandarin = uMandarin %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uCantonese = u %>% filter(Language %in% c("Classical_Chinese_2.6", "Cantonese_2.6")) %>% mutate(Group="Chinese (Cantonese)") %>% mutate(Time = ifelse(Age == -1, "-400", "+2000"))
u2Cantonese = uCantonese %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uEnglish = u %>% filter(Language %in% c("ISWOC_Old_English", "English_2.6")) %>% mutate(Group="English") %>% mutate(Time = ifelse(Age == -1, "+900", "+2000"))
u2English = uEnglish %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uFrench = u %>% filter(Language %in% c("Old_French_2.6", "French_2.6")) %>% mutate(Group="French") %>% mutate(Time = ifelse(Age == -1, "+0", ifelse(Age==0, "+1200", "+2000")))
u2French = uFrench %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uOldFrench = u %>% filter(Language %in% c("Latin_2.6", "Old_French_2.6")) %>% mutate(Group="French") %>% mutate(Time = ifelse(Age == -1, "+0", ifelse(Age==0, "+1200", "+2000")))
u2OldFrench = uOldFrench %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uSpanish = u %>% filter(Language %in% c("Latin_2.6", "Spanish_2.6")) %>% mutate(Group="Spanish") %>% mutate(Time = ifelse(Age == -1, "+0", ifelse(Age==0, "+1200", "+2000")))
u2Spanish = uSpanish %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uHindi = u %>% filter(Language %in% c("Sanskrit_2.6", "Hindi_2.6")) %>% mutate(Group="Hindi/Urdu") %>% mutate(Time = ifelse(Age == -1, "-200", ifelse(Age==0, "+1200", "+2000")))
u2Hindi = uHindi %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uUrdu = u %>% filter(Language %in% c("Sanskrit_2.6", "Urdu_2.6")) %>% mutate(Group="Hindi/Urdu") %>% mutate(Time = ifelse(Age == -1, "-200", ifelse(Age==0, "+1200", "+2000")))
u2Urdu = uUrdu %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uBulgarian = u %>% filter(Language %in% c("Old_Church_Slavonic_2.6", "Bulgarian_2.6")) %>% mutate(Group="South Slavic") %>% mutate(Time = ifelse(Age == -1, "+800", ifelse(Age==0, "+1200", "+2000")))
u2Bulgarian = uBulgarian %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uRussian = u %>% filter(Language %in% c("Old_Russian_2.6", "Russian_2.6")) %>% mutate(Group="East Slavic") %>% mutate(Time = ifelse(Age == -1, "+1200", ifelse(Age==0, "+1200", "+2000")))
u2Russian = uRussian %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uGreek1 = u %>% filter(Language %in% c("Archaic_Greek", "Classical_Greek")) %>% mutate(Group="Greek") %>% mutate(Time = ifelse(Language == "Archaic_Greek", "-700", "-400")) %>% mutate(Earlier=ifelse(Language == "Archaic_Greek", TRUE, FALSE))
u2Greek1 = uGreek1 %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uGreek2 = u %>% filter(Language %in% c("Classical_Greek", "Koine_Greek")) %>% mutate(Group="Greek") %>% mutate(Time = ifelse(Language == "Classical_Greek", "-400", "+0")) %>% mutate(Earlier=ifelse(Language == "Classical_Greek", TRUE, FALSE))
u2Greek2 = uGreek2 %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uGreek3 = u %>% filter(Language %in% c("Koine_Greek", "Greek_2.6")) %>% mutate(Group="Greek") %>% mutate(Time = ifelse(Language == "Koine_Greek", "+0", "+2000")) %>% mutate(Earlier=ifelse(Language == "Koine_Greek", TRUE, FALSE))
u2Greek3 = uGreek3 %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
library(ggrepel)
library(ggplot2)
# %>% filter(Language %in% c("Chinese_2.6", "Cantonese_2.6", "Classical_Chinese_2.6", "French_2.6", "Old_French_2.6", "Russian_2.6", "Old_Russian_2.6", "Latin_2.6", "Greek_2.6", "Ancient_Greek_2.6", "Sanskrit_2.6", "Urdu_2.6", "Hindi_2.6", "Spanish_2.6", "Italian_2.6"))
plot = ggplot(u, aes(x=OSSameSide_Real_Prob, y=OSSameSide)) #+ geom_smooth(method="lm")
plot = plot + geom_point(alpha=0.2) + xlab("Fraction of SOV/VSO/OSV... Orders (Real)") + ylab("Fraction of SOV/VSO/OSV... Orders (DLM Optimized)") + xlim(0,1) + ylim(0,1)
plot = plot + geom_segment(data=u2Mandarin, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uMandarin, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Cantonese, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uCantonese, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2French, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uFrench, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2OldFrench, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uOldFrench, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Spanish, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uSpanish, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Hindi, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uHindi, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Urdu, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uUrdu, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Bulgarian, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uBulgarian, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Russian, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uRussian, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2English, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uEnglish, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Greek1, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uGreek1, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Greek2, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uGreek2, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Greek3, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uGreek3, aes(label=Time), color="black")
#ggsave("figures/fracion-optimized_DLM_2.6.pdf", height=13, width=13)
plot = plot + facet_wrap(~Group)
#ggsave("figures/historical_2.6_times.pdf", width=10, height=10)
| /analysis/visualize_historical/ARCHIVE/landscapes_2.6_Historical_Years2.R | no_license | m-hahn/optimization-landscapes | R | false | false | 12,546 | r | library(tidyr)
library(dplyr)
SCR = "~/CS_SCR/"
DEPS = paste(SCR,"/deps/", sep="")
#DEPS = "/u/scr/mhahn/deps/"
data = read.csv(paste(DEPS, "DLM_MEMORY_OPTIMIZED/locality_optimized_dlm/manual_output_funchead_fine_depl", "/", "auto-summary-lstm_2.6.tsv", sep=""), sep="\t")
dataBackup = data
data = data %>% filter(HeadPOS == "VERB", DependentPOS == "NOUN") %>% select(-HeadPOS, -DependentPOS)
# OldEnglish: OSSameSide 0.769, OSSameSide_Real_Prob 0.49
#DLM_MEMORY_OPTIMIZED/locality_optimized_dlm/manual_output_funchead_fine_depl
#(base) mhahn@sc:~/scr/CODE/optimization-landscapes/optimizeDLM/OldEnglish$ ls output/
#ISWOC_Old_English_inferWeights_PerText.py_model_9104261.tsv
dataO = data %>% filter(CoarseDependency == "obj")
dataS = data %>% filter(CoarseDependency == "nsubj")
data = merge(dataO, dataS, by=c("Language", "FileName"))
data = data %>% mutate(OFartherThanS = (DistanceWeight.x > DistanceWeight.y))
data = data %>% mutate(OSSameSide = (sign(DH_Weight.x) == sign(DH_Weight.y)))
data = data %>% mutate(Order = ifelse(OSSameSide & OFartherThanS, "VSO", ifelse(OSSameSide, "SOV", "SVO")))
families = read.csv("families.tsv", sep="\t")
data = merge(data, families, by=c("Language"))
u = data %>% group_by(Language, Family) %>% summarise(OSSameSide = mean(OSSameSide))
print(u[order(u$OSSameSide),], n=60)
sigmoid = function(x) {
return(1/(1+exp(-x)))
}
real = read.csv(paste(SCR,"/deps/LANDSCAPE/mle-fine_selected/auto-summary-lstm_2.6.tsv", sep=""), sep="\t")
realO = real %>% filter(Dependency == "obj")
realS = real %>% filter(Dependency == "nsubj")
real = merge(realO, realS, by=c("Language", "FileName", "ModelName"))
real = real %>% mutate(OFartherThanS_Real = (Distance_Mean_NoPunct.x > Distance_Mean_NoPunct.y))
real = real %>% mutate(OSSameSide_Real = (sign(DH_Mean_NoPunct.x) == sign(DH_Mean_NoPunct.y)))
real = real %>% mutate(OSSameSide_Real_Prob = (sigmoid(DH_Mean_NoPunct.x) * sigmoid(DH_Mean_NoPunct.y)) + ((1-sigmoid(DH_Mean_NoPunct.x)) * (1-sigmoid(DH_Mean_NoPunct.y))))
real = real %>% mutate(Order_Real = ifelse(OSSameSide_Real & OFartherThanS_Real, "VSO", ifelse(OSSameSide_Real, "SOV", "SVO")))
u = merge(u, real %>% select(Language, OSSameSide_Real, OSSameSide_Real_Prob), by=c("Language"))
data = merge(data, real, by=c("Language"))
data$OSSameSide_Real_Prob_Log = log(data$OSSameSide_Real_Prob)
#########################
#########################
u = rbind(u, data.frame(Language=c("ISWOC_Old_English"), Family=c("Germanic"), OSSameSide = c(0.769), OSSameSide_Real = c(TRUE), OSSameSide_Real_Prob = c(0.49)))
u = rbind(u, data.frame(Language=c("Archaic_Greek"), Family=c("Germanic"), OSSameSide = c(0.8), OSSameSide_Real = c(TRUE), OSSameSide_Real_Prob = c(0.56)))
u = rbind(u, data.frame(Language=c("Classical_Greek"), Family=c("Germanic"), OSSameSide = c(0.53), OSSameSide_Real = c(TRUE), OSSameSide_Real_Prob = c(0.52)))
u = rbind(u, data.frame(Language=c("Koine_Greek"), Family=c("Germanic"), OSSameSide = c(0.67), OSSameSide_Real = c(TRUE), OSSameSide_Real_Prob = c(0.47)))
# OldEnglish: OSSameSide 0.769, OSSameSide_Real_Prob 0.49
u$Ancient = (u$Language %in% c("Classical_Chinese_2.6", "Latin_2.6", "Sanskrit_2.6", "Old_Church_Slavonic_2.6", "Old_Russian_2.6", "Ancient_Greek_2.6", "ISWOC_Old_English"))
u$Medieval = (u$Language %in% c("Old_French_2.6", "ISWOC_Spanish"))
u$Age = ifelse(u$Ancient, -1, ifelse(u$Medieval, 0, 1))
u = u[order(u$Age),]
u$Language = factor(u$Language, levels=u$Language)
uMandarin = u %>% filter(Language %in% c("Classical_Chinese_2.6", "Chinese_2.6")) %>% mutate(Group="Chinese (Mandarin)") %>% mutate(Time = ifelse(Age == -1, "-400", "+2000"))
u2Mandarin = uMandarin %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uCantonese = u %>% filter(Language %in% c("Classical_Chinese_2.6", "Cantonese_2.6")) %>% mutate(Group="Chinese (Cantonese)") %>% mutate(Time = ifelse(Age == -1, "-400", "+2000"))
u2Cantonese = uCantonese %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uEnglish = u %>% filter(Language %in% c("ISWOC_Old_English", "English_2.6")) %>% mutate(Group="English") %>% mutate(Time = ifelse(Age == -1, "+900", "+2000"))
u2English = uEnglish %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uFrench = u %>% filter(Language %in% c("Old_French_2.6", "French_2.6")) %>% mutate(Group="French") %>% mutate(Time = ifelse(Age == -1, "+0", ifelse(Age==0, "+1200", "+2000")))
u2French = uFrench %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uOldFrench = u %>% filter(Language %in% c("Latin_2.6", "Old_French_2.6")) %>% mutate(Group="French") %>% mutate(Time = ifelse(Age == -1, "+0", ifelse(Age==0, "+1200", "+2000")))
u2OldFrench = uOldFrench %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uSpanish = u %>% filter(Language %in% c("Latin_2.6", "Spanish_2.6")) %>% mutate(Group="Spanish") %>% mutate(Time = ifelse(Age == -1, "+0", ifelse(Age==0, "+1200", "+2000")))
u2Spanish = uSpanish %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uHindi = u %>% filter(Language %in% c("Sanskrit_2.6", "Hindi_2.6")) %>% mutate(Group="Hindi/Urdu") %>% mutate(Time = ifelse(Age == -1, "-200", ifelse(Age==0, "+1200", "+2000")))
u2Hindi = uHindi %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uUrdu = u %>% filter(Language %in% c("Sanskrit_2.6", "Urdu_2.6")) %>% mutate(Group="Hindi/Urdu") %>% mutate(Time = ifelse(Age == -1, "-200", ifelse(Age==0, "+1200", "+2000")))
u2Urdu = uUrdu %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uBulgarian = u %>% filter(Language %in% c("Old_Church_Slavonic_2.6", "Bulgarian_2.6")) %>% mutate(Group="South Slavic") %>% mutate(Time = ifelse(Age == -1, "+800", ifelse(Age==0, "+1200", "+2000")))
u2Bulgarian = uBulgarian %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uRussian = u %>% filter(Language %in% c("Old_Russian_2.6", "Russian_2.6")) %>% mutate(Group="East Slavic") %>% mutate(Time = ifelse(Age == -1, "+1200", ifelse(Age==0, "+1200", "+2000")))
u2Russian = uRussian %>% mutate(Earlier = (Age == min(Age))) %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uGreek1 = u %>% filter(Language %in% c("Archaic_Greek", "Classical_Greek")) %>% mutate(Group="Greek") %>% mutate(Time = ifelse(Language == "Archaic_Greek", "-700", "-400")) %>% mutate(Earlier=ifelse(Language == "Archaic_Greek", TRUE, FALSE))
u2Greek1 = uGreek1 %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uGreek2 = u %>% filter(Language %in% c("Classical_Greek", "Koine_Greek")) %>% mutate(Group="Greek") %>% mutate(Time = ifelse(Language == "Classical_Greek", "-400", "+0")) %>% mutate(Earlier=ifelse(Language == "Classical_Greek", TRUE, FALSE))
u2Greek2 = uGreek2 %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
uGreek3 = u %>% filter(Language %in% c("Koine_Greek", "Greek_2.6")) %>% mutate(Group="Greek") %>% mutate(Time = ifelse(Language == "Koine_Greek", "+0", "+2000")) %>% mutate(Earlier=ifelse(Language == "Koine_Greek", TRUE, FALSE))
u2Greek3 = uGreek3 %>% select(Group, Earlier, OSSameSide, OSSameSide_Real_Prob) %>% pivot_wider(names_from=Earlier, values_from=c(OSSameSide, OSSameSide_Real_Prob))
library(ggrepel)
library(ggplot2)
# %>% filter(Language %in% c("Chinese_2.6", "Cantonese_2.6", "Classical_Chinese_2.6", "French_2.6", "Old_French_2.6", "Russian_2.6", "Old_Russian_2.6", "Latin_2.6", "Greek_2.6", "Ancient_Greek_2.6", "Sanskrit_2.6", "Urdu_2.6", "Hindi_2.6", "Spanish_2.6", "Italian_2.6"))
plot = ggplot(u, aes(x=OSSameSide_Real_Prob, y=OSSameSide)) #+ geom_smooth(method="lm")
plot = plot + geom_point(alpha=0.2) + xlab("Fraction of SOV/VSO/OSV... Orders (Real)") + ylab("Fraction of SOV/VSO/OSV... Orders (DLM Optimized)") + xlim(0,1) + ylim(0,1)
plot = plot + geom_segment(data=u2Mandarin, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uMandarin, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Cantonese, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uCantonese, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2French, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uFrench, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2OldFrench, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uOldFrench, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Spanish, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uSpanish, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Hindi, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uHindi, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Urdu, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uUrdu, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Bulgarian, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uBulgarian, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Russian, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uRussian, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2English, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uEnglish, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Greek1, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uGreek1, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Greek2, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uGreek2, aes(label=Time), color="black")
plot = plot + geom_segment(data=u2Greek3, aes(x=OSSameSide_Real_Prob_TRUE, xend=OSSameSide_Real_Prob_FALSE, y=OSSameSide_TRUE, yend=OSSameSide_FALSE), arrow=arrow(), size=1, color="blue") + geom_label(data=uGreek3, aes(label=Time), color="black")
#ggsave("figures/fracion-optimized_DLM_2.6.pdf", height=13, width=13)
plot = plot + facet_wrap(~Group)
#ggsave("figures/historical_2.6_times.pdf", width=10, height=10)
|
##' @rdname prepare_results
##' @aliases prepare_results.pca
##' @author Julien Barnier <julien.barnier@@ens-lyon.fr>
##' @seealso \code{\link[ade4]{dudi.pca}}
##' @import dplyr
##' @importFrom tidyr gather
##' @importFrom utils head
##' @export
prepare_results.pca <- function(obj) {
if (!inherits(obj, "pca") || !inherits(obj, "dudi")) stop("obj must be of class dudi and pca")
if (!requireNamespace("ade4", quietly = TRUE)) {
stop("the ade4 package is needed for this function to work.")
}
vars <- obj$co
## Axes names and inertia
axes <- seq_len(ncol(vars))
eig <- obj$eig / sum(obj$eig) * 100
names(axes) <- paste("Axis", axes, paste0("(", head(round(eig, 2), length(axes)),"%)"))
## Eigenvalues
eig <- data.frame(dim = 1:length(eig), percent = eig)
## Inertia
inertia <- ade4::inertia.dudi(obj, row.inertia = TRUE, col.inertia = TRUE)
## Variables coordinates
vars$varname <- rownames(vars)
vars$Type <- "Active"
vars$Class <- "Quantitative"
## Supplementary variables coordinates
if (!is.null(obj$supv)) {
vars.quanti.sup <- obj$supv
vars.quanti.sup$varname <- rownames(vars.quanti.sup)
vars.quanti.sup$Type <- "Supplementary"
vars.quanti.sup$Class <- "Quantitative"
vars <- rbind(vars, vars.quanti.sup)
}
vars <- vars %>% gather(Axis, Coord, starts_with("Comp")) %>%
mutate(Axis = gsub("Comp", "", Axis, fixed = TRUE),
Coord = round(Coord, 3))
## Contributions
tmp <- inertia$col.abs
tmp <- tmp %>% mutate(varname = rownames(tmp),
Type = "Active", Class = "Quantitative") %>%
gather(Axis, Contrib, starts_with("Axis")) %>%
mutate(Axis = gsub("^Axis([0-9]+)$", "\\1", Axis),
Contrib = round(Contrib, 3))
vars <- vars %>% left_join(tmp, by = c("varname", "Type", "Class", "Axis"))
## Cos2
tmp <- abs(inertia$col.rel) / 100
tmp <- tmp %>% mutate(varname = rownames(tmp),
Type = "Active", Class = "Quantitative")
tmp <- tmp %>% gather(Axis, Cos2, starts_with("Axis")) %>%
mutate(Axis = gsub("Axis", "", Axis, fixed = TRUE),
Cos2 = round(Cos2, 3))
vars <- vars %>% left_join(tmp, by = c("varname", "Type", "Class", "Axis"))
vars <- vars %>% rename(Variable = varname)
## Compatibility with FactoMineR for qualitative supplementary variables
vars <- vars %>% mutate(Level = "")
## Individuals coordinates
ind <- obj$li
ind$Name <- rownames(ind)
ind$Type <- "Active"
if (!is.null(obj$supi)) {
tmp_sup <- obj$supi
tmp_sup$Name <- rownames(tmp_sup)
tmp_sup$Type <- "Supplementary"
ind <- ind %>% bind_rows(tmp_sup)
}
ind <- ind %>% gather(Axis, Coord, starts_with("Axis")) %>%
mutate(Axis = gsub("Axis", "", Axis, fixed = TRUE),
Coord = round(Coord, 3))
## Individuals contrib
tmp <- inertia$row.abs
tmp <- tmp %>% mutate(Name = rownames(tmp), Type = "Active") %>%
gather(Axis, Contrib, starts_with("Axis")) %>%
mutate(Axis = gsub("^Axis([0-9]+)$", "\\1", Axis),
Contrib = round(Contrib, 3))
ind <- ind %>% left_join(tmp, by = c("Name", "Type", "Axis"))
## Individuals Cos2
tmp <- abs(inertia$row.rel) / 100
tmp$Name <- rownames(tmp)
tmp$Type <- "Active"
tmp <- tmp %>%
gather(Axis, Cos2, starts_with("Axis")) %>%
mutate(Axis = gsub("Axis", "", Axis, fixed = TRUE),
Cos2 = round(Cos2, 3))
ind <- ind %>% left_join(tmp, by = c("Name", "Type", "Axis"))
return(list(vars = vars, ind = ind, eig = eig, axes = axes))
}
| /R/prepare_results_dudi_pca.R | no_license | LMXB/explor | R | false | false | 3,551 | r | ##' @rdname prepare_results
##' @aliases prepare_results.pca
##' @author Julien Barnier <julien.barnier@@ens-lyon.fr>
##' @seealso \code{\link[ade4]{dudi.pca}}
##' @import dplyr
##' @importFrom tidyr gather
##' @importFrom utils head
##' @export
prepare_results.pca <- function(obj) {
if (!inherits(obj, "pca") || !inherits(obj, "dudi")) stop("obj must be of class dudi and pca")
if (!requireNamespace("ade4", quietly = TRUE)) {
stop("the ade4 package is needed for this function to work.")
}
vars <- obj$co
## Axes names and inertia
axes <- seq_len(ncol(vars))
eig <- obj$eig / sum(obj$eig) * 100
names(axes) <- paste("Axis", axes, paste0("(", head(round(eig, 2), length(axes)),"%)"))
## Eigenvalues
eig <- data.frame(dim = 1:length(eig), percent = eig)
## Inertia
inertia <- ade4::inertia.dudi(obj, row.inertia = TRUE, col.inertia = TRUE)
## Variables coordinates
vars$varname <- rownames(vars)
vars$Type <- "Active"
vars$Class <- "Quantitative"
## Supplementary variables coordinates
if (!is.null(obj$supv)) {
vars.quanti.sup <- obj$supv
vars.quanti.sup$varname <- rownames(vars.quanti.sup)
vars.quanti.sup$Type <- "Supplementary"
vars.quanti.sup$Class <- "Quantitative"
vars <- rbind(vars, vars.quanti.sup)
}
vars <- vars %>% gather(Axis, Coord, starts_with("Comp")) %>%
mutate(Axis = gsub("Comp", "", Axis, fixed = TRUE),
Coord = round(Coord, 3))
## Contributions
tmp <- inertia$col.abs
tmp <- tmp %>% mutate(varname = rownames(tmp),
Type = "Active", Class = "Quantitative") %>%
gather(Axis, Contrib, starts_with("Axis")) %>%
mutate(Axis = gsub("^Axis([0-9]+)$", "\\1", Axis),
Contrib = round(Contrib, 3))
vars <- vars %>% left_join(tmp, by = c("varname", "Type", "Class", "Axis"))
## Cos2
tmp <- abs(inertia$col.rel) / 100
tmp <- tmp %>% mutate(varname = rownames(tmp),
Type = "Active", Class = "Quantitative")
tmp <- tmp %>% gather(Axis, Cos2, starts_with("Axis")) %>%
mutate(Axis = gsub("Axis", "", Axis, fixed = TRUE),
Cos2 = round(Cos2, 3))
vars <- vars %>% left_join(tmp, by = c("varname", "Type", "Class", "Axis"))
vars <- vars %>% rename(Variable = varname)
## Compatibility with FactoMineR for qualitative supplementary variables
vars <- vars %>% mutate(Level = "")
## Individuals coordinates
ind <- obj$li
ind$Name <- rownames(ind)
ind$Type <- "Active"
if (!is.null(obj$supi)) {
tmp_sup <- obj$supi
tmp_sup$Name <- rownames(tmp_sup)
tmp_sup$Type <- "Supplementary"
ind <- ind %>% bind_rows(tmp_sup)
}
ind <- ind %>% gather(Axis, Coord, starts_with("Axis")) %>%
mutate(Axis = gsub("Axis", "", Axis, fixed = TRUE),
Coord = round(Coord, 3))
## Individuals contrib
tmp <- inertia$row.abs
tmp <- tmp %>% mutate(Name = rownames(tmp), Type = "Active") %>%
gather(Axis, Contrib, starts_with("Axis")) %>%
mutate(Axis = gsub("^Axis([0-9]+)$", "\\1", Axis),
Contrib = round(Contrib, 3))
ind <- ind %>% left_join(tmp, by = c("Name", "Type", "Axis"))
## Individuals Cos2
tmp <- abs(inertia$row.rel) / 100
tmp$Name <- rownames(tmp)
tmp$Type <- "Active"
tmp <- tmp %>%
gather(Axis, Cos2, starts_with("Axis")) %>%
mutate(Axis = gsub("Axis", "", Axis, fixed = TRUE),
Cos2 = round(Cos2, 3))
ind <- ind %>% left_join(tmp, by = c("Name", "Type", "Axis"))
return(list(vars = vars, ind = ind, eig = eig, axes = axes))
}
|
#' Modify table_header
#'
#' This is a function meant for advanced users to gain
#' more control over the characteristics of the resulting
#' gtsummary table.
#'
#' Review the
#' \href{http://www.danieldsjoberg.com/gtsummary/articles/gtsummary_definition.html}{gtsummary definition}
#' vignette for information on `.$table_header` objects.
#'
#' @param x gtsummary object
#' @param column columns to update
#' @param label string of column label
#' @param hide logical indicating whether to hide column from output
#' @param align string indicating alignment of column, must be one of
#' `c("left", "right", "center")`
#' @param missing_emdash string that evaluates to logical identifying rows to
#' include em-dash for missing values, e.g. `"row_ref == TRUE"`
#' @param indent string that evaluates to logical identifying rows to indent
#' @param bold string that evaluates to logical identifying rows to bold
#' @param italic string that evaluates to logical identifying rows to italicize
#' @param text_interpret string, must be one of `"gt::md"` or `"gt::html"`
#' @param fmt_fun function that formats the statistics in the column
#' @param footnote_abbrev string with abbreviation definition, e.g.
#' `"CI = Confidence Interval"`
#' @param footnote string with text for column footnote
#' @param spanning_header string with text for spanning header
#'
#' @return gtsummary object
#' @export
#'
#'
#' @examples
#' # Example 1 ----------------------------------
#' modify_table_header_ex1 <-
#' lm(mpg ~ factor(cyl), mtcars) %>%
#' tbl_regression() %>%
#' modify_table_header(column = estimate,
#' label = "**Coefficient**",
#' fmt_fun = function(x) style_sigfig(x, digits = 5),
#' footnote = "Regression Coefficient") %>%
#' modify_table_header(column = "p.value",
#' hide = TRUE)
#' @section Example Output:
#' \if{html}{Example 1}
#'
#' \if{html}{\figure{modify_table_header_ex1.png}{options: width=50\%}}
modify_table_header <- function(x, column, label = NULL, hide = NULL, align = NULL,
missing_emdash = NULL, indent = NULL,
text_interpret = NULL, bold = NULL, italic = NULL,
fmt_fun = NULL, footnote_abbrev = NULL,
footnote = NULL, spanning_header = NULL) {
# checking inputs ------------------------------------------------------------
if (!inherits(x, "gtsummary")) stop("`x=` must be class 'gtsummary'", call. = FALSE)
# convert column input to string ---------------------------------------------
column <-
var_input_to_string(
data = vctr_2_tibble(x$table_header$column), arg_name = "column",
select_single = FALSE, select_input = {{ column }}
)
# label ----------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "label", update = label,
class_check = "is.character", in_check = NULL
)
# hide -----------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "hide", update = hide,
class_check = "is.logical", in_check = NULL
)
# align ----------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "align", update = align,
class_check = "is.character", in_check = c("left", "right", "center")
)
# missing_emdash -------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "missing_emdash", update = missing_emdash,
class_check = "is.character", in_check = NULL
)
# indent ---------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "indent", update = indent,
class_check = "is.character", in_check = NULL
)
# text_interpret -------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "text_interpret", update = text_interpret,
class_check = "is.character", in_check = c("gt::md", "gt::html")
)
# bold -----------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "bold", update = bold,
class_check = "is.character", in_check = NULL
)
# italic ---------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "italic", update = italic,
class_check = "is.character", in_check = NULL
)
# fmt_fun --------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "fmt_fun", update = fmt_fun,
class_check = "is.function", in_check = NULL, in_list = TRUE
)
# footnote_abbrev ------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "footnote_abbrev", update = footnote_abbrev,
class_check = "is.character", in_check = NULL
)
# footnote -------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "footnote", update = footnote,
class_check = "is.character", in_check = NULL
)
# spanning_header ------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "spanning_header", update = spanning_header,
class_check = "is.character", in_check = NULL
)
# return gtsummary object ----------------------------------------------------
x
}
.update_table_header_element <- function(x, column, element, update,
class_check = NULL, in_check = NULL,
in_list = FALSE) {
# return unaltered if no change requested ------------------------------------
if (is.null(update)) return(x)
# checking inputs ------------------------------------------------------------
if (length(update) != 1) {
glue("`{element}=` argument must be of length one.") %>%
abort()
}
if (!is.null(class_check) && !do.call(class_check, list(update))) {
glue("`{element}=` argument must satisfy `{class_check}()`") %>%
abort()
}
if (!is.null(in_check) && !update %in% in_check) {
glue("`{element}=` argument must be one of {paste(in_check, collapse = ", ")}") %>%
abort()
}
# making update --------------------------------------------------------------
if (in_list) update <- list(update)
x$table_header <-
x$table_header %>%
dplyr::rows_update(
tibble(column = column, element = update) %>% set_names(c("column", element)),
by = "column"
)
# return gtsummary object ----------------------------------------------------
x
}
| /R/modify_table_header.R | permissive | zixi-liu/gtsummary | R | false | false | 7,091 | r | #' Modify table_header
#'
#' This is a function meant for advanced users to gain
#' more control over the characteristics of the resulting
#' gtsummary table.
#'
#' Review the
#' \href{http://www.danieldsjoberg.com/gtsummary/articles/gtsummary_definition.html}{gtsummary definition}
#' vignette for information on `.$table_header` objects.
#'
#' @param x gtsummary object
#' @param column columns to update
#' @param label string of column label
#' @param hide logical indicating whether to hide column from output
#' @param align string indicating alignment of column, must be one of
#' `c("left", "right", "center")`
#' @param missing_emdash string that evaluates to logical identifying rows to
#' include em-dash for missing values, e.g. `"row_ref == TRUE"`
#' @param indent string that evaluates to logical identifying rows to indent
#' @param bold string that evaluates to logical identifying rows to bold
#' @param italic string that evaluates to logical identifying rows to italicize
#' @param text_interpret string, must be one of `"gt::md"` or `"gt::html"`
#' @param fmt_fun function that formats the statistics in the column
#' @param footnote_abbrev string with abbreviation definition, e.g.
#' `"CI = Confidence Interval"`
#' @param footnote string with text for column footnote
#' @param spanning_header string with text for spanning header
#'
#' @return gtsummary object
#' @export
#'
#'
#' @examples
#' # Example 1 ----------------------------------
#' modify_table_header_ex1 <-
#' lm(mpg ~ factor(cyl), mtcars) %>%
#' tbl_regression() %>%
#' modify_table_header(column = estimate,
#' label = "**Coefficient**",
#' fmt_fun = function(x) style_sigfig(x, digits = 5),
#' footnote = "Regression Coefficient") %>%
#' modify_table_header(column = "p.value",
#' hide = TRUE)
#' @section Example Output:
#' \if{html}{Example 1}
#'
#' \if{html}{\figure{modify_table_header_ex1.png}{options: width=50\%}}
modify_table_header <- function(x, column, label = NULL, hide = NULL, align = NULL,
missing_emdash = NULL, indent = NULL,
text_interpret = NULL, bold = NULL, italic = NULL,
fmt_fun = NULL, footnote_abbrev = NULL,
footnote = NULL, spanning_header = NULL) {
# checking inputs ------------------------------------------------------------
if (!inherits(x, "gtsummary")) stop("`x=` must be class 'gtsummary'", call. = FALSE)
# convert column input to string ---------------------------------------------
column <-
var_input_to_string(
data = vctr_2_tibble(x$table_header$column), arg_name = "column",
select_single = FALSE, select_input = {{ column }}
)
# label ----------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "label", update = label,
class_check = "is.character", in_check = NULL
)
# hide -----------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "hide", update = hide,
class_check = "is.logical", in_check = NULL
)
# align ----------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "align", update = align,
class_check = "is.character", in_check = c("left", "right", "center")
)
# missing_emdash -------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "missing_emdash", update = missing_emdash,
class_check = "is.character", in_check = NULL
)
# indent ---------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "indent", update = indent,
class_check = "is.character", in_check = NULL
)
# text_interpret -------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "text_interpret", update = text_interpret,
class_check = "is.character", in_check = c("gt::md", "gt::html")
)
# bold -----------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "bold", update = bold,
class_check = "is.character", in_check = NULL
)
# italic ---------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "italic", update = italic,
class_check = "is.character", in_check = NULL
)
# fmt_fun --------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "fmt_fun", update = fmt_fun,
class_check = "is.function", in_check = NULL, in_list = TRUE
)
# footnote_abbrev ------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "footnote_abbrev", update = footnote_abbrev,
class_check = "is.character", in_check = NULL
)
# footnote -------------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "footnote", update = footnote,
class_check = "is.character", in_check = NULL
)
# spanning_header ------------------------------------------------------------
x <- .update_table_header_element(
x = x, column = column, element = "spanning_header", update = spanning_header,
class_check = "is.character", in_check = NULL
)
# return gtsummary object ----------------------------------------------------
x
}
.update_table_header_element <- function(x, column, element, update,
class_check = NULL, in_check = NULL,
in_list = FALSE) {
# return unaltered if no change requested ------------------------------------
if (is.null(update)) return(x)
# checking inputs ------------------------------------------------------------
if (length(update) != 1) {
glue("`{element}=` argument must be of length one.") %>%
abort()
}
if (!is.null(class_check) && !do.call(class_check, list(update))) {
glue("`{element}=` argument must satisfy `{class_check}()`") %>%
abort()
}
if (!is.null(in_check) && !update %in% in_check) {
glue("`{element}=` argument must be one of {paste(in_check, collapse = ", ")}") %>%
abort()
}
# making update --------------------------------------------------------------
if (in_list) update <- list(update)
x$table_header <-
x$table_header %>%
dplyr::rows_update(
tibble(column = column, element = update) %>% set_names(c("column", element)),
by = "column"
)
# return gtsummary object ----------------------------------------------------
x
}
|
# TIP FLUORESCENCE & MOVEMENT - general CCF script
# This script is part of a suite of scripts for analysis of filopodia dynamics
# using the Fiji plugin Filopodyan. The questions addressed here are whether the
# accummulation of protein of interest in tips of filopodia correlates with their
# behaviour. This effect may occur either immediately (offset = 0) or with a delay
# (offset > 0) if the protein requires time to activate other downstream effectors
# before exerting its effect on tip movement. For this reason the script uses a cross-
# correlation function to compute cross-correlation (for each filopodium) at each
# value of the offset. It then looks at groups of filopodia that share a similar
# relationship between fluorescence and movement (responding vs non-responding filopodia)
# using hierarchical clustering, and compares the properties of those clusters.
# Data input: requires an .Rdata file from upstream Filopodyan .R scripts
# (load in Section 1).
# Data output: a CCF table (ccf.tip.dctm) and its clustered heatmap;
# top-correlating subcluster ('TCS') vs other filopodia ('nonTCS')
# Downstream applications: 1. Subcluster analysis (CCF, phenotype) 2. Randomisation analysis
# For more information contact Vasja Urbancic at vu203@cam.ac.uk.
rm(list = ls())
# ---------------------------------------------------------------------------
# 0. DEPENDENCIES:
# Required packages:
# install.packages("Hmisc", dependencies=TRUE, repos="http://cran.rstudio.com/")
# install.packages("RColorBrewer", dependencies=TRUE, repos="http://cran.rstudio.com/")
# install.packages("wavethresh", dependencies=TRUE, repos="http://cran.rstudio.com/")
library(Hmisc)
library(RColorBrewer)
library(wavethresh)
# Functions (general):
Count <- function(x) length(x[!is.na(x)])
SE <- function(x) sd(x, na.rm=TRUE)/sqrt(Count(x))
CI <- function(x) 1.96*sd(x, na.rm=TRUE)/sqrt(Count(x))
DrawErrorAsPolygon <- function(x, y1, y2, tt, col = 'grey') {
polygon(c(x[tt], rev(x[tt])), c(y1[tt], rev(y2[tt])),
col = col,
border = NA)
}
MovingAverage <- function(x, w = 5) {
filter(x, rep(1/w, w), sides = 2)
}
# Functions (for block randomisation):
extractBlockIndex <- function(which.block, block.size, ...) {
start <- ((which.block-1) * block.size) + 1
end <- ((which.block) * block.size)
c(start:end)
}
BlockReshuffle <- function(x, block.size = 12) {
stopifnot(length(x) > block.size)
n.blocks <- length(x) %/% block.size
overhang <- length(x) %% block.size
included <- 1:(block.size*n.blocks)
excluded.overhang <- setdiff(seq_along(x), included)
x.in.blocks <- list()
for(i in 1:n.blocks) {
x.in.blocks[[i]] <- x[extractBlockIndex(i, 12)]
}
# which blocks to keep in place (full of NAs), which blocks to swap over?
max.NA.per.block <- 0.25 * block.size
blocks.to.shuffle <- which(lapply(x.in.blocks, Count) > max.NA.per.block)
blocks.to.keep <- which(lapply(x.in.blocks, Count) <= max.NA.per.block)
# generate permuted blocks, plus insert NA blocks into their respective positions
#set.seed(0.1)
new.order <- c(sample(blocks.to.shuffle))
for (j in blocks.to.keep) {
new.order <- append(new.order, j, after = j-1)
}
# new vector
for(k in new.order) {
if(exists("z") == FALSE) {z <- c()}
z <- append(z, x.in.blocks[[k]])
}
z <- append(z, x[excluded.overhang])
z
}
# ---------------------------------------------------------------------------
# 1. Load data from saved workspace
# Load data:
# ENA (as metalist):
#load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_ENA/Huang4-01/LastWorkspace_ENA.Rdata')
# Normalised to filopodium (proj) fluorescece:
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_ENA/Huang4-01_Norm-toFilo/LastWorkspace_ENA.Rdata')
# Normalised to GC body:
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_ENA/Huang4-01_Norm-toGC/LastWorkspace_ENA.Rdata')
# Not normalised (only bg corrected):
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_ENA/Huang4-01_NormOFF/LastWorkspace_ENA.Rdata')
# VASP (as metalist):
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_VASP/Huang4-01/LastWorkspace_VASP.Rdata')
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_VASP/Huang4-01_NormOFF/LastWorkspace_VASP.Rdata')
load('/Users/Lab/Documents/Postdoc/2018_Szeged/TS7_Filopodyan/Materials/Datasets/4b_RESULTS/LastWorkspace_TipF.Rdata')
# Check normalisation method:
metalist[[1]]$nor.tip.setting
# Check background correction method:
metalist[[1]]$bg.corr.setting
# Saving location:
metalist[[1]]$Loc <- folder.names[1]
metalist[[1]]$Loc
# ---------------------------------------------------------------------------
# 2. Extract equivalent data from within the metalist:
all.dS <- metalist[[1]]$all.dS
dS.vector <- metalist[[1]]$dS.vector
bb <- metalist[[1]]$bb
max.t <- metalist[[1]]$max.t
spt <- metalist[[1]]$spt
threshold.ext.per.t <- metalist[[1]]$threshold.ext.per.t
threshold.retr.per.t <- metalist[[1]]$threshold.retr.per.t
tip.f <- metalist[[1]]$tip.f
all.move <- metalist[[1]]$all.move
# Options for using FDCTM instead of raw DCTM, and smoothed tipF signal:
# If use.fdctm == TRUE?
use.fdctm = TRUE
if(use.fdctm == FALSE) {
all.move <- metalist[[1]]$all.dctm99
}
use.ftip = FALSE
if(use.ftip == TRUE) {
tip.f <- apply(tip.f, 2, MovingAverage)
}
# Use difference from last timepoint, instead of actual data? (Uncomment if yes.)
# all.move <- apply(all.move, 2, diff)
# tip.f <- apply(tip.f, 2, diff)
# Difference for tip F, raw for movement:
# all.move <- all.move[2:max.t, ]
# tip.f <- apply(tip.f, 2, diff)
# Difference for movement, raw for tip F:
# all.move <- apply(all.move, 2, diff)
# tip.f <- tip.f[2:max.t, ]
# ---------------------------------------------------------------------------
# 3. Necessary data restructuring:
# 3a) - shift up the all.move table by one timepoint:
start.row <- bb+2
stop.row <- max.t
if (bb > 0) {
reshuffle.vec <- c(1:bb, start.row:stop.row, bb+1)
} else if (bb == 0) {
reshuffle.vec <- c(start.row:stop.row, bb+1)
}
all.move <- all.move[reshuffle.vec, ]; all.move[max.t, ] <- NA
# 3b) - check if any columns have zero DCTM measurements to remove from dataset
# (would trip CCF calculations and heatmaps):
n.timepoints <- colSums( !is.na(all.move)); n.timepoints
zero.lengths <- which(n.timepoints == 0); zero.lengths
if (length(zero.lengths) > 0) {
remove.cols <- zero.lengths
all.move <- all.move[, -zero.lengths]
tip.f <- tip.f[, -zero.lengths]
all.dS <- all.dS[, -zero.lengths]
n.timepoints <- n.timepoints[-zero.lengths]
rm(remove.cols)
}
short.lengths <- which(n.timepoints < 17); short.lengths
if (length(short.lengths) > 0) {
remove.cols <- short.lengths
all.move <- all.move[, -short.lengths]
tip.f <- tip.f[, -short.lengths]
all.dS <- all.dS[, -short.lengths]
n.timepoints <- n.timepoints[-short.lengths]
rm(remove.cols)
}
# ---------------------------------------------------------------------------
# Derived datasets:
# 4a) Create z scores
z.move <- scale(all.move, scale = TRUE, center = TRUE)
z.tip <- scale(tip.f, scale = TRUE, center = TRUE)
# 4b) Split all.move into all.ext, all.retr, all.stall
all.states <- cut(all.move,
breaks = c(-Inf, threshold.retr.per.t, threshold.ext.per.t, Inf),
labels = c("Retr", "Stall", "Ext"))
all.ext <- all.move; all.ext[which(all.states != "Ext")] <- NA
all.retr <- all.move; all.retr[which(all.states != "Retr")] <- NA
all.stall <- all.move; all.stall[which(all.states != "Stall")] <- NA
# illustrate how this works:
data.frame("Movement" = all.move[, 2],
"Ext" = all.ext[, 2],
"Stall" = all.stall[, 2],
"Retr" = all.retr[, 2])[22:121, ]
# ---------------------------------------------------------------------------
# 5. Explore correlations (over whole population) with XY scatterplots
dev.new(width = 7, height = 3.5)
par(mfrow = c(1,2))
par(mar = c(4,5,2,1)+0.1)
matplot(tip.f, all.move,
pch = 16, cex = 0.8,
col = "#41B6C420",
xlab = "Tip fluorescence [a.u.]",
# xlab = expression(Delta * "Tip Fluorescence / Projection Fluorescence [a.u.]"),
ylab = expression("Tip Movement [" * mu * "m]"),
# ylab = expression(Delta * "Tip Movement [" * mu * "m]"),
main = ""
)
abline(h = 0, lty = 2, col = "grey")
# abline(v = 1, lty = 2, col = "grey")
abline(v = 0, lty = 2, col = "grey")
rho <- cor.test(unlist(as.data.frame(tip.f)), unlist(as.data.frame(all.move)), na.action = "na.exclude")$estimate
legend("bottomright", legend = paste("Pearson Rho =", signif(rho, 2)), cex= 0.8, bty = "n")
# As above, with z-scores:
# dev.new()
matplot(z.tip, z.move,
pch = 16, cex = 0.8,
col = "#41B6C420",
xlab = "Tip fluorescence [z-score]",
# xlab = expression(Delta * "Tip Fluorescence / Projection Fluorescence [a.u.]"),
ylab = expression("Tip Movement [z-score]"),
# ylab = expression(Delta * "Tip Movement [" * mu * "m]"),
main = ""
)
abline(h = 0, lty = 2, col = "grey")
# abline(v = 1, lty = 2, col = "grey")
abline(v = 0, lty = 2, col = "grey")
rho.z <- cor.test(unlist(as.data.frame(z.tip)), unlist(as.data.frame(z.move)), na.action = "na.exclude")$estimate
legend("bottomright", legend = paste("Pearson Rho =", signif(rho.z, 2)), cex= 0.8, bty = "n")
range(tip.f, na.rm = TRUE)
dev.new(width = 3.5, height = 3.5)
hist(unlist(tip.f), col = "grey", border = "white", main = "", xlab = "TipF")
# ---------------------------------------------------------------------------
# 6. Calculate CCFs from tip F and tip movement tables
maxlag = 20
lag.range <- -maxlag:maxlag
lag.in.s <- lag.range * spt
ccf.tip.dctm <- data.frame(matrix(NA, ncol = ncol(all.move), nrow = 2*maxlag + 1))
all.filo <- seq_along(colnames(all.move))
for (i in all.filo) {
ccf.i <- ccf(tip.f[, i], all.move[, i], lag.max = 20, na.action = na.pass, plot = FALSE)
ccf.tip.dctm[, i] <- ccf.i
rm(ccf.i, ccf.z.i)
}
colnames(ccf.tip.dctm) <- colnames(all.move)
row.names(ccf.tip.dctm) <- lag.in.s
# The lag k value returned by ccf(x, y) estimates the correlation between x[t+k] and y[t].
# i.e. lag k for ccf(tip, move) estimates correlation between tip.f[t+k] and move[t]
# i.e. lag +2 means correlation between tip.f[t+2] and move[t] --> tip.f lagging behind movement
# i.e. lag -2 means correlation between tip.f[t-2] and move[t] --> tip.f leading ahead of movement
# ---------------------------------------------------------------------------
# 7. Compute and plot weighted CCFs (optional pre-clustering)
# 7a) - Compute weighted CCF metrics:
weights.vec <- n.timepoints
mean.ccf <- apply(ccf.tip.dctm, 1, mean, na.rm = TRUE)
w.mean.ccf <- apply(ccf.tip.dctm, 1, weighted.mean, w = weights.vec, na.rm = TRUE)
w.var.ccf <- apply(ccf.tip.dctm, 1, wtd.var, weights = weights.vec); w.var.ccf
w.sd.ccf <- sqrt(w.var.ccf); w.sd.ccf
counts.ccf <- apply(ccf.tip.dctm, 1, Count); counts.ccf
w.ci.ccf <- 1.96 * w.sd.ccf / sqrt(counts.ccf); w.ci.ccf
ci.ccf = apply(ccf.tip.dctm, 1, CI)
filo.ID.weights <- data.frame("Filo ID" = names(ccf.tip.dctm), "Timepoints" = weights.vec); filo.ID.weights
# 7b) - Plot weighted vs unweighted
dev.new()
matplot(lag.in.s, ccf.tip.dctm, type = "l",
main = "Cross-correlation of tip fluorescence and movement",
ylab = "CCF (Tip Fluorescence & DCTM (99%, smoothed))",
xlab = "Lag [s]",
col = rgb(0,0,0,0.12),
lty = 1
)
abline(v = 0, col = "black", lty = 3)
abline(h = 0, col = "black", lwd = 1)
lines (lag.in.s, w.mean.ccf, # RED: new mean (weighted)
col = 'red',
lwd = 4)
ci1 = w.mean.ccf + w.ci.ccf
ci2 = w.mean.ccf - w.ci.ccf
DrawErrorAsPolygon(lag.in.s, ci1, ci2, col = rgb(1,0,0,0.2))
lines (lag.in.s, mean.ccf, # BLUE: old mean (unweighted)
col = 'blue',
lwd = 4)
ci1 = mean.ccf + ci.ccf
ci2 = mean.ccf - ci.ccf
DrawErrorAsPolygon(lag.in.s, ci1, ci2, col = rgb(0,0,1,0.2))
text(-40, -0.5, "Mean and 95% CI", pos = 4, col = "blue")
text(-40, -0.6, "Weighted Mean and Weighted 95% CI", col = "red", pos = 4)
# 7c) - Lines coloured according to weighting:
# (??colorRampPalette)
weights.vec
weights.vec2 = weights.vec / max(weights.vec)
palette.Wh.Bu <- colorRampPalette(c("white", "midnightblue"))
palette.Wh.Cor <- colorRampPalette(c("white", "#F37370")) # coral colour palette for second dataset
palette.Wh.Bu(20)
palette.Wh.Cor(20)
# Vector according to which to assign colours:
weights.vec
weights.vec2
weight.interval <- as.numeric(cut(weights.vec, breaks = 10))
w.cols <- palette.Wh.Bu(60)[weight.interval]
w.cols.Coral <- palette.Wh.Cor(60)[weight.interval]
data.frame(weights.vec, weights.vec2, weight.interval, w.cols )
dev.new()
matplot(lag.in.s, ccf.tip.dctm, type = "l",
col = w.cols,
lty = 1,
main = "Cross-correlation of tip fluorescence and movement",
ylab = "CCF (Tip Fluorescence & Movement)",
xlab = "Lag [s]"
)
abline(v = 0, col = "black", lty = 3)
abline(h = 0, col = "black", lwd = 1)
lines(lag.in.s, w.mean.ccf, # MIDNIGHTBLUE: new mean (weighted)
col = 'midnightblue',
lwd = 4)
ci1 = w.mean.ccf + w.ci.ccf
ci2 = w.mean.ccf - w.ci.ccf
palette.Wh.Bu(20)[20]
palette.Wh.Bu(20)[20]
text(-40, -0.6, "Weighted Mean + 95% CI", col = 'midnightblue', pos = 4)
DrawErrorAsPolygon(lag.in.s, ci1, ci2, col = "#19197020")
# ---------------------------------------------------------------------------
# 8. Heatmaps and clustering
# display.brewer.all()
# ??heatmap
# This function creates n clusters from input table (based on euclid
# distance *in rows 18:24* (corresponding here to lags from -6 to +6))
GoCluster <- function(x, n.clusters) {
map.input <- t(x)
distance <- dist(map.input[, 18:24], method = "euclidean")
cluster <- hclust(distance, method = "complete")
cutree(cluster, k = n.clusters)
}
# This function extracts indices for filo of n-th subcluster within the cluster:
nthSubcluster <- function(x, n.clusters, nth) {
which(GoCluster(x, n.clusters = n.clusters) == nth)
}
nthSubclusterOthers <- function(x, n.clusters, nth) {
which(GoCluster(x, n.clusters = n.clusters) != nth)
}
# nthSubcluster(ccf.tip.dctm, n.clusters = 2, nth = 1)
# lapply(all.ccf.tables, function(x) nthSubcluster(x, 2, 1))
# ---------
# HEATMAPS:
# extract values for the heatmap scale min and max:
myHeatmap <- function(x) {
map.input = t(x)
distance <- dist(map.input[, 18:24], method = "euclidean")
cluster <- hclust(distance, method = "complete")
heatmap(map.input, Rowv = as.dendrogram(cluster), Colv = NA, xlab = "Lag", col = brewer.pal(12, "YlGnBu"), scale = "none")
}
dev.new()
myHeatmap(ccf.tip.dctm[, which(colSums(!is.na(ccf.tip.dctm)) != 0)])
# table(GoCluster(ccf.tip.dctm, 5))
# table(GoCluster(ccf.tip.dctm, 7))
# table(GoCluster(ccf.tip.dctm, 8))
# table(GoCluster(ccf.tip.dctm, 9))
Edges <- function(x) c(min(x, na.rm = TRUE), max(x, na.rm = TRUE))
printEdges <- function(x) print(c(min(x, na.rm = TRUE), max(x, na.rm = TRUE)))
heatmap.edges <- Edges(ccf.tip.dctm);
heatmap.edges
setwd(Loc.save); getwd()
save.image("LastWorkspace_CCFs.Rdata")
# graphics.off()
| /FilopodyanR CCF.R | no_license | marionlouveaux/NEUBIAS2018_TS7 | R | false | false | 15,270 | r | # TIP FLUORESCENCE & MOVEMENT - general CCF script
# This script is part of a suite of scripts for analysis of filopodia dynamics
# using the Fiji plugin Filopodyan. The questions addressed here are whether the
# accummulation of protein of interest in tips of filopodia correlates with their
# behaviour. This effect may occur either immediately (offset = 0) or with a delay
# (offset > 0) if the protein requires time to activate other downstream effectors
# before exerting its effect on tip movement. For this reason the script uses a cross-
# correlation function to compute cross-correlation (for each filopodium) at each
# value of the offset. It then looks at groups of filopodia that share a similar
# relationship between fluorescence and movement (responding vs non-responding filopodia)
# using hierarchical clustering, and compares the properties of those clusters.
# Data input: requires an .Rdata file from upstream Filopodyan .R scripts
# (load in Section 1).
# Data output: a CCF table (ccf.tip.dctm) and its clustered heatmap;
# top-correlating subcluster ('TCS') vs other filopodia ('nonTCS')
# Downstream applications: 1. Subcluster analysis (CCF, phenotype) 2. Randomisation analysis
# For more information contact Vasja Urbancic at vu203@cam.ac.uk.
rm(list = ls())
# ---------------------------------------------------------------------------
# 0. DEPENDENCIES:
# Required packages:
# install.packages("Hmisc", dependencies=TRUE, repos="http://cran.rstudio.com/")
# install.packages("RColorBrewer", dependencies=TRUE, repos="http://cran.rstudio.com/")
# install.packages("wavethresh", dependencies=TRUE, repos="http://cran.rstudio.com/")
library(Hmisc)
library(RColorBrewer)
library(wavethresh)
# Functions (general):
Count <- function(x) length(x[!is.na(x)])
SE <- function(x) sd(x, na.rm=TRUE)/sqrt(Count(x))
CI <- function(x) 1.96*sd(x, na.rm=TRUE)/sqrt(Count(x))
DrawErrorAsPolygon <- function(x, y1, y2, tt, col = 'grey') {
polygon(c(x[tt], rev(x[tt])), c(y1[tt], rev(y2[tt])),
col = col,
border = NA)
}
MovingAverage <- function(x, w = 5) {
filter(x, rep(1/w, w), sides = 2)
}
# Functions (for block randomisation):
extractBlockIndex <- function(which.block, block.size, ...) {
start <- ((which.block-1) * block.size) + 1
end <- ((which.block) * block.size)
c(start:end)
}
BlockReshuffle <- function(x, block.size = 12) {
stopifnot(length(x) > block.size)
n.blocks <- length(x) %/% block.size
overhang <- length(x) %% block.size
included <- 1:(block.size*n.blocks)
excluded.overhang <- setdiff(seq_along(x), included)
x.in.blocks <- list()
for(i in 1:n.blocks) {
x.in.blocks[[i]] <- x[extractBlockIndex(i, 12)]
}
# which blocks to keep in place (full of NAs), which blocks to swap over?
max.NA.per.block <- 0.25 * block.size
blocks.to.shuffle <- which(lapply(x.in.blocks, Count) > max.NA.per.block)
blocks.to.keep <- which(lapply(x.in.blocks, Count) <= max.NA.per.block)
# generate permuted blocks, plus insert NA blocks into their respective positions
#set.seed(0.1)
new.order <- c(sample(blocks.to.shuffle))
for (j in blocks.to.keep) {
new.order <- append(new.order, j, after = j-1)
}
# new vector
for(k in new.order) {
if(exists("z") == FALSE) {z <- c()}
z <- append(z, x.in.blocks[[k]])
}
z <- append(z, x[excluded.overhang])
z
}
# ---------------------------------------------------------------------------
# 1. Load data from saved workspace
# Load data:
# ENA (as metalist):
#load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_ENA/Huang4-01/LastWorkspace_ENA.Rdata')
# Normalised to filopodium (proj) fluorescece:
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_ENA/Huang4-01_Norm-toFilo/LastWorkspace_ENA.Rdata')
# Normalised to GC body:
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_ENA/Huang4-01_Norm-toGC/LastWorkspace_ENA.Rdata')
# Not normalised (only bg corrected):
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_ENA/Huang4-01_NormOFF/LastWorkspace_ENA.Rdata')
# VASP (as metalist):
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_VASP/Huang4-01/LastWorkspace_VASP.Rdata')
# load('~/Documents/Postdoc/ANALYSIS_local-files/ANALYSIS LOGS/2017-03_TipF_withBg_VASP/Huang4-01_NormOFF/LastWorkspace_VASP.Rdata')
load('/Users/Lab/Documents/Postdoc/2018_Szeged/TS7_Filopodyan/Materials/Datasets/4b_RESULTS/LastWorkspace_TipF.Rdata')
# Check normalisation method:
metalist[[1]]$nor.tip.setting
# Check background correction method:
metalist[[1]]$bg.corr.setting
# Saving location:
metalist[[1]]$Loc <- folder.names[1]
metalist[[1]]$Loc
# ---------------------------------------------------------------------------
# 2. Extract equivalent data from within the metalist:
all.dS <- metalist[[1]]$all.dS
dS.vector <- metalist[[1]]$dS.vector
bb <- metalist[[1]]$bb
max.t <- metalist[[1]]$max.t
spt <- metalist[[1]]$spt
threshold.ext.per.t <- metalist[[1]]$threshold.ext.per.t
threshold.retr.per.t <- metalist[[1]]$threshold.retr.per.t
tip.f <- metalist[[1]]$tip.f
all.move <- metalist[[1]]$all.move
# Options for using FDCTM instead of raw DCTM, and smoothed tipF signal:
# If use.fdctm == TRUE?
use.fdctm = TRUE
if(use.fdctm == FALSE) {
all.move <- metalist[[1]]$all.dctm99
}
use.ftip = FALSE
if(use.ftip == TRUE) {
tip.f <- apply(tip.f, 2, MovingAverage)
}
# Use difference from last timepoint, instead of actual data? (Uncomment if yes.)
# all.move <- apply(all.move, 2, diff)
# tip.f <- apply(tip.f, 2, diff)
# Difference for tip F, raw for movement:
# all.move <- all.move[2:max.t, ]
# tip.f <- apply(tip.f, 2, diff)
# Difference for movement, raw for tip F:
# all.move <- apply(all.move, 2, diff)
# tip.f <- tip.f[2:max.t, ]
# ---------------------------------------------------------------------------
# 3. Necessary data restructuring:
# 3a) - shift up the all.move table by one timepoint:
start.row <- bb+2
stop.row <- max.t
if (bb > 0) {
reshuffle.vec <- c(1:bb, start.row:stop.row, bb+1)
} else if (bb == 0) {
reshuffle.vec <- c(start.row:stop.row, bb+1)
}
all.move <- all.move[reshuffle.vec, ]; all.move[max.t, ] <- NA
# 3b) - check if any columns have zero DCTM measurements to remove from dataset
# (would trip CCF calculations and heatmaps):
n.timepoints <- colSums( !is.na(all.move)); n.timepoints
zero.lengths <- which(n.timepoints == 0); zero.lengths
if (length(zero.lengths) > 0) {
remove.cols <- zero.lengths
all.move <- all.move[, -zero.lengths]
tip.f <- tip.f[, -zero.lengths]
all.dS <- all.dS[, -zero.lengths]
n.timepoints <- n.timepoints[-zero.lengths]
rm(remove.cols)
}
short.lengths <- which(n.timepoints < 17); short.lengths
if (length(short.lengths) > 0) {
remove.cols <- short.lengths
all.move <- all.move[, -short.lengths]
tip.f <- tip.f[, -short.lengths]
all.dS <- all.dS[, -short.lengths]
n.timepoints <- n.timepoints[-short.lengths]
rm(remove.cols)
}
# ---------------------------------------------------------------------------
# Derived datasets:
# 4a) Create z scores
z.move <- scale(all.move, scale = TRUE, center = TRUE)
z.tip <- scale(tip.f, scale = TRUE, center = TRUE)
# 4b) Split all.move into all.ext, all.retr, all.stall
all.states <- cut(all.move,
breaks = c(-Inf, threshold.retr.per.t, threshold.ext.per.t, Inf),
labels = c("Retr", "Stall", "Ext"))
all.ext <- all.move; all.ext[which(all.states != "Ext")] <- NA
all.retr <- all.move; all.retr[which(all.states != "Retr")] <- NA
all.stall <- all.move; all.stall[which(all.states != "Stall")] <- NA
# illustrate how this works:
data.frame("Movement" = all.move[, 2],
"Ext" = all.ext[, 2],
"Stall" = all.stall[, 2],
"Retr" = all.retr[, 2])[22:121, ]
# ---------------------------------------------------------------------------
# 5. Explore correlations (over whole population) with XY scatterplots
dev.new(width = 7, height = 3.5)
par(mfrow = c(1,2))
par(mar = c(4,5,2,1)+0.1)
matplot(tip.f, all.move,
pch = 16, cex = 0.8,
col = "#41B6C420",
xlab = "Tip fluorescence [a.u.]",
# xlab = expression(Delta * "Tip Fluorescence / Projection Fluorescence [a.u.]"),
ylab = expression("Tip Movement [" * mu * "m]"),
# ylab = expression(Delta * "Tip Movement [" * mu * "m]"),
main = ""
)
abline(h = 0, lty = 2, col = "grey")
# abline(v = 1, lty = 2, col = "grey")
abline(v = 0, lty = 2, col = "grey")
rho <- cor.test(unlist(as.data.frame(tip.f)), unlist(as.data.frame(all.move)), na.action = "na.exclude")$estimate
legend("bottomright", legend = paste("Pearson Rho =", signif(rho, 2)), cex= 0.8, bty = "n")
# As above, with z-scores:
# dev.new()
matplot(z.tip, z.move,
pch = 16, cex = 0.8,
col = "#41B6C420",
xlab = "Tip fluorescence [z-score]",
# xlab = expression(Delta * "Tip Fluorescence / Projection Fluorescence [a.u.]"),
ylab = expression("Tip Movement [z-score]"),
# ylab = expression(Delta * "Tip Movement [" * mu * "m]"),
main = ""
)
abline(h = 0, lty = 2, col = "grey")
# abline(v = 1, lty = 2, col = "grey")
abline(v = 0, lty = 2, col = "grey")
rho.z <- cor.test(unlist(as.data.frame(z.tip)), unlist(as.data.frame(z.move)), na.action = "na.exclude")$estimate
legend("bottomright", legend = paste("Pearson Rho =", signif(rho.z, 2)), cex= 0.8, bty = "n")
range(tip.f, na.rm = TRUE)
dev.new(width = 3.5, height = 3.5)
hist(unlist(tip.f), col = "grey", border = "white", main = "", xlab = "TipF")
# ---------------------------------------------------------------------------
# 6. Calculate CCFs from tip F and tip movement tables
maxlag = 20
lag.range <- -maxlag:maxlag
lag.in.s <- lag.range * spt
ccf.tip.dctm <- data.frame(matrix(NA, ncol = ncol(all.move), nrow = 2*maxlag + 1))
all.filo <- seq_along(colnames(all.move))
for (i in all.filo) {
ccf.i <- ccf(tip.f[, i], all.move[, i], lag.max = 20, na.action = na.pass, plot = FALSE)
ccf.tip.dctm[, i] <- ccf.i
rm(ccf.i, ccf.z.i)
}
colnames(ccf.tip.dctm) <- colnames(all.move)
row.names(ccf.tip.dctm) <- lag.in.s
# The lag k value returned by ccf(x, y) estimates the correlation between x[t+k] and y[t].
# i.e. lag k for ccf(tip, move) estimates correlation between tip.f[t+k] and move[t]
# i.e. lag +2 means correlation between tip.f[t+2] and move[t] --> tip.f lagging behind movement
# i.e. lag -2 means correlation between tip.f[t-2] and move[t] --> tip.f leading ahead of movement
# ---------------------------------------------------------------------------
# 7. Compute and plot weighted CCFs (optional pre-clustering)
# 7a) - Compute weighted CCF metrics:
weights.vec <- n.timepoints
mean.ccf <- apply(ccf.tip.dctm, 1, mean, na.rm = TRUE)
w.mean.ccf <- apply(ccf.tip.dctm, 1, weighted.mean, w = weights.vec, na.rm = TRUE)
w.var.ccf <- apply(ccf.tip.dctm, 1, wtd.var, weights = weights.vec); w.var.ccf
w.sd.ccf <- sqrt(w.var.ccf); w.sd.ccf
counts.ccf <- apply(ccf.tip.dctm, 1, Count); counts.ccf
w.ci.ccf <- 1.96 * w.sd.ccf / sqrt(counts.ccf); w.ci.ccf
ci.ccf = apply(ccf.tip.dctm, 1, CI)
filo.ID.weights <- data.frame("Filo ID" = names(ccf.tip.dctm), "Timepoints" = weights.vec); filo.ID.weights
# 7b) - Plot weighted vs unweighted
dev.new()
matplot(lag.in.s, ccf.tip.dctm, type = "l",
main = "Cross-correlation of tip fluorescence and movement",
ylab = "CCF (Tip Fluorescence & DCTM (99%, smoothed))",
xlab = "Lag [s]",
col = rgb(0,0,0,0.12),
lty = 1
)
abline(v = 0, col = "black", lty = 3)
abline(h = 0, col = "black", lwd = 1)
lines (lag.in.s, w.mean.ccf, # RED: new mean (weighted)
col = 'red',
lwd = 4)
ci1 = w.mean.ccf + w.ci.ccf
ci2 = w.mean.ccf - w.ci.ccf
DrawErrorAsPolygon(lag.in.s, ci1, ci2, col = rgb(1,0,0,0.2))
lines (lag.in.s, mean.ccf, # BLUE: old mean (unweighted)
col = 'blue',
lwd = 4)
ci1 = mean.ccf + ci.ccf
ci2 = mean.ccf - ci.ccf
DrawErrorAsPolygon(lag.in.s, ci1, ci2, col = rgb(0,0,1,0.2))
text(-40, -0.5, "Mean and 95% CI", pos = 4, col = "blue")
text(-40, -0.6, "Weighted Mean and Weighted 95% CI", col = "red", pos = 4)
# 7c) - Lines coloured according to weighting:
# (??colorRampPalette)
weights.vec
weights.vec2 = weights.vec / max(weights.vec)
palette.Wh.Bu <- colorRampPalette(c("white", "midnightblue"))
palette.Wh.Cor <- colorRampPalette(c("white", "#F37370")) # coral colour palette for second dataset
palette.Wh.Bu(20)
palette.Wh.Cor(20)
# Vector according to which to assign colours:
weights.vec
weights.vec2
weight.interval <- as.numeric(cut(weights.vec, breaks = 10))
w.cols <- palette.Wh.Bu(60)[weight.interval]
w.cols.Coral <- palette.Wh.Cor(60)[weight.interval]
data.frame(weights.vec, weights.vec2, weight.interval, w.cols )
dev.new()
matplot(lag.in.s, ccf.tip.dctm, type = "l",
col = w.cols,
lty = 1,
main = "Cross-correlation of tip fluorescence and movement",
ylab = "CCF (Tip Fluorescence & Movement)",
xlab = "Lag [s]"
)
abline(v = 0, col = "black", lty = 3)
abline(h = 0, col = "black", lwd = 1)
lines(lag.in.s, w.mean.ccf, # MIDNIGHTBLUE: new mean (weighted)
col = 'midnightblue',
lwd = 4)
ci1 = w.mean.ccf + w.ci.ccf
ci2 = w.mean.ccf - w.ci.ccf
palette.Wh.Bu(20)[20]
palette.Wh.Bu(20)[20]
text(-40, -0.6, "Weighted Mean + 95% CI", col = 'midnightblue', pos = 4)
DrawErrorAsPolygon(lag.in.s, ci1, ci2, col = "#19197020")
# ---------------------------------------------------------------------------
# 8. Heatmaps and clustering
# display.brewer.all()
# ??heatmap
# This function creates n clusters from input table (based on euclid
# distance *in rows 18:24* (corresponding here to lags from -6 to +6))
GoCluster <- function(x, n.clusters) {
map.input <- t(x)
distance <- dist(map.input[, 18:24], method = "euclidean")
cluster <- hclust(distance, method = "complete")
cutree(cluster, k = n.clusters)
}
# This function extracts indices for filo of n-th subcluster within the cluster:
nthSubcluster <- function(x, n.clusters, nth) {
which(GoCluster(x, n.clusters = n.clusters) == nth)
}
nthSubclusterOthers <- function(x, n.clusters, nth) {
which(GoCluster(x, n.clusters = n.clusters) != nth)
}
# nthSubcluster(ccf.tip.dctm, n.clusters = 2, nth = 1)
# lapply(all.ccf.tables, function(x) nthSubcluster(x, 2, 1))
# ---------
# HEATMAPS:
# extract values for the heatmap scale min and max:
myHeatmap <- function(x) {
map.input = t(x)
distance <- dist(map.input[, 18:24], method = "euclidean")
cluster <- hclust(distance, method = "complete")
heatmap(map.input, Rowv = as.dendrogram(cluster), Colv = NA, xlab = "Lag", col = brewer.pal(12, "YlGnBu"), scale = "none")
}
dev.new()
myHeatmap(ccf.tip.dctm[, which(colSums(!is.na(ccf.tip.dctm)) != 0)])
# table(GoCluster(ccf.tip.dctm, 5))
# table(GoCluster(ccf.tip.dctm, 7))
# table(GoCluster(ccf.tip.dctm, 8))
# table(GoCluster(ccf.tip.dctm, 9))
Edges <- function(x) c(min(x, na.rm = TRUE), max(x, na.rm = TRUE))
printEdges <- function(x) print(c(min(x, na.rm = TRUE), max(x, na.rm = TRUE)))
heatmap.edges <- Edges(ccf.tip.dctm);
heatmap.edges
setwd(Loc.save); getwd()
save.image("LastWorkspace_CCFs.Rdata")
# graphics.off()
|
#' Calculate variance
#'
#' @param x Vector of indicator values.
calc.VAR <-
function(x)
{
(1/length(x)) * (sum((x - mean(x))^2))
}
| /R/calc.VAR.R | no_license | dataspekt/crodi | R | false | false | 141 | r | #' Calculate variance
#'
#' @param x Vector of indicator values.
calc.VAR <-
function(x)
{
(1/length(x)) * (sum((x - mean(x))^2))
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/comparison.R
\encoding{UTF-8}
\name{criterion}
\alias{criterion}
\alias{loo.mcpfit}
\alias{loo}
\alias{LOO}
\alias{waic.mcpfit}
\alias{waic}
\alias{WAIC}
\title{Compute information criteria for model comparison}
\usage{
criterion(fit, criterion = "loo", ...)
\method{loo}{mcpfit}(x, ...)
\method{waic}{mcpfit}(x, ...)
}
\arguments{
\item{fit}{An \code{\link{mcpfit}} object.}
\item{criterion}{One of \code{"loo"} (calls \code{\link[loo]{loo}}) or \code{"waic"} (calls \code{\link[loo]{waic}}).}
\item{...}{Currently ignored}
\item{x}{An \code{\link{mcpfit}} object.}
}
\value{
a \code{loo} or \code{psis_loo} object.
}
\description{
Takes an \code{\link{mcpfit}} as input and computes information criteria using loo or
WAIC. Compare models using \code{\link[loo]{loo_compare}} and \code{\link[loo]{loo_model_weights}}.
more in \code{\link[loo]{loo}}.
}
\section{Functions}{
\itemize{
\item \code{loo(mcpfit)}: Computes loo on mcpfit objects
\item \code{waic(mcpfit)}: Computes WAIC on mcpfit objects
}}
\examples{
\donttest{
# Define two models and sample them
# options(mc.cores = 3) # Speed up sampling
ex = mcp_example("intercepts") # Get some simulated data.
model1 = list(y ~ 1 + x, ~ 1)
model2 = list(y ~ 1 + x) # Without a change point
fit1 = mcp(model1, ex$data)
fit2 = mcp(model2, ex$data)
# Compute LOO for each and compare (works for waic(fit) too)
fit1$loo = loo(fit1)
fit2$loo = loo(fit2)
loo::loo_compare(fit1$loo, fit2$loo)
}
}
\seealso{
\code{\link{criterion}}
\code{\link{criterion}}
}
\author{
Jonas Kristoffer Lindeløv \email{jonas@lindeloev.dk}
}
| /man/criterion.Rd | no_license | lindeloev/mcp | R | false | true | 1,659 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/comparison.R
\encoding{UTF-8}
\name{criterion}
\alias{criterion}
\alias{loo.mcpfit}
\alias{loo}
\alias{LOO}
\alias{waic.mcpfit}
\alias{waic}
\alias{WAIC}
\title{Compute information criteria for model comparison}
\usage{
criterion(fit, criterion = "loo", ...)
\method{loo}{mcpfit}(x, ...)
\method{waic}{mcpfit}(x, ...)
}
\arguments{
\item{fit}{An \code{\link{mcpfit}} object.}
\item{criterion}{One of \code{"loo"} (calls \code{\link[loo]{loo}}) or \code{"waic"} (calls \code{\link[loo]{waic}}).}
\item{...}{Currently ignored}
\item{x}{An \code{\link{mcpfit}} object.}
}
\value{
a \code{loo} or \code{psis_loo} object.
}
\description{
Takes an \code{\link{mcpfit}} as input and computes information criteria using loo or
WAIC. Compare models using \code{\link[loo]{loo_compare}} and \code{\link[loo]{loo_model_weights}}.
more in \code{\link[loo]{loo}}.
}
\section{Functions}{
\itemize{
\item \code{loo(mcpfit)}: Computes loo on mcpfit objects
\item \code{waic(mcpfit)}: Computes WAIC on mcpfit objects
}}
\examples{
\donttest{
# Define two models and sample them
# options(mc.cores = 3) # Speed up sampling
ex = mcp_example("intercepts") # Get some simulated data.
model1 = list(y ~ 1 + x, ~ 1)
model2 = list(y ~ 1 + x) # Without a change point
fit1 = mcp(model1, ex$data)
fit2 = mcp(model2, ex$data)
# Compute LOO for each and compare (works for waic(fit) too)
fit1$loo = loo(fit1)
fit2$loo = loo(fit2)
loo::loo_compare(fit1$loo, fit2$loo)
}
}
\seealso{
\code{\link{criterion}}
\code{\link{criterion}}
}
\author{
Jonas Kristoffer Lindeløv \email{jonas@lindeloev.dk}
}
|
library(RDCOMClient)
OutApp <- COMCreate("Outlook.Application")
outmail = OutApp$CreateItem(0)
outmail[["To"]] = "sridhar.upadhya@accenture.com"
outmail[["subject"]] = "some subject"
outmail[["body"]] <- "hello"
outmail$Send()
class(RemReqs)
li <- as.list(RemReqs)
li
| /Mail.R | no_license | upadhyaya/R-all-on-CR | R | false | false | 280 | r | library(RDCOMClient)
OutApp <- COMCreate("Outlook.Application")
outmail = OutApp$CreateItem(0)
outmail[["To"]] = "sridhar.upadhya@accenture.com"
outmail[["subject"]] = "some subject"
outmail[["body"]] <- "hello"
outmail$Send()
class(RemReqs)
li <- as.list(RemReqs)
li
|
## Cufflinks class_code
## gffcompare class_code (https://ccb.jhu.edu/software/stringtie/gffcompare.shtml)
## Following only avaible from gffcompare (15 class code)
# k: reverse containment
# m: retained intron, full intron chain overlap/match
# n: retained intron, partial or no intron chain match
# y: contains a reference within its intron
## Following only avaible from cuffcompare
# .: multi
cuffClass=list(
`Known`=c(
`Complete match`="=",
`Contained`="c",
`Reverse contained`="k"
),
`Novel`=c(
`Potentially novel isoform`="j",
`Within a reference intron`="i",
`Exoninc overlap on the opposite strand`="x",
`Contains a reference within its intron`="y",
`Generic exonic overlap`="o",
`Retained introns (full)`="m",
`Retained introns (partial)`="n",
`Unkown, intergenic transcript`="u"
),
`Artefact`=c(
`Multiple classifications`=".",
`Possible polymerase run-on`="p",
`Possible pre-mRNA fragment`="e",
`Intron on the opposite strand`="s", # likely mapping-error
`Repeat`="r"
)
)
dt.cuffClass=data.table(
rbind(
c(class="Known",class_code="=",desc="Complete match"),
c(class="Known",class_code="c",desc="Contained"),
c(class="Known",class_code="k",desc="Reverse contained"),
c(class="Novel",class_code="j",desc="Potentially novel isoform"),
c(class="Novel",class_code="i",desc="Within a reference intron"),
c(class="Novel",class_code="x",desc="Exoninc overlap on the opposite strand"),
c(class="Novel",class_code="y",desc="Contains a reference within its intron"),
c(class="Novel",class_code="o",desc="Generic exonic overlap"),
c(class="Novel",class_code="m",desc="Retained introns (full)"),
c(class="Novel",class_code="n",desc="Retained introns (partial)"),
c(class="Novel",class_code="u",desc="Unkown, intergenic transcript"),
c(class="Artefact",class_code=".",desc="Multiple classifications"),
c(class="Artefact",class_code="p",desc="Possible polymerase run-on"),
c(class="Artefact",class_code="e",desc="Possible pre-mRNA fragment"),
c(class="Artefact",class_code="s",desc="Intron on the opposite strand"),
c(class="Artefact",class_code="r",desc="Repeat"))
)
dt.cuffClass$class=factor(dt.cuffClass$class,levels=c("Known","Novel","Artefact"))
#http://www.ensembl.org/common/Help/Glossary?db=core
#http://www.ensembl.org/Help/Faq?id=468
#http://www.ensembl.org/Help/View?id=151
bioType=list(
`Protein coding`=c(
'protein_coding', 'nonsense_mediated_decay', 'nontranslating_CDS', 'non_stop_decay', 'polymorphic_pseudogene', 'LRG_gene',
'IG_C_gene', 'IG_D_gene', 'IG_gene', 'IG_J_gene', 'IG_LV_gene', 'IG_M_gene', 'IG_V_gene', 'IG_Z_gene',
'TR_C_gene', 'TR_D_gene', 'TR_J_gene', 'TR_V_gene'
),
`Pseudogene`=c(
'pseudogene', 'processed_pseudogene', 'translated_processed_pseudogene', 'transcribed_processed_pseudogene', 'transcribed_unprocessed_pseudogene', 'unitary_pseudogene', 'transcribed_unitary_pseudogene',
'unprocessed_pseudogene', 'disrupted_domain', 'retained_intron',
'IG_C_pseudogene', 'IG_D_pseudogene', 'IG_J_pseudogene', 'IG_V_pseudogene', 'IG_pseudogene',
'TR_C_pseudogene', 'TR_D_pseudogene', 'TR_J_pseudogene', 'TR_V_pseudogene'
),
`Long noncoding`=c(
'3prime_overlapping_ncrna', 'ambiguous_orf', 'antisense', 'antisense_RNA', 'lincRNA', 'ncrna_host',
'processed_transcript', 'sense_intronic', 'sense_overlapping', 'macro_lncRNA', 'vaultRNA'
),
`Short noncoding`=c(
'miRNA', 'miRNA_pseudogene', 'piRNA', 'misc_RNA', 'misc_RNA_pseudogene', 'Mt_rRNA', 'Mt_tRNA', 'rRNA', 'scRNA', 'snlRNA',
'sRNA', 'snoRNA', 'scaRNA', 'snRNA', 'tRNA', 'tRNA_pseudogene', 'ribozyme'
),
`Others`=c(
'TEC', 'Artifact'
)
)
| /lib/cufflink.R | permissive | Keyong-bio/POPS-Placenta-Transcriptome-2020 | R | false | false | 3,691 | r | ## Cufflinks class_code
## gffcompare class_code (https://ccb.jhu.edu/software/stringtie/gffcompare.shtml)
## Following only avaible from gffcompare (15 class code)
# k: reverse containment
# m: retained intron, full intron chain overlap/match
# n: retained intron, partial or no intron chain match
# y: contains a reference within its intron
## Following only avaible from cuffcompare
# .: multi
cuffClass=list(
`Known`=c(
`Complete match`="=",
`Contained`="c",
`Reverse contained`="k"
),
`Novel`=c(
`Potentially novel isoform`="j",
`Within a reference intron`="i",
`Exoninc overlap on the opposite strand`="x",
`Contains a reference within its intron`="y",
`Generic exonic overlap`="o",
`Retained introns (full)`="m",
`Retained introns (partial)`="n",
`Unkown, intergenic transcript`="u"
),
`Artefact`=c(
`Multiple classifications`=".",
`Possible polymerase run-on`="p",
`Possible pre-mRNA fragment`="e",
`Intron on the opposite strand`="s", # likely mapping-error
`Repeat`="r"
)
)
dt.cuffClass=data.table(
rbind(
c(class="Known",class_code="=",desc="Complete match"),
c(class="Known",class_code="c",desc="Contained"),
c(class="Known",class_code="k",desc="Reverse contained"),
c(class="Novel",class_code="j",desc="Potentially novel isoform"),
c(class="Novel",class_code="i",desc="Within a reference intron"),
c(class="Novel",class_code="x",desc="Exoninc overlap on the opposite strand"),
c(class="Novel",class_code="y",desc="Contains a reference within its intron"),
c(class="Novel",class_code="o",desc="Generic exonic overlap"),
c(class="Novel",class_code="m",desc="Retained introns (full)"),
c(class="Novel",class_code="n",desc="Retained introns (partial)"),
c(class="Novel",class_code="u",desc="Unkown, intergenic transcript"),
c(class="Artefact",class_code=".",desc="Multiple classifications"),
c(class="Artefact",class_code="p",desc="Possible polymerase run-on"),
c(class="Artefact",class_code="e",desc="Possible pre-mRNA fragment"),
c(class="Artefact",class_code="s",desc="Intron on the opposite strand"),
c(class="Artefact",class_code="r",desc="Repeat"))
)
dt.cuffClass$class=factor(dt.cuffClass$class,levels=c("Known","Novel","Artefact"))
#http://www.ensembl.org/common/Help/Glossary?db=core
#http://www.ensembl.org/Help/Faq?id=468
#http://www.ensembl.org/Help/View?id=151
bioType=list(
`Protein coding`=c(
'protein_coding', 'nonsense_mediated_decay', 'nontranslating_CDS', 'non_stop_decay', 'polymorphic_pseudogene', 'LRG_gene',
'IG_C_gene', 'IG_D_gene', 'IG_gene', 'IG_J_gene', 'IG_LV_gene', 'IG_M_gene', 'IG_V_gene', 'IG_Z_gene',
'TR_C_gene', 'TR_D_gene', 'TR_J_gene', 'TR_V_gene'
),
`Pseudogene`=c(
'pseudogene', 'processed_pseudogene', 'translated_processed_pseudogene', 'transcribed_processed_pseudogene', 'transcribed_unprocessed_pseudogene', 'unitary_pseudogene', 'transcribed_unitary_pseudogene',
'unprocessed_pseudogene', 'disrupted_domain', 'retained_intron',
'IG_C_pseudogene', 'IG_D_pseudogene', 'IG_J_pseudogene', 'IG_V_pseudogene', 'IG_pseudogene',
'TR_C_pseudogene', 'TR_D_pseudogene', 'TR_J_pseudogene', 'TR_V_pseudogene'
),
`Long noncoding`=c(
'3prime_overlapping_ncrna', 'ambiguous_orf', 'antisense', 'antisense_RNA', 'lincRNA', 'ncrna_host',
'processed_transcript', 'sense_intronic', 'sense_overlapping', 'macro_lncRNA', 'vaultRNA'
),
`Short noncoding`=c(
'miRNA', 'miRNA_pseudogene', 'piRNA', 'misc_RNA', 'misc_RNA_pseudogene', 'Mt_rRNA', 'Mt_tRNA', 'rRNA', 'scRNA', 'snlRNA',
'sRNA', 'snoRNA', 'scaRNA', 'snRNA', 'tRNA', 'tRNA_pseudogene', 'ribozyme'
),
`Others`=c(
'TEC', 'Artifact'
)
)
|
\name{S.STpiPS}
\alias{S.STpiPS}
\title{Stratified Sampling Applying Without Replacement piPS Design in all Strata}
\description{Draws a probability proportional to size simple random sample without
replacement of size \eqn{n_h} in stratum \eqn{h} of size \eqn{N_h}}
\usage{
S.STpiPS(S,x,nh)
}
\arguments{
\item{S}{Vector identifying the membership to the strata of each unit in the population}
\item{x}{Vector of auxiliary information for each unit in the population}
\item{nh}{Vector of sample size in each stratum}
}
\seealso{
\code{\link{E.STpiPS}}
}
\details{The selected sample is drawn according to the Sunter method (sequential-list procedure) in each stratum}
\value{The function returns a matrix of \eqn{n=n_1+\cdots+n_h} rows and two columns. Each element of the first column indicates the unit that
was selected. Each element of the second column indicates the inclusion probability of this unit}
\author{Hugo Andres Gutierrez Rojas \email{hagutierrezro@gmail.com}}
\references{
Sarndal, C-E. and Swensson, B. and Wretman, J. (1992), \emph{Model Assisted Survey Sampling}. Springer.\cr
Gutierrez, H. A. (2009), \emph{Estrategias de muestreo: Diseno de encuestas y estimacion de parametros}.
Editorial Universidad Santo Tomas.
}
\examples{
############
## Example 1
############
# Vector U contains the label of a population of size N=5
U <- c("Yves", "Ken", "Erik", "Sharon", "Leslie")
# The auxiliary information
x <- c(52, 60, 75, 100, 50)
# Vector Strata contains an indicator variable of stratum membership
Strata <- c("A", "A", "A", "B", "B")
# Then sample size in each stratum
mh <- c(2,2)
# Draws a stratified PPS sample with replacement of size n=4
res <- S.STPPS(Strata, x, mh)
# The selected sample
sam <- res[,1]
U[sam]
# The selection probability of each unit selected to be in the sample
pk <- res[,2]
pk
############
## Example 2
############
# Uses the Lucy data to draw a stratified random sample
# according to a piPS design in each stratum
data(Lucy)
attach(Lucy)
# Level is the stratifying variable
summary(Level)
# Defines the size of each stratum
N1<-summary(Level)[[1]]
N2<-summary(Level)[[2]]
N3<-summary(Level)[[3]]
N1;N2;N3
# Defines the sample size at each stratum
n1<-70
n2<-100
n3<-200
nh<-c(n1,n2,n3)
nh
# Draws a stratified sample
S <- Level
x <- Employees
res <- S.STpiPS(S, x, nh)
sam<-res[,1]
# The information about the units in the sample is stored in an object called data
data <- Lucy[sam,]
data
dim(data)
# The selection probability of each unit selected in the sample
pik <- res[,2]
pik
}
\keyword{survey}
| /man/S.STpiPS.Rd | no_license | psirusteam/TeachingSampling | R | false | false | 2,572 | rd | \name{S.STpiPS}
\alias{S.STpiPS}
\title{Stratified Sampling Applying Without Replacement piPS Design in all Strata}
\description{Draws a probability proportional to size simple random sample without
replacement of size \eqn{n_h} in stratum \eqn{h} of size \eqn{N_h}}
\usage{
S.STpiPS(S,x,nh)
}
\arguments{
\item{S}{Vector identifying the membership to the strata of each unit in the population}
\item{x}{Vector of auxiliary information for each unit in the population}
\item{nh}{Vector of sample size in each stratum}
}
\seealso{
\code{\link{E.STpiPS}}
}
\details{The selected sample is drawn according to the Sunter method (sequential-list procedure) in each stratum}
\value{The function returns a matrix of \eqn{n=n_1+\cdots+n_h} rows and two columns. Each element of the first column indicates the unit that
was selected. Each element of the second column indicates the inclusion probability of this unit}
\author{Hugo Andres Gutierrez Rojas \email{hagutierrezro@gmail.com}}
\references{
Sarndal, C-E. and Swensson, B. and Wretman, J. (1992), \emph{Model Assisted Survey Sampling}. Springer.\cr
Gutierrez, H. A. (2009), \emph{Estrategias de muestreo: Diseno de encuestas y estimacion de parametros}.
Editorial Universidad Santo Tomas.
}
\examples{
############
## Example 1
############
# Vector U contains the label of a population of size N=5
U <- c("Yves", "Ken", "Erik", "Sharon", "Leslie")
# The auxiliary information
x <- c(52, 60, 75, 100, 50)
# Vector Strata contains an indicator variable of stratum membership
Strata <- c("A", "A", "A", "B", "B")
# Then sample size in each stratum
mh <- c(2,2)
# Draws a stratified PPS sample with replacement of size n=4
res <- S.STPPS(Strata, x, mh)
# The selected sample
sam <- res[,1]
U[sam]
# The selection probability of each unit selected to be in the sample
pk <- res[,2]
pk
############
## Example 2
############
# Uses the Lucy data to draw a stratified random sample
# according to a piPS design in each stratum
data(Lucy)
attach(Lucy)
# Level is the stratifying variable
summary(Level)
# Defines the size of each stratum
N1<-summary(Level)[[1]]
N2<-summary(Level)[[2]]
N3<-summary(Level)[[3]]
N1;N2;N3
# Defines the sample size at each stratum
n1<-70
n2<-100
n3<-200
nh<-c(n1,n2,n3)
nh
# Draws a stratified sample
S <- Level
x <- Employees
res <- S.STpiPS(S, x, nh)
sam<-res[,1]
# The information about the units in the sample is stored in an object called data
data <- Lucy[sam,]
data
dim(data)
# The selection probability of each unit selected in the sample
pik <- res[,2]
pik
}
\keyword{survey}
|
setwd("C:/Users/Adam/Desktop/Data Science Capstone/Assignment3/specdata")
getwd()
data <- read.csv("226.csv")
data <- na.omit(data)
names(data)
#[1] "Date" "sulfate" "nitrate" "ID"
plot(sulfate~nitrate,data)
#calculate mean sulfate concentration
smean <-mean(data$sulfate,na.rm=TRUE)
abline(h=smean)
#use lm to fit a regression line through data
model1 <-lm (sulfate~nitrate,data)
model1
par(mfrow=c(3,2))
plot(sulfate~nitrate,data)
abline(h=smean)
abline(model1,col="red")
plot(model1)
termplot(model1)
summary(model1)
| /src/Question_13.R | no_license | ascerra12/Analysis-of-Air-Pollution | R | false | false | 582 | r | setwd("C:/Users/Adam/Desktop/Data Science Capstone/Assignment3/specdata")
getwd()
data <- read.csv("226.csv")
data <- na.omit(data)
names(data)
#[1] "Date" "sulfate" "nitrate" "ID"
plot(sulfate~nitrate,data)
#calculate mean sulfate concentration
smean <-mean(data$sulfate,na.rm=TRUE)
abline(h=smean)
#use lm to fit a regression line through data
model1 <-lm (sulfate~nitrate,data)
model1
par(mfrow=c(3,2))
plot(sulfate~nitrate,data)
abline(h=smean)
abline(model1,col="red")
plot(model1)
termplot(model1)
summary(model1)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/connectivity.R
\name{neuprint_connection_table}
\alias{neuprint_connection_table}
\title{Get the upstream and downstream connectivity of a neuron}
\usage{
neuprint_connection_table(bodyids, prepost = c("PRE", "POST"),
roi = NULL, progress = FALSE, dataset = NULL,
all_segments = TRUE, conn = NULL, ...)
}
\arguments{
\item{bodyids}{the body IDs for neurons/segments (bodies) you wish to query}
\item{prepost}{whether to look for partners presynaptic to postsynaptic to the given bodyids}
\item{roi}{a single ROI. Use \code{neuprint_ROIs} to see what is available.}
\item{progress}{default FALSE. If TRUE, the API is called separately for each neuron and yuo can asses its progress, if an error is thrown by any one bodyid, that bodyid is ignored}
\item{dataset}{optional, a dataset you want to query. If NULL, the default specified by your R environ file is used. See \code{neuprint_login} for details.}
\item{all_segments}{if TRUE, all bodies are considered, if FALSE, only 'Neurons', i.e. bodies with a status roughly traced status.}
\item{conn}{optional, a neuprintr connection object, which also specifies the neuPrint server see \code{?neuprint_login}.
If NULL, your defaults set in your R.profile or R.environ are used.}
\item{...}{methods passed to \code{neuprint_login}}
}
\value{
a data frame giving partners within an ROI, the connection strength for weights to or from that partner, and the direction, for the given bodyid
}
\description{
Get the upstream and downstream connectivity of a body, restricted to within an ROI if specified
}
\seealso{
\code{\link{neuprint_fetch_custom}}, \code{\link{neuprint_simple_connectivity}}, \code{\link{neuprint_common_connectivity}}
}
| /man/neuprint_connection_table.Rd | no_license | Tomke587/neuprintr | R | false | true | 1,774 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/connectivity.R
\name{neuprint_connection_table}
\alias{neuprint_connection_table}
\title{Get the upstream and downstream connectivity of a neuron}
\usage{
neuprint_connection_table(bodyids, prepost = c("PRE", "POST"),
roi = NULL, progress = FALSE, dataset = NULL,
all_segments = TRUE, conn = NULL, ...)
}
\arguments{
\item{bodyids}{the body IDs for neurons/segments (bodies) you wish to query}
\item{prepost}{whether to look for partners presynaptic to postsynaptic to the given bodyids}
\item{roi}{a single ROI. Use \code{neuprint_ROIs} to see what is available.}
\item{progress}{default FALSE. If TRUE, the API is called separately for each neuron and yuo can asses its progress, if an error is thrown by any one bodyid, that bodyid is ignored}
\item{dataset}{optional, a dataset you want to query. If NULL, the default specified by your R environ file is used. See \code{neuprint_login} for details.}
\item{all_segments}{if TRUE, all bodies are considered, if FALSE, only 'Neurons', i.e. bodies with a status roughly traced status.}
\item{conn}{optional, a neuprintr connection object, which also specifies the neuPrint server see \code{?neuprint_login}.
If NULL, your defaults set in your R.profile or R.environ are used.}
\item{...}{methods passed to \code{neuprint_login}}
}
\value{
a data frame giving partners within an ROI, the connection strength for weights to or from that partner, and the direction, for the given bodyid
}
\description{
Get the upstream and downstream connectivity of a body, restricted to within an ROI if specified
}
\seealso{
\code{\link{neuprint_fetch_custom}}, \code{\link{neuprint_simple_connectivity}}, \code{\link{neuprint_common_connectivity}}
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/qnews_search_contexts.R
\name{qnews_search_contexts}
\alias{qnews_search_contexts}
\alias{search_better}
\title{Get article metadata from GoogleNews RSS feed.}
\usage{
search_better(x)
qnews_search_contexts(qorp, search, window = 15, highlight_color = "#dae2ba")
}
\value{
A data frame.
}
\description{
Get article metadata from GoogleNews RSS feed.
}
| /man/qnews_search_contexts.Rd | no_license | han-tun/quicknews | R | false | true | 431 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/qnews_search_contexts.R
\name{qnews_search_contexts}
\alias{qnews_search_contexts}
\alias{search_better}
\title{Get article metadata from GoogleNews RSS feed.}
\usage{
search_better(x)
qnews_search_contexts(qorp, search, window = 15, highlight_color = "#dae2ba")
}
\value{
A data frame.
}
\description{
Get article metadata from GoogleNews RSS feed.
}
|
# x: the vector
# n: the number of samples
# centered: if FALSE, then average current sample and previous (n-1) samples
# if TRUE, then average symmetrically in past and future. (If n is even, use one more sample from future.)
movingAverage <- function(x, n=1, centered=FALSE) {
if (centered) {
before <- floor ((n-1)/2)
after <- ceiling((n-1)/2)
}else {
before <- n-1
after <- 0
}
# Track the sum and count of number of non-NA items
s <- rep(0, length(x))
count <- rep(0, length(x))
# Add the centered data
new <- x
# Add to count list wherever there isn't a
count <- count + !is.na(new)
# Now replace NA_s with 0_s and add to total
new[is.na(new)] <- 0
s <- s + new
# Add the data from before
i <- 1
while (i <= before) {
# This is the vector with offset values to add
new <- c(rep(NA, i), x[1:(length(x)-i)])
count <- count + !is.na(new)
new[is.na(new)] <- 0
s <- s + new
i <- i+1
}
# Add the data from after
i <- 1
while (i <= after) {
# This is the vector with offset values to add
new <- c(x[(i+1):length(x)], rep(NA, i))
count <- count + !is.na(new)
new[is.na(new)] <- 0
s <- s + new
i <- i+1
}
# return sum divided by count
s/count
}
# # Make same plots from before, with thicker lines
# plot(x, y, type="l", col=grey(.5))
# grid()
# y_lag <- filter(y, rep(1/20, 20), sides=1)
# lines(x, y_lag, col="red", lwd=4) # Lagged average in red
# y_sym <- filter(y, rep(1/21,21), sides=2)
# lines(x, y_sym, col="blue", lwd=4) # Symmetrical average in blue
# # Calculate lagged moving average with new method and overplot with green
# y_lag_na.rm <- movingAverage(y, 20)
# lines(x, y_lag_na.rm, col="green", lwd=2)
# # Calculate symmetrical moving average with new method and overplot with green
# y_sym_na.rm <- movingAverage(y, 21, TRUE)
# lines(x, y_sym_na.rm, col="green", lwd=2) | /R求移动平均数.R | no_license | highandhigh/MyUsefulCode | R | false | false | 2,068 | r | # x: the vector
# n: the number of samples
# centered: if FALSE, then average current sample and previous (n-1) samples
# if TRUE, then average symmetrically in past and future. (If n is even, use one more sample from future.)
movingAverage <- function(x, n=1, centered=FALSE) {
if (centered) {
before <- floor ((n-1)/2)
after <- ceiling((n-1)/2)
}else {
before <- n-1
after <- 0
}
# Track the sum and count of number of non-NA items
s <- rep(0, length(x))
count <- rep(0, length(x))
# Add the centered data
new <- x
# Add to count list wherever there isn't a
count <- count + !is.na(new)
# Now replace NA_s with 0_s and add to total
new[is.na(new)] <- 0
s <- s + new
# Add the data from before
i <- 1
while (i <= before) {
# This is the vector with offset values to add
new <- c(rep(NA, i), x[1:(length(x)-i)])
count <- count + !is.na(new)
new[is.na(new)] <- 0
s <- s + new
i <- i+1
}
# Add the data from after
i <- 1
while (i <= after) {
# This is the vector with offset values to add
new <- c(x[(i+1):length(x)], rep(NA, i))
count <- count + !is.na(new)
new[is.na(new)] <- 0
s <- s + new
i <- i+1
}
# return sum divided by count
s/count
}
# # Make same plots from before, with thicker lines
# plot(x, y, type="l", col=grey(.5))
# grid()
# y_lag <- filter(y, rep(1/20, 20), sides=1)
# lines(x, y_lag, col="red", lwd=4) # Lagged average in red
# y_sym <- filter(y, rep(1/21,21), sides=2)
# lines(x, y_sym, col="blue", lwd=4) # Symmetrical average in blue
# # Calculate lagged moving average with new method and overplot with green
# y_lag_na.rm <- movingAverage(y, 20)
# lines(x, y_lag_na.rm, col="green", lwd=2)
# # Calculate symmetrical moving average with new method and overplot with green
# y_sym_na.rm <- movingAverage(y, 21, TRUE)
# lines(x, y_sym_na.rm, col="green", lwd=2) |
# #' @title Creates gray-level run length matrix from RIA image
# #' @encoding UTF-8
# #'
# #' @description Creates gray-level run length matrix (GLRLM) from \emph{RIA_image}.
# #' GLRLM assesses the spatial relation of voxels to each other by investigating how many times
# #' same value voxels occur next to each other in a given direction. By default the \emph{$modif}
# #' image will be used to calculate GLRLMs. If \emph{use_slot} is given, then the data
# #' present in \emph{RIA_image$use_slot} will be used for calculations.
# #' Results will be saved into the \emph{glrlm} slot. The name of the subslot is determined
# #' by the supplied string in \emph{save_name}, or is automatically generated by RIA. \emph{off_right},
# #' \emph{off_down} and \emph{off_z} logicals are used to indicate the direction of the runs.
# #'
# #' @param RIA_data_in \emph{RIA_image}.
# #' @param off_right integer, positive values indicate to look to the right, negative values
# #' indicate to look to the left, while 0 indicates no offset in the X plane.
# #' @param off_down integer, positive values indicate to look to the right, negative values
# #' indicate to look to the left, while 0 indicates no offset in the Y plane.
# #' @param off_z integer, positive values indicate to look to the right, negative values
# #' indicate to look to the left, while 0 indicates no offset in the Z plane.
# #' @param use_type string, can be \emph{"single"} which runs the function on a single image,
# #' which is determined using \emph{"use_orig"} or \emph{"use_slot"}. \emph{"discretized"}
# #' takes all datasets in the \emph{RIA_image$discretized} slot and runs the analysis on them.
# #' @param use_orig logical, indicating to use image present in \emph{RIA_data$orig}.
# #' If FALSE, the modified image will be used stored in \emph{RIA_data$modif}.
# #' @param use_slot string, name of slot where data wished to be used is. Use if the desired image
# #' is not in the \emph{data$orig} or \emph{data$modif} slot of the \emph{RIA_image}. For example,
# #' if the desired dataset is in \emph{RIA_image$discretized$ep_4}, then \emph{use_slot} should be
# #' \emph{discretized$ep_4}. The results are automatically saved. If the results are not saved to
# #' the desired slot, then please use \emph{save_name} parameter.
# #' @param save_name string, indicating the name of subslot of \emph{$glcm} to save results to.
# #' If left empty, then it will be automatically determined based on the
# #' last entry of \emph{RIA_image$log$events}.
# #' @param verbose_in logical indicating whether to print detailed information.
# #' Most prints can also be suppressed using the \code{\link{suppressMessages}} function.
# #'
# #' @return \emph{RIA_image} containing the GLRLM.
# #'
# #' @examples \dontrun{
# #' #Discretize loaded image and then calculate GLRLM matrix of RIA_image$modif
# #' RIA_image <- discretize(RIA_image, bins_in = c(4, 8), equal_prob = TRUE,
# #' use_orig = TRUE, write_orig = FALSE)
# #' RIA_image <- glrlm(RIA_image, use_orig = FALSE, verbose_in = TRUE)
# #'
# #' #Use use_slot parameter to set which image to use
# #' RIA_image <- glrlm(RIA_image, use_orig = FALSE, use_slot = "discretized$ep_4",
# #' off_right = 1, off_down = 1, off_z = 0)
# #'
# #' #Batch calculation of GLRLM matrices on all discretized images
# #' RIA_image <- glrlm(RIA_image, use_type = "discretized",
# #' off_right = 1, off_down = 1, off_z = 0)
# #' }
# #'
# #' @references
# #' Mary M. Galloway et al.
# #' Texture analysis using gray level run lengths.
# #' Computer Graphics and Image Processing. 1975; 4:172-179.
# #' DOI: 10.1016/S0146-664X(75)80008-6
# #' \url{https://www.sciencedirect.com/science/article/pii/S0146664X75800086/}
# #'
# #' Márton KOLOSSVÁRY et al.
# #' Radiomic Features Are Superior to Conventional Quantitative Computed Tomographic
# #' Metrics to Identify Coronary Plaques With Napkin-Ring Sign
# #' Circulation: Cardiovascular Imaging (2017).
# #' DOI: 10.1161/circimaging.117.006843
# #' \url{https://pubmed.ncbi.nlm.nih.gov/29233836/}
# #'
# #' Márton KOLOSSVÁRY et al.
# #' Cardiac Computed Tomography Radiomics: A Comprehensive Review on Radiomic Techniques.
# #' Journal of Thoracic Imaging (2018).
# #' DOI: 10.1097/RTI.0000000000000268
# #' \url{https://pubmed.ncbi.nlm.nih.gov/28346329/}
# #' @encoding UTF-8
glrlm <- function(RIA_data_in, off_right = 1, off_down = 0, off_z = 0, use_type = "single", use_orig = FALSE, use_slot = NULL, save_name = NULL, verbose_in = TRUE)
{
data_in_orig <- check_data_in(RIA_data_in, use_type = use_type, use_orig = use_orig, use_slot = use_slot, verbose_in = verbose_in)
if(any(class(data_in_orig) != "list")) data_in_orig <- list(data_in_orig)
list_names <- names(data_in_orig)
if(!is.null(save_name) & (length(data_in_orig) != length(save_name))) {stop(paste0("PLEASE PROVIDE THE SAME NUMBER OF NAMES AS THERE ARE IMAGES!\n",
"NUMBER OF NAMES: ", length(save_name), "\n",
"NUMBER OF IMAGES: ", length(data_in), "\n"))
}
for (k in 1: length(data_in_orig))
{
data_in <- data_in_orig[[k]]
if(off_z & dim(data_in)[3] == 1) {{stop("WARNING: CANNOT ASSESS Z PLANE OFFSET IF DATA IS 2D!")}}
data_NA <- as.vector(data_in)
data_NA <- data_NA[!is.na(data_NA)]
if(length(data_NA) == 0) {stop("WARNING: SUPPLIED RIA_image DOES NOT CONTAIN ANY DATA!!!")}
if(length(dim(data_in)) < 2 | length(dim(data_in)) > 3) stop(paste0("DATA LOADED IS ", length(dim(data_in)), " DIMENSIONAL. ONLY 2D AND 3D DATA ARE SUPPORTED!"))
dim_x <- dim(data_in)[1]
dim_y <- dim(data_in)[2]
dim_z <- ifelse(!is.na(dim(data_in)[3]), dim(data_in)[3], 1)
if(off_right > 0) off_right = 1; if(off_right < 0) off_right = -1
if(off_down > 0) off_down = 1; if(off_down < 0) off_down = -1
if(off_z > 0) off_z = 1; if(off_z < 0) off_z = -1
base_m <- array(NA, dim = c(dim_x+dim_x*abs(off_down), dim_y+dim_y*abs(off_right), dim_z+dim_z*abs(off_z)))
offset <- array(c(dim_x*off_down, dim_y*off_right, dim_z*off_z)); offset[offset == 0] <- NA; offset <- min(abs(offset), na.rm = TRUE)
#Position data into correct possition
if(off_down > -1 & off_right > -1 & off_z > -1) {base_m[1:dim_x, 1:dim_y, 1:dim_z] <- data_in
}else if(off_down == -1 & off_right > -1 & off_z > -1) {base_m[(dim_x+1):(2*dim_x), 1:dim_y, 1:dim_z] <- data_in
}else if(off_down > -1 & off_right == -1 & off_z > -1) {base_m[1:dim_x, (dim_y+1):(2*dim_y), 1:dim_z] <- data_in
}else if(off_down > -1 & off_right > -1 & off_z == -1) {base_m[1:dim_x, 1:dim_y, (dim_z+1):(2*dim_z)] <- data_in
}else if(off_down == -1 & off_right == -1 & off_z > -1) {base_m[(dim_x+1):(2*dim_x), (dim_y+1):(2*dim_y), 1:dim_z] <- data_in
}else if(off_down == -1 & off_right > -1 & off_z == -1) {base_m[(dim_x+1):(2*dim_x), 1:dim_y, (dim_z+1):(2*dim_z)] <- data_in
}else if(off_down > -1 & off_right == -1 & off_z == -1) {base_m[1:dim_x, (dim_y+1):(2*dim_y), (dim_z+1):(2*dim_z)] <- data_in
}else if(off_down == -1 & off_right == -1 & off_z == -1) {base_m[(dim_x+1):(2*dim_x), (dim_y+1):(2*dim_y), (dim_z+1):(2*dim_z)] <- data_in
}else {stop("WARNING: OFFSETS ARE NOT APPROPRIATE PLEASE SEE help(glrlm)!!!")
}
#create gray level number, first by the name of the file, then the event log
num_ind <- unlist(gregexpr('[1-9]', list_names[k]))
num_txt <- substr(list_names[k], num_ind[1], num_ind[length(num_ind)])
gray_levels <- as.numeric(num_txt)
if(length(gray_levels) == 0) {
txt <- automatic_name(RIA_data_in, use_orig, use_slot)
num_ind <- unlist(gregexpr('[1-9]', txt))
num_txt <- substr(txt, num_ind[1], num_ind[length(num_ind)])
gray_levels <- as.numeric(num_txt)
}
gray_levels_unique <- unique(data_NA)[order(unique(data_NA))] #optimize which gray values to run on
glrlm <- array(0, c(gray_levels, offset))
for (i in gray_levels_unique) {
base_filt_m <- data_in; base_filt_m[base_filt_m != i] <- NA; base_filt_m[base_filt_m == i] <- 1
base_filt_change_m <- array(NA, dim = c(dim_x+dim_x*abs(off_down), dim_y+dim_y*abs(off_right), dim_z+dim_z*abs(off_z)))
if(off_down > -1 & off_right > -1 & off_z > -1) {base_filt_change_m[1:dim_x, 1:dim_y, 1:dim_z] <- base_filt_m
} else if(off_down == -1 & off_right > -1 & off_z > -1) {base_filt_change_m[(dim_x+1):(2*dim_x), 1:dim_y, 1:dim_z] <- base_filt_m
} else if(off_down > -1 & off_right == -1 & off_z > -1) {base_filt_change_m[1:dim_x, (dim_y+1):(2*dim_y), 1:dim_z] <- base_filt_m
} else if(off_down > -1 & off_right > -1 & off_z == -1) {base_filt_change_m[1:dim_x, 1:dim_y, (dim_z+1):(2*dim_z)] <- base_filt_m
} else if(off_down == -1 & off_right == -1 & off_z > -1) {base_filt_change_m[(dim_x+1):(2*dim_x), (dim_y+1):(2*dim_y), 1:dim_z] <- base_filt_m
} else if(off_down == -1 & off_right > -1 & off_z == -1) {base_filt_change_m[(dim_x+1):(2*dim_x), 1:dim_y, (dim_z+1):(2*dim_z)] <- base_filt_m
} else if(off_down > -1 & off_right == -1 & off_z == -1) {base_filt_change_m[1:dim_x, (dim_y+1):(2*dim_y), (dim_z+1):(2*dim_z)] <- base_filt_m
} else if(off_down == -1 & off_right == -1 & off_z == -1) {base_filt_change_m[(dim_x+1):(2*dim_x), (dim_y+1):(2*dim_y), (dim_z+1):(2*dim_z)] <- base_filt_m
} else {stop("WARNING: OFFSETS ARE NOT APPROPRIATE PLEASE SEE help(glrlm)!!!")
}
for (j in 1: (offset-1)) {
shift_m <- array(NA, dim = c(dim_x+dim_x*abs(off_down), dim_y+dim_y*abs(off_right), dim_z+dim_z*abs(off_z)))
if(off_down > -1 & off_right > -1 & off_z > -1) {shift_m[(1+j*off_down):(dim_x+j*off_down), (1+j*off_right):(dim_y+j*off_right), (1+j*off_z):(dim_z+j*off_z)] <- base_filt_m
} else if(off_down == -1 & off_right > -1 & off_z > -1) {shift_m[((dim_x+1)+j*off_down):((2*dim_x)+j*off_down), (1+j*off_right):(dim_y+j*off_right), (1+j*off_z):(dim_z+j*off_z)] <- base_filt_m
} else if(off_down > -1 & off_right == -1 & off_z > -1) {shift_m[(1+j*off_down):(dim_x+j*off_down), ((dim_y+1)+j*off_right):((2*dim_y)+j*off_right), (1+j*off_z):(dim_z+j*off_z)] <- base_filt_m
} else if(off_down > -1 & off_right > -1 & off_z == -1) {shift_m[(1+j*off_down):(dim_x+j*off_down), (1+j*off_right):(dim_y+j*off_right), ((dim_z+1)+j*off_z):((2*dim_z)+j*off_z)] <- base_filt_m
} else if(off_down == -1 & off_right == -1 & off_z > -1) {shift_m[((dim_x+1)+j*off_down):((2*dim_x)+j*off_down), ((dim_y+1)+j*off_right):((2*dim_y)+j*off_right), (1+j*off_z):(dim_z+j*off_z)] <- base_filt_m
} else if(off_down == -1 & off_right > -1 & off_z == -1) {shift_m[((dim_x+1)+j*off_down):((2*dim_x)+j*off_down), (1+j*off_right):(dim_y+j*off_right), ((dim_z+1)+j*off_z):((2*dim_z)+j*off_z)] <- base_filt_m
} else if(off_down > -1 & off_right == -1 & off_z == -1) {shift_m[(1+j*off_down):(dim_x+j*off_down), ((dim_y+1)+j*off_right):((2*dim_y)+j*off_right), ((dim_z+1)+j*off_z):((2*dim_z)+j*off_z)] <- base_filt_m
} else if(off_down == -1 & off_right == -1 & off_z == -1) {shift_m[((dim_x+1)+j*off_down):((2*dim_x)+j*off_down), ((dim_y+1)+j*off_right):((2*dim_y)+j*off_right), ((dim_z+1)+j*off_z):((2*dim_z)+j*off_z)] <- base_filt_m
} else {stop("WARNING: OFFSETS ARE NOT APPROPRIATE PLEASE SEE help(glrlm)!!!")
}
diff_m <- base_filt_change_m - shift_m
count_diff <- length(diff_m[!is.na(diff_m)])
glrlm[i,(j+1)] <- count_diff
base_filt_change_m <- diff_m
}
count_gl <- base_filt_m; count_gl <- count_gl[!is.na(count_gl)]; count_gl <- sum(count_gl, na.rm = TRUE)
#Count GLRLM runs by calculating the number of maximum runs and subtracting it from shorter runs. If longest possible run is 2, then no need.
if(dim(glrlm)[2] >= 3) {
for (p in seq(dim(glrlm)[2], 3, -1)) {
if(glrlm[i, p] > 0) {
m = 2
for (q in seq((p-1), 2, -1)) {
glrlm[i, q] <- glrlm[i, q] - glrlm[i, p]*m
m = m+1
}
}
}
}
#Remaining runs are equal to single occurrences
glrlm_r_sum <- sum(col(glrlm)[1,]*glrlm[i,], na.rm = TRUE)
if(glrlm_r_sum != count_gl) {glrlm[i,1] <- (count_gl-glrlm_r_sum)
} else {glrlm[i,1] <- 0}
}
if(use_type == "single") {
if(any(class(RIA_data_in) == "RIA_image") )
{
if(is.null(save_name)) {
txt <- automatic_name(RIA_data_in, use_orig, use_slot)
txt <- paste0(txt, "_", as.numeric(off_right), as.numeric(off_down), as.numeric(off_z))
RIA_data_in$glrlm[[txt]] <- glrlm
}
if(!is.null(save_name)) {RIA_data_in$glrlm[[save_name]] <- glrlm
}
}
}
if(use_type == "discretized") {
if(any(class(RIA_data_in) == "RIA_image"))
{
if(is.null(save_name[k])) {
txt <- list_names[k]
txt <- paste0(txt, "_", as.numeric(off_right), as.numeric(off_down), as.numeric(off_z))
RIA_data_in$glrlm[[txt]] <- glrlm
}
if(!is.null(save_name[k])) {RIA_data_in$glrlm[[save_name[k]]] <- glrlm
}
}
}
if(is.null(save_name)) {txt_name <- txt
} else {txt_name <- save_name}
if(verbose_in) {message(paste0("GLRLM WAS SUCCESSFULLY ADDED TO '", txt_name, "' SLOT OF RIA_image$glrlm\n"))}
}
if(any(class(RIA_data_in) == "RIA_image")) {return(RIA_data_in)
} else {return(glrlm)}
} | /R/glrlm.R | no_license | cran/RIA | R | false | false | 13,775 | r | # #' @title Creates gray-level run length matrix from RIA image
# #' @encoding UTF-8
# #'
# #' @description Creates gray-level run length matrix (GLRLM) from \emph{RIA_image}.
# #' GLRLM assesses the spatial relation of voxels to each other by investigating how many times
# #' same value voxels occur next to each other in a given direction. By default the \emph{$modif}
# #' image will be used to calculate GLRLMs. If \emph{use_slot} is given, then the data
# #' present in \emph{RIA_image$use_slot} will be used for calculations.
# #' Results will be saved into the \emph{glrlm} slot. The name of the subslot is determined
# #' by the supplied string in \emph{save_name}, or is automatically generated by RIA. \emph{off_right},
# #' \emph{off_down} and \emph{off_z} logicals are used to indicate the direction of the runs.
# #'
# #' @param RIA_data_in \emph{RIA_image}.
# #' @param off_right integer, positive values indicate to look to the right, negative values
# #' indicate to look to the left, while 0 indicates no offset in the X plane.
# #' @param off_down integer, positive values indicate to look to the right, negative values
# #' indicate to look to the left, while 0 indicates no offset in the Y plane.
# #' @param off_z integer, positive values indicate to look to the right, negative values
# #' indicate to look to the left, while 0 indicates no offset in the Z plane.
# #' @param use_type string, can be \emph{"single"} which runs the function on a single image,
# #' which is determined using \emph{"use_orig"} or \emph{"use_slot"}. \emph{"discretized"}
# #' takes all datasets in the \emph{RIA_image$discretized} slot and runs the analysis on them.
# #' @param use_orig logical, indicating to use image present in \emph{RIA_data$orig}.
# #' If FALSE, the modified image will be used stored in \emph{RIA_data$modif}.
# #' @param use_slot string, name of slot where data wished to be used is. Use if the desired image
# #' is not in the \emph{data$orig} or \emph{data$modif} slot of the \emph{RIA_image}. For example,
# #' if the desired dataset is in \emph{RIA_image$discretized$ep_4}, then \emph{use_slot} should be
# #' \emph{discretized$ep_4}. The results are automatically saved. If the results are not saved to
# #' the desired slot, then please use \emph{save_name} parameter.
# #' @param save_name string, indicating the name of subslot of \emph{$glcm} to save results to.
# #' If left empty, then it will be automatically determined based on the
# #' last entry of \emph{RIA_image$log$events}.
# #' @param verbose_in logical indicating whether to print detailed information.
# #' Most prints can also be suppressed using the \code{\link{suppressMessages}} function.
# #'
# #' @return \emph{RIA_image} containing the GLRLM.
# #'
# #' @examples \dontrun{
# #' #Discretize loaded image and then calculate GLRLM matrix of RIA_image$modif
# #' RIA_image <- discretize(RIA_image, bins_in = c(4, 8), equal_prob = TRUE,
# #' use_orig = TRUE, write_orig = FALSE)
# #' RIA_image <- glrlm(RIA_image, use_orig = FALSE, verbose_in = TRUE)
# #'
# #' #Use use_slot parameter to set which image to use
# #' RIA_image <- glrlm(RIA_image, use_orig = FALSE, use_slot = "discretized$ep_4",
# #' off_right = 1, off_down = 1, off_z = 0)
# #'
# #' #Batch calculation of GLRLM matrices on all discretized images
# #' RIA_image <- glrlm(RIA_image, use_type = "discretized",
# #' off_right = 1, off_down = 1, off_z = 0)
# #' }
# #'
# #' @references
# #' Mary M. Galloway et al.
# #' Texture analysis using gray level run lengths.
# #' Computer Graphics and Image Processing. 1975; 4:172-179.
# #' DOI: 10.1016/S0146-664X(75)80008-6
# #' \url{https://www.sciencedirect.com/science/article/pii/S0146664X75800086/}
# #'
# #' Márton KOLOSSVÁRY et al.
# #' Radiomic Features Are Superior to Conventional Quantitative Computed Tomographic
# #' Metrics to Identify Coronary Plaques With Napkin-Ring Sign
# #' Circulation: Cardiovascular Imaging (2017).
# #' DOI: 10.1161/circimaging.117.006843
# #' \url{https://pubmed.ncbi.nlm.nih.gov/29233836/}
# #'
# #' Márton KOLOSSVÁRY et al.
# #' Cardiac Computed Tomography Radiomics: A Comprehensive Review on Radiomic Techniques.
# #' Journal of Thoracic Imaging (2018).
# #' DOI: 10.1097/RTI.0000000000000268
# #' \url{https://pubmed.ncbi.nlm.nih.gov/28346329/}
# #' @encoding UTF-8
glrlm <- function(RIA_data_in, off_right = 1, off_down = 0, off_z = 0, use_type = "single", use_orig = FALSE, use_slot = NULL, save_name = NULL, verbose_in = TRUE)
{
data_in_orig <- check_data_in(RIA_data_in, use_type = use_type, use_orig = use_orig, use_slot = use_slot, verbose_in = verbose_in)
if(any(class(data_in_orig) != "list")) data_in_orig <- list(data_in_orig)
list_names <- names(data_in_orig)
if(!is.null(save_name) & (length(data_in_orig) != length(save_name))) {stop(paste0("PLEASE PROVIDE THE SAME NUMBER OF NAMES AS THERE ARE IMAGES!\n",
"NUMBER OF NAMES: ", length(save_name), "\n",
"NUMBER OF IMAGES: ", length(data_in), "\n"))
}
for (k in 1: length(data_in_orig))
{
data_in <- data_in_orig[[k]]
if(off_z & dim(data_in)[3] == 1) {{stop("WARNING: CANNOT ASSESS Z PLANE OFFSET IF DATA IS 2D!")}}
data_NA <- as.vector(data_in)
data_NA <- data_NA[!is.na(data_NA)]
if(length(data_NA) == 0) {stop("WARNING: SUPPLIED RIA_image DOES NOT CONTAIN ANY DATA!!!")}
if(length(dim(data_in)) < 2 | length(dim(data_in)) > 3) stop(paste0("DATA LOADED IS ", length(dim(data_in)), " DIMENSIONAL. ONLY 2D AND 3D DATA ARE SUPPORTED!"))
dim_x <- dim(data_in)[1]
dim_y <- dim(data_in)[2]
dim_z <- ifelse(!is.na(dim(data_in)[3]), dim(data_in)[3], 1)
if(off_right > 0) off_right = 1; if(off_right < 0) off_right = -1
if(off_down > 0) off_down = 1; if(off_down < 0) off_down = -1
if(off_z > 0) off_z = 1; if(off_z < 0) off_z = -1
base_m <- array(NA, dim = c(dim_x+dim_x*abs(off_down), dim_y+dim_y*abs(off_right), dim_z+dim_z*abs(off_z)))
offset <- array(c(dim_x*off_down, dim_y*off_right, dim_z*off_z)); offset[offset == 0] <- NA; offset <- min(abs(offset), na.rm = TRUE)
#Position data into correct possition
if(off_down > -1 & off_right > -1 & off_z > -1) {base_m[1:dim_x, 1:dim_y, 1:dim_z] <- data_in
}else if(off_down == -1 & off_right > -1 & off_z > -1) {base_m[(dim_x+1):(2*dim_x), 1:dim_y, 1:dim_z] <- data_in
}else if(off_down > -1 & off_right == -1 & off_z > -1) {base_m[1:dim_x, (dim_y+1):(2*dim_y), 1:dim_z] <- data_in
}else if(off_down > -1 & off_right > -1 & off_z == -1) {base_m[1:dim_x, 1:dim_y, (dim_z+1):(2*dim_z)] <- data_in
}else if(off_down == -1 & off_right == -1 & off_z > -1) {base_m[(dim_x+1):(2*dim_x), (dim_y+1):(2*dim_y), 1:dim_z] <- data_in
}else if(off_down == -1 & off_right > -1 & off_z == -1) {base_m[(dim_x+1):(2*dim_x), 1:dim_y, (dim_z+1):(2*dim_z)] <- data_in
}else if(off_down > -1 & off_right == -1 & off_z == -1) {base_m[1:dim_x, (dim_y+1):(2*dim_y), (dim_z+1):(2*dim_z)] <- data_in
}else if(off_down == -1 & off_right == -1 & off_z == -1) {base_m[(dim_x+1):(2*dim_x), (dim_y+1):(2*dim_y), (dim_z+1):(2*dim_z)] <- data_in
}else {stop("WARNING: OFFSETS ARE NOT APPROPRIATE PLEASE SEE help(glrlm)!!!")
}
#create gray level number, first by the name of the file, then the event log
num_ind <- unlist(gregexpr('[1-9]', list_names[k]))
num_txt <- substr(list_names[k], num_ind[1], num_ind[length(num_ind)])
gray_levels <- as.numeric(num_txt)
if(length(gray_levels) == 0) {
txt <- automatic_name(RIA_data_in, use_orig, use_slot)
num_ind <- unlist(gregexpr('[1-9]', txt))
num_txt <- substr(txt, num_ind[1], num_ind[length(num_ind)])
gray_levels <- as.numeric(num_txt)
}
gray_levels_unique <- unique(data_NA)[order(unique(data_NA))] #optimize which gray values to run on
glrlm <- array(0, c(gray_levels, offset))
for (i in gray_levels_unique) {
base_filt_m <- data_in; base_filt_m[base_filt_m != i] <- NA; base_filt_m[base_filt_m == i] <- 1
base_filt_change_m <- array(NA, dim = c(dim_x+dim_x*abs(off_down), dim_y+dim_y*abs(off_right), dim_z+dim_z*abs(off_z)))
if(off_down > -1 & off_right > -1 & off_z > -1) {base_filt_change_m[1:dim_x, 1:dim_y, 1:dim_z] <- base_filt_m
} else if(off_down == -1 & off_right > -1 & off_z > -1) {base_filt_change_m[(dim_x+1):(2*dim_x), 1:dim_y, 1:dim_z] <- base_filt_m
} else if(off_down > -1 & off_right == -1 & off_z > -1) {base_filt_change_m[1:dim_x, (dim_y+1):(2*dim_y), 1:dim_z] <- base_filt_m
} else if(off_down > -1 & off_right > -1 & off_z == -1) {base_filt_change_m[1:dim_x, 1:dim_y, (dim_z+1):(2*dim_z)] <- base_filt_m
} else if(off_down == -1 & off_right == -1 & off_z > -1) {base_filt_change_m[(dim_x+1):(2*dim_x), (dim_y+1):(2*dim_y), 1:dim_z] <- base_filt_m
} else if(off_down == -1 & off_right > -1 & off_z == -1) {base_filt_change_m[(dim_x+1):(2*dim_x), 1:dim_y, (dim_z+1):(2*dim_z)] <- base_filt_m
} else if(off_down > -1 & off_right == -1 & off_z == -1) {base_filt_change_m[1:dim_x, (dim_y+1):(2*dim_y), (dim_z+1):(2*dim_z)] <- base_filt_m
} else if(off_down == -1 & off_right == -1 & off_z == -1) {base_filt_change_m[(dim_x+1):(2*dim_x), (dim_y+1):(2*dim_y), (dim_z+1):(2*dim_z)] <- base_filt_m
} else {stop("WARNING: OFFSETS ARE NOT APPROPRIATE PLEASE SEE help(glrlm)!!!")
}
for (j in 1: (offset-1)) {
shift_m <- array(NA, dim = c(dim_x+dim_x*abs(off_down), dim_y+dim_y*abs(off_right), dim_z+dim_z*abs(off_z)))
if(off_down > -1 & off_right > -1 & off_z > -1) {shift_m[(1+j*off_down):(dim_x+j*off_down), (1+j*off_right):(dim_y+j*off_right), (1+j*off_z):(dim_z+j*off_z)] <- base_filt_m
} else if(off_down == -1 & off_right > -1 & off_z > -1) {shift_m[((dim_x+1)+j*off_down):((2*dim_x)+j*off_down), (1+j*off_right):(dim_y+j*off_right), (1+j*off_z):(dim_z+j*off_z)] <- base_filt_m
} else if(off_down > -1 & off_right == -1 & off_z > -1) {shift_m[(1+j*off_down):(dim_x+j*off_down), ((dim_y+1)+j*off_right):((2*dim_y)+j*off_right), (1+j*off_z):(dim_z+j*off_z)] <- base_filt_m
} else if(off_down > -1 & off_right > -1 & off_z == -1) {shift_m[(1+j*off_down):(dim_x+j*off_down), (1+j*off_right):(dim_y+j*off_right), ((dim_z+1)+j*off_z):((2*dim_z)+j*off_z)] <- base_filt_m
} else if(off_down == -1 & off_right == -1 & off_z > -1) {shift_m[((dim_x+1)+j*off_down):((2*dim_x)+j*off_down), ((dim_y+1)+j*off_right):((2*dim_y)+j*off_right), (1+j*off_z):(dim_z+j*off_z)] <- base_filt_m
} else if(off_down == -1 & off_right > -1 & off_z == -1) {shift_m[((dim_x+1)+j*off_down):((2*dim_x)+j*off_down), (1+j*off_right):(dim_y+j*off_right), ((dim_z+1)+j*off_z):((2*dim_z)+j*off_z)] <- base_filt_m
} else if(off_down > -1 & off_right == -1 & off_z == -1) {shift_m[(1+j*off_down):(dim_x+j*off_down), ((dim_y+1)+j*off_right):((2*dim_y)+j*off_right), ((dim_z+1)+j*off_z):((2*dim_z)+j*off_z)] <- base_filt_m
} else if(off_down == -1 & off_right == -1 & off_z == -1) {shift_m[((dim_x+1)+j*off_down):((2*dim_x)+j*off_down), ((dim_y+1)+j*off_right):((2*dim_y)+j*off_right), ((dim_z+1)+j*off_z):((2*dim_z)+j*off_z)] <- base_filt_m
} else {stop("WARNING: OFFSETS ARE NOT APPROPRIATE PLEASE SEE help(glrlm)!!!")
}
diff_m <- base_filt_change_m - shift_m
count_diff <- length(diff_m[!is.na(diff_m)])
glrlm[i,(j+1)] <- count_diff
base_filt_change_m <- diff_m
}
count_gl <- base_filt_m; count_gl <- count_gl[!is.na(count_gl)]; count_gl <- sum(count_gl, na.rm = TRUE)
#Count GLRLM runs by calculating the number of maximum runs and subtracting it from shorter runs. If longest possible run is 2, then no need.
if(dim(glrlm)[2] >= 3) {
for (p in seq(dim(glrlm)[2], 3, -1)) {
if(glrlm[i, p] > 0) {
m = 2
for (q in seq((p-1), 2, -1)) {
glrlm[i, q] <- glrlm[i, q] - glrlm[i, p]*m
m = m+1
}
}
}
}
#Remaining runs are equal to single occurrences
glrlm_r_sum <- sum(col(glrlm)[1,]*glrlm[i,], na.rm = TRUE)
if(glrlm_r_sum != count_gl) {glrlm[i,1] <- (count_gl-glrlm_r_sum)
} else {glrlm[i,1] <- 0}
}
if(use_type == "single") {
if(any(class(RIA_data_in) == "RIA_image") )
{
if(is.null(save_name)) {
txt <- automatic_name(RIA_data_in, use_orig, use_slot)
txt <- paste0(txt, "_", as.numeric(off_right), as.numeric(off_down), as.numeric(off_z))
RIA_data_in$glrlm[[txt]] <- glrlm
}
if(!is.null(save_name)) {RIA_data_in$glrlm[[save_name]] <- glrlm
}
}
}
if(use_type == "discretized") {
if(any(class(RIA_data_in) == "RIA_image"))
{
if(is.null(save_name[k])) {
txt <- list_names[k]
txt <- paste0(txt, "_", as.numeric(off_right), as.numeric(off_down), as.numeric(off_z))
RIA_data_in$glrlm[[txt]] <- glrlm
}
if(!is.null(save_name[k])) {RIA_data_in$glrlm[[save_name[k]]] <- glrlm
}
}
}
if(is.null(save_name)) {txt_name <- txt
} else {txt_name <- save_name}
if(verbose_in) {message(paste0("GLRLM WAS SUCCESSFULLY ADDED TO '", txt_name, "' SLOT OF RIA_image$glrlm\n"))}
}
if(any(class(RIA_data_in) == "RIA_image")) {return(RIA_data_in)
} else {return(glrlm)}
} |
% file MatManlyMix/man/IMDb.Rd
% This file is a component of the package 'MatManlyMix' for R
%---------------------
\name{Satellite}
\alias{Satellite}
\docType{data}
\encoding{UTF-8}
\title{Satellite data}
\description{Data publicly available at the University of California - Irvine machine learning repository (http://archive.ics.uci.edu/ml), was originally obtained by NASA.
}
\usage{data(IMDb)}
\format{
A list of 2 objects: Y and id, where Y represents the data array of spectral values and id represents the true id of three classes: Soil with vegetation stubble, damp grey soil, and grey soil. Y is the of dimensionality 4 x 9 x 845.
}
\details{The data are publicly available on http://archive.ics.uci.edu/ml.}
\examples{
data(Satellite)
}
\keyword{datasets}
| /man/Satellite.Rd | no_license | cran/MatManlyMix | R | false | false | 778 | rd | % file MatManlyMix/man/IMDb.Rd
% This file is a component of the package 'MatManlyMix' for R
%---------------------
\name{Satellite}
\alias{Satellite}
\docType{data}
\encoding{UTF-8}
\title{Satellite data}
\description{Data publicly available at the University of California - Irvine machine learning repository (http://archive.ics.uci.edu/ml), was originally obtained by NASA.
}
\usage{data(IMDb)}
\format{
A list of 2 objects: Y and id, where Y represents the data array of spectral values and id represents the true id of three classes: Soil with vegetation stubble, damp grey soil, and grey soil. Y is the of dimensionality 4 x 9 x 845.
}
\details{The data are publicly available on http://archive.ics.uci.edu/ml.}
\examples{
data(Satellite)
}
\keyword{datasets}
|
Write_fun <- function(matrix_list){
dir.create(file.path('output/Excel'), showWarnings = FALSE)
dir.create(file.path('output/RDS'), showWarnings = FALSE)
j <- 1
for (i in matrix_list){
name_sheet <- names(matrix_list[j])
write.xlsx(matrix_list[[j]], file = sprintf("output/Excel/Buffer%s_Threshold%s_Year%s.xlsx", BufferDistance, Threshold, Year), sheetName = names(matrix_list[j]), append = T)
saveRDS(matrix_list[j],file = sprintf("output/RDS/Buffer%s_Threshold%s_%s_%s", BufferDistance, Threshold, name_sheet, Year))
j <- j + 1
}
} | /R/write_data.R | no_license | JornDallinga/Forest_chrono | R | false | false | 563 | r | Write_fun <- function(matrix_list){
dir.create(file.path('output/Excel'), showWarnings = FALSE)
dir.create(file.path('output/RDS'), showWarnings = FALSE)
j <- 1
for (i in matrix_list){
name_sheet <- names(matrix_list[j])
write.xlsx(matrix_list[[j]], file = sprintf("output/Excel/Buffer%s_Threshold%s_Year%s.xlsx", BufferDistance, Threshold, Year), sheetName = names(matrix_list[j]), append = T)
saveRDS(matrix_list[j],file = sprintf("output/RDS/Buffer%s_Threshold%s_%s_%s", BufferDistance, Threshold, name_sheet, Year))
j <- j + 1
}
} |
Report <- R6::R6Class(
classname = "Report",
public = list(
print = function(success = TRUE, warning = TRUE, error = TRUE) {
types <- c(success_id, warning_id, error_id)[c(success, warning, error)]
cat("Validation summary: \n")
if (success) cat(" Number of successful validations: ", private$n_passed, "\n", sep = "")
if (warning) cat(" Number of failed validations: ", private$n_failed, "\n", sep = "")
if (error) cat(" Number of validations with warnings: ", private$n_warned, "\n", sep = "")
if (nrow(private$validation_results) > 0) {
cat("\n")
cat("Advanced view: \n")
print(private$validation_results %>%
dplyr::filter(type %in% types) %>%
dplyr::select(table_name, description, type, num.violations) %>%
dplyr::group_by(table_name, description, type) %>%
dplyr::summarise(total_violations = sum(num.violations)) %>%
knitr::kable())
}
invisible(self)
},
add_validations = function(data, name = NULL) {
object_name <- ifelse(!is.null(name), name, get_first_name(data))
results <- parse_results_to_df(data) %>%
dplyr::mutate(table_name = object_name) %>%
dplyr::select(table_name, dplyr::everything())
n_results <- get_results_number(results)
private$n_failed <- sum(private$n_failed, n_results[error_id], na.rm = TRUE)
private$n_warned <- sum(private$n_warned, n_results[warning_id], na.rm = TRUE)
private$n_passed <- sum(private$n_passed, n_results[success_id], na.rm = TRUE)
private$validation_results <- dplyr::bind_rows(private$validation_results, results)
invisible(data)
},
get_validations = function(unnest = FALSE) {
validation_results = private$validation_results
if (unnest) {
if (all(purrr::map_lgl(validation_results$error_df, is.null))) {
validation_results$error_df <- NULL
return(validation_results)
}
validation_results <- validation_results %>%
tidyr::unnest(error_df, keep_empty = TRUE)
}
validation_results
},
generate_html_report = function(extra_params) {
params_list <- modifyList(list(validation_results = private$validation_results), extra_params)
do.call(private$report_constructor, params_list)
},
save_html_report = function(
template = system.file("rmarkdown/templates/standard/skeleton/skeleton.Rmd", package = "data.validator"),
output_file = "validation_report.html", output_dir = getwd(), report_ui_constructor = render_semantic_report_ui,
...) {
private$report_constructor <- report_ui_constructor
rmarkdown::render(
input = template,
output_format = "html_document",
output_file = output_file,
output_dir = output_dir,
knit_root_dir = getwd(),
params = list(
generate_report_html = self$generate_html_report,
extra_params = list(...)
),
quiet = TRUE
)
},
save_log = function(file_name = "validation_log.txt", success, warning, error) {
sink(file_name)
self$print(success, warning, error)
sink()
},
save_results = function(file_name, method = write.csv, ...) {
self$get_validations(unnest = TRUE) %>%
write.csv(file = file_name)
}
),
private = list(
n_failed = 0,
n_passed = 0,
n_warned = 0,
validation_results = dplyr::tibble(),
report_constructor = NULL
)
)
#' Create new validator object
#'
#' @description The object returns R6 class environment resposible for storing validation results.
#' @export
data_validation_report <- function() {
Report$new()
}
#' Add validation results to the Report object
#'
#' @description This function adds results to validator object with aggregating summary of
#' success, error and warning checks. Moreover it parses assertr results attributes and stores
#' them inside usable table.
#'
#' @param data Data that was validated.
#' @param report Report object to store validation results.
#' @export
add_results <- function(data, report) {
report$add_validations(data, name = attr(data, "data-name"))
}
#' Get validation results
#'
#' @description The response is a list containing information about successful, failed, warning assertions and
#' the table stores important information about validation results. Those are:
#' \itemize{
#' \item table_name - name of validated table
#' \item assertion.id - id used for each assertion
#' \item description - assertion description
#' \item num.violations - number of violations (assertion and column specific)
#' \item call - assertion call
#' \item message - assertion result message for specific column
#' \item type - error, warning or success
#' \item error_df - nested table storing details about error or warning result (like vilated indexes and valies)
#' }
#' @param report Report object that stores validation results. See \link{add_results}.
#' @param unnest If TRUE, error_df table is unnested. Results with remaining columns duplicated in table.
#' @export
get_results <- function(report, unnest = FALSE) {
report$get_validations(unnest)
}
#' Saving results table to external file
#'
#' @param report Report object that stores validation results. See \link{get_results}.
#' @param file_name Name of the resulting file (including extension).
#' @param method Function that should be used to save results table (write.csv default).
#' @param ... Remaining parameters passed to \code{method}.
#' @export
save_results <- function(report, file_name = "results.csv", method = utils::write.csv, ...) {
report$save_results(file_name, method, ...)
}
#' Saving results as a HTML report
#'
#' @param report Report object that stores validation results.
#' @param output_file Html file name to write report to.
#' @param output_dir Target report directory.
#' @param ui_constructor Function of \code{validation_results} and optional parameters that generates HTML
#' code or HTML widget that should be used to generate report content. See \code{custom_report} example.
#' @param template Path to Rmd template in which ui_contructor is rendered. See \code{data.validator} rmarkdown
#' template to see basic construction - the one is used as a default template.
#' @param ... Additional parameters passed to \code{ui_constructor}.
#' @export
save_report <- function(report, output_file = "validation_report.html", output_dir = getwd(), ui_constructor = render_semantic_report_ui,
template = system.file("rmarkdown/templates/standard/skeleton/skeleton.Rmd", package = "data.validator"), ...) {
report$save_html_report(template, output_file, output_dir, ui_constructor, ...)
}
#' Save simple validation summary in text file
#'
#' @description Saves \code{print(validator)} output inside text file.
#' @param report Report object that stores validation results.
#' @param file_name Name of the resulting file (including extension).
#' @param success Should success results be presented?
#' @param warning Should warning results be presented?
#' @param error Should error results be presented?
#' @export
save_summary <- function(report, file_name = "validation_log.txt", success = TRUE, warning = TRUE, error = TRUE) {
report$save_log(file_name, success, warning, error)
}
| /R/report.R | no_license | G-Nia/data.validator | R | false | false | 7,430 | r | Report <- R6::R6Class(
classname = "Report",
public = list(
print = function(success = TRUE, warning = TRUE, error = TRUE) {
types <- c(success_id, warning_id, error_id)[c(success, warning, error)]
cat("Validation summary: \n")
if (success) cat(" Number of successful validations: ", private$n_passed, "\n", sep = "")
if (warning) cat(" Number of failed validations: ", private$n_failed, "\n", sep = "")
if (error) cat(" Number of validations with warnings: ", private$n_warned, "\n", sep = "")
if (nrow(private$validation_results) > 0) {
cat("\n")
cat("Advanced view: \n")
print(private$validation_results %>%
dplyr::filter(type %in% types) %>%
dplyr::select(table_name, description, type, num.violations) %>%
dplyr::group_by(table_name, description, type) %>%
dplyr::summarise(total_violations = sum(num.violations)) %>%
knitr::kable())
}
invisible(self)
},
add_validations = function(data, name = NULL) {
object_name <- ifelse(!is.null(name), name, get_first_name(data))
results <- parse_results_to_df(data) %>%
dplyr::mutate(table_name = object_name) %>%
dplyr::select(table_name, dplyr::everything())
n_results <- get_results_number(results)
private$n_failed <- sum(private$n_failed, n_results[error_id], na.rm = TRUE)
private$n_warned <- sum(private$n_warned, n_results[warning_id], na.rm = TRUE)
private$n_passed <- sum(private$n_passed, n_results[success_id], na.rm = TRUE)
private$validation_results <- dplyr::bind_rows(private$validation_results, results)
invisible(data)
},
get_validations = function(unnest = FALSE) {
validation_results = private$validation_results
if (unnest) {
if (all(purrr::map_lgl(validation_results$error_df, is.null))) {
validation_results$error_df <- NULL
return(validation_results)
}
validation_results <- validation_results %>%
tidyr::unnest(error_df, keep_empty = TRUE)
}
validation_results
},
generate_html_report = function(extra_params) {
params_list <- modifyList(list(validation_results = private$validation_results), extra_params)
do.call(private$report_constructor, params_list)
},
save_html_report = function(
template = system.file("rmarkdown/templates/standard/skeleton/skeleton.Rmd", package = "data.validator"),
output_file = "validation_report.html", output_dir = getwd(), report_ui_constructor = render_semantic_report_ui,
...) {
private$report_constructor <- report_ui_constructor
rmarkdown::render(
input = template,
output_format = "html_document",
output_file = output_file,
output_dir = output_dir,
knit_root_dir = getwd(),
params = list(
generate_report_html = self$generate_html_report,
extra_params = list(...)
),
quiet = TRUE
)
},
save_log = function(file_name = "validation_log.txt", success, warning, error) {
sink(file_name)
self$print(success, warning, error)
sink()
},
save_results = function(file_name, method = write.csv, ...) {
self$get_validations(unnest = TRUE) %>%
write.csv(file = file_name)
}
),
private = list(
n_failed = 0,
n_passed = 0,
n_warned = 0,
validation_results = dplyr::tibble(),
report_constructor = NULL
)
)
#' Create new validator object
#'
#' @description The object returns R6 class environment resposible for storing validation results.
#' @export
data_validation_report <- function() {
Report$new()
}
#' Add validation results to the Report object
#'
#' @description This function adds results to validator object with aggregating summary of
#' success, error and warning checks. Moreover it parses assertr results attributes and stores
#' them inside usable table.
#'
#' @param data Data that was validated.
#' @param report Report object to store validation results.
#' @export
add_results <- function(data, report) {
report$add_validations(data, name = attr(data, "data-name"))
}
#' Get validation results
#'
#' @description The response is a list containing information about successful, failed, warning assertions and
#' the table stores important information about validation results. Those are:
#' \itemize{
#' \item table_name - name of validated table
#' \item assertion.id - id used for each assertion
#' \item description - assertion description
#' \item num.violations - number of violations (assertion and column specific)
#' \item call - assertion call
#' \item message - assertion result message for specific column
#' \item type - error, warning or success
#' \item error_df - nested table storing details about error or warning result (like vilated indexes and valies)
#' }
#' @param report Report object that stores validation results. See \link{add_results}.
#' @param unnest If TRUE, error_df table is unnested. Results with remaining columns duplicated in table.
#' @export
get_results <- function(report, unnest = FALSE) {
report$get_validations(unnest)
}
#' Saving results table to external file
#'
#' @param report Report object that stores validation results. See \link{get_results}.
#' @param file_name Name of the resulting file (including extension).
#' @param method Function that should be used to save results table (write.csv default).
#' @param ... Remaining parameters passed to \code{method}.
#' @export
save_results <- function(report, file_name = "results.csv", method = utils::write.csv, ...) {
report$save_results(file_name, method, ...)
}
#' Saving results as a HTML report
#'
#' @param report Report object that stores validation results.
#' @param output_file Html file name to write report to.
#' @param output_dir Target report directory.
#' @param ui_constructor Function of \code{validation_results} and optional parameters that generates HTML
#' code or HTML widget that should be used to generate report content. See \code{custom_report} example.
#' @param template Path to Rmd template in which ui_contructor is rendered. See \code{data.validator} rmarkdown
#' template to see basic construction - the one is used as a default template.
#' @param ... Additional parameters passed to \code{ui_constructor}.
#' @export
save_report <- function(report, output_file = "validation_report.html", output_dir = getwd(), ui_constructor = render_semantic_report_ui,
template = system.file("rmarkdown/templates/standard/skeleton/skeleton.Rmd", package = "data.validator"), ...) {
report$save_html_report(template, output_file, output_dir, ui_constructor, ...)
}
#' Save simple validation summary in text file
#'
#' @description Saves \code{print(validator)} output inside text file.
#' @param report Report object that stores validation results.
#' @param file_name Name of the resulting file (including extension).
#' @param success Should success results be presented?
#' @param warning Should warning results be presented?
#' @param error Should error results be presented?
#' @export
save_summary <- function(report, file_name = "validation_log.txt", success = TRUE, warning = TRUE, error = TRUE) {
report$save_log(file_name, success, warning, error)
}
|
#' Function to build the DeepMedic Network from scratch so it can be customized
#'
#' @param model_params all 3 spatial dimensions of input shape must be even; must
#' also specify downsamp factor for downsampled pathway and kernel size of conv layers
#' Input size is contextual input size (largest patch before downsampling)
#'
#' @return
#' @export
#'
#' @examples
build_DeepMedic <- function(model_params){
high_res_path_image_size <- c(((model_params$input_shape[1:2]/model_params$d_factor - 16)*model_params$d_factor+16),model_params$input_shape[3],1)
low_res_path_image_size <- c((model_params$input_shape[1:2]/3), model_params$input_shape[3],1)
input_path_1 <- keras::layer_input(shape=high_res_path_image_size, name="input_path_1")
# input_shape_2 <- c((model_params$input_shape[1:3]%/%2+8), model_params$input_shape[4])
# input_shape_2 <- c((model_params$input_shape[1:2]%/%2+
# (model_params$kernel_size-1)*model_params$downsamp_factor),
# model_params$input_shape[3]%/%model_params$downsamp_factor,
# model_params$input_shape[4])
# for(d in 1:length(input_shape_2[1:3])){
# if(input_shape_2[d] %% 2 != 0){
# input_shape_2[d] <- input_shape_2[d] - 1
# }
# }
input_path_2 <- keras::layer_input(shape=low_res_path_image_size, name="input_path_2")
path_1 <- input_path_1 %>%
keras::layer_conv_3d(filters = 30, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_1") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 30, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_2") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_3") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_4") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_5") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_6") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 50, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_7") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 50, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_8") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02)
path_2 <- input_path_2 %>%
keras::layer_conv_3d(filters = 30, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_1") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 30, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_2") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_3") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_4") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_5") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_6") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 50, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_7") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 50, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_8") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_upsampling_3d(size=c(3,3,1))
# path_1_shape <- path_1$get_shape()$as_list() %>%
# unlist()
# path_2_shape <- path_2$get_shape()$as_list() %>%
# unlist()
# shape_diff <- path_1_shape - path_2_shape
# if(sum(shape_diff) > 0){
# path_2 <- path_2 %>%
# keras::layer_zero_padding_3d(padding = list(
# list(shape_diff[1],0),
# list(shape_diff[2],0),
# list(shape_diff[3],0)
# ))
# }
concat_layer <- keras::layer_add(list(path_1, path_2))
main_output <- concat_layer %>%
keras::layer_dense(units = 150, activation="relu") %>%
# keras::layer_alpha_dropout(rate=0.5) %>%
keras::layer_dense(units = 150, activation="relu") %>%
# keras::layer_alpha_dropout(rate=0.5) %>%
keras::layer_dense(units = model_params$num_classes,
activation = model_params$activation, name="main_output")
model <- keras::keras_model(inputs = c(input_path_1, input_path_2), outputs = main_output)
# model <- keras::keras_model(inputs = concat_layer, outputs = main_output)
model %>% keras::compile(optimizer = model_params$optimizer,
loss = model_params$loss,
metrics = model_params$metrics)
return(model)
}
| /R/build_DeepMedic.R | permissive | willi3by/niiMLr | R | false | false | 9,557 | r | #' Function to build the DeepMedic Network from scratch so it can be customized
#'
#' @param model_params all 3 spatial dimensions of input shape must be even; must
#' also specify downsamp factor for downsampled pathway and kernel size of conv layers
#' Input size is contextual input size (largest patch before downsampling)
#'
#' @return
#' @export
#'
#' @examples
build_DeepMedic <- function(model_params){
high_res_path_image_size <- c(((model_params$input_shape[1:2]/model_params$d_factor - 16)*model_params$d_factor+16),model_params$input_shape[3],1)
low_res_path_image_size <- c((model_params$input_shape[1:2]/3), model_params$input_shape[3],1)
input_path_1 <- keras::layer_input(shape=high_res_path_image_size, name="input_path_1")
# input_shape_2 <- c((model_params$input_shape[1:3]%/%2+8), model_params$input_shape[4])
# input_shape_2 <- c((model_params$input_shape[1:2]%/%2+
# (model_params$kernel_size-1)*model_params$downsamp_factor),
# model_params$input_shape[3]%/%model_params$downsamp_factor,
# model_params$input_shape[4])
# for(d in 1:length(input_shape_2[1:3])){
# if(input_shape_2[d] %% 2 != 0){
# input_shape_2[d] <- input_shape_2[d] - 1
# }
# }
input_path_2 <- keras::layer_input(shape=low_res_path_image_size, name="input_path_2")
path_1 <- input_path_1 %>%
keras::layer_conv_3d(filters = 30, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_1") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 30, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_2") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_3") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_4") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_5") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_6") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 50, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_7") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 50, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_1_conv_8") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02)
path_2 <- input_path_2 %>%
keras::layer_conv_3d(filters = 30, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_1") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 30, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_2") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_3") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_4") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_5") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 40, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_6") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 50, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_7") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_conv_3d(filters = 50, kernel_size = c(3,3,1),
kernel_initializer = keras::initializer_he_normal(),
kernel_regularizer = keras::regularizer_l1_l2(l1=0.00001, l2=0.0001),
name = "path_2_conv_8") %>%
keras::layer_activation_parametric_relu() %>%
keras::layer_batch_normalization() %>%
keras::layer_spatial_dropout_3d(rate=0.02) %>%
keras::layer_upsampling_3d(size=c(3,3,1))
# path_1_shape <- path_1$get_shape()$as_list() %>%
# unlist()
# path_2_shape <- path_2$get_shape()$as_list() %>%
# unlist()
# shape_diff <- path_1_shape - path_2_shape
# if(sum(shape_diff) > 0){
# path_2 <- path_2 %>%
# keras::layer_zero_padding_3d(padding = list(
# list(shape_diff[1],0),
# list(shape_diff[2],0),
# list(shape_diff[3],0)
# ))
# }
concat_layer <- keras::layer_add(list(path_1, path_2))
main_output <- concat_layer %>%
keras::layer_dense(units = 150, activation="relu") %>%
# keras::layer_alpha_dropout(rate=0.5) %>%
keras::layer_dense(units = 150, activation="relu") %>%
# keras::layer_alpha_dropout(rate=0.5) %>%
keras::layer_dense(units = model_params$num_classes,
activation = model_params$activation, name="main_output")
model <- keras::keras_model(inputs = c(input_path_1, input_path_2), outputs = main_output)
# model <- keras::keras_model(inputs = concat_layer, outputs = main_output)
model %>% keras::compile(optimizer = model_params$optimizer,
loss = model_params$loss,
metrics = model_params$metrics)
return(model)
}
|
# This script contains examples for a basic review of R
library(lobstr)
# See Chapters 1,2,4,6 of R4DS for additional details
##################################
#### R scripts and commenting ####
##################################
# Use '#' to comment code
# It is VERY IMPORTANT to leave yourself notes in comments
# This makes your code more readable to others and also reminds you what
# you were doing
# You should also use commenting to organize and separate code in meaningful
# sections or compartments.
# You can create collapsible code using multiple #
# Load Data ######################
# Plot Data ######################
# Load data ----------------------
# Plot data ----------------------
#################################
#### Using R as a calculator ####
#################################
# Note: You can used ctrl + enter (Windows) or cmd + enter (Mac) to send code
# to the console
# You can use R for basic mathematical operations
1+4
40*50
sqrt(2)
abs(4.56)
(4+5)/10
4+5/10
# Mathematical operators
# +
# -
# *
# /
# ^ or **
# %% - modulus - example: 5 %% 2
# %/% - integer division - 5 %/% 2
# What is going on here?
(5 %/% 2)*2+ (5 %% 2)
(50 %/% 12)*12+ (50 %% 12)
###########################
#### Object assignment ####
###########################
## Basic object assignment -----------------------------------------------------
# Note: The keyboard shortcut for the assignment operator is alt + "-" or option + "-"
x <- 2
# Why <- is preferred over = ?
# This is good practice:
x <- 1
# this works but is considered bad practice. Why?
x = 10
# How does '=' differ in functions?
# (Note: runif() generates random uniform numbers)
runif(n = 5)
# notice what happens here...
runif(min <- 5)
runif(min = 5)
# Side note: There are other assignment operators/functions. We will revisit
# these later. But here is a quick divergence:
## Scoping/global assignment operator - "<<-"
# Example of <<- (Note this is often not a great idea)
a<<-1
# note how regular assignment works
test_func <- function(x){
z <- x
}
test_func(5)
# versus global assignment
test_func <- function(x){
z <<- x
}
test_func(5)
## The assign() function also assigns values
assign("z",15)
# assignment can also be done backwards (although this is not standard)
13 -> x
## Naming conventions --------------------------------------------------------
# Consistency is important for efficient programming
# (See Advanced R 1st ED - Style guide - http://adv-r.had.co.nz/Style.html)
### Object Names - should be meaningful and consistent ------------------------
#### First, select a style and try to stay consistent
# snake_case (recomended)
my_vector <- c(1,2,3,4)
# camelCase
myVector <- c(1,2,3,4)
MyVector <- c(1,2,3,4)
# with periods
my.vector <- c(1,2,3,4)
#### Second, choose object names that are easy to understand but not too complex
# Good
start_year <- 2005
init_yr <- 2005
# Bad
first_year_of_the_simulation <- 2005
fyots <- 2005
simyr1 <- 2005
nelyx589 <- 2005
# Illegal Names - some names are not allowed
_abc <- 2
if <- 2
1abc <- 2
# but these can be forced with `` (however, try to avoid this)
`1abc` <- 2
### File Names - should also be meaningful --------------------------
# Good names
# regression-models.R
# utility-functions.R
# regression-models-01252021.R # Note: date stamp added
# Bad names
foo.r
stuff.r
# for files that need to be run sequentially
# 0-download.R
# 1-clean_data.R
# 2-build_models.R
## Copy on modify -----------------------------------------------
# R is lazy (this is a good thing) when it comes to object assignment
# When is value of b assigned a new location?
a <- c(1, 5, 3, 2)
b <- a
b[[1]] <- 10
# the function lobstr::obj_addr() tells us the address (memory location) of an object
a <- c(1, 5, 3, 2)
b <- a
lobstr::obj_addr(a)
lobstr::obj_addr(b)
b[[1]] <- 10
lobstr::obj_addr(b)
# R creates copies lazily (from help file)
x <- 1:10
y <- x
lobstr::obj_addr(x)
lobstr::obj_addr(y)
# The address of an object is different every time you create it:
obj_addr(1:10)
obj_addr(1:10)
#########################################
#### Workspace and working directory ####
#########################################
# summarizing workspace
ls()
# clearing objects
rm(a)
# clearing all objects
rm(list=ls())
# locating or setting the working directory
getwd()
setwd()
############################################
#### Very Basic Data Types & Structures ####
############################################
## Basic data types -----------------------------------------------------------
# checking object type/class
class(1)
# numeric or double
class(1)
# integer
class(1L)
# character
class("1")
# factor
class(as.factor("1"))
## Vectors --------------------------------------------------------------------
# created with c() function
vec1 <- c(1,2,3,4)
# or with : colon/sequence operator
vec2 <- 10:30
30:10
# or other functions
seq(from = 100, to = 1000, by = 30)
### vectorized operations
vec1^2
sqrt(vec1)
## list -----------------------------------------------------------------------
list(1,2,3,4)
list(vec1,vec2)
# lists can contain different data types
list(vec1,vec2,c("A","B","C","D"))
# values can also be named
list(a=1,
b="happy",
c=c(24L,30L),
d=1:30,
e=letters)
## data.frame -----------------------------------------------------------------
tmp_df <- data.frame(a = c(1,2,3),
b = c("happy","sad","mad"))
# Note: a data.frame is a special type of list
b <- as.list(tmp_df)
lobstr::obj_addr(tmp_df$a) == lobstr::obj_addr(b$a)
## missing values -------------------------------------------------------------
vec3 <- c(1,2,3,4,NA,6,7)
###################
#### Functions ####
###################
# R is a functional programing language...we have already seen a number of functions
mean(vec1)
median(vec2)
# Most functions allow or require multiple argument
# Notice required, optional and default argument
mean(vec3)
mean(vec3, na.rm = TRUE)
# default values
rnorm(10)
rnorm(10,mean = 10,sd = 3)
# writing a basic function
add_vals <- function(a,b){
a+b
}
add_vals(1,2)
add_vals(c(1,2,3),c(4,5,6))
################
#### Base R ####
################
# Functions and operators loaded as part of the default R install package,
# available without having to load packages
# view examples here: https://rstudio.com/wp-content/uploads/2016/10/r-cheat-sheet-3.pdf
#############################
#### installing packages ####
#############################
# installing packages
install.packages("dplyr")
# updating packages
update.packages()
# loading packages
library(dplyr)
# calling functions from within packages
# Note: function names often overlap
stats::filter()
dplyr::filter()
# installing packages from github
# UNCOMENT ONLY TO INSTALL THE DEVELOPMENT VERSION
# install.packages("devtools")
# devtools::install_github("hadley/dplyr")
############################
#### Logical Operations ####
############################
# basic
1<3
2<=2
5==4
5!=4
# and/or
TRUE & TRUE
TRUE & FALSE
TRUE | FALSE
# in
5 %in% c(1,2,3,4)
# is. functions
is.integer(1L)
is.integer(1)
is.character("happy")
is.na(NA)
NA == NA
is.numeric(1L)
is.double(1L)
# vectorized logic
vec1 <- 1:10
vec1
vec1 < 5
x <- c(TRUE,TRUE,FALSE)
!x
# Basic conditional statements
a <- 3
b <- 2
if (a>b) {"A"} else {"B"}
a <- 1
b <- 2
if (a==b) {
print("Equals")
} else {
print("Not Equal")
}
###########################
#### Internal datasets ####
###########################
# R comes with a number of preloaded datasets
mtcars
# use data() to see available datasets
data()
# or data sets contained in other packages
data(package = .packages(all.available = TRUE))
####################
#### Help Files ####
####################
# use ? or help() to get help files for function
?mean
help(mean)
# Example in help files
x <- c(0:10, 50)
xm <- mean(x)
c(xm, mean(x, trim = 0.1))
###########################
#### Sourceing Scripts ####
###########################
source("R/SimEpi/admin/install_packages.R")
| /in_class_scripts/basics.R | no_license | aarmiller/SimEpi | R | false | false | 8,097 | r |
# This script contains examples for a basic review of R
library(lobstr)
# See Chapters 1,2,4,6 of R4DS for additional details
##################################
#### R scripts and commenting ####
##################################
# Use '#' to comment code
# It is VERY IMPORTANT to leave yourself notes in comments
# This makes your code more readable to others and also reminds you what
# you were doing
# You should also use commenting to organize and separate code in meaningful
# sections or compartments.
# You can create collapsible code using multiple #
# Load Data ######################
# Plot Data ######################
# Load data ----------------------
# Plot data ----------------------
#################################
#### Using R as a calculator ####
#################################
# Note: You can used ctrl + enter (Windows) or cmd + enter (Mac) to send code
# to the console
# You can use R for basic mathematical operations
1+4
40*50
sqrt(2)
abs(4.56)
(4+5)/10
4+5/10
# Mathematical operators
# +
# -
# *
# /
# ^ or **
# %% - modulus - example: 5 %% 2
# %/% - integer division - 5 %/% 2
# What is going on here?
(5 %/% 2)*2+ (5 %% 2)
(50 %/% 12)*12+ (50 %% 12)
###########################
#### Object assignment ####
###########################
## Basic object assignment -----------------------------------------------------
# Note: The keyboard shortcut for the assignment operator is alt + "-" or option + "-"
x <- 2
# Why <- is preferred over = ?
# This is good practice:
x <- 1
# this works but is considered bad practice. Why?
x = 10
# How does '=' differ in functions?
# (Note: runif() generates random uniform numbers)
runif(n = 5)
# notice what happens here...
runif(min <- 5)
runif(min = 5)
# Side note: There are other assignment operators/functions. We will revisit
# these later. But here is a quick divergence:
## Scoping/global assignment operator - "<<-"
# Example of <<- (Note this is often not a great idea)
a<<-1
# note how regular assignment works
test_func <- function(x){
z <- x
}
test_func(5)
# versus global assignment
test_func <- function(x){
z <<- x
}
test_func(5)
## The assign() function also assigns values
assign("z",15)
# assignment can also be done backwards (although this is not standard)
13 -> x
## Naming conventions --------------------------------------------------------
# Consistency is important for efficient programming
# (See Advanced R 1st ED - Style guide - http://adv-r.had.co.nz/Style.html)
### Object Names - should be meaningful and consistent ------------------------
#### First, select a style and try to stay consistent
# snake_case (recomended)
my_vector <- c(1,2,3,4)
# camelCase
myVector <- c(1,2,3,4)
MyVector <- c(1,2,3,4)
# with periods
my.vector <- c(1,2,3,4)
#### Second, choose object names that are easy to understand but not too complex
# Good
start_year <- 2005
init_yr <- 2005
# Bad
first_year_of_the_simulation <- 2005
fyots <- 2005
simyr1 <- 2005
nelyx589 <- 2005
# Illegal Names - some names are not allowed
_abc <- 2
if <- 2
1abc <- 2
# but these can be forced with `` (however, try to avoid this)
`1abc` <- 2
### File Names - should also be meaningful --------------------------
# Good names
# regression-models.R
# utility-functions.R
# regression-models-01252021.R # Note: date stamp added
# Bad names
foo.r
stuff.r
# for files that need to be run sequentially
# 0-download.R
# 1-clean_data.R
# 2-build_models.R
## Copy on modify -----------------------------------------------
# R is lazy (this is a good thing) when it comes to object assignment
# When is value of b assigned a new location?
a <- c(1, 5, 3, 2)
b <- a
b[[1]] <- 10
# the function lobstr::obj_addr() tells us the address (memory location) of an object
a <- c(1, 5, 3, 2)
b <- a
lobstr::obj_addr(a)
lobstr::obj_addr(b)
b[[1]] <- 10
lobstr::obj_addr(b)
# R creates copies lazily (from help file)
x <- 1:10
y <- x
lobstr::obj_addr(x)
lobstr::obj_addr(y)
# The address of an object is different every time you create it:
obj_addr(1:10)
obj_addr(1:10)
#########################################
#### Workspace and working directory ####
#########################################
# summarizing workspace
ls()
# clearing objects
rm(a)
# clearing all objects
rm(list=ls())
# locating or setting the working directory
getwd()
setwd()
############################################
#### Very Basic Data Types & Structures ####
############################################
## Basic data types -----------------------------------------------------------
# checking object type/class
class(1)
# numeric or double
class(1)
# integer
class(1L)
# character
class("1")
# factor
class(as.factor("1"))
## Vectors --------------------------------------------------------------------
# created with c() function
vec1 <- c(1,2,3,4)
# or with : colon/sequence operator
vec2 <- 10:30
30:10
# or other functions
seq(from = 100, to = 1000, by = 30)
### vectorized operations
vec1^2
sqrt(vec1)
## list -----------------------------------------------------------------------
list(1,2,3,4)
list(vec1,vec2)
# lists can contain different data types
list(vec1,vec2,c("A","B","C","D"))
# values can also be named
list(a=1,
b="happy",
c=c(24L,30L),
d=1:30,
e=letters)
## data.frame -----------------------------------------------------------------
tmp_df <- data.frame(a = c(1,2,3),
b = c("happy","sad","mad"))
# Note: a data.frame is a special type of list
b <- as.list(tmp_df)
lobstr::obj_addr(tmp_df$a) == lobstr::obj_addr(b$a)
## missing values -------------------------------------------------------------
vec3 <- c(1,2,3,4,NA,6,7)
###################
#### Functions ####
###################
# R is a functional programing language...we have already seen a number of functions
mean(vec1)
median(vec2)
# Most functions allow or require multiple argument
# Notice required, optional and default argument
mean(vec3)
mean(vec3, na.rm = TRUE)
# default values
rnorm(10)
rnorm(10,mean = 10,sd = 3)
# writing a basic function
add_vals <- function(a,b){
a+b
}
add_vals(1,2)
add_vals(c(1,2,3),c(4,5,6))
################
#### Base R ####
################
# Functions and operators loaded as part of the default R install package,
# available without having to load packages
# view examples here: https://rstudio.com/wp-content/uploads/2016/10/r-cheat-sheet-3.pdf
#############################
#### installing packages ####
#############################
# installing packages
install.packages("dplyr")
# updating packages
update.packages()
# loading packages
library(dplyr)
# calling functions from within packages
# Note: function names often overlap
stats::filter()
dplyr::filter()
# installing packages from github
# UNCOMENT ONLY TO INSTALL THE DEVELOPMENT VERSION
# install.packages("devtools")
# devtools::install_github("hadley/dplyr")
############################
#### Logical Operations ####
############################
# basic
1<3
2<=2
5==4
5!=4
# and/or
TRUE & TRUE
TRUE & FALSE
TRUE | FALSE
# in
5 %in% c(1,2,3,4)
# is. functions
is.integer(1L)
is.integer(1)
is.character("happy")
is.na(NA)
NA == NA
is.numeric(1L)
is.double(1L)
# vectorized logic
vec1 <- 1:10
vec1
vec1 < 5
x <- c(TRUE,TRUE,FALSE)
!x
# Basic conditional statements
a <- 3
b <- 2
if (a>b) {"A"} else {"B"}
a <- 1
b <- 2
if (a==b) {
print("Equals")
} else {
print("Not Equal")
}
###########################
#### Internal datasets ####
###########################
# R comes with a number of preloaded datasets
mtcars
# use data() to see available datasets
data()
# or data sets contained in other packages
data(package = .packages(all.available = TRUE))
####################
#### Help Files ####
####################
# use ? or help() to get help files for function
?mean
help(mean)
# Example in help files
x <- c(0:10, 50)
xm <- mean(x)
c(xm, mean(x, trim = 0.1))
###########################
#### Sourceing Scripts ####
###########################
source("R/SimEpi/admin/install_packages.R")
|
library(sqldf)
powerconsumption<-read.csv.sql("C:/Users/youngj/downloads/household_power_consumption.txt",
sql="select* from file WHERE Date in ('1/2/2007', '2/2/2007')", header=TRUE, sep=";")
powerconsumption$NewDate <- as.POSIXct(paste(powerconsumption$Date, powerconsumption$Time,sep=" "), format="%d/%m/%Y %H:%M:%S")
plot(powerconsumption$NewDate, powerconsumption$Global_active_power, type ="l", xlab="", ylab="Global Active Power (kilowatts)")
dev.copy(png, file = "plot2.png", width=480, height=480)
dev.off()
| /plot2.R | no_license | Josh82/Exploratory-Data-Analysis | R | false | false | 549 | r | library(sqldf)
powerconsumption<-read.csv.sql("C:/Users/youngj/downloads/household_power_consumption.txt",
sql="select* from file WHERE Date in ('1/2/2007', '2/2/2007')", header=TRUE, sep=";")
powerconsumption$NewDate <- as.POSIXct(paste(powerconsumption$Date, powerconsumption$Time,sep=" "), format="%d/%m/%Y %H:%M:%S")
plot(powerconsumption$NewDate, powerconsumption$Global_active_power, type ="l", xlab="", ylab="Global Active Power (kilowatts)")
dev.copy(png, file = "plot2.png", width=480, height=480)
dev.off()
|
# Random Forest Classification
# Importing the dataset
setwd("C:/Users/adip9/Desktop/Udemy/Machine-Learning-A-Z-New/Machine Learning A-Z New/Part 3 - Classification/Section 20 - Random Forest Classification")
dataset = read.csv('Social_Network_Ads.csv')
dataset = dataset[3:5]
# Encoding the target feature as factor
dataset$Purchased = factor(dataset$Purchased, levels = c(0, 1))
# Splitting the dataset into the Training set and Test set
# install.packages('caTools')
library(caTools)
set.seed(123)
split = sample.split(dataset$Purchased, SplitRatio = 0.75)
training_set = subset(dataset, split == TRUE)
test_set = subset(dataset, split == FALSE)
# Feature Scaling
training_set[-3] = scale(training_set[-3])
test_set[-3] = scale(test_set[-3])
# Fitting classifier to the Training set
#install.packages('randomForest')
library(randomForest)
classifier = randomForest(x = training_set[-3],
y = training_set$Purchased,
ntree = 10)
# Predicting the Test set results
y_pred = predict(classifier, newdata = test_set[-3])
# Making the Confusion Matrix
cm = table(test_set[, 3], y_pred)
# Visualising the Training set results
library(ElemStatLearn)
set = training_set
X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.01)
X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.01)
grid_set = expand.grid(X1, X2)
colnames(grid_set) = c('Age', 'EstimatedSalary')
y_grid = predict(classifier, newdata = grid_set)
plot(set[, -3],
main = 'Random Forest Classification (Training set)',
xlab = 'Age', ylab = 'Estimated Salary',
xlim = range(X1), ylim = range(X2))
contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
points(grid_set, pch = '.', col = ifelse(y_grid == 1, 'springgreen3', 'tomato'))
points(set, pch = 21, bg = ifelse(set[, 3] == 1, 'green4', 'red3'))
# Visualising the Test set results
library(ElemStatLearn)
set = test_set
X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.01)
X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.01)
grid_set = expand.grid(X1, X2)
colnames(grid_set) = c('Age', 'EstimatedSalary')
y_grid = predict(classifier, newdata = grid_set)
plot(set[, -3],
main = 'Random Forest Classification (Test set)',
xlab = 'Age', ylab = 'Estimated Salary',
xlim = range(X1), ylim = range(X2))
contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
points(grid_set, pch = '.', col = ifelse(y_grid == 1, 'springgreen3', 'tomato'))
points(set, pch = 21, bg = ifelse(set[, 3] == 1, 'green4', 'red3')) | /Classification/Random Forest Classification/Aditya_Patel_Random Forest_Classifcation.R | no_license | ap752/Machine_Learning_With_R | R | false | false | 2,629 | r | # Random Forest Classification
# Importing the dataset
setwd("C:/Users/adip9/Desktop/Udemy/Machine-Learning-A-Z-New/Machine Learning A-Z New/Part 3 - Classification/Section 20 - Random Forest Classification")
dataset = read.csv('Social_Network_Ads.csv')
dataset = dataset[3:5]
# Encoding the target feature as factor
dataset$Purchased = factor(dataset$Purchased, levels = c(0, 1))
# Splitting the dataset into the Training set and Test set
# install.packages('caTools')
library(caTools)
set.seed(123)
split = sample.split(dataset$Purchased, SplitRatio = 0.75)
training_set = subset(dataset, split == TRUE)
test_set = subset(dataset, split == FALSE)
# Feature Scaling
training_set[-3] = scale(training_set[-3])
test_set[-3] = scale(test_set[-3])
# Fitting classifier to the Training set
#install.packages('randomForest')
library(randomForest)
classifier = randomForest(x = training_set[-3],
y = training_set$Purchased,
ntree = 10)
# Predicting the Test set results
y_pred = predict(classifier, newdata = test_set[-3])
# Making the Confusion Matrix
cm = table(test_set[, 3], y_pred)
# Visualising the Training set results
library(ElemStatLearn)
set = training_set
X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.01)
X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.01)
grid_set = expand.grid(X1, X2)
colnames(grid_set) = c('Age', 'EstimatedSalary')
y_grid = predict(classifier, newdata = grid_set)
plot(set[, -3],
main = 'Random Forest Classification (Training set)',
xlab = 'Age', ylab = 'Estimated Salary',
xlim = range(X1), ylim = range(X2))
contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
points(grid_set, pch = '.', col = ifelse(y_grid == 1, 'springgreen3', 'tomato'))
points(set, pch = 21, bg = ifelse(set[, 3] == 1, 'green4', 'red3'))
# Visualising the Test set results
library(ElemStatLearn)
set = test_set
X1 = seq(min(set[, 1]) - 1, max(set[, 1]) + 1, by = 0.01)
X2 = seq(min(set[, 2]) - 1, max(set[, 2]) + 1, by = 0.01)
grid_set = expand.grid(X1, X2)
colnames(grid_set) = c('Age', 'EstimatedSalary')
y_grid = predict(classifier, newdata = grid_set)
plot(set[, -3],
main = 'Random Forest Classification (Test set)',
xlab = 'Age', ylab = 'Estimated Salary',
xlim = range(X1), ylim = range(X2))
contour(X1, X2, matrix(as.numeric(y_grid), length(X1), length(X2)), add = TRUE)
points(grid_set, pch = '.', col = ifelse(y_grid == 1, 'springgreen3', 'tomato'))
points(set, pch = 21, bg = ifelse(set[, 3] == 1, 'green4', 'red3')) |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/personalizeevents_service.R
\name{personalizeevents}
\alias{personalizeevents}
\title{Amazon Personalize Events}
\usage{
personalizeevents(
config = list(),
credentials = list(),
endpoint = NULL,
region = NULL
)
}
\arguments{
\item{config}{Optional configuration of credentials, endpoint, and/or region.
\itemize{
\item{\strong{credentials}:} {\itemize{
\item{\strong{creds}:} {\itemize{
\item{\strong{access_key_id}:} {AWS access key ID}
\item{\strong{secret_access_key}:} {AWS secret access key}
\item{\strong{session_token}:} {AWS temporary session token}
}}
\item{\strong{profile}:} {The name of a profile to use. If not given, then the default profile is used.}
\item{\strong{anonymous}:} {Set anonymous credentials.}
\item{\strong{endpoint}:} {The complete URL to use for the constructed client.}
\item{\strong{region}:} {The AWS Region used in instantiating the client.}
}}
\item{\strong{close_connection}:} {Immediately close all HTTP connections.}
\item{\strong{timeout}:} {The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.}
\item{\strong{s3_force_path_style}:} {Set this to \code{true} to force the request to use path-style addressing, i.e. \verb{http://s3.amazonaws.com/BUCKET/KEY}.}
\item{\strong{sts_regional_endpoint}:} {Set sts regional endpoint resolver to regional or legacy \url{https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html}}
}}
\item{credentials}{Optional credentials shorthand for the config parameter
\itemize{
\item{\strong{creds}:} {\itemize{
\item{\strong{access_key_id}:} {AWS access key ID}
\item{\strong{secret_access_key}:} {AWS secret access key}
\item{\strong{session_token}:} {AWS temporary session token}
}}
\item{\strong{profile}:} {The name of a profile to use. If not given, then the default profile is used.}
\item{\strong{anonymous}:} {Set anonymous credentials.}
}}
\item{endpoint}{Optional shorthand for complete URL to use for the constructed client.}
\item{region}{Optional shorthand for AWS Region used in instantiating the client.}
}
\value{
A client for the service. You can call the service's operations using
syntax like \code{svc$operation(...)}, where \code{svc} is the name you've assigned
to the client. The available operations are listed in the
Operations section.
}
\description{
Amazon Personalize can consume real-time user event data, such as
\emph{stream} or \emph{click} data, and use it for model training either alone or
combined with historical data. For more information see \href{https://docs.aws.amazon.com/personalize/latest/dg/recording-events.html}{Recording Events}.
}
\section{Service syntax}{
\if{html}{\out{<div class="sourceCode">}}\preformatted{svc <- personalizeevents(
config = list(
credentials = list(
creds = list(
access_key_id = "string",
secret_access_key = "string",
session_token = "string"
),
profile = "string",
anonymous = "logical"
),
endpoint = "string",
region = "string",
close_connection = "logical",
timeout = "numeric",
s3_force_path_style = "logical",
sts_regional_endpoint = "string"
),
credentials = list(
creds = list(
access_key_id = "string",
secret_access_key = "string",
session_token = "string"
),
profile = "string",
anonymous = "logical"
),
endpoint = "string",
region = "string"
)
}\if{html}{\out{</div>}}
}
\section{Operations}{
\tabular{ll}{
\link[=personalizeevents_put_events]{put_events} \tab Records user interaction event data\cr
\link[=personalizeevents_put_items]{put_items} \tab Adds one or more items to an Items dataset\cr
\link[=personalizeevents_put_users]{put_users} \tab Adds one or more users to a Users dataset
}
}
\examples{
\dontrun{
svc <- personalizeevents()
svc$put_events(
Foo = 123
)
}
}
| /cran/paws.machine.learning/man/personalizeevents.Rd | permissive | paws-r/paws | R | false | true | 3,955 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/personalizeevents_service.R
\name{personalizeevents}
\alias{personalizeevents}
\title{Amazon Personalize Events}
\usage{
personalizeevents(
config = list(),
credentials = list(),
endpoint = NULL,
region = NULL
)
}
\arguments{
\item{config}{Optional configuration of credentials, endpoint, and/or region.
\itemize{
\item{\strong{credentials}:} {\itemize{
\item{\strong{creds}:} {\itemize{
\item{\strong{access_key_id}:} {AWS access key ID}
\item{\strong{secret_access_key}:} {AWS secret access key}
\item{\strong{session_token}:} {AWS temporary session token}
}}
\item{\strong{profile}:} {The name of a profile to use. If not given, then the default profile is used.}
\item{\strong{anonymous}:} {Set anonymous credentials.}
\item{\strong{endpoint}:} {The complete URL to use for the constructed client.}
\item{\strong{region}:} {The AWS Region used in instantiating the client.}
}}
\item{\strong{close_connection}:} {Immediately close all HTTP connections.}
\item{\strong{timeout}:} {The time in seconds till a timeout exception is thrown when attempting to make a connection. The default is 60 seconds.}
\item{\strong{s3_force_path_style}:} {Set this to \code{true} to force the request to use path-style addressing, i.e. \verb{http://s3.amazonaws.com/BUCKET/KEY}.}
\item{\strong{sts_regional_endpoint}:} {Set sts regional endpoint resolver to regional or legacy \url{https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html}}
}}
\item{credentials}{Optional credentials shorthand for the config parameter
\itemize{
\item{\strong{creds}:} {\itemize{
\item{\strong{access_key_id}:} {AWS access key ID}
\item{\strong{secret_access_key}:} {AWS secret access key}
\item{\strong{session_token}:} {AWS temporary session token}
}}
\item{\strong{profile}:} {The name of a profile to use. If not given, then the default profile is used.}
\item{\strong{anonymous}:} {Set anonymous credentials.}
}}
\item{endpoint}{Optional shorthand for complete URL to use for the constructed client.}
\item{region}{Optional shorthand for AWS Region used in instantiating the client.}
}
\value{
A client for the service. You can call the service's operations using
syntax like \code{svc$operation(...)}, where \code{svc} is the name you've assigned
to the client. The available operations are listed in the
Operations section.
}
\description{
Amazon Personalize can consume real-time user event data, such as
\emph{stream} or \emph{click} data, and use it for model training either alone or
combined with historical data. For more information see \href{https://docs.aws.amazon.com/personalize/latest/dg/recording-events.html}{Recording Events}.
}
\section{Service syntax}{
\if{html}{\out{<div class="sourceCode">}}\preformatted{svc <- personalizeevents(
config = list(
credentials = list(
creds = list(
access_key_id = "string",
secret_access_key = "string",
session_token = "string"
),
profile = "string",
anonymous = "logical"
),
endpoint = "string",
region = "string",
close_connection = "logical",
timeout = "numeric",
s3_force_path_style = "logical",
sts_regional_endpoint = "string"
),
credentials = list(
creds = list(
access_key_id = "string",
secret_access_key = "string",
session_token = "string"
),
profile = "string",
anonymous = "logical"
),
endpoint = "string",
region = "string"
)
}\if{html}{\out{</div>}}
}
\section{Operations}{
\tabular{ll}{
\link[=personalizeevents_put_events]{put_events} \tab Records user interaction event data\cr
\link[=personalizeevents_put_items]{put_items} \tab Adds one or more items to an Items dataset\cr
\link[=personalizeevents_put_users]{put_users} \tab Adds one or more users to a Users dataset
}
}
\examples{
\dontrun{
svc <- personalizeevents()
svc$put_events(
Foo = 123
)
}
}
|
testlist <- list(id = NULL, score = NULL, id = NULL, booklet_id = c(-643154262L, -640034375L, -1183008257L, -14414407L, -1073545456L, 2092564409L, -1179010631L, -62024L, -1195853640L, -640024577L, 184549375L, -17L, -49494L, -1073545472L, 16777215L, -1L, -1315861L, -1L, -1L, 1258233921L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -65536L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), item_score = integer(0), person_id = integer(0))
result <- do.call(dexterMST:::mutate_booklet_score,testlist)
str(result) | /dexterMST/inst/testfiles/mutate_booklet_score/libFuzzer_mutate_booklet_score/mutate_booklet_score_valgrind_files/1612726496-test.R | no_license | akhikolla/updatedatatype-list1 | R | false | false | 737 | r | testlist <- list(id = NULL, score = NULL, id = NULL, booklet_id = c(-643154262L, -640034375L, -1183008257L, -14414407L, -1073545456L, 2092564409L, -1179010631L, -62024L, -1195853640L, -640024577L, 184549375L, -17L, -49494L, -1073545472L, 16777215L, -1L, -1315861L, -1L, -1L, 1258233921L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -65536L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), item_score = integer(0), person_id = integer(0))
result <- do.call(dexterMST:::mutate_booklet_score,testlist)
str(result) |
#r is dynamically typed
#same object can be all
x<-2
x
class(x)
is.numeric(x)
i<-5L
#L means its an integer
class(i)
#integer
is.numeric(i)
#yes. int is subset of num
4L*2.8
5L/2L
x <- "data"
x
class(x)
y <- factor("data")
y
nchar(x)
#number of characters
nchar("hello")
nchar(3)
nchar(452)
nchar(y)
#nchar doesnt work on factors
date1 <- as.Date("2018-04-13")
class(date1)
as.numeric(date1)
#unix epoch day
date2 <- as.POSIXct("2018-04-28 08:56")
as.numeric(date2)
TRUE
FALSE
#must be allcaps
#check for logical is
is.logical(TRUE)
2==3
2!=3
2 <3
#true an false
"data" == "stats"
"date" < "stats"
#vectors
x <- c(1,2,3,4,5,6,7,8,9,10)
x/4
x^2
sqrt(x)
#being able to deal with each element of a vector at once makes r easier to work with
y <- -3:6
x+y
x*y
x/y
#element by element
length(x)
length(y)
length(x+y)
#if length different there is a warning message and calculation anyways
x <- 10:1
y <- -4:5
x < y
#helper function any tells if any are true
any(x < y)
all(x < y)
q <- c("laa","dido",
"daram")
nchar(q)
#every variable is a vector
f <- 7
f
#here just the first element present
c(One="a",Two="y",Last="r")
#no arrows, notice
#now its a map
w <- 1:3
w
names(w) <- c("a","b","c")
w
a = c("a","a","c")
factor(a)
#levels returns unique values
#is that a set? a sorted set?
z <- c(1,2,NA,8,3,NA,3)
is.na(zChar)
#missing data is important to statistics
#na isnt null
z <- c()
VectorMap <- c(purple="Tinky Winky",red<-"Po",yellow="laalaa")
VectorMap
x <- 1: 10
x
mean(x)
sum(x)
nchar(x)
#mean(x,TAB list available options
mean(x,na.rm = TRUE,trim = 0.1)
x[c(2,6)]<-NA
x
mean(x,na.rm = FALSE)
#even one NA in a vector returns NA
| /first.r | no_license | danikot/r-scratchpad | R | false | false | 1,796 | r | #r is dynamically typed
#same object can be all
x<-2
x
class(x)
is.numeric(x)
i<-5L
#L means its an integer
class(i)
#integer
is.numeric(i)
#yes. int is subset of num
4L*2.8
5L/2L
x <- "data"
x
class(x)
y <- factor("data")
y
nchar(x)
#number of characters
nchar("hello")
nchar(3)
nchar(452)
nchar(y)
#nchar doesnt work on factors
date1 <- as.Date("2018-04-13")
class(date1)
as.numeric(date1)
#unix epoch day
date2 <- as.POSIXct("2018-04-28 08:56")
as.numeric(date2)
TRUE
FALSE
#must be allcaps
#check for logical is
is.logical(TRUE)
2==3
2!=3
2 <3
#true an false
"data" == "stats"
"date" < "stats"
#vectors
x <- c(1,2,3,4,5,6,7,8,9,10)
x/4
x^2
sqrt(x)
#being able to deal with each element of a vector at once makes r easier to work with
y <- -3:6
x+y
x*y
x/y
#element by element
length(x)
length(y)
length(x+y)
#if length different there is a warning message and calculation anyways
x <- 10:1
y <- -4:5
x < y
#helper function any tells if any are true
any(x < y)
all(x < y)
q <- c("laa","dido",
"daram")
nchar(q)
#every variable is a vector
f <- 7
f
#here just the first element present
c(One="a",Two="y",Last="r")
#no arrows, notice
#now its a map
w <- 1:3
w
names(w) <- c("a","b","c")
w
a = c("a","a","c")
factor(a)
#levels returns unique values
#is that a set? a sorted set?
z <- c(1,2,NA,8,3,NA,3)
is.na(zChar)
#missing data is important to statistics
#na isnt null
z <- c()
VectorMap <- c(purple="Tinky Winky",red<-"Po",yellow="laalaa")
VectorMap
x <- 1: 10
x
mean(x)
sum(x)
nchar(x)
#mean(x,TAB list available options
mean(x,na.rm = TRUE,trim = 0.1)
x[c(2,6)]<-NA
x
mean(x,na.rm = FALSE)
#even one NA in a vector returns NA
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/parameters.R
\docType{data}
\name{weight_func}
\alias{weight_func}
\alias{misc_parameters}
\alias{surv_dist}
\alias{Laplace}
\alias{neighbors}
\title{Parameter objects related to miscellaneous models.}
\format{An object of class \code{qual_param} (inherits from \code{param}) of length 4.}
\usage{
weight_func
surv_dist
Laplace
neighbors
}
\value{
Each object is generated by either \code{new_quant_param} or
\code{new_qual_param}.
}
\description{
These are objects that can be used for modeling, especially in conjunction
with the \pkg{parsnip} package.
}
\details{
These objects are pre-made parameter sets that are useful in a variety of
models.
\itemize{
\item \code{weight_func}: The type of kernel function that weights the distances
between samples (e.g. in a K-near neighbors model).
\item \code{surv_dist}: the statistical distribution of the data in a survival
analysis model (e.g. \code{parsnip::surv_reg()}) .
\item \code{Laplace}: the Laplace correction used to smooth low-frequency counts.
\item \code{neighbors}: a parameter for the number of neighbors used in a prototype
model.
}
}
\keyword{datasets}
| /man/misc_parameters.Rd | no_license | baifengbai/dials | R | false | true | 1,199 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/parameters.R
\docType{data}
\name{weight_func}
\alias{weight_func}
\alias{misc_parameters}
\alias{surv_dist}
\alias{Laplace}
\alias{neighbors}
\title{Parameter objects related to miscellaneous models.}
\format{An object of class \code{qual_param} (inherits from \code{param}) of length 4.}
\usage{
weight_func
surv_dist
Laplace
neighbors
}
\value{
Each object is generated by either \code{new_quant_param} or
\code{new_qual_param}.
}
\description{
These are objects that can be used for modeling, especially in conjunction
with the \pkg{parsnip} package.
}
\details{
These objects are pre-made parameter sets that are useful in a variety of
models.
\itemize{
\item \code{weight_func}: The type of kernel function that weights the distances
between samples (e.g. in a K-near neighbors model).
\item \code{surv_dist}: the statistical distribution of the data in a survival
analysis model (e.g. \code{parsnip::surv_reg()}) .
\item \code{Laplace}: the Laplace correction used to smooth low-frequency counts.
\item \code{neighbors}: a parameter for the number of neighbors used in a prototype
model.
}
}
\keyword{datasets}
|
temp<- read.table("c:\\temp\\household_power_consumption.txt", sep=";", header = T, stringsAsFactors=F)
library(dplyr)
temp2<-tbl_df(temp)
temp2$Date<-as.Date(temp2$Date, "%d/%m/%Y")
temp2<- filter(temp2, Date >= "2007-02-01"& Date <="2007-02-02")
Date_time<- as.POSIXct(paste(temp2$Date, temp2$Time))
plot(x=Date_time, y=as.numeric(temp2$Global_active_power), type='l',
xlab=c(""), ylab=c("Global ACtive Power (kilowatts)"))
| /Plot2.R | no_license | PKostya/datasciencecoursera | R | false | false | 433 | r | temp<- read.table("c:\\temp\\household_power_consumption.txt", sep=";", header = T, stringsAsFactors=F)
library(dplyr)
temp2<-tbl_df(temp)
temp2$Date<-as.Date(temp2$Date, "%d/%m/%Y")
temp2<- filter(temp2, Date >= "2007-02-01"& Date <="2007-02-02")
Date_time<- as.POSIXct(paste(temp2$Date, temp2$Time))
plot(x=Date_time, y=as.numeric(temp2$Global_active_power), type='l',
xlab=c(""), ylab=c("Global ACtive Power (kilowatts)"))
|
options(unzip = Sys.which("unzip"))
Sys.which("tar")
devtools::install_github("nstrayer/datadrivencv")
| /install_dep.r | no_license | andersy005/cv | R | false | false | 103 | r | options(unzip = Sys.which("unzip"))
Sys.which("tar")
devtools::install_github("nstrayer/datadrivencv")
|
#' Mean Absolute Scaled Error
#'
#' @param f forecast;
#' @param x numeric or time-series of observed response
#' @param naive function; forecast method used
#' @param ... additional arguments passed to naive
#'
#' @details
#'
#' The mean absolute scaled error calculates the error relative to a naive
#' prediction
#'
#' @references
#' \url{https://www.otexts.org/fpp/2/5}
#'
mase <- function(f, x, naive=naive, ... ) {
nx <- getResponse(f)
fit.naive <- naive(nx, ...)
if( ! is.forecast(fit.naive) ) {
fc.naive <- forecast(fit.naive)
} else {
fc.naive <- fit.naive
}
err <- x - f$mean
naive.err <- x - fc.naive$mean
mean(abs(err/naive.err))
}
| /R/mase.R | no_license | decisionpatterns/ml.tools | R | false | false | 712 | r | #' Mean Absolute Scaled Error
#'
#' @param f forecast;
#' @param x numeric or time-series of observed response
#' @param naive function; forecast method used
#' @param ... additional arguments passed to naive
#'
#' @details
#'
#' The mean absolute scaled error calculates the error relative to a naive
#' prediction
#'
#' @references
#' \url{https://www.otexts.org/fpp/2/5}
#'
mase <- function(f, x, naive=naive, ... ) {
nx <- getResponse(f)
fit.naive <- naive(nx, ...)
if( ! is.forecast(fit.naive) ) {
fc.naive <- forecast(fit.naive)
} else {
fc.naive <- fit.naive
}
err <- x - f$mean
naive.err <- x - fc.naive$mean
mean(abs(err/naive.err))
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/proximitybeacon_functions.R
\name{beacons.register}
\alias{beacons.register}
\title{Registers a previously unregistered beacon given its `advertisedId`. These IDs are unique within the system. An ID can be registered only once. Authenticate using an [OAuth access token](https://developers.google.com/identity/protocols/OAuth2) from a signed-in user with **Is owner** or **Can edit** permissions in the Google Developers Console project.}
\usage{
beacons.register(Beacon, projectId = NULL)
}
\arguments{
\item{Beacon}{The \link{Beacon} object to pass to this method}
\item{projectId}{The project id of the project the beacon will be registered to}
}
\description{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_skeleton}}
}
\details{
Authentication scopes used by this function are:
\itemize{
\item https://www.googleapis.com/auth/userlocation.beacon.registry
}
Set \code{options(googleAuthR.scopes.selected = c(https://www.googleapis.com/auth/userlocation.beacon.registry)}
Then run \code{googleAuthR::gar_auth()} to authenticate.
See \code{\link[googleAuthR]{gar_auth}} for details.
}
\seealso{
\href{https://developers.google.com/beacons/proximity/}{Google Documentation}
Other Beacon functions: \code{\link{Beacon.properties}},
\code{\link{Beacon}}, \code{\link{beacons.update}}
}
| /googleproximitybeaconv1beta1.auto/man/beacons.register.Rd | permissive | Phippsy/autoGoogleAPI | R | false | true | 1,382 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/proximitybeacon_functions.R
\name{beacons.register}
\alias{beacons.register}
\title{Registers a previously unregistered beacon given its `advertisedId`. These IDs are unique within the system. An ID can be registered only once. Authenticate using an [OAuth access token](https://developers.google.com/identity/protocols/OAuth2) from a signed-in user with **Is owner** or **Can edit** permissions in the Google Developers Console project.}
\usage{
beacons.register(Beacon, projectId = NULL)
}
\arguments{
\item{Beacon}{The \link{Beacon} object to pass to this method}
\item{projectId}{The project id of the project the beacon will be registered to}
}
\description{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_skeleton}}
}
\details{
Authentication scopes used by this function are:
\itemize{
\item https://www.googleapis.com/auth/userlocation.beacon.registry
}
Set \code{options(googleAuthR.scopes.selected = c(https://www.googleapis.com/auth/userlocation.beacon.registry)}
Then run \code{googleAuthR::gar_auth()} to authenticate.
See \code{\link[googleAuthR]{gar_auth}} for details.
}
\seealso{
\href{https://developers.google.com/beacons/proximity/}{Google Documentation}
Other Beacon functions: \code{\link{Beacon.properties}},
\code{\link{Beacon}}, \code{\link{beacons.update}}
}
|
\name{Electre_tri}
\alias{Electre_tri}
\title{ELECTRE TRI Method}
\description{The Electre Tri is a multiple criteria decision aiding method, designed to deal with sorting problems. Electre Tri method has been developed by LAMSADE (Paris-Dauphine University, Paris, France).}
\usage{
Electre_tri(performanceMatrix,
alternatives,
profiles,
profiles_names,
criteria,
minmaxcriteria,
criteriaWeights,
IndifferenceThresholds,
PreferenceThresholds,
VetoThresholds,
lambda = NULL)
}
\arguments{
\item{performanceMatrix}{Matrix or data frame containing the performance table. Each row corresponds to an alternative, and each column to a criterion. Rows (resp. columns) must be named according to the IDs of the alternatives (resp. criteria).}
\item{alternatives}{Vector containing names of alternatives, according to which the data should be filtered.}
\item{profiles}{Matrix containing, in each row, the lower profiles of the categories. The columns are named according to the criteria, and the rows are named according to the categories. The index of the row in the matrix corresponds to the rank of the category.}
\item{profiles_names}{Vector containing profiles'names}
\item{criteria}{Vector containing names of criteria, according to which the data should be filtered.}
\item{minmaxcriteria}{criteriaMinMax Vector containing the preference direction on each of the criteria. "min" (resp."max") indicates that the criterion has to be minimized (maximized).}
\item{criteriaWeights}{Vector containing the weights of the criteria.}
\item{IndifferenceThresholds}{Vector containing the indifference thresholds constraints defined for each criterion.}
\item{PreferenceThresholds}{Vector containing the preference thresholds constraints defined for each criterion.}
\item{VetoThresholds}{Vector containing the veto thresholds constraints defined for each criterion}
\item{lambda}{The lambda-cutting lambda- should be in the range 0.5 and 1.0) level indicates how many of the criteria have to be fulfilled in order to assign an alternative to a specific category. Default value=0.75}
}
\references{Mousseau V., Slowinski R., "Inferring an ELECTRE TRI Model from Assignment Examples", Journal of Global Optimization, vol. 12, 1998, 157-174.
Mousseau V., Figueira J., NAUX J.P, "Using assignment examples to infer weights for ELECTRE TRI method : Some experimental results", Universite de Paris Dauphine, cahier du Lamsade n 150, 1997,
Mousseau V., Slowinski R., Zielniewicz P. : "ELECTRE TRI 2.0a, User documentation", Universite de Paris-Dauphine, Document du LAMSADE no 111}
\author{Michel Prombo <michel.prombo@statec.etat.lu>}
\examples{
# the performance table
performanceMatrix <- cbind(
c(-120.0,-150.0,-100.0,-60,-30.0,-80,-45.0),
c(-284.0,-269.0,-413.0,-596,-1321.0,-734,-982.0),
c(5.0,2.0,4.0,6,8.0,5,7.0),
c(3.5,4.5,5.5,8,7.5,4,8.5),
c(18.0,24.0,17.0,20,16.0,21,13.0)
)
# Vector containing names of alternatives
alternatives <- c("a1","a2","a3","a4","a5","a6","a7")
# Vector containing names of criteria
criteria <- c( "g1","g2","g3","g4","g5")
criteriaWeights <- c(0.25,0.45,0.10,0.12,0.08)
# vector indicating the direction of the criteria evaluation .
minmaxcriteria <- c("max","max","max","max","max")
# Matrix containing the profiles.
profiles <- cbind(c(-100,-50),c(-1000,-500),c(4,7),c(4,7),c(15,20))
# vector defining profiles' names
profiles_names <-c("b1","b2")
# thresholds vector
IndifferenceThresholds <- c(15,80,1,0.5,1)
PreferenceThresholds <- c(40,350,3,3.5,5)
VetoThresholds <- c(100,850,5,4.5,8)
# Testing
Electre_tri(performanceMatrix,
alternatives,
profiles,
profiles_names,
criteria,
minmaxcriteria,
criteriaWeights,
IndifferenceThresholds,
PreferenceThresholds,
VetoThresholds,
lambda=NULL)
}
\keyword{ELECTRE methods}
\keyword{Sorting problem}
\keyword{Aggregation/disaggregation approaches}
\keyword{Multi-criteria decision aiding}
| /man/Electre_tri.Rd | no_license | rolfcheung/OutrankingTools | R | false | false | 4,075 | rd | \name{Electre_tri}
\alias{Electre_tri}
\title{ELECTRE TRI Method}
\description{The Electre Tri is a multiple criteria decision aiding method, designed to deal with sorting problems. Electre Tri method has been developed by LAMSADE (Paris-Dauphine University, Paris, France).}
\usage{
Electre_tri(performanceMatrix,
alternatives,
profiles,
profiles_names,
criteria,
minmaxcriteria,
criteriaWeights,
IndifferenceThresholds,
PreferenceThresholds,
VetoThresholds,
lambda = NULL)
}
\arguments{
\item{performanceMatrix}{Matrix or data frame containing the performance table. Each row corresponds to an alternative, and each column to a criterion. Rows (resp. columns) must be named according to the IDs of the alternatives (resp. criteria).}
\item{alternatives}{Vector containing names of alternatives, according to which the data should be filtered.}
\item{profiles}{Matrix containing, in each row, the lower profiles of the categories. The columns are named according to the criteria, and the rows are named according to the categories. The index of the row in the matrix corresponds to the rank of the category.}
\item{profiles_names}{Vector containing profiles'names}
\item{criteria}{Vector containing names of criteria, according to which the data should be filtered.}
\item{minmaxcriteria}{criteriaMinMax Vector containing the preference direction on each of the criteria. "min" (resp."max") indicates that the criterion has to be minimized (maximized).}
\item{criteriaWeights}{Vector containing the weights of the criteria.}
\item{IndifferenceThresholds}{Vector containing the indifference thresholds constraints defined for each criterion.}
\item{PreferenceThresholds}{Vector containing the preference thresholds constraints defined for each criterion.}
\item{VetoThresholds}{Vector containing the veto thresholds constraints defined for each criterion}
\item{lambda}{The lambda-cutting lambda- should be in the range 0.5 and 1.0) level indicates how many of the criteria have to be fulfilled in order to assign an alternative to a specific category. Default value=0.75}
}
\references{Mousseau V., Slowinski R., "Inferring an ELECTRE TRI Model from Assignment Examples", Journal of Global Optimization, vol. 12, 1998, 157-174.
Mousseau V., Figueira J., NAUX J.P, "Using assignment examples to infer weights for ELECTRE TRI method : Some experimental results", Universite de Paris Dauphine, cahier du Lamsade n 150, 1997,
Mousseau V., Slowinski R., Zielniewicz P. : "ELECTRE TRI 2.0a, User documentation", Universite de Paris-Dauphine, Document du LAMSADE no 111}
\author{Michel Prombo <michel.prombo@statec.etat.lu>}
\examples{
# the performance table
performanceMatrix <- cbind(
c(-120.0,-150.0,-100.0,-60,-30.0,-80,-45.0),
c(-284.0,-269.0,-413.0,-596,-1321.0,-734,-982.0),
c(5.0,2.0,4.0,6,8.0,5,7.0),
c(3.5,4.5,5.5,8,7.5,4,8.5),
c(18.0,24.0,17.0,20,16.0,21,13.0)
)
# Vector containing names of alternatives
alternatives <- c("a1","a2","a3","a4","a5","a6","a7")
# Vector containing names of criteria
criteria <- c( "g1","g2","g3","g4","g5")
criteriaWeights <- c(0.25,0.45,0.10,0.12,0.08)
# vector indicating the direction of the criteria evaluation .
minmaxcriteria <- c("max","max","max","max","max")
# Matrix containing the profiles.
profiles <- cbind(c(-100,-50),c(-1000,-500),c(4,7),c(4,7),c(15,20))
# vector defining profiles' names
profiles_names <-c("b1","b2")
# thresholds vector
IndifferenceThresholds <- c(15,80,1,0.5,1)
PreferenceThresholds <- c(40,350,3,3.5,5)
VetoThresholds <- c(100,850,5,4.5,8)
# Testing
Electre_tri(performanceMatrix,
alternatives,
profiles,
profiles_names,
criteria,
minmaxcriteria,
criteriaWeights,
IndifferenceThresholds,
PreferenceThresholds,
VetoThresholds,
lambda=NULL)
}
\keyword{ELECTRE methods}
\keyword{Sorting problem}
\keyword{Aggregation/disaggregation approaches}
\keyword{Multi-criteria decision aiding}
|
library("cowplot")
library("dplyr")
library("ggplot2")
library("ggthemes")
library("gtools")
library("Matrix")
library("MODIS")
library("plotly")
library("rjson")
library("shiny")
library("shinyFiles")
library("data.table")
library("scibet")
library("readr")
library("reactable")
library("reticulate")
library("shinyjs")
library("presto")
library("bbplot")
reticulate::use_virtualenv("../renv/python/virtualenvs/renv-python-3.8.5/")
#### Variables that persist across sessions
## Read in table with datasets available for SciBet
datasets_scibet <- fread("../meta/SciBet_reference_list.tsv")
## Source functions
source("SCAP_functions.R")
source_python("../Python/rank_genes_groups_df.py")
anndata <- import('anndata')
scanpy <- import('scanpy')
init <- 0 # flag for autosave
server <- function(input, output, session){
session$onSessionEnded(stopApp)
options(shiny.maxRequestSize=500*1024^2)
rvalues <- reactiveValues(tmp_annotations = NULL, cells = NULL, order = NULL, features = NULL, obs = NULL, obs_cat = NULL, reductions = NULL, cell_ids = NULL, h5ad = NULL, path_to_data = NULL,
raw_dtype = NULL)
rvalues_mod <- reactiveValues(tmp_annotations = NULL, cells = NULL, order = NULL, features = NULL, obs = NULL, obs_cat = NULL, reductions = NULL, cell_ids = NULL, h5ad = NULL, path_to_data = NULL,
raw_dtype = NULL)
de_reacts <- reactiveValues(do_DE_plots = FALSE)
## Determine folders for ShinyDir button
volumes <- c("FTP" = "/ftp", Home = fs::path_home())
## GenAP2 logo
output$genap_logo <- renderImage({
# Return a list containing the filename
list(src = "./img/GenAP_powered_reg.png",
contentType = 'image/png',
width = "100%",
height = "100%",
alt = "This is alternate text")
}, deleteFile = FALSE)
## File directory
shinyFileChoose(input, "h5ad_in", roots = volumes, session = session)
# connect chosen .h5ad file
observeEvent(input$h5ad_in, {
path <- parseFilePaths(selection = input$h5ad_in, roots = volumes)$datapath
if(is.integer(path[1]) || identical(path, character(0)) || identical(path, character(0))) return(NULL)
h5ad_files <- path#paste0(path,"/",list.files(path))
assays <- sub(".h5ad","",sub(paste0(".*/"),"",h5ad_files))
data <- list()
## Iterate over all assays and connect to h5ad objects
for(i in 1:length(assays)){
data[[i]] <- tryCatch({
anndata$read(h5ad_files[i])
},
error = function(e){
showModal(modalDialog(p(paste0("An error occured trying to connect to ", h5ad_files[i])), title = "Error connecting to h5ad file."), session = getDefaultReactiveDomain())
return(NULL)
})
}
if(is.null(data)) return(NULL)
if(length(data) != length(assays)) return(NULL)
if(length(unlist(lapply(data, function(x){x}))) != length(assays)) return(NULL)
names(data) <- assays
## Check if RAW Anndata object is present or not. If not present, use the main object
if(is.null(data[[1]]$raw)){
rvalues$features <- rownames(data[[1]]$var)
}else{
test_gene_name <- rownames(data[[1]]$var)[1]
if(test_gene_name %in% rownames(data[[1]]$raw$var)){ # check if rownames are numbers or gene names
rvalues$features <- rownames(data[[1]]$raw$var)
}else if("features" %in% colnames(data[[1]]$raw$var)){ ## Check if there is a column named features in raw
rvalues$features <- data[[1]]$raw$var$features
}else if(test_gene_name %in% data[[1]]$raw$var[,1]){ # otherwise, check if the first column contains rownames
rvalues$features <- data[[1]]$raw$var[,1]
}
}
rvalues$obs <- data[[1]]$obs_keys()
## Determine type of annotation and create a layer to annotate for easy usage later on
rvalues$obs_cat <- check_if_obs_cat(obs_df = data[[1]]$obs) ## Function to check if an observation is categorical or numeric
reductions <- data[[1]]$obsm$as_dict()
if(length(reductions) == 0){
showModal(modalDialog(p(paste0(h5ad_files[i], " has no dimensional reductions.")), title = "Error connecting to h5ad file."), session = getDefaultReactiveDomain())
return(NULL)
}
reduction_keys <- data[[1]]$obsm_keys()
r_names <- rownames(data[[1]]$obs)
for(i in 1:length(reductions)){
reductions[[i]] <- as.data.frame(reductions[[i]])
colnames(reductions[[i]]) <- paste0(reduction_keys[i], "_", 1:ncol(reductions[[i]]))
rownames(reductions[[i]]) <- r_names
}
names(reductions) <- reduction_keys
rvalues$reductions <- reductions
rvalues$cell_ids <- rownames(data[[1]]$obs)
rvalues$h5ad <- data
rvalues$path_to_data <- h5ad_files
## unload modality rvalues
for(i in names(rvalues_mod)){
rvalues_mod[[i]] <- NULL
}
## Determine what data is likely stored in .raw
if(is.null(data[[1]]$raw)){ ## Check if raw exists
rvalues$raw_dtype <- "NULL"
}else if(sum(rvalues$h5ad[[1]]$raw$X[1,]) %% 1 == 0){ ## Check whether raw contains un-normalized data or normalized data
rvalues$raw_dtype <- "counts"
}else{ ## Only if the other two conditions fail, use raw values to calculate differential expression
rvalues$raw_dtype <- "normalized"
}
init <<- 0
## Hide differential expression panels and reset input values
shinyjs::hide("de_results")
## Show message when no DE has been calculated (i.e. a new dataset loaded)
shinyjs::show("empty_de")
})
# observe({ # auto save h5ad file(s)
# req(rvalues$h5ad)
# invalidateLater(120000) # 2 min
# if(init>0){
# #tryCatch(
# # {
# cat(file = stderr(), paste0(rvalues$path_to_data, "\n"))
# showNotification("Saving...", duration = NULL, id = 'auto_save')
# for(i in 1:length(rvalues$path_to_data)){
# rvalues$h5ad[[i]]$write(filename = rvalues$path_to_data[i])
# }
# removeNotification(id = 'auto_save')
# # },
# # error = function(e)
# # {
# #cat(file = stderr(), unlist(e))
# # showModal(modalDialog(p(paste0("An error occured trying to write to ", rvalues$path_to_data[i], ": ", unlist(e))), title = "Error writing to h5ad file."), session = getDefaultReactiveDomain())
# # }
# # )
# }
# init <<- init + 1
# })
source(file.path("server", "main.server.R"), local = TRUE)$value
source(file.path("server", "cell_annotation.server.R"), local = TRUE)$value
source(file.path("server", "modalities.server.R"), local = TRUE)$value
source(file.path("server", "custom_metadata.server.R"), local = TRUE)$value
source(file.path("server", "file_conversion.server.R"), local = TRUE)$value
source(file.path("server", "compare_annotations.server.R"), local = TRUE)$value
source(file.path("server", "scibet.server.R"), local = TRUE)$value
source(file.path("server", "differential_expression.server.R"), local = TRUE)$value
} # server end
| /R/server.R | no_license | gaelcge/SCAP | R | false | false | 7,011 | r | library("cowplot")
library("dplyr")
library("ggplot2")
library("ggthemes")
library("gtools")
library("Matrix")
library("MODIS")
library("plotly")
library("rjson")
library("shiny")
library("shinyFiles")
library("data.table")
library("scibet")
library("readr")
library("reactable")
library("reticulate")
library("shinyjs")
library("presto")
library("bbplot")
reticulate::use_virtualenv("../renv/python/virtualenvs/renv-python-3.8.5/")
#### Variables that persist across sessions
## Read in table with datasets available for SciBet
datasets_scibet <- fread("../meta/SciBet_reference_list.tsv")
## Source functions
source("SCAP_functions.R")
source_python("../Python/rank_genes_groups_df.py")
anndata <- import('anndata')
scanpy <- import('scanpy')
init <- 0 # flag for autosave
server <- function(input, output, session){
session$onSessionEnded(stopApp)
options(shiny.maxRequestSize=500*1024^2)
rvalues <- reactiveValues(tmp_annotations = NULL, cells = NULL, order = NULL, features = NULL, obs = NULL, obs_cat = NULL, reductions = NULL, cell_ids = NULL, h5ad = NULL, path_to_data = NULL,
raw_dtype = NULL)
rvalues_mod <- reactiveValues(tmp_annotations = NULL, cells = NULL, order = NULL, features = NULL, obs = NULL, obs_cat = NULL, reductions = NULL, cell_ids = NULL, h5ad = NULL, path_to_data = NULL,
raw_dtype = NULL)
de_reacts <- reactiveValues(do_DE_plots = FALSE)
## Determine folders for ShinyDir button
volumes <- c("FTP" = "/ftp", Home = fs::path_home())
## GenAP2 logo
output$genap_logo <- renderImage({
# Return a list containing the filename
list(src = "./img/GenAP_powered_reg.png",
contentType = 'image/png',
width = "100%",
height = "100%",
alt = "This is alternate text")
}, deleteFile = FALSE)
## File directory
shinyFileChoose(input, "h5ad_in", roots = volumes, session = session)
# connect chosen .h5ad file
observeEvent(input$h5ad_in, {
path <- parseFilePaths(selection = input$h5ad_in, roots = volumes)$datapath
if(is.integer(path[1]) || identical(path, character(0)) || identical(path, character(0))) return(NULL)
h5ad_files <- path#paste0(path,"/",list.files(path))
assays <- sub(".h5ad","",sub(paste0(".*/"),"",h5ad_files))
data <- list()
## Iterate over all assays and connect to h5ad objects
for(i in 1:length(assays)){
data[[i]] <- tryCatch({
anndata$read(h5ad_files[i])
},
error = function(e){
showModal(modalDialog(p(paste0("An error occured trying to connect to ", h5ad_files[i])), title = "Error connecting to h5ad file."), session = getDefaultReactiveDomain())
return(NULL)
})
}
if(is.null(data)) return(NULL)
if(length(data) != length(assays)) return(NULL)
if(length(unlist(lapply(data, function(x){x}))) != length(assays)) return(NULL)
names(data) <- assays
## Check if RAW Anndata object is present or not. If not present, use the main object
if(is.null(data[[1]]$raw)){
rvalues$features <- rownames(data[[1]]$var)
}else{
test_gene_name <- rownames(data[[1]]$var)[1]
if(test_gene_name %in% rownames(data[[1]]$raw$var)){ # check if rownames are numbers or gene names
rvalues$features <- rownames(data[[1]]$raw$var)
}else if("features" %in% colnames(data[[1]]$raw$var)){ ## Check if there is a column named features in raw
rvalues$features <- data[[1]]$raw$var$features
}else if(test_gene_name %in% data[[1]]$raw$var[,1]){ # otherwise, check if the first column contains rownames
rvalues$features <- data[[1]]$raw$var[,1]
}
}
rvalues$obs <- data[[1]]$obs_keys()
## Determine type of annotation and create a layer to annotate for easy usage later on
rvalues$obs_cat <- check_if_obs_cat(obs_df = data[[1]]$obs) ## Function to check if an observation is categorical or numeric
reductions <- data[[1]]$obsm$as_dict()
if(length(reductions) == 0){
showModal(modalDialog(p(paste0(h5ad_files[i], " has no dimensional reductions.")), title = "Error connecting to h5ad file."), session = getDefaultReactiveDomain())
return(NULL)
}
reduction_keys <- data[[1]]$obsm_keys()
r_names <- rownames(data[[1]]$obs)
for(i in 1:length(reductions)){
reductions[[i]] <- as.data.frame(reductions[[i]])
colnames(reductions[[i]]) <- paste0(reduction_keys[i], "_", 1:ncol(reductions[[i]]))
rownames(reductions[[i]]) <- r_names
}
names(reductions) <- reduction_keys
rvalues$reductions <- reductions
rvalues$cell_ids <- rownames(data[[1]]$obs)
rvalues$h5ad <- data
rvalues$path_to_data <- h5ad_files
## unload modality rvalues
for(i in names(rvalues_mod)){
rvalues_mod[[i]] <- NULL
}
## Determine what data is likely stored in .raw
if(is.null(data[[1]]$raw)){ ## Check if raw exists
rvalues$raw_dtype <- "NULL"
}else if(sum(rvalues$h5ad[[1]]$raw$X[1,]) %% 1 == 0){ ## Check whether raw contains un-normalized data or normalized data
rvalues$raw_dtype <- "counts"
}else{ ## Only if the other two conditions fail, use raw values to calculate differential expression
rvalues$raw_dtype <- "normalized"
}
init <<- 0
## Hide differential expression panels and reset input values
shinyjs::hide("de_results")
## Show message when no DE has been calculated (i.e. a new dataset loaded)
shinyjs::show("empty_de")
})
# observe({ # auto save h5ad file(s)
# req(rvalues$h5ad)
# invalidateLater(120000) # 2 min
# if(init>0){
# #tryCatch(
# # {
# cat(file = stderr(), paste0(rvalues$path_to_data, "\n"))
# showNotification("Saving...", duration = NULL, id = 'auto_save')
# for(i in 1:length(rvalues$path_to_data)){
# rvalues$h5ad[[i]]$write(filename = rvalues$path_to_data[i])
# }
# removeNotification(id = 'auto_save')
# # },
# # error = function(e)
# # {
# #cat(file = stderr(), unlist(e))
# # showModal(modalDialog(p(paste0("An error occured trying to write to ", rvalues$path_to_data[i], ": ", unlist(e))), title = "Error writing to h5ad file."), session = getDefaultReactiveDomain())
# # }
# # )
# }
# init <<- init + 1
# })
source(file.path("server", "main.server.R"), local = TRUE)$value
source(file.path("server", "cell_annotation.server.R"), local = TRUE)$value
source(file.path("server", "modalities.server.R"), local = TRUE)$value
source(file.path("server", "custom_metadata.server.R"), local = TRUE)$value
source(file.path("server", "file_conversion.server.R"), local = TRUE)$value
source(file.path("server", "compare_annotations.server.R"), local = TRUE)$value
source(file.path("server", "scibet.server.R"), local = TRUE)$value
source(file.path("server", "differential_expression.server.R"), local = TRUE)$value
} # server end
|
endPoint <- function(y,verbose=TRUE,.unique=TRUE,...){
UseMethod("endPoint", y)
}
endPoint.evmOpt <- function(y, verbose=TRUE,.unique=TRUE,...){
if(.unique) Unique <- unique else Unique <- identity
p <- texmexMakeParams(coef(y), y$data$D)
endpoint <- y$family$endpoint
negShape <- p[, ncol(p)] < 0
if(any(negShape)){
UpperEndPoint <- endpoint(p, y)
UpperEndPoint[!negShape] <- Inf
if(verbose){
o <- Unique(cbind(y$data$D[['xi']], p))
print(signif(o,...))
} else {
invisible(Unique(UpperEndPoint))
}
} else {
Unique(rep(Inf,length(negShape)))
}
}
endPoint.evmBoot <- endPoint.evmSim <- function(y,verbose=TRUE,.unique=TRUE,...){
endPoint(y$map,verbose=verbose,.unique=.unique,...)
}
| /texmex/R/endPoint.R | no_license | ingted/R-Examples | R | false | false | 751 | r | endPoint <- function(y,verbose=TRUE,.unique=TRUE,...){
UseMethod("endPoint", y)
}
endPoint.evmOpt <- function(y, verbose=TRUE,.unique=TRUE,...){
if(.unique) Unique <- unique else Unique <- identity
p <- texmexMakeParams(coef(y), y$data$D)
endpoint <- y$family$endpoint
negShape <- p[, ncol(p)] < 0
if(any(negShape)){
UpperEndPoint <- endpoint(p, y)
UpperEndPoint[!negShape] <- Inf
if(verbose){
o <- Unique(cbind(y$data$D[['xi']], p))
print(signif(o,...))
} else {
invisible(Unique(UpperEndPoint))
}
} else {
Unique(rep(Inf,length(negShape)))
}
}
endPoint.evmBoot <- endPoint.evmSim <- function(y,verbose=TRUE,.unique=TRUE,...){
endPoint(y$map,verbose=verbose,.unique=.unique,...)
}
|
options(echo=F)
local({r <- getOption("repos"); r["CRAN"] <- "http://cran.us.r-project.org"; options(repos = r)})
if (!"R.utils" %in% rownames(installed.packages())) install.packages("R.utils")
if (!"plyr" %in% rownames(installed.packages())) install.packages("plyr")
#if (!"rgl" %in% rownames(installed.packages())) install.packages("rgl")
if (!"randomForest" %in% rownames(installed.packages())) install.packages("randomForest")
if (!"gains" %in% rownames(installed.packages())) install.packages("gains")
library(R.utils)
setwd(normalizePath(dirname(R.utils::commandArgs(asValues=TRUE)$"f")))
source("h2oR.R")
source("utilsR.R")
ipPort <- get_args(commandArgs(trailingOnly = TRUE))
failed <<- F
removePackage <- function(package) {
failed <<- F
tryCatch(remove.packages(package), error = function(e) {failed <<- T})
if (! failed) {
print(paste("Removed package", package))
}
}
removePackage('h2o')
failed <<- F
tryCatch(library(h2o), error = function(e) {failed <<- T})
if (! failed) {
stop("Failed to remove h2o library")
}
h2o_r_package_file <- NULL
dir_to_search = normalizePath("../../../target/R", winslash = "/")
files = dir(dir_to_search)
for (i in 1:length(files)) {
f = files[i]
# print(f)
arr = strsplit(f, '\\.')[[1]]
# print(arr)
lastidx = length(arr)
suffix = arr[lastidx]
# print(paste("SUFFIX", suffix))
if (suffix == "gz") {
h2o_r_package_file = f #arr[lastidx]
break
}
}
# if (is.null(h2o_r_package_file)) {
# stop(paste("H2O package not found in", dir_to_search))
# }
install.packages("h2o",
repos = c(H2O = paste0(ifelse(.Platform$OS.type == "windows", "file:", "file://"),
dir_to_search),
getOption("repos")))
library(h2o)
h2o.init(ip = ipPort[[1]],
port = ipPort[[2]],
startH2O = FALSE)
##generate master_seed
seed <- NULL
MASTER_SEED <- FALSE
if (file.exists("../master_seed")) {
MASTER_SEED <<- TRUE
seed <- read.table("../master_seed")[[1]]
SEED <<- seed
}
seed <- setupRandomSeed(seed, suppress = TRUE)
if (! file.exists("../master_seed")) {
write.table(seed, "../master_seed", row.names = F, col.names = F)
}
| /R/tests/Utils/runnerSetupPackage.R | permissive | ledell/h2o | R | false | false | 2,276 | r | options(echo=F)
local({r <- getOption("repos"); r["CRAN"] <- "http://cran.us.r-project.org"; options(repos = r)})
if (!"R.utils" %in% rownames(installed.packages())) install.packages("R.utils")
if (!"plyr" %in% rownames(installed.packages())) install.packages("plyr")
#if (!"rgl" %in% rownames(installed.packages())) install.packages("rgl")
if (!"randomForest" %in% rownames(installed.packages())) install.packages("randomForest")
if (!"gains" %in% rownames(installed.packages())) install.packages("gains")
library(R.utils)
setwd(normalizePath(dirname(R.utils::commandArgs(asValues=TRUE)$"f")))
source("h2oR.R")
source("utilsR.R")
ipPort <- get_args(commandArgs(trailingOnly = TRUE))
failed <<- F
removePackage <- function(package) {
failed <<- F
tryCatch(remove.packages(package), error = function(e) {failed <<- T})
if (! failed) {
print(paste("Removed package", package))
}
}
removePackage('h2o')
failed <<- F
tryCatch(library(h2o), error = function(e) {failed <<- T})
if (! failed) {
stop("Failed to remove h2o library")
}
h2o_r_package_file <- NULL
dir_to_search = normalizePath("../../../target/R", winslash = "/")
files = dir(dir_to_search)
for (i in 1:length(files)) {
f = files[i]
# print(f)
arr = strsplit(f, '\\.')[[1]]
# print(arr)
lastidx = length(arr)
suffix = arr[lastidx]
# print(paste("SUFFIX", suffix))
if (suffix == "gz") {
h2o_r_package_file = f #arr[lastidx]
break
}
}
# if (is.null(h2o_r_package_file)) {
# stop(paste("H2O package not found in", dir_to_search))
# }
install.packages("h2o",
repos = c(H2O = paste0(ifelse(.Platform$OS.type == "windows", "file:", "file://"),
dir_to_search),
getOption("repos")))
library(h2o)
h2o.init(ip = ipPort[[1]],
port = ipPort[[2]],
startH2O = FALSE)
##generate master_seed
seed <- NULL
MASTER_SEED <- FALSE
if (file.exists("../master_seed")) {
MASTER_SEED <<- TRUE
seed <- read.table("../master_seed")[[1]]
SEED <<- seed
}
seed <- setupRandomSeed(seed, suppress = TRUE)
if (! file.exists("../master_seed")) {
write.table(seed, "../master_seed", row.names = F, col.names = F)
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_sidra.R
\name{get_sidra}
\alias{get_sidra}
\title{Get SIDRA's table}
\usage{
get_sidra(x, variable = "allxp", period = "last", geo = "Brazil",
geo.filter = NULL, classific = "all", category = "all", header = TRUE,
format = 4, digits = "default", api = NULL)
}
\arguments{
\item{x}{A table from IBGE's SIDRA API.}
\item{variable}{An integer vector of the variables' codes to be returned.
Defaults to all variables with exception of "Total".}
\item{period}{A character vector describing the period of data. Defaults to
the last available.}
\item{geo}{A character vector describing the geographic levels of the data.
Defauts to "Brazil".}
\item{geo.filter}{A (named) list object with the specific item of the
geographic level or all itens of a determined higher geografic level. It should
be used when geo argument is provided, otherwise all geographic units of
'geo' argument are considered.}
\item{classific}{A character vector with the table's classification(s). Defaults to
all.}
\item{category}{"all" or a list object with the categories of the classifications
of \code{classific(s)} argument. Defaults to "all".}
\item{header}{Logical. should the data frame be returned with the description
names in header?}
\item{format}{An integer ranging between 1 and 4. Default to 4. See more in details.}
\item{digits}{An integer, "default" or "max". Default to "default" that returns the
defaults digits to each variable.}
\item{api}{A character with the api's parameters. Defaults to NULL.}
}
\value{
The function returns a data frame printed by default functions
}
\description{
This function allows the user to connect with IBGE's (Instituto Brasileiro de
Geografia e Estatistica) SIDRA API in a flexible way. \acronym{SIDRA} is the
acronym to "Sistema IBGE de Recuperação Automática" and it is the system where
IBGE makes aggregate data from their researches available.
}
\details{
\code{period} can be a integer vector with names "first" and/or "last",
or "all" or a simply character vector with date format %Y%m-%Y%m.
The \code{geo} argument can be one of "Brazil", "Region", "State",
"MesoRegion", "MicroRegion", "MetroRegion", "MetroRegionDiv", "IRD",
"UrbAglo", "City", "District","subdistrict","Neighborhood","PopArrang".
'geo.filter' lists can/must be named with the same characters.
When NULL, the arguments \code{classific} and \code{category} return all options
available.
When argument \code{api} is not NULL, all others arguments informed are desconsidered
The \code{format} argument can be set to:
\itemize{
\item 1: Return only the descriptors' codes
\item 2: Return only the descriptor's names
\item 3: Return the codes and names of the geographic level and descriptors' names
\item 4: Return the codes and names of the descriptors (Default)
}
}
\examples{
\dontrun{
## Requesting table 1419 (Consumer Price Index - IPCA) from the API
ipca <- get_sidra(1419,
variable = 69,
period = c("201212","201401-201412"),
geo = "City",
geo.filter = list("State" = 50))
## Urban population count from Census data (2010) for States and cities of Southest region.
get_sidra(1378,
variable = 93,
geo = c("State","City"),
geo.filter = list("Region" = 3, "Region" = 3),
classific = c("c1"),
category = list(1))
## Number of informants by state in the Inventory Research (last data available)
get_sidra(api = "/t/254/n1/all/n3/all/v/151/p/last\%201/c162/118423/c163/0")
}
}
\seealso{
\code{\link{info_sidra}}
}
\author{
Renato Prado Siqueira \email{<rpradosiqueira@gmail.com>}
}
\keyword{IBGE}
\keyword{sidra}
| /man/get_sidra.Rd | no_license | RikFerreira/sidrar | R | false | true | 3,784 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/get_sidra.R
\name{get_sidra}
\alias{get_sidra}
\title{Get SIDRA's table}
\usage{
get_sidra(x, variable = "allxp", period = "last", geo = "Brazil",
geo.filter = NULL, classific = "all", category = "all", header = TRUE,
format = 4, digits = "default", api = NULL)
}
\arguments{
\item{x}{A table from IBGE's SIDRA API.}
\item{variable}{An integer vector of the variables' codes to be returned.
Defaults to all variables with exception of "Total".}
\item{period}{A character vector describing the period of data. Defaults to
the last available.}
\item{geo}{A character vector describing the geographic levels of the data.
Defauts to "Brazil".}
\item{geo.filter}{A (named) list object with the specific item of the
geographic level or all itens of a determined higher geografic level. It should
be used when geo argument is provided, otherwise all geographic units of
'geo' argument are considered.}
\item{classific}{A character vector with the table's classification(s). Defaults to
all.}
\item{category}{"all" or a list object with the categories of the classifications
of \code{classific(s)} argument. Defaults to "all".}
\item{header}{Logical. should the data frame be returned with the description
names in header?}
\item{format}{An integer ranging between 1 and 4. Default to 4. See more in details.}
\item{digits}{An integer, "default" or "max". Default to "default" that returns the
defaults digits to each variable.}
\item{api}{A character with the api's parameters. Defaults to NULL.}
}
\value{
The function returns a data frame printed by default functions
}
\description{
This function allows the user to connect with IBGE's (Instituto Brasileiro de
Geografia e Estatistica) SIDRA API in a flexible way. \acronym{SIDRA} is the
acronym to "Sistema IBGE de Recuperação Automática" and it is the system where
IBGE makes aggregate data from their researches available.
}
\details{
\code{period} can be a integer vector with names "first" and/or "last",
or "all" or a simply character vector with date format %Y%m-%Y%m.
The \code{geo} argument can be one of "Brazil", "Region", "State",
"MesoRegion", "MicroRegion", "MetroRegion", "MetroRegionDiv", "IRD",
"UrbAglo", "City", "District","subdistrict","Neighborhood","PopArrang".
'geo.filter' lists can/must be named with the same characters.
When NULL, the arguments \code{classific} and \code{category} return all options
available.
When argument \code{api} is not NULL, all others arguments informed are desconsidered
The \code{format} argument can be set to:
\itemize{
\item 1: Return only the descriptors' codes
\item 2: Return only the descriptor's names
\item 3: Return the codes and names of the geographic level and descriptors' names
\item 4: Return the codes and names of the descriptors (Default)
}
}
\examples{
\dontrun{
## Requesting table 1419 (Consumer Price Index - IPCA) from the API
ipca <- get_sidra(1419,
variable = 69,
period = c("201212","201401-201412"),
geo = "City",
geo.filter = list("State" = 50))
## Urban population count from Census data (2010) for States and cities of Southest region.
get_sidra(1378,
variable = 93,
geo = c("State","City"),
geo.filter = list("Region" = 3, "Region" = 3),
classific = c("c1"),
category = list(1))
## Number of informants by state in the Inventory Research (last data available)
get_sidra(api = "/t/254/n1/all/n3/all/v/151/p/last\%201/c162/118423/c163/0")
}
}
\seealso{
\code{\link{info_sidra}}
}
\author{
Renato Prado Siqueira \email{<rpradosiqueira@gmail.com>}
}
\keyword{IBGE}
\keyword{sidra}
|
R_from_book_interface <- function(){
con <- url("http://www.jhsph.edu","r")
head(x)
# multiple elements
x <- list(a=list(10,12,14),b=c(3.14,2.81))
x[[c(2,1)]]
x <- list(foo=1:4, bar=0.6, baz="hello")
x[c(1,3)]
# removing NA
x <- c(1,2,NA,4,NA,5)
x[!is.na(x)]
x <- c(1,2,NA,4,NA,5)
y<-c("a","b",NA,"d",NA,"f")
good <- complete.cases(x,y)
x[good]
y[good]
}
from_book_vectorized <- function() {
}
from_book_dates <- function() {
x <- as.Date("1970-01-01")
x <- Sys.time()
p <- as.POSIXlt(x)
names(unclass(p))
p$wday
datestring <- c("January 10, 2012 10:40", "December 9, 2011 9:10")
x <- strptime(datestring, "%B %d, %Y %H:%M")
x <- as.Date("2012-01-01")
y <- strptime("9 Jan 2011 11:34:21", "%d %b %Y %H:%M:%S")
x <- as.POSITlt(x)
x - y
}
from_book_dplyr <- function() {
# functions is dplyr package: select, filter, arrange, rename, mutate,
# summarise, %>%
install.packages("dplyr")
library(dplyr)
# select: for extraction of columns
chicago <- readRDS("chicago.rds")
subset <- select(chicago, -(city:dptp))
i <- match("city", names(chicago))
j <- match("dptp", names(chicago))
head(chicago[,-(i:j)])
subset <- select(chicago, ends_with("2"))
# filter: for extraction of rows
chic.f <- filter(chicago, pm25tmean2 > 30)
chic.f <- filter(chicago, pm25tmean2 > 30 & tmpd > 80)
select(chic.f, data, tmpd, pm25tmean2)
# arrange: reorder rows
chicago <- arrange(chicago, date)
# raname:
chicago <- rename(chicago, dewpoint=dptp, pm25=pm25tmean2)
chicago <- mutate(chicago, pm25detrend=pm25 - mean(pm25, na.rm=TRUE))
# group_by: generate summary statistics from the data frame within
# strata defined by a variable
chicago <- mutate(chicago, year=as.POSIXlt(date)$year+1900)
years <- group_by(chicago, year)
summarize(years, pm25=mean(pm25,na.rm=TRUE),
o3=max(o3tmean2,na.rm=TRUE),
no2=median(no2tmean2,nam.rm=TRUE))
# by quantile
qq <- quantile(chicago$pm25, seq(0,1,0.2), na.rm=TRUE)
chicago <- mutate(chicago, pm25.quint=cut(pm25,qq))
quint <- group_by(chicago, pm25.quint)
summarize(quint, o3=mean(o3mean2,na.rm=TRUE),
no2=mean(no2tmean2,na.rm=TRUE))
# %>%
mutate(chicago, month=as.POSIXlt(data)$mon+1) %>%
group_by(month) %>%
summarize(pm25=mean(pm25,na.rm=TRUE),
o3=max(o3tmean2,na.rm=TRUE),
no2=median(no2tmean2,na.rm=TRUE))
}
from_book_control <- function(){
}
function_book_functions <- function() {
myplot <- function(x,y,type="l",...){
plot(x,y,type=type,...) ## pass '...' to plot function
}
make.power <- function(n) {
pow <- function(x) {
x^n
}
pow
}
cube <- make.power(3)
square <- make.power(2)
cube(3)
square(3)
ls(environment(cube))
} | /from_book.R | no_license | SYYoung/Intro-to-R | R | false | false | 3,058 | r | R_from_book_interface <- function(){
con <- url("http://www.jhsph.edu","r")
head(x)
# multiple elements
x <- list(a=list(10,12,14),b=c(3.14,2.81))
x[[c(2,1)]]
x <- list(foo=1:4, bar=0.6, baz="hello")
x[c(1,3)]
# removing NA
x <- c(1,2,NA,4,NA,5)
x[!is.na(x)]
x <- c(1,2,NA,4,NA,5)
y<-c("a","b",NA,"d",NA,"f")
good <- complete.cases(x,y)
x[good]
y[good]
}
from_book_vectorized <- function() {
}
from_book_dates <- function() {
x <- as.Date("1970-01-01")
x <- Sys.time()
p <- as.POSIXlt(x)
names(unclass(p))
p$wday
datestring <- c("January 10, 2012 10:40", "December 9, 2011 9:10")
x <- strptime(datestring, "%B %d, %Y %H:%M")
x <- as.Date("2012-01-01")
y <- strptime("9 Jan 2011 11:34:21", "%d %b %Y %H:%M:%S")
x <- as.POSITlt(x)
x - y
}
from_book_dplyr <- function() {
# functions is dplyr package: select, filter, arrange, rename, mutate,
# summarise, %>%
install.packages("dplyr")
library(dplyr)
# select: for extraction of columns
chicago <- readRDS("chicago.rds")
subset <- select(chicago, -(city:dptp))
i <- match("city", names(chicago))
j <- match("dptp", names(chicago))
head(chicago[,-(i:j)])
subset <- select(chicago, ends_with("2"))
# filter: for extraction of rows
chic.f <- filter(chicago, pm25tmean2 > 30)
chic.f <- filter(chicago, pm25tmean2 > 30 & tmpd > 80)
select(chic.f, data, tmpd, pm25tmean2)
# arrange: reorder rows
chicago <- arrange(chicago, date)
# raname:
chicago <- rename(chicago, dewpoint=dptp, pm25=pm25tmean2)
chicago <- mutate(chicago, pm25detrend=pm25 - mean(pm25, na.rm=TRUE))
# group_by: generate summary statistics from the data frame within
# strata defined by a variable
chicago <- mutate(chicago, year=as.POSIXlt(date)$year+1900)
years <- group_by(chicago, year)
summarize(years, pm25=mean(pm25,na.rm=TRUE),
o3=max(o3tmean2,na.rm=TRUE),
no2=median(no2tmean2,nam.rm=TRUE))
# by quantile
qq <- quantile(chicago$pm25, seq(0,1,0.2), na.rm=TRUE)
chicago <- mutate(chicago, pm25.quint=cut(pm25,qq))
quint <- group_by(chicago, pm25.quint)
summarize(quint, o3=mean(o3mean2,na.rm=TRUE),
no2=mean(no2tmean2,na.rm=TRUE))
# %>%
mutate(chicago, month=as.POSIXlt(data)$mon+1) %>%
group_by(month) %>%
summarize(pm25=mean(pm25,na.rm=TRUE),
o3=max(o3tmean2,na.rm=TRUE),
no2=median(no2tmean2,na.rm=TRUE))
}
from_book_control <- function(){
}
function_book_functions <- function() {
myplot <- function(x,y,type="l",...){
plot(x,y,type=type,...) ## pass '...' to plot function
}
make.power <- function(n) {
pow <- function(x) {
x^n
}
pow
}
cube <- make.power(3)
square <- make.power(2)
cube(3)
square(3)
ls(environment(cube))
} |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/apa_print_glht.R
\name{apa_print.glht}
\alias{apa_print.glht}
\alias{apa_print.lsmobj}
\alias{apa_print.summary.glht}
\alias{apa_print.summary.ref.grid}
\title{Format statistics (APA 6th edition)}
\usage{
\method{apa_print}{glht}(x, test = multcomp::adjusted(), ...)
\method{apa_print}{summary.glht}(x, ci = 0.95, in_paren = FALSE, ...)
\method{apa_print}{lsmobj}(x, ...)
\method{apa_print}{summary.ref.grid}(x, contrast_names = NULL,
in_paren = FALSE, ...)
}
\arguments{
\item{x}{See details.}
\item{test}{Function.}
\item{...}{Further arguments to pass to \code{\link{printnum}} to format the estimate.}
\item{ci}{Numeric. If \code{NULL} (default) the function tries to obtain confidence intervals from \code{x}.
Other confidence intervals can be supplied as a \code{vector} of length 2 (lower and upper boundary, respectively)
with attribute \code{conf.level}, e.g., when calculating bootstrapped confidence intervals.}
\item{in_paren}{Logical. Indicates if the formated string will be reported inside parentheses.}
\item{contrast_names}{Character. A vector of names to identify calculated contrasts.}
}
\value{
\code{apa_print()} returns a list containing the following components according to the input:
\describe{
\item{\code{statistic}}{A character string giving the test statistic, parameters (e.g., degrees of freedom),
and \emph{p} value.}
\item{\code{estimate}}{A character string giving the descriptive estimates and confidence intervals if possible}
% , either in units of the analyzed scale or as standardized effect size.
\item{\code{full_result}}{A joint character string comprised of \code{est} and \code{stat}.}
\item{\code{table}}{A data.frame containing the complete contrast table, which can be passed to \code{\link{apa_table}}.}
}
}
\description{
Takes various \code{lsmeans} objects methods to create formatted chraracter strings to report the results in
accordance with APA manuscript guidelines. \emph{Not yet ready for use.}
}
\details{
The function should work on a wide range of \code{htest} objects. Due to the large number of functions
that produce these objects and their idiosyncracies, the produced strings may sometimes be inaccurate. If you
experience inaccuracies you may report these \href{https://github.com/crsh/papaja/issues}{here} (please include
a reproducible example in your report!).
ADJUSTED CONFIDENCE INTERVALS
\code{stat_name} and \code{est_name} are placed in the output string and are thus passed to pandoc or LaTeX through
\pkg{kntir}. Thus, to the extent it is supported by the final document type, you can pass LaTeX-markup to format the
final text (e.g., \code{\\\\tau} yields \eqn{\tau}).
If \code{in_paren} is \code{TRUE} parentheses in the formated string, such as those surrounding degrees
of freedom, are replaced with brackets.
}
\examples{
NULL
}
\seealso{
Other apa_print: \code{\link{apa_print.aov}},
\code{\link{apa_print.htest}},
\code{\link{apa_print.list}}, \code{\link{apa_print.lm}},
\code{\link{apa_print}}
}
| /man/apa_print.glht.Rd | no_license | jmpasmoi/papaja | R | false | true | 3,154 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/apa_print_glht.R
\name{apa_print.glht}
\alias{apa_print.glht}
\alias{apa_print.lsmobj}
\alias{apa_print.summary.glht}
\alias{apa_print.summary.ref.grid}
\title{Format statistics (APA 6th edition)}
\usage{
\method{apa_print}{glht}(x, test = multcomp::adjusted(), ...)
\method{apa_print}{summary.glht}(x, ci = 0.95, in_paren = FALSE, ...)
\method{apa_print}{lsmobj}(x, ...)
\method{apa_print}{summary.ref.grid}(x, contrast_names = NULL,
in_paren = FALSE, ...)
}
\arguments{
\item{x}{See details.}
\item{test}{Function.}
\item{...}{Further arguments to pass to \code{\link{printnum}} to format the estimate.}
\item{ci}{Numeric. If \code{NULL} (default) the function tries to obtain confidence intervals from \code{x}.
Other confidence intervals can be supplied as a \code{vector} of length 2 (lower and upper boundary, respectively)
with attribute \code{conf.level}, e.g., when calculating bootstrapped confidence intervals.}
\item{in_paren}{Logical. Indicates if the formated string will be reported inside parentheses.}
\item{contrast_names}{Character. A vector of names to identify calculated contrasts.}
}
\value{
\code{apa_print()} returns a list containing the following components according to the input:
\describe{
\item{\code{statistic}}{A character string giving the test statistic, parameters (e.g., degrees of freedom),
and \emph{p} value.}
\item{\code{estimate}}{A character string giving the descriptive estimates and confidence intervals if possible}
% , either in units of the analyzed scale or as standardized effect size.
\item{\code{full_result}}{A joint character string comprised of \code{est} and \code{stat}.}
\item{\code{table}}{A data.frame containing the complete contrast table, which can be passed to \code{\link{apa_table}}.}
}
}
\description{
Takes various \code{lsmeans} objects methods to create formatted chraracter strings to report the results in
accordance with APA manuscript guidelines. \emph{Not yet ready for use.}
}
\details{
The function should work on a wide range of \code{htest} objects. Due to the large number of functions
that produce these objects and their idiosyncracies, the produced strings may sometimes be inaccurate. If you
experience inaccuracies you may report these \href{https://github.com/crsh/papaja/issues}{here} (please include
a reproducible example in your report!).
ADJUSTED CONFIDENCE INTERVALS
\code{stat_name} and \code{est_name} are placed in the output string and are thus passed to pandoc or LaTeX through
\pkg{kntir}. Thus, to the extent it is supported by the final document type, you can pass LaTeX-markup to format the
final text (e.g., \code{\\\\tau} yields \eqn{\tau}).
If \code{in_paren} is \code{TRUE} parentheses in the formated string, such as those surrounding degrees
of freedom, are replaced with brackets.
}
\examples{
NULL
}
\seealso{
Other apa_print: \code{\link{apa_print.aov}},
\code{\link{apa_print.htest}},
\code{\link{apa_print.list}}, \code{\link{apa_print.lm}},
\code{\link{apa_print}}
}
|
Subject2 <- read.delim("~/GitHub/mestrado_UFCG/java workspace/LODCrawler/data/Subject2.txt", header=F)
map_subjet <- read.delim("~/GitHub/LastfmDataset/new data/map/map_subjet.tsv", header=F)
Subject2 = Subject2[,c(2,1)]
colnames(Subject2) = c("V1","V2")
x = Subject2[Subject2$V1%in%map_subjet$V1,]
colnames(x) = c("url_subj","url_pred")
x = merge(x, map_subjet, by.y="V1",by.x="url_subj")
x = x[,c(3,2)]
y = x[!(x$url_pred%in%map_subjet$V1),]
y = data.frame(V1 = unique(y$url_pred))
length_id = length(map_subjet$V2)+1
last_id = length_id+length(y$V1)
y$V2 = c(length_id:(last_id - 1))
map_subjet = rbind(map_subjet, y)
write.table(map_subjet, file="map_subjet.tsv", col.names=F, row.names=F, quote=F, sep="\t")
colnames(map_subjet) = c("url_1","url_2")
x = merge(x, map_subjet, by.y="url_1",by.x="url_pred")
x = x[,c(2,3)]
x = x[order(x$V2,x$url_2),]
write.table(x, file="subject_broader.tsv", col.names=F, row.names=F, quote=F, sep="\t")
| /R/subject_broader.R | no_license | nailson/LastfmDataset | R | false | false | 955 | r |
Subject2 <- read.delim("~/GitHub/mestrado_UFCG/java workspace/LODCrawler/data/Subject2.txt", header=F)
map_subjet <- read.delim("~/GitHub/LastfmDataset/new data/map/map_subjet.tsv", header=F)
Subject2 = Subject2[,c(2,1)]
colnames(Subject2) = c("V1","V2")
x = Subject2[Subject2$V1%in%map_subjet$V1,]
colnames(x) = c("url_subj","url_pred")
x = merge(x, map_subjet, by.y="V1",by.x="url_subj")
x = x[,c(3,2)]
y = x[!(x$url_pred%in%map_subjet$V1),]
y = data.frame(V1 = unique(y$url_pred))
length_id = length(map_subjet$V2)+1
last_id = length_id+length(y$V1)
y$V2 = c(length_id:(last_id - 1))
map_subjet = rbind(map_subjet, y)
write.table(map_subjet, file="map_subjet.tsv", col.names=F, row.names=F, quote=F, sep="\t")
colnames(map_subjet) = c("url_1","url_2")
x = merge(x, map_subjet, by.y="url_1",by.x="url_pred")
x = x[,c(2,3)]
x = x[order(x$V2,x$url_2),]
write.table(x, file="subject_broader.tsv", col.names=F, row.names=F, quote=F, sep="\t")
|
testlist <- list(Beta = 0, CVLinf = -1.37672045511449e-268, FM = 3.81962480282366e-313, L50 = 0, L95 = 0, LenBins = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), LenMids = numeric(0), Linf = 0, MK = 0, Ml = numeric(0), Prob = structure(0, .Dim = c(1L, 1L)), SL50 = 9.97941197291525e-316, SL95 = 2.1224816047267e-314, nage = 682962941L, nlen = 537479424L, rLens = numeric(0))
result <- do.call(DLMtool::LBSPRgen,testlist)
str(result) | /DLMtool/inst/testfiles/LBSPRgen/AFL_LBSPRgen/LBSPRgen_valgrind_files/1615830445-test.R | no_license | akhikolla/updatedatatype-list2 | R | false | false | 486 | r | testlist <- list(Beta = 0, CVLinf = -1.37672045511449e-268, FM = 3.81962480282366e-313, L50 = 0, L95 = 0, LenBins = c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), LenMids = numeric(0), Linf = 0, MK = 0, Ml = numeric(0), Prob = structure(0, .Dim = c(1L, 1L)), SL50 = 9.97941197291525e-316, SL95 = 2.1224816047267e-314, nage = 682962941L, nlen = 537479424L, rLens = numeric(0))
result <- do.call(DLMtool::LBSPRgen,testlist)
str(result) |
with(afebc81dda2a04b3f8a22a55a17320c85, {ROOT <- 'C:/semoss/semosshome/db/Atadata2__3b3e4a3b-d382-4e98-9950-9b4e8b308c1c/version/f6ba5938-ef1b-4430-b7b0-261d1cc8174d';FRAME951665$Long[FRAME951665$Location == ""] <- 0;}); | /f6ba5938-ef1b-4430-b7b0-261d1cc8174d/R/Temp/aDrXLGS3E35Rz.R | no_license | ayanmanna8/test | R | false | false | 220 | r | with(afebc81dda2a04b3f8a22a55a17320c85, {ROOT <- 'C:/semoss/semosshome/db/Atadata2__3b3e4a3b-d382-4e98-9950-9b4e8b308c1c/version/f6ba5938-ef1b-4430-b7b0-261d1cc8174d';FRAME951665$Long[FRAME951665$Location == ""] <- 0;}); |
# load data & ibraries
library(ggplot2)
library(plyr)
library(RCurl)
link <- getURL("https://raw.githubusercontent.com/jlaurito/CUNY_IS608/master/lecture1/data/inc5000_data.csv")
inc_all <- read.csv(text = link)
# data exploration
head(inc_all)
summary(inc_all)
# check for missing value
sapply(inc_all,function(x) sum(is.na(x)))
# count companies by state
grpby_state <- count(inc_all, "State")
# find state with 3rd most companies
sort_state <- arrange(grpby_state, freq)
head(sort_state, 3)
# make a horizontal bar chart by descending order
sort_state$State <- factor(sort_state$State, levels=sort_state$State)
ggplot(sort_state, aes(x=State, y=freq)) + geom_bar(stat='identity') + coord_flip()
# remove NAs
inc <- inc_all[complete.cases(inc_all), ]
# subset ny data
nyemp <- subset(x = inc, State == 'NY')
# check range of variables
ggplot(nyemp, aes(factor(Industry), Employees)) + geom_boxplot() + coord_flip()
# use boxplot's stats function to remove outliers
rm_o <- function(x) {
x[x %in% boxplot.stats(x)$out] <- NA
return(x)
}
# do this for every industry
ny_no <- data.frame()
industries <- levels(nyemp$Industry)
for (industry in industries) {
sub <- subset(x = nyemp, Industry == industry)
sub$Employees <- rm_o(sub$Employees)
ny_no <- rbind(ny_no, sub)
}
# plot the new data
ny_no <- ny_no[complete.cases(ny_no), ]
ggplot(ny_no, aes(Industry, Employees)) + geom_boxplot() + coord_flip()
# calculate ranges and spread
ny_avg <- ddply(ny_no, .(Industry), summarize, mean <- mean(Employees), sd <- sd(Employees), median <- median(Employees),lower <- quantile(Employees)[2], upper <- quantile(Employees)[4]
)
names(ny_avg) <- c('Industry', 'mean', 'sd', 'median', 'lower', 'upper')
# plot error bars
ny_avg$ind = reorder(ny_avg$Industry, ny_avg$median)
ggplot(ny_avg, aes(x = ind, y = median)) + geom_bar(stat = "identity") + geom_errorbar(ymin = ny_avg$lower, ymax = ny_avg$upper, width = 0.1, color = "coral") + coord_flip() + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
# data exploration on revenye / employee
ggplot(inc, aes(x = Industry, y = Employees)) + geom_point() + coord_flip()
ggplot(inc, aes(x = Industry, y = Revenue)) + geom_point() + coord_flip()
# there are significant outliers. normalize the data
inc_no <- data.frame()
for (industry in industries) {
s <- subset(x = inc, Industry == industry)
s$Employees <- rm_o(s$Employees)
s$Revenue <- rm_o(s$Revenue)
inc_no <- rbind(inc_no, s)
}
inc_no <- inc_no[complete.cases(inc_no), ]
# total revenue against total employees
rev_emp <- aggregate(cbind(Employees, Revenue) ~ Industry, data=inc_no, sum, na.rm=TRUE)
ggplot(rev_emp, aes(x=Industry, y=rev_emp$Revenue/rev_emp$Employees)) + geom_bar(stat='identity') + coord_flip()
# spread of revenue per employee
ggplot(inc_no,aes(x = Industry, y = inc_no$Revenue/inc_no$Employees)) + geom_boxplot() + coord_flip()
| /lecture1/ctaylor_hw1.R | no_license | christinataylor/CUNY_IS608 | R | false | false | 3,002 | r |
# load data & ibraries
library(ggplot2)
library(plyr)
library(RCurl)
link <- getURL("https://raw.githubusercontent.com/jlaurito/CUNY_IS608/master/lecture1/data/inc5000_data.csv")
inc_all <- read.csv(text = link)
# data exploration
head(inc_all)
summary(inc_all)
# check for missing value
sapply(inc_all,function(x) sum(is.na(x)))
# count companies by state
grpby_state <- count(inc_all, "State")
# find state with 3rd most companies
sort_state <- arrange(grpby_state, freq)
head(sort_state, 3)
# make a horizontal bar chart by descending order
sort_state$State <- factor(sort_state$State, levels=sort_state$State)
ggplot(sort_state, aes(x=State, y=freq)) + geom_bar(stat='identity') + coord_flip()
# remove NAs
inc <- inc_all[complete.cases(inc_all), ]
# subset ny data
nyemp <- subset(x = inc, State == 'NY')
# check range of variables
ggplot(nyemp, aes(factor(Industry), Employees)) + geom_boxplot() + coord_flip()
# use boxplot's stats function to remove outliers
rm_o <- function(x) {
x[x %in% boxplot.stats(x)$out] <- NA
return(x)
}
# do this for every industry
ny_no <- data.frame()
industries <- levels(nyemp$Industry)
for (industry in industries) {
sub <- subset(x = nyemp, Industry == industry)
sub$Employees <- rm_o(sub$Employees)
ny_no <- rbind(ny_no, sub)
}
# plot the new data
ny_no <- ny_no[complete.cases(ny_no), ]
ggplot(ny_no, aes(Industry, Employees)) + geom_boxplot() + coord_flip()
# calculate ranges and spread
ny_avg <- ddply(ny_no, .(Industry), summarize, mean <- mean(Employees), sd <- sd(Employees), median <- median(Employees),lower <- quantile(Employees)[2], upper <- quantile(Employees)[4]
)
names(ny_avg) <- c('Industry', 'mean', 'sd', 'median', 'lower', 'upper')
# plot error bars
ny_avg$ind = reorder(ny_avg$Industry, ny_avg$median)
ggplot(ny_avg, aes(x = ind, y = median)) + geom_bar(stat = "identity") + geom_errorbar(ymin = ny_avg$lower, ymax = ny_avg$upper, width = 0.1, color = "coral") + coord_flip() + theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())
# data exploration on revenye / employee
ggplot(inc, aes(x = Industry, y = Employees)) + geom_point() + coord_flip()
ggplot(inc, aes(x = Industry, y = Revenue)) + geom_point() + coord_flip()
# there are significant outliers. normalize the data
inc_no <- data.frame()
for (industry in industries) {
s <- subset(x = inc, Industry == industry)
s$Employees <- rm_o(s$Employees)
s$Revenue <- rm_o(s$Revenue)
inc_no <- rbind(inc_no, s)
}
inc_no <- inc_no[complete.cases(inc_no), ]
# total revenue against total employees
rev_emp <- aggregate(cbind(Employees, Revenue) ~ Industry, data=inc_no, sum, na.rm=TRUE)
ggplot(rev_emp, aes(x=Industry, y=rev_emp$Revenue/rev_emp$Employees)) + geom_bar(stat='identity') + coord_flip()
# spread of revenue per employee
ggplot(inc_no,aes(x = Industry, y = inc_no$Revenue/inc_no$Employees)) + geom_boxplot() + coord_flip()
|
diffexp.child <-
function(Xmat,Ymat,feature_table_file,parentoutput_dir,class_labels_file,num_replicates,feat.filt.thresh,summarize.replicates,summary.method,
summary.na.replacement,missing.val,rep.max.missing.thresh,
all.missing.thresh,group.missing.thresh,input.intensity.scale,
log2transform,medcenter,znormtransform,quantile_norm,lowess_norm,madscaling,TIC_norm,rangescaling,mstus,paretoscaling,sva_norm,eigenms_norm,vsn_norm,
normalization.method,rsd.filt.list,
pairedanalysis,featselmethod,fdrthresh,fdrmethod,cor.method,networktype,network.label.cex,abs.cor.thresh,cor.fdrthresh,kfold,pred.eval.method,feat_weight,globalcor,
target.metab.file,target.mzmatch.diff,target.rtmatch.diff,max.cor.num, samplermindex,pcacenter,pcascale,
numtrees,analysismode,net_node_colors,net_legend,svm_kernel,heatmap.col.opt,manhattanplot.col.opt,boxplot.col.opt,barplot.col.opt,sample.col.opt,lineplot.col.opt,scatterplot.col.opt,hca_type,alphacol,pls_vip_thresh,num_nodes,max_varsel,
pls_ncomp,pca.stage2.eval,scoreplot_legend,pca.global.eval,rocfeatlist,rocfeatincrement,
rocclassifier,foldchangethresh,wgcnarsdthresh,WGCNAmodules,optselect,max_comp_sel,saveRda,legendlocation,degree_rank_method,
pca.cex.val,pca.ellipse,ellipse.conf.level,pls.permut.count,svm.acc.tolerance,limmadecideTests,pls.vip.selection,globalclustering,plots.res,plots.width,plots.height,plots.type,output.device.type,pvalue.thresh,individualsampleplot.col.opt,
pamr.threshold.select.max,mars.gcv.thresh,error.bar,cex.plots,modeltype,barplot.xaxis,lineplot.lty.option,match_class_dist,timeseries.lineplots,alphabetical.order,kegg_species_code,database,reference_set,target.data.annot,
add.pvalues=TRUE,add.jitter=TRUE,fcs.permutation.type,fcs.method,
fcs.min.hits,names_with_mz_time,ylab_text,xlab_text,boxplot.type,
degree.centrality.method,log2.transform.constant,balance.classes,
balance.classes.sizefactor,balance.classes.method,balance.classes.seed,
cv.perm.count=100,multiple.figures.perpanel=TRUE,labRow.value = TRUE, labCol.value = TRUE,
alpha.col=1,similarity.matrix,outlier.method,removeRda=TRUE,color.palette=c("journal"),
plot_DiNa_graph=FALSE,limma.contrasts.type=c("contr.sum","contr.treatment"),hca.cex.legend=0.7,differential.network.analysis.method,
plot.boxplots.raw=FALSE,vcovHC.type,ggplot.type1,facet.nrow,facet.ncol,pairwise.correlation.analysis=FALSE,
generate.boxplots=FALSE,pvalue.dist.plot=TRUE,...)
{
#############
options(warn=-1)
roc_res<-NA
lme.modeltype=modeltype
remove_firstrun=FALSE #TRUE or FALSE
run_number=1
minmaxtransform=FALSE
pca.CV=TRUE
max_rf_var=5000
alphacol=alpha.col
hca.labRow.value=labRow.value
hca.labCol.value=labCol.value
logistic_reg=FALSE
poisson_reg=FALSE
goodfeats_allfields={}
mwan_fdr={}
targetedan_fdr={}
data_m_fc_withfeats={}
classlabels_orig={}
robust.estimate=FALSE
#alphabetical.order=FALSE
analysistype="oneway"
plot.ylab_text=ylab_text
limmarobust=FALSE
featselmethod<-unique(featselmethod)
if(featselmethod=="rf"){
featselmethod="RF"
}
parentfeatselmethod=featselmethod
factor1_msg=NA
factor2_msg=NA
cat(paste("Running feature selection method: ",featselmethod,sep=""),sep="\n")
#}
if(featselmethod=="limmarobust"){
featselmethod="limma"
limmarobust=TRUE
}else{
if(featselmethod=="limma1wayrepeatrobust"){
featselmethod="limma1wayrepeat"
limmarobust=TRUE
}else{
if(featselmethod=="limma2wayrepeatrobust"){
featselmethod="limma2wayrepeat"
limmarobust=TRUE
}else{
if(featselmethod=="limma2wayrobust"){
featselmethod="limma2way"
limmarobust=TRUE
}else{
if(featselmethod=="limma1wayrobust"){
featselmethod="limma1way"
limmarobust=TRUE
}
}
}
}
}
#if(FALSE)
{
if(normalization.method=="log2quantilenorm" || normalization.method=="log2quantnorm"){
cat("Performing log2 transformation and quantile normalization",sep="\n")
log2transform=TRUE
quantile_norm=TRUE
}else{
if(normalization.method=="log2transform"){
cat("Performing log2 transformation",sep="\n")
log2transform=TRUE
}else{
if(normalization.method=="znormtransform"){
cat("Performing autoscaling",sep="\n")
znormtransform=TRUE
}else{
if(normalization.method=="quantile_norm"){
suppressMessages(library(limma))
cat("Performing quantile normalization",sep="\n")
quantile_norm=TRUE
}else{
if(normalization.method=="lowess_norm"){
suppressMessages(library(limma))
cat("Performing Cyclic Lowess normalization",sep="\n")
lowess_norm=TRUE
}else{
if(normalization.method=="rangescaling"){
cat("Performing Range scaling",sep="\n")
rangescaling=TRUE
}else{
if(normalization.method=="paretoscaling"){
cat("Performing Pareto scaling",sep="\n")
paretoscaling=TRUE
}else{
if(normalization.method=="mstus"){
cat("Performing MS Total Useful Signal (MSTUS) normalization",sep="\n")
mstus=TRUE
}else{
if(normalization.method=="sva_norm"){
suppressMessages(library(sva))
cat("Performing Surrogate Variable Analysis (SVA) normalization",sep="\n")
sva_norm=TRUE
log2transform=TRUE
}else{
if(normalization.method=="eigenms_norm"){
cat("Performing EigenMS normalization",sep="\n")
eigenms_norm=TRUE
if(input.intensity.scale=="raw"){
log2transform=TRUE
}
}else{
if(normalization.method=="vsn_norm"){
suppressMessages(library(limma))
cat("Performing variance stabilizing normalization",sep="\n")
vsn_norm=TRUE
}
}
}
}
}
}
}
}
}
}
}
}
if(input.intensity.scale=="log2"){
log2transform=FALSE
}
rfconditional=FALSE
# print("############################")
#print("############################")
if(featselmethod=="rf" | featselmethod=="RF"){
suppressMessages(library(randomForest))
suppressMessages(library(Boruta))
featselmethod="RF"
rfconditional=FALSE
}else{
if(featselmethod=="rfconditional" | featselmethod=="RFconditional" | featselmethod=="RFcond" | featselmethod=="rfcond"){
suppressMessages(library(party))
featselmethod="RF"
rfconditional=TRUE
}
}
if(featselmethod=="rf"){
featselmethod="RF"
}else{
if(featselmethod=="mars"){
suppressMessages(library(earth))
featselmethod="MARS"
}
}
if(featselmethod=="lmregrobust"){
suppressMessages(library(sandwich))
robust.estimate=TRUE
featselmethod="lmreg"
}else{
if(featselmethod=="logitregrobust"){
robust.estimate=TRUE
suppressMessages(library(sandwich))
featselmethod="logitreg"
}else{
if(featselmethod=="poissonregrobust"){
robust.estimate=TRUE
suppressMessages(library(sandwich))
featselmethod="poissonreg"
}
}
}
if(featselmethod=="plsrepeat"){
featselmethod="pls"
pairedanalysis=TRUE
}else{
if(featselmethod=="splsrepeat"){
featselmethod="spls"
pairedanalysis=TRUE
}else{
if(featselmethod=="o1plsrepeat"){
featselmethod="o1pls"
pairedanalysis=TRUE
}else{
if(featselmethod=="o1splsrepeat"){
featselmethod="o1spls"
pairedanalysis=TRUE
}
}
}
}
log2.fold.change.thresh_list<-rsd.filt.list
if(featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="limma1wayrepeat"){
if(analysismode=="regression"){
stop("Invalid analysis mode. Please set analysismode=\"classification\".")
}else{
suppressMessages(library(limma))
# print("##############Level 1: Using LIMMA function to find differentially expressed metabolites###########")
}
}else{
if(featselmethod=="RF"){
#print("##############Level 1: Using random forest function to find discriminatory metabolites###########")
}else{
if(featselmethod=="RFcond"){
suppressMessages(library(party))
# print("##############Level 1: Using conditional random forest function to find discriminatory metabolites###########")
#stop("Please use \"limma\", \"RF\", or \"MARS\".")
}else{
if(featselmethod=="MARS"){
suppressMessages(library(earth))
# print("##############Level 1: Using MARS to find discriminatory metabolites###########")
#log2.fold.change.thresh_list<-c(0)
}else{
if(featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="poissonreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="rfesvm" |
featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="pamr" | featselmethod=="ttestrepeat" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat"){
# print("##########Level 1: Finding discriminatory metabolites ###########")
if(featselmethod=="logitreg"){
featselmethod="lmreg"
logistic_reg=TRUE
poisson_reg=FALSE
}else{
if(featselmethod=="poissonreg"){
poisson_reg=TRUE
featselmethod="lmreg"
logistic_reg=FALSE
}else{
logistic_reg=FALSE
poisson_reg=FALSE
if(featselmethod=="rfesvm"){
suppressMessages(library(e1071))
}else{
if(featselmethod=="pamr"){
suppressMessages(library(pamr))
}else{
if(featselmethod=="lm2wayanovarepeat" | featselmethod=="lm1wayanovarepeat"){
suppressMessages(library(nlme))
suppressMessages(library(lsmeans))
}
}
}
}
}
}else{
if(featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls" | featselmethod=="spls" | featselmethod=="spls1wayrepeat" | featselmethod=="spls2wayrepeat" | featselmethod=="pls2way" | featselmethod=="spls2way" | featselmethod=="o1spls" | featselmethod=="o2spls"){
suppressMessages(library(mixOmics))
# suppressMessages(library(pls))
suppressMessages(library(plsgenomics))
# print("##########Level 1: Finding discriminatory metabolites ###########")
}else{
stop("Invalid featselmethod specified.")
}
}
#stop("Invalid featselmethod specified. Please use \"limma\", \"RF\", or \"MARS\".")
}
}
}
}
####################################################################################
dir.create(parentoutput_dir,showWarnings=FALSE)
parentoutput_dir1<-paste(parentoutput_dir,"/Stage1/",sep="")
dir.create(parentoutput_dir1,showWarnings=FALSE)
setwd(parentoutput_dir1)
if(is.na(Xmat[1])==TRUE){
X<-read.table(feature_table_file,sep="\t",header=TRUE,stringsAsFactors=FALSE,check.names=FALSE)
cnames<-colnames(X)
cnames<- gsub(cnames,pattern="[\\s]*",replacement="",perl=TRUE)
cnames<- gsub(cnames,pattern="[(|)|\\[|\\]]",replacement="",perl=TRUE)
cnames<-gsub(cnames,pattern="\\||-|;|,|\\.",replacement="_",perl=TRUE)
colnames(X)<-cnames
cnames<-tolower(cnames)
check_names<-grep(cnames,pattern="^name$")
#if the Name column exists
if(length(check_names)>0){
if(check_names==1){
check_names1<-grep(cnames,pattern="^mz$")
check_names2<-grep(cnames,pattern="^time$")
if(length(check_names1)<1 & length(check_names2)<1){
mz<-seq(1.00001,nrow(X)+1,1)
time<-seq(1.01,nrow(X)+1,1.00)
check_ind<-gregexpr(cnames,pattern="^name$")
check_ind<-which(check_ind>0)
X<-as.data.frame(X)
Name<-as.character(X[,check_ind])
if(length(which(duplicated(Name)==TRUE))>0){
stop("Duplicate variable names are not allowed.")
}
X<-cbind(mz,time,X[,-check_ind])
names_with_mz_time=cbind(Name,mz,time)
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}else{
if(length(check_names1)>0 & length(check_names2)>0){
check_ind<-gregexpr(cnames,pattern="^name$")
check_ind<-which(check_ind>0)
Name<-as.character(X[,check_ind])
X<-X[,-check_ind]
names_with_mz_time=cbind(Name,X$mz,X$time)
colnames(names_with_mz_time)<-c("Name","mz","time")
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}
}
}
}else{
#mz time format
check_names1<-grep(cnames[1],pattern="^mz$")
check_names2<-grep(cnames[2],pattern="^time$")
if(length(check_names1)<1 || length(check_names2)<1){
stop("Invalid feature table format. The format should be either Name in column A or mz and time in columns A and B. Please check example files.")
}
X[,1]<-round(X[,1],5)
X[,2]<-round(X[,2],2)
mz_time<-paste(round(X[,1],5),"_",round(X[,2],2),sep="")
if(length(which(duplicated(mz_time)==TRUE))>0){
stop("Duplicate variable names are not allowed.")
}
Name<-mz_time
names_with_mz_time=cbind(Name,X$mz,X$time)
colnames(names_with_mz_time)<-c("Name","mz","time")
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}
X[,1]<-round(X[,1],5)
X[,2]<-round(X[,2],2)
Xmat<-t(X[,-c(1:2)])
rownames(Xmat)<-colnames(X[,-c(1:2)])
Xmat<-as.data.frame(Xmat)
colnames(Xmat)<-names_with_mz_time$Name
}else{
X<-Xmat
cnames<-colnames(X)
cnames<- gsub(cnames,pattern="[\\s]*",replacement="",perl=TRUE)
cnames<- gsub(cnames,pattern="[(|)|\\[|\\]]",replacement="",perl=TRUE)
cnames<-gsub(cnames,pattern="\\||-|;|,|\\.",replacement="_",perl=TRUE)
colnames(X)<-cnames
cnames<-tolower(cnames)
check_names<-grep(cnames,pattern="^name$")
if(length(check_names)>0){
if(check_names==1){
check_names1<-grep(cnames,pattern="^mz$")
check_names2<-grep(cnames,pattern="^time$")
if(length(check_names1)<1 & length(check_names2)<1){
mz<-seq(1.00001,nrow(X)+1,1)
time<-seq(1.01,nrow(X)+1,1.00)
check_ind<-gregexpr(cnames,pattern="^name$")
check_ind<-which(check_ind>0)
X<-as.data.frame(X)
Name<-as.character(X[,check_ind])
X<-cbind(mz,time,X[,-check_ind])
names_with_mz_time=cbind(Name,mz,time)
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
# print(getwd())
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}else{
if(length(check_names1)>0 & length(check_names2)>0){
check_ind<-gregexpr(cnames,pattern="^name$")
check_ind<-which(check_ind>0)
Name<-as.character(X[,check_ind])
X<-X[,-check_ind]
names_with_mz_time=cbind(Name,X$mz,X$time)
colnames(names_with_mz_time)<-c("Name","mz","time")
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}
}
}
}else{
check_names1<-grep(cnames[1],pattern="^mz$")
check_names2<-grep(cnames[2],pattern="^time$")
if(length(check_names1)<1 || length(check_names2)<1){
stop("Invalid feature table format. The format should be either Name in column A or mz and time in columns A and B. Please check example files.")
}
X[,1]<-round(X[,1],5)
X[,2]<-round(X[,2],3)
mz_time<-paste(round(X[,1],5),"_",round(X[,2],3),sep="")
if(length(which(duplicated(mz_time)==TRUE))>0){
stop("Duplicate variable names are not allowed.")
}
Name<-mz_time
names_with_mz_time=cbind(Name,X$mz,X$time)
colnames(names_with_mz_time)<-c("Name","mz","time")
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}
Xmat<-t(X[,-c(1:2)])
rownames(Xmat)<-colnames(X[,-c(1:2)])
Xmat<-as.data.frame(Xmat)
colnames(Xmat)<-names_with_mz_time$Name
}
####saveXmat,file="Xmat.Rda")
if(analysismode=="regression")
{
#log2.fold.change.thresh_list<-c(0)
#print("Performing regression analysis")
if(is.na(Ymat[1])==TRUE){
classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
Ymat<-classlabels
}else{
classlabels<-Ymat
}
classlabels[,1]<- gsub(classlabels[,1],pattern="[\\s]*",replacement="",perl=TRUE)
classlabels[,1]<- gsub(classlabels[,1],pattern="[(|)|\\[|\\]]",replacement="",perl=TRUE)
classlabels[,1]<-gsub(classlabels[,1],pattern="\\||-|;|,|\\.",replacement="_",perl=TRUE)
#classlabels[,1]<-gsub(classlabels[,1],pattern=" |-",replacement=".")
# Ymat[,1]<-gsub(Ymat[,1],pattern=" |-",replacement=".")
Ymat<-classlabels
classlabels_orig<-classlabels
classlabels_sub<-classlabels
class_labels_levels<-c("A")
if(featselmethod=="lmregrepeat" || featselmethod=="splsrepeat" || featselmethod=="plsrepeat" || featselmethod=="spls" || featselmethod=="pls" || featselmethod=="o1pls" || featselmethod=="o1splsrepeat"){
if(pairedanalysis==TRUE){
colnames(classlabels)<-c("SampleID","SubjectNum",paste("Response",sep=""))
#Xmat<-chocolate[,1]
Xmat_temp<-Xmat #t(Xmat)
Xmat_temp<-cbind(classlabels,Xmat_temp)
#Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,2]),]
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Response", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
subject_inf<-classlabels[,2]
classlabels<-classlabels[,-c(2)]
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
}
}
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-classlabels[,-c(1)]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
Ymat<-classlabels
Ymat<-as.data.frame(Ymat)
rnames_xmat<-as.character(rownames(Xmat))
rnames_ymat<-as.character(Ymat[,1])
if(length(which(duplicated(rnames_ymat)==TRUE))>0){
stop("Duplicate sample IDs are not allowed. Please represent replicates by _1,_2,_3.")
}
check_ylabel<-regexpr(rnames_ymat[1],pattern="^[0-9]*",perl=TRUE)
check_xlabel<-regexpr(rnames_xmat[1],pattern="^X[0-9]*",perl=TRUE)
if(length(check_ylabel)>0 && length(check_xlabel)>0){
if(attr(check_ylabel,"match.length")>0 && attr(check_xlabel,"match.length")>0){
rnames_ymat<-paste("X",rnames_ymat,sep="")
}
}
match_names<-match(rnames_xmat,rnames_ymat)
bad_colnames<-length(which(is.na(match_names)==TRUE))
# save(rnames_xmat,rnames_ymat,Xmat,Ymat,file="debugnames.Rda")
# print("Check here2")
#if(is.na()==TRUE){
bool_names_match_check<-all(rnames_xmat==rnames_ymat)
if(bad_colnames>0 | bool_names_match_check==FALSE){
print("Sample names do not match between feature table and class labels files.\n Please try replacing any \"-\" with \".\" in sample names.")
print("Sample names in feature table")
print(head(rnames_xmat))
print("Sample names in classlabels file")
print(head(rnames_ymat))
stop("Sample names do not match between feature table and class labels files.\n Please try replacing any \"-\" with \".\" in sample names. Please try again.")
}
Xmat<-t(Xmat)
Xmat<-cbind(X[,c(1:2)],Xmat)
Xmat<-as.data.frame(Xmat)
rownames(Xmat)<-names_with_mz_time$Name
num_features_total=nrow(Xmat)
if(is.na(all(diff(match(rnames_xmat,rnames_ymat))))==FALSE){
if(all(diff(match(rnames_xmat,rnames_ymat)) > 0)==TRUE){
setwd("../")
#data preprocess regression
data_matrix<-data_preprocess(Xmat=Xmat,Ymat=Ymat,feature_table_file=feature_table_file,parentoutput_dir=parentoutput_dir,class_labels_file=NA,num_replicates=num_replicates,feat.filt.thresh=NA,summarize.replicates=summarize.replicates,summary.method=summary.method,
all.missing.thresh=all.missing.thresh,group.missing.thresh=NA,
log2transform=log2transform,medcenter=medcenter,znormtransform=znormtransform,,quantile_norm=quantile_norm,lowess_norm=lowess_norm,
rangescaling=rangescaling,paretoscaling=paretoscaling,mstus=mstus,sva_norm=sva_norm,eigenms_norm=eigenms_norm,
vsn_norm=vsn_norm,madscaling=madscaling,missing.val=0,samplermindex=NA, rep.max.missing.thresh=rep.max.missing.thresh,
summary.na.replacement=summary.na.replacement,featselmethod=featselmethod,TIC_norm=TIC_norm,normalization.method=normalization.method,
input.intensity.scale=input.intensity.scale,log2.transform.constant=log2.transform.constant,alphabetical.order=alphabetical.order)
}
}else{
#print(diff(match(rnames_xmat,rnames_ymat)))
stop("Orders of feature table and classlabels do not match")
}
}else{
if(analysismode=="classification")
{
analysistype="oneway"
classlabels_sub<-NA
if(featselmethod=="limma2way" | featselmethod=="lm2wayanova" | featselmethod=="spls2way"){
analysistype="twoway"
}else{
if(featselmethod=="limma2wayrepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="spls2wayrepeat"){
analysistype="twowayrepeat"
pairedanalysis=TRUE
}else{
if(featselmethod=="limma1wayrepeat" | featselmethod=="lm1wayanovarepeat" | featselmethod=="spls1wayrepeat" | featselmethod=="lmregrepeat"){
analysistype="onewayrepeat"
pairedanalysis=TRUE
}
}
}
if(is.na(Ymat)==TRUE){
classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
Ymat<-classlabels
}else{
classlabels<-Ymat
}
classlabels[,1]<- gsub(classlabels[,1],pattern="[\\s]*",replacement="",perl=TRUE)
classlabels[,1]<- gsub(classlabels[,1],pattern="[(|)|\\[|\\]]",replacement="",perl=TRUE)
classlabels[,1]<-gsub(classlabels[,1],pattern="\\||-|;|,|\\.",replacement="_",perl=TRUE)
#classlabels[,1]<-gsub(classlabels[,1],pattern=" |-",replacement=".")
# Ymat[,1]<-gsub(Ymat[,1],pattern=" |-",replacement=".")
Ymat<-classlabels
# classlabels[,1]<-gsub(classlabels[,1],pattern=" |-",replacement=".")
Ymat[,1]<-gsub(Ymat[,1],pattern=" |-",replacement=".")
# print(paste("Number of samples in class labels file:",dim(Ymat)[1],sep=""))
#print(paste("Number of samples in feature table:",dim(Xmat)[1],sep=""))
if(dim(Ymat)[1]!=(dim(Xmat)[1]))
{
stop("Number of samples are different in feature table and class labels file.")
}
if(fdrmethod=="none"){
fdrthresh=pvalue.thresh
}
if(featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="limma1way" | featselmethod=="limma1wayrepeat" |
featselmethod=="MARS" | featselmethod=="RF" | featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls" | featselmethod=="lmreg" | featselmethod=="logitreg" |
featselmethod=="spls" | featselmethod=="pls1wayrepeat" | featselmethod=="spls1wayrepeat" | featselmethod=="pls2wayrepeat" |
featselmethod=="spls2wayrepeat" | featselmethod=="pls2way" | featselmethod=="spls2way" | featselmethod=="o1spls" |
featselmethod=="o2spls" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" |
featselmethod=="lm2wayanovarepeat" | featselmethod=="rfesvm" | featselmethod=="wilcox" | featselmethod=="ttest" |
featselmethod=="pamr" | featselmethod=="ttestrepeat" | featselmethod=="poissonreg" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat")
{
#analysismode="classification"
#save(classlabels,file="thisclasslabels.Rda")
#if(is.na(Ymat)==TRUE)
{
#classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
if(analysismode=="classification"){
if(featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="poissonreg")
{
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
}
levels_classA<-levels(factor(classlabels[,2]))
for(l1 in levels_classA){
g1<-grep(x=l1,pattern="[0-9]")
if(length(g1)>0){
#stop("Class labels or factor levels should not have any numbers.")
}
}
}else{
if(featselmethod=="lmregrepeat"){
if(alphabetical.order==FALSE){
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,3]))
for(l1 in levels_classA){
g1<-grep(x=l1,pattern="[0-9]")
if(length(g1)>0){
#stop("Class labels or factor levels should not have any numbers.")
}
}
}else{
for(c1 in 2:dim(classlabels)[2]){
if(alphabetical.order==FALSE){
classlabels[,c1] <- factor(classlabels[,c1], levels=unique(classlabels[,c1]))
}
levels_classA<-levels(factor(classlabels[,c1]))
for(l1 in levels_classA){
g1<-grep(x=l1,pattern="[0-9]")
if(length(g1)>0){
#stop("Class labels or factor levels should not have any numbers.")
}
}
}
}
}
}
classlabels_orig<-classlabels
if(featselmethod=="limma1way"){
featselmethod="limma"
}
# | featselmethod=="limma1wayrepeat"
if(featselmethod=="limma" | featselmethod=="limma1way" | featselmethod=="MARS" | featselmethod=="RF" | featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls" | featselmethod=="lmreg" |
featselmethod=="logitreg" | featselmethod=="spls" | featselmethod=="o1spls" | featselmethod=="o2spls" | featselmethod=="rfesvm" | featselmethod=="pamr" |
featselmethod=="poissonreg" | featselmethod=="ttest" | featselmethod=="wilcox" | featselmethod=="lm1wayanova")
{
if(featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="poissonreg")
{
factor_inf<-classlabels[,-c(1)]
factor_inf<-as.data.frame(factor_inf)
#print(factor_inf)
classlabels_orig<-colnames(classlabels[,-c(1)])
colnames(classlabels)<-c("SampleID",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
Xmat_temp<-Xmat #t(Xmat)
#print(Xmat_temp[1:2,1:3])
Xmat_temp<-cbind(classlabels,Xmat_temp)
#print("here")
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,2]),]
}else{
if(analysismode=="classification"){
Xmat_temp[,2] <- factor(Xmat_temp[,2], levels=unique(Xmat_temp[,2]))
}
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
levels_classA<-levels(factor(classlabels[,2]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2])
classtable1<-table(classlabels[,2])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
classlabels_xyplots<-classlabels
rownames(Xmat_temp)<-as.character(Xmat_temp[,1])
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
#keeps the class order as in the input file
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
}
classlabels_response_mat<-classlabels[,-c(1)]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
#colnames(classlabels_response_mat)<-as.character(classlabels_orig)
Ymat<-classlabels
classlabels_orig<-classlabels
}else
{
if(dim(classlabels)[2]>2){
if(pairedanalysis==FALSE){
#print("Invalid classlabels file format. Correct format: \nColumnA: SampleID\nColumnB: Class")
print("Using the first column as sample ID and second column as Class. Ignoring additional columns.")
classlabels<-classlabels[,c(1:2)]
}
}
if(analysismode=="classification")
{
factor_inf<-classlabels[,-c(1)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
Xmat_temp<-Xmat #t(Xmat)
Xmat_temp<-cbind(classlabels,Xmat_temp)
# ##save(Xmat_temp,file="Xmat_temp.Rda")
rownames(Xmat_temp)<-as.character(Xmat_temp[,1])
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,2]),]
}else{
Xmat_temp[,2] <- factor(Xmat_temp[,2], levels=unique(Xmat_temp[,2]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
levels_classA<-levels(factor(classlabels[,2]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2])
classtable1<-table(classlabels[,2])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
#rownames(Xmat)<-rownames(Xmat_temp)
classlabels_xyplots<-classlabels
classlabels_sub<-classlabels[,-c(1)]
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
if(dim(classlabels)[2]>2){
#classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
stop("Invalid classlabels format.")
}
}
}
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-classlabels[,-c(1)]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
#classlabels[,1]<-as.factor(classlabels[,1])
Ymat<-classlabels
classlabels_orig<-classlabels
}
#print("here 2")
}
if(featselmethod=="limma1wayrepeat"){
factor_inf<-classlabels[,-c(1:2)]
factor_inf<-as.data.frame(factor_inf)
# print("here")
colnames(classlabels)<-c("SampleID","SubjectNum",paste("Factor",seq(1,length(factor_inf)),sep=""))
#Xmat<-chocolate[,1]
Xmat_temp<-Xmat #t(Xmat)
Xmat_temp<-cbind(classlabels,Xmat_temp)
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,2]),]
}else{
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
if(alphabetical.order==FALSE){
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
subject_inf<-classlabels[,2]
classlabels_sub<-classlabels[,-c(1)]
subject_inf<-subject_inf[seq(1,dim(classlabels)[1],num_replicates)]
classlabels<-classlabels[,-c(2)]
levels_classA<-levels(factor(classlabels[,2]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2])
classtable1<-table(classlabels[,2])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
classlabels_xyplots<-classlabels
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-classlabels[,-c(1)]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
Ymat<-classlabels
if(featselmethod=="limma1wayrepeat"){
featselmethod="limma"
pairedanalysis = TRUE
}else{
if(featselmethod=="spls1wayrepeat"){
featselmethod="spls"
pairedanalysis = TRUE
}else{
if(featselmethod=="pls1wayrepeat"){
featselmethod="pls"
pairedanalysis = TRUE
}
}
}
pairedanalysis = TRUE
}
if(featselmethod=="limma2way"){
factor_inf<-classlabels[,-c(1)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
Xmat_temp<-Xmat #t(Xmat)
####saveXmat,file="Xmat.Rda")
####saveclasslabels,file="Xmat_classlabels.Rda")
if(dim(classlabels)[2]>2){
# save(Xmat_temp,classlabels,file="Xmat_temp_limma.Rda")
Xmat_temp<-cbind(classlabels,Xmat_temp)
# print(Xmat_temp[1:10,1:10])
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,2],Xmat_temp[,3]),]
}else{
Xmat_temp[,2] <- factor(Xmat_temp[,2], levels=unique(Xmat_temp[,2]))
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
}
# print(Xmat_temp[1:10,1:10])
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_sub<-classlabels[,-c(1)]
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,2]))
levels_classB<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
factor2_msg=(paste("Factor 2 levels: ",paste(levels_classB,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2]):as.factor(classlabels[,3])
classtable1<-table(classlabels[,2],classlabels[,3])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
#classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
#classlabels_response_mat<-classlabels[,-c(1)]
#classlabels_response_mat<-as.data.frame(classlabels_response_mat)
Ymat<-classlabels
#classlabels_orig<-classlabels
}
else{
stop("Only one factor specificied in the class labels file.")
}
}
if(featselmethod=="limma2wayrepeat"){
factor_inf<-classlabels[,-c(1:2)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID","SubjectNum",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
Xmat_temp<-Xmat
if(dim(classlabels)[2]>2)
{
levels_classA<-levels(factor(classlabels[,3]))
if(length(levels_classA)>2){
#stop("Factor 1 can only have two levels/categories. Factor 2 can have upto 6 levels. \nPlease rearrange the factors in your classlabels file.")
# classtemp<-classlabels[,3]
# classlabels[,3]<-classlabels[,4]
# classlabels[,4]<-classtemp
}
levels_classA<-levels(factor(classlabels[,3]))
if(length(levels_classA)>2){
#stop("Only one of the factors can have more than 2 levels/categories. \nPlease rearrange the factors in your classlabels file or use lm2wayanovarepeat.")
#stop("Please select lm2wayanova or lm2wayanovarepeat option for greater than 2x2 designs.")
stop("Factor 1 can only have two levels/categories. Factor 2 can have upto 6 levels. \nPlease rearrange the factors in your classlabels file. Or use lm2wayanova option.")
}
levels_classB<-levels(factor(classlabels[,4]))
if(length(levels_classB)>7){
#stop("Only one of the factors can have more than 2 levels/categories. \nPlease rearrange the factors in your classlabels file or use lm2wayanova.")
stop("Please select lm2wayanovarepeat option for greater than 2x7 designs.")
}
Xmat_temp<-cbind(classlabels,Xmat_temp)
if(alphabetical.order==TRUE){
#Xmat_temp<-Xmat_temp[order(Xmat_temp[,2],Xmat_temp[,3]),]
Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,4],Xmat_temp[,2]),]
}else{
Xmat_temp[,4] <- factor(Xmat_temp[,4], levels=unique(Xmat_temp[,4]))
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_sub<-classlabels[,-c(1)]
subject_inf<-classlabels[,2]
classlabels<-classlabels[,-c(2)]
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
classlabels_xyplots<-classlabels
subject_inf<-subject_inf[seq(1,dim(classlabels)[1],num_replicates)]
#write.table(classlabels,file="organized_classlabelsA1.txt",sep="\t",row.names=FALSE)
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
#write.table(Xmat_temp,file="organized_featuretableA1.txt",sep="\t",row.names=TRUE)
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,2]))
levels_classB<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
factor2_msg=(paste("Factor 2 levels: ",paste(levels_classB,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2]):as.factor(classlabels[,3])
classtable1<-table(classlabels[,2],classlabels[,3])
#classlabels_orig<-classlabels
#classlabels<-cbind(as.character(classlabels[,1]),as.character(classlabels_class))
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
# print("Class labels file limma2wayrep:")
# print(head(classlabels))
#rownames(Xmat)<-as.character(classlabels[,1])
#write.table(classlabels,file="organized_classlabels.txt",sep="\t",row.names=FALSE)
Xmat1<-cbind(classlabels,Xmat)
#write.table(Xmat1,file="organized_featuretable.txt",sep="\t",row.names=TRUE)
featselmethod="limma2way"
pairedanalysis = TRUE
}
else{
stop("Only one factor specificied in the class labels file.")
}
}
}
classlabels<-as.data.frame(classlabels)
if(featselmethod=="lm2wayanova" | featselmethod=="pls2way" | featselmethod=="spls2way"){
analysismode="classification"
#classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
if(is.na(Ymat)==TRUE){
classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
Ymat<-classlabels
}else{
classlabels<-Ymat
}
#cnames[2]<-"Factor1"
cnames<-colnames(classlabels)
factor_inf<-classlabels[,-c(1)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
analysismode="classification"
Xmat_temp<-Xmat #t(Xmat)
# save(Xmat_temp,classlabels,file="Xmat_temp_lm2way.Rda")
Xmat_temp<-cbind(classlabels,Xmat_temp)
rnames_xmat<-rownames(Xmat)
rnames_ymat<-as.character(Ymat[,1])
# ###saveXmat_temp,file="Xmat_temp.Rda")
if(featselmethod=="lm2wayanova" | featselmethod=="pls2way" | featselmethod=="spls2way"){
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,2],Xmat_temp[,3]),]
}
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
# save(Xmat_temp,classlabels,factor_lastcol,file="debudsort.Rda")
if(alphabetical.order==FALSE){
Xmat_temp[,2] <- factor(Xmat_temp[,2], levels=unique(Xmat_temp[,2]))
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}else{
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
}
levels_classA<-levels(factor(classlabels[,2]))
levels_classB<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
factor2_msg=(paste("Factor 2 levels: ",paste(levels_classB,collapse=","),sep=""))
classlabels_sub<-classlabels[,-c(1)]
classlabels_response_mat<-classlabels[,-c(1)]
Ymat<-classlabels
classlabels_orig<-classlabels
#Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
###save(Xmat,file="Xmat2.Rda")
if(featselmethod=="lm2wayanova" | featselmethod=="pls2way" | featselmethod=="spls2way"){
classlabels_class<-as.factor(classlabels[,2]):as.factor(classlabels[,3])
classtable1<-table(classlabels[,2],classlabels[,3])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
if(featselmethod=="pls2way"){
featselmethod="pls"
}else{
if(featselmethod=="spls2way"){
featselmethod="spls"
}
}
}
# write.table(classlabels,file="organized_classlabelsB.txt",sep="\t",row.names=FALSE)
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
#write.table(Xmat_temp,file="organized_featuretableA.txt",sep="\t",row.names=TRUE)
#write.table(classlabels,file="organized_classlabelsA.txt",sep="\t",row.names=FALSE)
}
if(featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="pls1wayrepeat" | featselmethod=="spls1wayrepeat" | featselmethod=="pls2wayrepeat" |
featselmethod=="spls2wayrepeat" | featselmethod=="ttestrepeat" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat"){
#analysismode="classification"
pairedanalysis=TRUE
# classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
if(is.na(Ymat)==TRUE){
classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
Ymat<-classlabels
}else{
classlabels<-Ymat
}
cnames<-colnames(classlabels)
factor_inf<-classlabels[,-c(1:2)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID","SubjectNum",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
classlabels_orig<-classlabels
#Xmat<-chocolate[,1]
Xmat_temp<-Xmat #t(Xmat)
Xmat_temp<-cbind(classlabels,Xmat_temp)
pairedanalysis=TRUE
if(featselmethod=="lm1wayanovarepeat" | featselmethod=="pls1wayrepeat" | featselmethod=="spls1wayrepeat" | featselmethod=="ttestrepeat" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat"){
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,2]),]
}else{
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
subject_inf<-classlabels[,2]
subject_inf<-subject_inf[seq(1,dim(classlabels)[1],num_replicates)]
classlabels_response_mat<-classlabels[,-c(1:2)]
# classlabels_orig<-classlabels
classlabels_sub<-classlabels[,-c(1)]
if(alphabetical.order==FALSE){
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
classlabels<-classlabels[,-c(2)]
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
}
classlabels_class<-classlabels[,2]
classtable1<-table(classlabels[,2])
#classlabels<-cbind(as.character(classlabels[,1]),as.character(classlabels_class))
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
classlabels_xyplots<-classlabels
# classlabels<-classlabels[seq(1,dim(classlabels)[1],num_replicates),]
Ymat<-classlabels
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
# write.table(Xmat_temp,file="organized_featuretableA.txt",sep="\t",row.names=FALSE)
####saveYmat,file="Ymat.Rda")
# ###saveXmat,file="Xmat.Rda")
if(featselmethod=="spls1wayrepeat"){
featselmethod="spls"
}else{
if(featselmethod=="pls1wayrepeat"){
featselmethod="pls"
}
}
if(featselmethod=="wilcoxrepeat"){
featselmethod=="wilcox"
pairedanalysis=TRUE
}
if(featselmethod=="ttestrepeat"){
featselmethod=="ttest"
pairedanalysis=TRUE
}
}
if(featselmethod=="lm2wayanovarepeat" | featselmethod=="pls2wayrepeat" | featselmethod=="spls2wayrepeat"){
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,4],Xmat_temp[,2]),]
}else{
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
Xmat_temp[,4] <- factor(Xmat_temp[,4], levels=unique(Xmat_temp[,4]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_sub<-classlabels[,-c(1)]
subject_inf<-classlabels[,2]
subject_inf<-subject_inf[seq(1,dim(classlabels)[1],num_replicates)]
classlabels_response_mat<-classlabels[,-c(1:2)]
Ymat<-classlabels
classlabels_xyplots<-classlabels[,-c(2)]
if(alphabetical.order==FALSE){
classlabels[,4] <- factor(classlabels[,4], levels=unique(classlabels[,4]))
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
levels_classB<-levels(factor(classlabels[,4]))
factor2_msg=(paste("Factor 2 levels: ",paste(levels_classB,collapse=","),sep=""))
Ymat<-classlabels
#print(head(classlabels))
classlabels<-classlabels[,-c(2)]
classlabels_class<-paste(classlabels[,2],":",classlabels[,3],sep="")
classtable1<-table(classlabels[,2],classlabels[,3])
#classlabels<-cbind(as.character(classlabels[,1]),as.character(classlabels_class))
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
# write.table(classlabels,file="organized_classlabelsA1.txt",sep="\t",row.names=FALSE)
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
#write.table(Xmat_temp,file="organized_featuretableA.txt",sep="\t",row.names=FALSE)
#write.table(Xmat,file="organized_featuretableB1.txt",sep="\t",row.names=FALSE)
pairedanalysis=TRUE
if(featselmethod=="spls2wayrepeat"){
featselmethod="spls"
}
}
}
}
rownames(Xmat)<-as.character(Xmat_temp[,1])
# save(Xmat,Xmat_temp,file="Xmat1.Rda")
#save(Ymat,file="Ymat1.Rda")
rnames_xmat<-rownames(Xmat)
rnames_ymat<-as.character(Ymat[,1])
if(length(which(duplicated(rnames_ymat)==TRUE))>0){
stop("Duplicate sample IDs are not allowed. Please represent replicates by _1,_2,_3.")
}
check_ylabel<-regexpr(rnames_ymat[1],pattern="^[0-9]*",perl=TRUE)
check_xlabel<-regexpr(rnames_xmat[1],pattern="^X[0-9]*",perl=TRUE)
if(length(check_ylabel)>0 && length(check_xlabel)>0){
if(attr(check_ylabel,"match.length")>0 && attr(check_xlabel,"match.length")>0){
rnames_ymat<-paste("X",rnames_ymat,sep="") #gsub(rnames_ymat,pattern="\\.[0-9]*",replacement="")
}
}
Xmat<-t(Xmat)
colnames(Xmat)<-as.character(Ymat[,1])
Xmat<-cbind(X[,c(1:2)],Xmat)
Xmat<-as.data.frame(Xmat)
Ymat<-as.data.frame(Ymat)
match_names<-match(rnames_xmat,rnames_ymat)
bad_colnames<-length(which(is.na(match_names)==TRUE))
#print(match_names)
#if(is.na()==TRUE){
#save(rnames_xmat,rnames_ymat,Xmat,Ymat,file="debugnames.Rda")
bool_names_match_check<-all(rnames_xmat==rnames_ymat)
if(bad_colnames>0 | bool_names_match_check==FALSE){
# if(bad_colnames>0){
print("Sample names do not match between feature table and class labels files.\n Please try replacing any \"-\" with \".\" in sample names.")
print("Sample names in feature table")
print(head(rnames_xmat))
print("Sample names in classlabels file")
print(head(rnames_ymat))
stop("Sample names do not match between feature table and class labels files.\n Please try replacing any \"-\" with \".\" in sample names. Please try again.")
}
if(is.na(all(diff(match(rnames_xmat,rnames_ymat))))==FALSE){
if(all(diff(match(rnames_xmat,rnames_ymat)) > 0)==TRUE){
setwd("../")
#save(Xmat,Ymat,names_with_mz_time,feature_table_file,parentoutput_dir,class_labels_file,num_replicates,feat.filt.thresh,summarize.replicates,
# summary.method,all.missing.thresh,group.missing.thresh,missing.val,samplermindex,rep.max.missing.thresh,summary.na.replacement,featselmethod,pairedanalysis,input.intensity.scale,file="data_preprocess_in.Rda")
######
rownames(Xmat)<-names_with_mz_time$Name
num_features_total=nrow(Xmat)
#data preprocess classification
data_matrix<-data_preprocess(Xmat=Xmat,Ymat=Ymat,feature_table_file=feature_table_file,parentoutput_dir=parentoutput_dir,class_labels_file=NA,num_replicates=num_replicates,feat.filt.thresh=NA,summarize.replicates=summarize.replicates,summary.method=summary.method,
all.missing.thresh=all.missing.thresh,group.missing.thresh=group.missing.thresh,
log2transform=log2transform,medcenter=medcenter,znormtransform=znormtransform,,quantile_norm=quantile_norm,lowess_norm=lowess_norm,rangescaling=rangescaling,paretoscaling=paretoscaling,
mstus=mstus,sva_norm=sva_norm,eigenms_norm=eigenms_norm,vsn_norm=vsn_norm,madscaling=madscaling,missing.val=missing.val, rep.max.missing.thresh=rep.max.missing.thresh,
summary.na.replacement=summary.na.replacement,featselmethod=featselmethod,TIC_norm=TIC_norm,normalization.method=normalization.method,
input.intensity.scale=input.intensity.scale,log2.transform.constant=log2.transform.constant,alphabetical.order=alphabetical.order)
# save(data_matrix,names_with_mz_time,file="data_preprocess_out.Rda")
}else{
stop("Orders of feature table and classlabels do not match")
}
}else{
#print(diff(match(rnames_xmat,rnames_ymat)))
stop("Orders of feature table and classlabels do not match")
}
if(FALSE){
data_matrix<-data_preprocess(Xmat,Ymat,
feature_table_file,
parentoutput_dir="C:/Users/kuppal2/Documents/Projects/EGCG_pos//xmsPANDA_preprocess3/",
class_labels_file=NA,num_replicates=1,feat.filt.thresh=NA,summarize.replicates=TRUE,
summary.method="mean", all.missing.thresh=0.5,group.missing.thresh=0.5,
log2transform =FALSE, medcenter=FALSE, znormtransform = FALSE,
quantile_norm = FALSE, lowess_norm = FALSE, madscaling = FALSE,
missing.val=0, samplermindex=NA,rep.max.missing.thresh=0.5,summary.na.replacement="zeros")
}
}else{
stop("Invalid value for analysismode parameter. Please use regression or classification.")
}
}
if(is.na(names_with_mz_time)==TRUE){
names_with_mz_time=data_matrix$names_with_mz_time
}
# #save(data_matrix,file="data_matrix.Rda")
data_matrix_beforescaling<-data_matrix$data_matrix_prescaling
data_matrix_beforescaling<-as.data.frame( data_matrix_beforescaling)
data_matrix<-data_matrix$data_matrix_afternorm_scaling
#classlabels<-as.data.frame(classlabels)
if(dim(classlabels)[2]<2){
stop("The class labels/response matrix should have two columns: SampleID, Class/Response. Please see the example.")
}
data_m<-data_matrix[,-c(1:2)]
classlabels<-classlabels[seq(1,dim(classlabels)[1],num_replicates),]
# #save(classlabels,data_matrix,classlabels_orig,Ymat,file="Stage1/datarose.Rda")
classlabels_raw_boxplots<-classlabels
if(dim(classlabels)[2]==2){
if(length(levels(as.factor(classlabels[,2])))==2){
if(balance.classes==TRUE){
table_classes<-table(classlabels[,2])
suppressWarnings(library(ROSE))
Ytrain<-classlabels[,2]
data1=cbind(Ytrain,t(data_matrix[,-c(1:2)]))
##save(data1,classlabels,data_matrix,file="Stage1/data1.Rda")
# data_matrix_presim<-data_matrix
data1<-as.data.frame(data1)
colnames(data1)<-c("Ytrain",paste("var",seq(1,ncol(data1)-1),sep=""))
data1$Ytrain<-classlabels[,2]
if(table_classes[1]==table_classes[2])
{
set.seed(balance.classes.seed)
data1[,-c(1)]<-apply(data1[,-c(1)],2,as.numeric)
new_sample<-aggregate(x=data1[,-c(1)],by=list(as.factor(data1$Ytrain)),mean)
colnames(new_sample)<-colnames(data1)
data1<-rbind(data1,new_sample[1,])
set.seed(balance.classes.seed)
# #save(data1,classlabels,file="Stage1/dataB.Rda")
newData <- ROSE((Ytrain) ~ ., data1, seed = balance.classes.seed,N=nrow(data1)*balance.classes.sizefactor)$data
# newData <- SMOTE(Ytrain ~ ., data=data1, perc.over = 100)
#*balance.classes.sizefactor,perc.under=200*(balance.classes.sizefactor/(balance.classes.sizefactor/0.5)))
}else{
if(balance.classes.method=="ROSE"){
set.seed(balance.classes.seed)
data1[,-c(1)]<-apply(data1[,-c(1)],2,as.numeric)
newData <- ROSE((Ytrain) ~ ., data1, seed = balance.classes.seed,N=nrow(data1)*balance.classes.sizefactor)$data
}else{
set.seed(balance.classes.seed)
newData <- SMOTE(Ytrain ~ ., data=data1, perc.over = 100)
#*balance.classes.sizefactor,perc.under=200*(balance.classes.sizefactor/(balance.classes.sizefactor/0.5)))
}
}
newData<-na.omit(newData)
Xtrain<-newData[,-c(1)]
Xtrain<-as.matrix(Xtrain)
Ytrain<-newData[,c(1)]
Ytrain_mat<-cbind((rownames(Xtrain)),(Ytrain))
Ytrain_mat<-as.data.frame(Ytrain_mat)
print("new data")
print(dim(Xtrain))
print(dim(Ytrain_mat))
print(table(newData$Ytrain))
data_m<-t(Xtrain)
data_matrix<-cbind(data_matrix[,c(1:2)],data_m)
classlabels<-cbind(paste("S",seq(1,nrow(newData)),sep=""),Ytrain)
classlabels<-as.data.frame(classlabels)
print(dim(classlabels))
classlabels_orig<-classlabels
classlabels_sub<-classlabels[,-c(1)]
Ymat<-classlabels
##save(newData,file="Stage1/newData.Rda")
}
}
}
classlabelsA<-classlabels
Xmat<-data_matrix
#if(dim(classlabels_orig)==TRUE){
classlabels_orig<-classlabels_orig[seq(1,dim(classlabels_orig)[1],num_replicates),]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
classlabels_response_mat<-classlabels_response_mat[seq(1,dim(classlabels_response_mat)[1],num_replicates),]
class_labels_levels_main<-c("S")
Ymat<-classlabels
rnames1<-as.character(Ymat[,1])
rnames2<-as.character(classlabels_orig[,1])
sorted_index<-{}
for(i in 1:length(rnames1)){
sorted_index<-c(sorted_index,grep(x=rnames2,pattern=paste("^",rnames1[i],"$",sep="")))
}
classlabels_orig<-classlabels_orig[sorted_index,]
#write.table(classlabels_response_mat,file="original_classlabelsB.txt",sep="\t",row.names=TRUE)
classlabelsA<-classlabels
if(length(which(duplicated(classlabels)==TRUE))>0){
rownames(classlabels)<-paste("S",seq(1,dim(classlabels)[1]),sep="")
}else{
rownames(classlabels)<-as.character(classlabels[,1])
}#as.character(classlabels[,1])
#print(classlabels)
#print(classlabels[1:10,])
# ###saveclasslabels,file="classlabels.Rda")
# ###saveclasslabels_orig,file="classlabels_orig.Rda")
# ###saveclasslabels_response_mat,file="classlabels_response_mat.Rda")
if(pairedanalysis==TRUE){
###savesubject_inf,file="subjectinf.Rda")
}
if(analysismode=="classification")
{
class_labels_levels<-levels(as.factor(classlabels[,2]))
# print("Using the following class labels")
#print(class_labels_levels)
class_labels_levels_main<-class_labels_levels
class_labels_levels<-unique(class_labels_levels)
bad_rows<-which(class_labels_levels=="")
if(length(bad_rows)>0){
class_labels_levels<-class_labels_levels[-bad_rows]
}
ordered_labels={}
num_samps_group<-new("list")
num_samps_group[[1]]<-0
groupwiseindex<-new("list")
groupwiseindex[[1]]<-0
for(c in 1:length(class_labels_levels))
{
classlabels_index<-which(classlabels[,2]==class_labels_levels[c])
ordered_labels<-c(ordered_labels,as.character(classlabels[classlabels_index,2]))
num_samps_group[[c]]<-length(classlabels_index)
groupwiseindex[[c]]<-classlabels_index
}
Ymatorig<-classlabels
#debugclasslabels
#save(classlabels,class_labels_levels,num_samps_group,Ymatorig,data_matrix,data_m_fc_withfeats,data_m,file="classlabels_1.Rda")
####saveclass_labels_levels,file="class_labels_levels.Rda")
# print("HERE1")
classlabels_dataframe<-classlabels
class_label_alphabets<-class_labels_levels
classlabels<-{}
if(length(class_labels_levels)==2){
#num_samps_group[[1]]=length(which(ordered_labels==class_labels_levels[1]))
#num_samps_group[[2]]=length(which(ordered_labels==class_labels_levels[2]))
class_label_A<-class_labels_levels[[1]]
class_label_B<-class_labels_levels[[2]]
#classlabels<-c(rep("ClassA",num_samps_group[[1]]),rep("ClassB",num_samps_group[[2]]))
classlabels<-c(rep(class_label_A,num_samps_group[[1]]),rep(class_label_B,num_samps_group[[2]]))
}else{
if(length(class_labels_levels)==3){
class_label_A<-class_labels_levels[[1]]
class_label_B<-class_labels_levels[[2]]
class_label_C<-class_labels_levels[[3]]
classlabels<-c(rep(class_label_A,num_samps_group[[1]]),rep(class_label_B,num_samps_group[[2]]),rep(class_label_C,num_samps_group[[3]]))
}else{
for(c in 1:length(class_labels_levels)){
num_samps_group_cur=length(which(Ymatorig[,2]==class_labels_levels[c]))
classlabels<-c(classlabels,rep(paste(class_labels_levels[c],sep=""),num_samps_group_cur))
#,rep("ClassB",num_samps_group[[2]]),rep("ClassC",num_samps_group[[3]]))
}
}
}
# print("Class mapping:")
# print(cbind(class_labels_levels,classlabels))
classlabels<-classlabels_dataframe[,2]
classlabels_2=classlabels
#save(classlabels_2,class_labels_levels,Ymatorig,data_matrix,data_m_fc_withfeats,data_m,file="classlabels_2.Rda")
####################################################################################
#print(head(data_m))
snames<-colnames(data_m)
Ymat<-as.data.frame(classlabels)
m1<-match(snames,Ymat[,1])
#Ymat<-Ymat[m1,]
data_temp<-data_matrix_beforescaling[,-c(1:2)]
rnames<-paste("mzid_",seq(1,nrow(data_matrix)),sep="")
rownames(data_m)=rnames
mzid_mzrt<-data_matrix[,c(1:2)]
colnames(mzid_mzrt)<-c("mz","time")
rownames(mzid_mzrt)=rnames
write.table(mzid_mzrt, file="Stage1/mzid_mzrt.txt",sep="\t",row.names=TRUE)
cl<-makeCluster(num_nodes)
mean_overall<-apply(data_temp,1,do_mean)
#clusterExport(cl,"do_mean")
#mean_overall<-parApply(cl,data_temp,1,do_mean)
#stopCluster(cl)
#mean_overall<-unlist(mean_overall)
# print("mean overall")
#print(summary(mean_overall))
bad_feat<-which(mean_overall==0)
if(length(bad_feat)>0){
data_matrix_beforescaling<-data_matrix_beforescaling[-bad_feat,]
data_m<-data_m[-bad_feat,]
data_matrix<-data_matrix[-bad_feat,]
}
#Step 5) RSD/CV calculation
}else{
classlabels<-(classlabels[,-c(1)])
}
# print("######classlabels#########")
#print(classlabels)
class_labels_levels_new<-levels(classlabels)
if(analysismode=="classification"){
test_classlabels<-cbind(class_labels_levels_main,class_labels_levels_new)
}
if(featselmethod=="ttest" | featselmethod=="wilcox"){
if(length(class_labels_levels)>2){
print("#######################")
print(paste("Warning: More than two classes detected. Invalid feature selection option. Skipping the feature selection for option ",featselmethod,sep=""))
print("#######################")
return("More than two classes detected. Invalid feature selection option.")
}
}
#print("here 2")
######################################################################################
#Step 6) Log2 mean fold change criteria from 0 to 1 with step of 0.1
feat_eval<-{}
feat_sigfdrthresh<-{}
feat_sigfdrthresh_cv<-{}
feat_sigfdrthresh_permut<-{}
permut_acc<-{}
feat_sigfdrthresh<-rep(0,length(log2.fold.change.thresh_list))
feat_sigfdrthresh_cv<-rep(NA,length(log2.fold.change.thresh_list))
feat_sigfdrthresh_permut<-rep(NA,length(log2.fold.change.thresh_list))
res_score_vec<-rep(0,length(log2.fold.change.thresh_list))
#feat_eval<-seq(0,1,0.1)
if(analysismode=="classification"){
best_cv_res<-(-1)*10^30
}else{
best_cv_res<-(1)*10^30
}
best_feats<-{}
goodfeats<-{}
mwan_fdr<-{}
targetedan_fdr<-{}
best_limma_res<-{}
best_acc<-{}
termA<-{}
fheader="transformed_log2fc_threshold_"
X<-t(data_m)
X<-replace(as.matrix(X),which(is.na(X)==TRUE),0)
# rm(pcaMethods)
#try(detach("package:pcaMethods",unload=TRUE),silent=TRUE)
#library(mixOmics)
if(featselmethod=="lmreg" || featselmethod=="lmregrobust" || featselmethod=="logitreg" || featselmethod=="logitregrobust"){
if(length(class_labels_levels)>2){
stop(paste(featselmethod, " feature selection option is only available for 2 class comparisons."),sep="")
}
}
if(sample.col.opt=="default"){
col_vec<-c("#CC0000","#AAC000","blue","mediumpurple4","mediumpurple1","blueviolet","cornflowerblue","cyan4","skyblue",
"darkgreen", "seagreen1", "green","yellow","orange","pink", "coral1", "palevioletred2",
"red","saddlebrown","brown","brown3","white","darkgray","aliceblue",
"aquamarine","aquamarine3","bisque","burlywood1","lavender","khaki3","black")
}else{
if(sample.col.opt=="topo"){
#col_vec<-topo.colors(256) #length(class_labels_levels))
#col_vec<-col_vec[seq(1,length(col_vec),)]
col_vec <- topo.colors(length(class_labels_levels), alpha=alphacol)
}else{
if(sample.col.opt=="heat"){
#col_vec<-heat.colors(256) #length(class_labels_levels))
col_vec <- heat.colors(length(class_labels_levels), alpha=alphacol)
}else{
if(sample.col.opt=="rainbow"){
#col_vec<-heat.colors(256) #length(class_labels_levels))
col_vec<-rainbow(length(class_labels_levels), start = 0, end = alphacol)
#col_vec <- heat.colors(length(class_labels_levels), alpha=alphacol)
}else{
if(sample.col.opt=="terrain"){
#col_vec<-heat.colors(256) #length(class_labels_levels))
#col_vec<-rainbow(length(class_labels_levels), start = 0, end = alphacol)
col_vec <- cm.colors(length(class_labels_levels), alpha=alphacol)
}else{
if(sample.col.opt=="colorblind"){
#col_vec <-c("#386cb0","#fdb462","#7fc97f","#ef3b2c","#662506","#a6cee3","#fb9a99","#984ea3","#ffff33")
# col_vec <- c("#0072B2", "#E69F00", "#009E73", "gold1", "#56B4E9", "#D55E00", "#CC79A7","black")
if(length(class_labels_levels)<9){
col_vec <- c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7", "#E64B35FF", "grey57")
}else{
#col_vec<-colorRampPalette(brewer.pal(10, "RdBu"))(length(class_labels_levels))
col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","#E64B35B2", "#4DBBD5B2","#00A087B2","#3C5488B2","#F39B7FB2","#8491B4B2","#91D1C2B2","#DC0000B2","#7E6148B2",
"#374E55B2","#DF8F44B2","#00A1D5B2","#B24745B2","#79AF97B2","#6A6599B2","#80796BB2","#0073C2B2","#EFC000B2", "#868686B2","#CD534CB2","#7AA6DCB2","#003C67B2","grey57")
}
}else{
check_brewer<-grep(pattern="brewer",x=sample.col.opt)
if(length(check_brewer)>0){
sample.col.opt_temp=gsub(x=sample.col.opt,pattern="brewer.",replacement="")
col_vec <- colorRampPalette(brewer.pal(10, sample.col.opt_temp))(length(class_labels_levels))
}else{
if(sample.col.opt=="journal"){
col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","#E64B35FF","#3C5488FF","#F39B7FFF",
"#8491B4FF","#91D1C2FF","#DC0000FF","#B09C85FF","#5F559BFF",
"#808180FF","#20854EFF","#FFDC91FF","#B24745FF",
"#374E55FF","#8F7700FF","#5050FFFF","#6BD76BFF",
"#E64B3519","#4DBBD519","#631879E5","grey75")
if(length(class_labels_levels)<8){
col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","grey75")
#col_vec2<-brewer.pal(n = 8, name = "Dark2")
}else{
if(length(class_labels_levels)<=28){
# col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7", "grey75","#D95F02", "#7570B3", "#E7298A", "#66A61E", "#E6AB02", "#A6761D", "#666666","#1B9E77", "#7570B3", "#E7298A", "#A6761D", "#666666", "#1B9E77", "#D95F02", "#7570B3", "#E7298A", "#66A61E", "#E6AB02", "#A6761D", "#666666")
col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","#E64B35FF","#3C5488FF","#F39B7FFF",
"#8491B4FF","#91D1C2FF","#DC0000FF","#B09C85FF","#5F559BFF",
"#808180FF","#20854EFF","#FFDC91FF","#B24745FF",
"#374E55FF","#8F7700FF","#5050FFFF","#6BD76BFF", "#8BD76BFF",
"#E64B3519","#9DBBD0FF","#631879E5","#666666","grey75")
}else{
colfunc <-colorRampPalette(c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","grey75"));col_vec<-colfunc(length(class_labels_levels))
col_vec<-col_vec[sample(col_vec)]
}
}
}else{
#colfunc <-colorRampPalette(sample.col.opt);col_vec<-colfunc(length(class_labels_levels))
# if(length(sample.col.opt)==1){
# col_vec <-rep(sample.col.opt,length(class_labels_levels))
# }else{
# colfunc <-colorRampPalette(sample.col.opt);col_vec<-colfunc(length(class_labels_levels))
# col_vec<-col_vec[sample(col_vec)]
#}
if(length(sample.col.opt)==1){
col_vec <-rep(sample.col.opt,length(class_labels_levels))
}else{
if(length(sample.col.opt)>=length(class_labels_levels)){
col_vec <-sample.col.opt
col_vec <- rep(col_vec,length(class_labels_levels))
}else{
colfunc <-colorRampPalette(sample.col.opt);col_vec<-colfunc(length(class_labels_levels))
}
}
}
}
}
}
}
}
}
}
#pca_col_vec<-col_vec
pca_col_vec<-c("mediumpurple4","mediumpurple1","blueviolet","darkblue","blue","cornflowerblue","cyan4","skyblue",
"darkgreen", "seagreen1", "green","yellow","orange","pink", "coral1", "palevioletred2",
"red","saddlebrown","brown","brown3","white","darkgray","aliceblue",
"aquamarine","aquamarine3","bisque","burlywood1","lavender","khaki3","black")
if(is.na(individualsampleplot.col.opt)==TRUE){
individualsampleplot.col.opt=col_vec
}
#cl<-makeCluster(num_nodes)
#feat_sds<-parApply(cl,data_m,1,sd)
feat_sds<-apply(data_m,1,function(x){sd(x,na.rm=TRUE)})
#stopCluster(cl)
bad_sd_ind<-c(which(feat_sds==0),which(is.na(feat_sds)==TRUE))
bad_sd_ind<-unique(bad_sd_ind)
if(length(bad_sd_ind)>0){
data_matrix<-data_matrix[-c(bad_sd_ind),]
data_m<-data_m[-c(bad_sd_ind),]
data_matrix_beforescaling<-data_matrix_beforescaling[-c(bad_sd_ind),]
}
data_temp<-data_matrix_beforescaling[,-c(1:2)]
#cl<-makeCluster(num_nodes)
#clusterExport(cl,"do_rsd")
#feat_rsds<-parApply(cl,data_temp,1,do_rsd)
feat_rsds<-apply(data_temp,1,do_rsd)
#stopCluster(cl)
# #save(feat_rsds,data_temp,data_matrix_beforescaling,data_m,file="rsds.Rda")
sum_rsd<-summary(feat_rsds,na.rm=TRUE)
max_rsd<-max(feat_rsds,na.rm=TRUE)
max_rsd<-round(max_rsd,2)
# print("Summary of RSD across all features:")
#print(sum_rsd)
if(log2.fold.change.thresh_list[length(log2.fold.change.thresh_list)]>max_rsd){
stop(paste("The maximum relative standard deviation threshold in rsd.filt.list should be below ",max_rsd,sep=""))
}
classlabels_parent<-classlabels
classlabels_sub_parent<-classlabels_sub
classlabels_orig_parent<-classlabels_orig
#write.table(classlabels_orig,file="classlabels.txt",sep="\t",row.names=FALSE)
classlabels_response_mat_parent<-classlabels_response_mat
parent_data_m<-round(data_m,5)
res_score<-0
#best_cv_res<-0
best_feats<-{}
best_acc<-0
best_limma_res<-{}
best_logfc_ind<-1
output_dir1<-paste(parentoutput_dir,"/Stage2/",sep="")
dir.create(output_dir1,showWarnings=FALSE)
setwd(output_dir1)
classlabels_sub_parent<-classlabels_sub
classlabels_orig_parent<-classlabels_orig
#write.table(classlabels_orig,file="classlabels.txt",sep="\t",row.names=FALSE)
classlabels_response_mat_parent<-classlabels_response_mat
# rocfeatlist<-rocfeatlist+1
if(pairedanalysis==TRUE){
#print(subject_inf)
write.table(subject_inf,file="subject_inf.txt",sep="\t")
paireddesign=subject_inf
}else{
paireddesign=NA
}
#write.table(classlabels_orig,file="classlabels_orig.txt",sep="\t")
#write.table(classlabels,file="classlabels.txt",sep="\t")
#write.table(classlabels_response_mat,file="classlabels_response_mat.txt",sep="\t")
if(is.na(max_varsel)==TRUE){
max_varsel=dim(data_m)[1]
}
for(lf in 1:length(log2.fold.change.thresh_list))
{
allmetabs_res<-{}
classlabels_response_mat<-classlabels_response_mat_parent
classlabels_sub<-classlabels_sub_parent
classlabels_orig<-classlabels_orig_parent
setwd(parentoutput_dir)
log2.fold.change.thresh=log2.fold.change.thresh_list[lf]
output_dir1<-paste(parentoutput_dir,"/Stage2/",sep="")
dir.create(output_dir1,showWarnings=FALSE)
setwd(output_dir1)
if(logistic_reg==TRUE){
if(robust.estimate==FALSE){ output_dir<-paste(output_dir1,"logitreg","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}else{
if(robust.estimate==TRUE){ output_dir<-paste(output_dir1,"logitregrobust","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}
}
}else{
if(poisson_reg==TRUE){
if(robust.estimate==FALSE){ output_dir<-paste(output_dir1,"poissonreg","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}else{
if(robust.estimate==TRUE){
output_dir<-paste(output_dir1,"poissonregrobust","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}
}
}else{
if(featselmethod=="lmreg"){
if(robust.estimate==TRUE){
output_dir<-paste(output_dir1,"lmregrobust","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}else{
output_dir<-paste(output_dir1,"lmreg","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}
}else{
output_dir<-paste(output_dir1,parentfeatselmethod,"signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}
}
}
dir.create(output_dir,showWarnings=FALSE)
setwd(output_dir)
dir.create("Figures",showWarnings = FALSE)
dir.create("Tables",showWarnings = FALSE)
data_m<-parent_data_m
#print("dim of data_m")
#print(dim(data_m))
pdf_fname<-paste("Figures/Results_RSD",log2.fold.change.thresh,".pdf",sep="")
#zip_fname<-paste("Results_RSD",log2.fold.change.thresh,".zip",sep="")
if(output.device.type=="pdf"){
pdf(pdf_fname,width=10,height=10)
}
if(analysismode=="classification" | analysismode=="regression"){
rsd_filt_msg=(paste("Performing RSD filtering using ",log2.fold.change.thresh, " as threshold",sep=""))
if(log2.fold.change.thresh>=0){
if(log2.fold.change.thresh==0){
log2.fold.change.thresh=0.001
}
#good_metabs<-which(abs(mean_groups)>log2.fold.change.thresh)
abs_feat_rsds<-abs(feat_rsds)
good_metabs<-which(abs_feat_rsds>log2.fold.change.thresh)
#print("length of good_metabs")
#print(good_metabs)
}else{
good_metabs<-seq(1,dim(data_m)[1])
}
if(length(good_metabs)>0){
data_m_fc<-data_m[good_metabs,]
data_m_fc_withfeats<-data_matrix[good_metabs,c(1:2)]
data_matrix_beforescaling_rsd<-data_matrix_beforescaling[good_metabs,]
data_matrix<-data_matrix[good_metabs,]
}else{
#data_m_fc<-data_m
#data_m_fc_withfeats<-data_matrix[,c(1:2)]
stop(paste("Please decrease the maximum relative standard deviation (rsd.filt.thresh) threshold to ",max_rsd,sep=""))
}
}else{
data_m_fc<-data_m
data_m_fc_withfeats<-data_matrix[,c(1:2)]
}
# save(data_m_fc_withfeats,data_m_fc,data_m,data_matrix,file="datadebug.Rda")
ylab_text_raw<-ylab_text
if(log2transform==TRUE || input.intensity.scale=="log2"){
if(znormtransform==TRUE){
ylab_text_2="scale normalized"
}else{
if(quantile_norm==TRUE){
ylab_text_2="quantile normalized"
}else{
ylab_text_2=""
}
}
ylab_text=paste("log2 ",ylab_text," ",ylab_text_2,sep="")
}else{
if(znormtransform==TRUE){
ylab_text_2="scale normalized"
}else{
if(quantile_norm==TRUE){
ylab_text_2="quantile normalized"
}else{
ylab_text_2=""
}
}
ylab_text=paste("Raw ",ylab_text," ",ylab_text_2,sep="") #paste("Raw intensity ",ylab_text_2,sep="")
}
#ylab_text=paste("Abundance",sep="")
if(is.na(names_with_mz_time)==FALSE){
data_m_fc_with_names<-merge(names_with_mz_time,data_m_fc_withfeats,by=c("mz","time"))
data_m_fc_with_names<-data_m_fc_with_names[match(data_m_fc_withfeats$mz,data_m_fc_with_names$mz),]
#save(names_with_mz_time,goodfeats,goodfeats_with_names,file="goodfeats_with_names.Rda")
# goodfeats_name<-goodfeats_with_names$Name
#}
}
# save(data_m_fc_withfeats,data_matrix,data_m,data_m_fc,data_m_fc_with_names,names_with_mz_time,file="debugnames.Rda")
if(dim(data_m_fc)[2]>50){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/SampleIntensityDistribution.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
size_num<-min(100,dim(data_m_fc)[2])
par(mfrow=c(1,1),family="sans",cex=cex.plots)
samp_index<-sample(x=1:dim(data_m_fc)[2],size=size_num)
# try(boxplot(data_m_fc[,samp_index],main="Intensity distribution across samples after preprocessing",xlab="Samples",ylab=ylab_text,col=boxplot.col.opt),silent=TRUE)
#samp_dist_col<-get_boxplot_colors(boxplot.col.opt,class_labels_levels=c(1))
boxplot(data_m_fc[,samp_index],main="Intensity distribution across samples after preprocessing",xlab="Samples",ylab=ylab_text,col="white")
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/SampleIntensityDistribution.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
par(mfrow=c(1,1),family="sans",cex=cex.plots)
try(boxplot(data_m_fc,main="Intensity distribution across samples after preprocessing",xlab="Samples",ylab=ylab_text,col="white"),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
if(is.na(outlier.method)==FALSE){
if(output.device.type!="pdf"){
temp_filename_1<-paste("Figures/OutlierDetection",outlier.method,".png",sep="")
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
par(mfrow=c(1,1),family="sans",cex=cex.plots)
##save(data_matrix,file="dm1.Rda")
outlier_detect(data_matrix=data_matrix,ncomp=2,column.rm.index=c(1,2),outlier.method=outlier.method[1])
# print("done outlier")
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
data_m_fc_withfeats<-cbind(data_m_fc_withfeats,data_m_fc)
allmetabs_res_withnames<-{}
feat_eval[lf]<-0
res_score_vec[lf]<-0
#feat_sigfdrthresh_cv[lf]<-0
filename<-paste(fheader,log2.fold.change.thresh,".txt",sep="")
#write.table(data_m_fc_withfeats, file=filename,sep="\t",row.names=FALSE)
if(length(data_m_fc)>=dim(parent_data_m)[2])
{
if(dim(data_m_fc)[1]>0){
if(ncol(data_m_fc)<30){
kfold=ncol(data_m_fc)
}
feat_eval[lf]<-dim(data_m_fc)[1]
# col_vec<-c("#CC0000","#AAC000","blue","mediumpurple4","mediumpurple1","blueviolet","darkblue","blue","cornflowerblue","cyan4","skyblue",
#"darkgreen", "seagreen1", "green","yellow","orange","pink", "coral1", "palevioletred2",
#"red","saddlebrown","brown","brown3","white","darkgray","aliceblue",
#"aquamarine","aquamarine3","bisque","burlywood1","lavender","khaki3","black")
if(analysismode=="classification")
{
sampleclass<-{}
patientcolors<-{}
#
classlabels<-as.data.frame(classlabels)
#print(classlabels)
f<-factor(classlabels[,1])
for(c in 1:length(class_labels_levels)){
num_samps_group_cur=length(which(ordered_labels==class_labels_levels[c]))
#classlabels<-c(classlabels,rep(paste("Class",class_label_alphabets,sep=""),num_samps_group_cur))
#,rep("ClassB",num_samps_group[[2]]),rep("ClassC",num_samps_group[[3]]))
sampleclass<-c(sampleclass,rep(paste("Class",class_label_alphabets[c],sep=""),num_samps_group_cur))
#sampleclass<-classlabels[,1] #c(sampleclass,rep(paste("Class",class_labels_levels[c],sep=""),num_samps_group_cur))
patientcolors <-c(patientcolors,rep(col_vec[c],num_samps_group_cur))
}
# library(pcaMethods)
#p1<-pcaMethods::pca(data_m_fc,method="rnipals",center=TRUE,scale="uv",cv="q2",nPcs=3)
tempX<-t(data_m_fc)
# p1<-pcaMethods::pca(tempX,method="rnipals",center=TRUE,scale="uv",cv="q2",nPcs=10)
if(output.device.type!="pdf"){
temp_filename_2<-"Figures/PCAdiagnostics_allfeats.png"
# png(temp_filename_2,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
if(output.device.type!="pdf"){
# dev.off()
}
# try(detach("package:pcaMethods",unload=TRUE),silent=TRUE)
if(dim(classlabels)[2]>2){
classgroup<-paste(classlabels[,1],":",classlabels[,2],sep="") #classlabels[,1]:classlabels[,2]
}else{
classgroup<-classlabels
}
classlabels_orig<-classlabels_orig_parent
if(pairedanalysis==TRUE){
#classlabels_orig<-classlabels_orig[,-c(2)]
}else{
if(featselmethod=="lmreg" || featselmethod=="logitreg" || featselmethod=="poissonreg"){
classlabels_orig<-classlabels_orig[,c(1:2)]
classlabels_orig<-as.data.frame(classlabels_orig)
}
}
if(analysismode=="classification"){
if(dim(classlabels_orig)[2]==2){
if(alphabetical.order==FALSE){
classlabels_orig[,2] <- factor(classlabels_orig[,2], levels=unique(classlabels_orig[,2]))
}
}
if(dim(classlabels_orig)[2]==3){
if(pairedanalysis==TRUE){
if(alphabetical.order==FALSE){
classlabels_orig[,3] <- factor(classlabels_orig[,3], levels=unique(classlabels_orig[,3]))
}
}else{
if(alphabetical.order==FALSE){
classlabels_orig[,2] <- factor(classlabels_orig[,2], levels=unique(classlabels_orig[,2]))
classlabels_orig[,3] <- factor(classlabels_orig[,3], levels=unique(classlabels_orig[,3]))
}
}
}else{
if(dim(classlabels_orig)[2]==4){
if(pairedanalysis==TRUE){
if(alphabetical.order==FALSE){
classlabels_orig[,3] <- factor(classlabels_orig[,3], levels=unique(classlabels_orig[,3]))
classlabels_orig[,4] <- factor(classlabels_orig[,4], levels=unique(classlabels_orig[,4]))
}
}
}
}
}
if(length(which(duplicated(data_m_fc_with_names$Name)==TRUE))>0){
print("Duplicate features detected")
print("Removing duplicate entries for the following features:")
# print(data_m_fc_with_names$Name[which(duplicated(data_m_fc_with_names$Name)==TRUE)])
data_m_fc_withfeats<-data_m_fc_withfeats[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m_fc<-data_m_fc[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_matrix<-data_matrix[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m<-data_m[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m_fc_with_names<-data_m_fc_with_names[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
#parent_data_m<-parent_data_m[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
}
##Perform global PCA
if(pca.global.eval==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAplots_allfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "PCA using all features left after pre-processing")
text(5, 7, "The figures include: ")
text(5, 6, "a. pairwise PC score plots ")
text(5, 5, "b. scores for individual samples on each PC")
text(5, 4, "c. Lineplots using PC scores for data with repeated measurements")
###savelist=ls(),file="pcaplotsall.Rda")
# save(data_m_fc_withfeats,classlabels_orig,sample.col.opt,col_vec,pairedanalysis,pca.cex.val,legendlocation,pca.ellipse,ellipse.conf.level,paireddesign,
# lineplot.col.opt,lineplot.lty.option,timeseries.lineplots,pcacenter,pcascale,alphabetical.order,
# analysistype,lme.modeltype,file="pcaplotsall.Rda")
rownames(data_m_fc_withfeats)<-data_m_fc_with_names$Name
# save(data_m_fc_withfeats,data_m_fc_with_names,file="data_m_fc_withfeats.Rda")
classlabels_orig_pca<-classlabels_orig
c1=try(get_pcascoredistplots(X=data_m_fc_withfeats,Y=classlabels_orig,feature_table_file=NA,parentoutput_dir=getwd(),class_labels_file=NA,sample.col.opt=sample.col.opt,
plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,col_vec=col_vec,pairedanalysis=pairedanalysis,pca.cex.val=pca.cex.val,legendlocation=legendlocation,
pca.ellipse=pca.ellipse,ellipse.conf.level=ellipse.conf.level,
filename="all",paireddesign=paireddesign,lineplot.col.opt=lineplot.col.opt,
lineplot.lty.option=lineplot.lty.option,timeseries.lineplots=timeseries.lineplots,
pcacenter=pcacenter,pcascale=pcascale,alphabetical.order=alphabetical.order,
study.design=analysistype,lme.modeltype=lme.modeltype),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
classlabels_orig<-classlabels_orig_parent
}else{
#regression
tempgroup<-rep("A",dim(data_m_fc)[2]) #cbind(classlabels_orig[,1],
col_vec1<-rep("black",dim(data_m_fc)[2])
class_labels_levels_main1<-c("A")
analysistype="regression"
if(length(which(duplicated(data_m_fc_with_names$Name)==TRUE))>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m_fc<-data_m_fc[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_matrix<-data_matrix[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m<-data_m[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m_fc_with_names<-data_m_fc_with_names[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
# parent_data_m<-parent_data_m[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
print("Duplicate features detected")
print("Removing duplicate entries for the following features:")
print(data_m_fc_with_names$Name[which(duplicated(data_m_fc_with_names$Name)==TRUE)])
}
rownames(data_m_fc_withfeats)<-data_m_fc_with_names$Name
# get_pca(X=data_m_fc,samplelabels=tempgroup,legendlocation=legendlocation,filename="all",
# ncomp=3,pcacenter=pcacenter,pcascale=pcascale,legendcex=0.5,outloc=getwd(),col_vec=col_vec1,
# sample.col.opt=sample.col.opt,alphacol=0.3,class_levels=NA,pca.cex.val=pca.cex.val,pca.ellipse=FALSE,
# paireddesign=paireddesign,alphabetical.order=alphabetical.order,pairedanalysis=pairedanalysis,classlabels_orig=classlabels_orig,analysistype=analysistype) #,silent=TRUE)
if(pca.global.eval==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAplots_allfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "PCA using all features left after pre-processing")
text(5, 7, "The figures include: ")
text(5, 6, "a. pairwise PC score plots ")
text(5, 5, "b. scores for individual samples on each PC")
text(5, 4, "c. Lineplots using PC scores for data with repeated measurements")
###savelist=ls(),file="pcaplotsall.Rda")
###save(data_m_fc_withfeats,classlabels_orig,sample.col.opt,col_vec,pairedanalysis,pca.cex.val,legendlocation,pca.ellipse,ellipse.conf.level,paireddesign,lineplot.col.opt,lineplot.lty.option,timeseries.lineplots,pcacenter,pcascale,file="pcaplotsall.Rda")
c1=try(get_pcascoredistplots(X=data_m_fc_withfeats,Y=classlabels_orig,feature_table_file=NA,parentoutput_dir=getwd(),class_labels_file=NA,
sample.col.opt=sample.col.opt,
plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,col_vec=col_vec,pairedanalysis=pairedanalysis,pca.cex.val=pca.cex.val,legendlocation=legendlocation,
pca.ellipse=pca.ellipse,ellipse.conf.level=ellipse.conf.level,filename="all",
paireddesign=paireddesign,lineplot.col.opt=lineplot.col.opt,lineplot.lty.option=lineplot.lty.option,
timeseries.lineplots=timeseries.lineplots,pcacenter=pcacenter,pcascale=pcascale,alphabetical.order=alphabetical.order,
study.design=analysistype,lme.modeltype=lme.modeltype),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
if(featselmethod=="pamr"){
#print("HERE")
#savedata_m_fc,classlabels,file="pamdebug.Rda")
if(is.na(fdrthresh)==FALSE){
if(fdrthresh>0.5){
pamrthresh=pvalue.thresh
}else{
pamrthresh=fdrthresh
}
}else{
pamrthresh=pvalue.thresh
}
pamr.res<-do_pamr(X=data_m_fc,Y=classlabels,fdrthresh=pamrthresh,nperms=1000,pamr.threshold.select.max=pamr.threshold.select.max,kfold=kfold)
###save(pamr.res,file="pamr.res.Rda")
goodip<-pamr.res$feature.list
if(length(goodip)<1){
goodip=NA
}
pamr.threshold_value<-pamr.res$threshold_value
feature_rowindex<-seq(1,nrow(data_m_fc))
discore<-rep(0,nrow(data_m_fc))
discore_all<-pamr.res$max.discore.allfeats
if(is.na(goodip)==FALSE){
discore[goodip]<-pamr.res$max.discore.sigfeats
sel.diffdrthresh<-feature_rowindex%in%goodip
max_absolute_standardized_centroids_thresh0<-pamr.res$max.discore.allfeats[goodip]
selected_id_withmztime<-cbind(data_m_fc_withfeats[goodip,c(1:2)],pamr.res$pam_toplist,max_absolute_standardized_centroids_thresh0)
###savepamr.res,file="pamr.res.Rda")
write.csv(selected_id_withmztime,file="dscores.selectedfeats.csv",row.names=FALSE)
rank_vec<-rank(-discore_all)
max_absolute_standardized_centroids_thresh0<-pamr.res$max.discore.allfeats
data_limma_fdrall_withfeats<-cbind(max_absolute_standardized_centroids_thresh0,data_m_fc_withfeats)
write.table(data_limma_fdrall_withfeats,file="Tables/pamr_ranked_feature_table.txt",sep="\t",row.names=FALSE)
}else{
goodip<-{}
sel.diffdrthresh<-rep(FALSE,length(feature_rowindex))
}
rank_vec<-rank(-discore_all)
pamr_ythresh<-pamr.res$max.discore.all.thresh-0.00000001
}
if(featselmethod=="rfesvm"){
svm_classlabels<-classlabels[,1]
if(analysismode=="classification"){
svm_classlabels<-as.data.frame(svm_classlabels)
}
# ##save(data_m_fc,svm_classlabels,svm_kernel,file="svmdebug.Rda")
if(length(class_labels_levels)<3){
rfesvmres = diffexpsvmrfe(x=t(data_m_fc),y=svm_classlabels,svmkernel=svm_kernel)
featureRankedList=rfesvmres$featureRankedList
featureWeights=rfesvmres$featureWeights
#best_subset<-featureRankedList$best_subset
}else{
rfesvmres = diffexpsvmrfemulticlass(x=t(data_m_fc),y=svm_classlabels,svmkernel=svm_kernel)
featureRankedList=rfesvmres$featureRankedList
featureWeights=rfesvmres$featureWeights
}
# ##save(rfesvmres,file="rfesvmres.Rda")
rank_vec<-seq(1,dim(data_m_fc_withfeats)[1])
goodip<-featureRankedList[1:max_varsel]
#dtemp1<-data_m_fc_withfeats[goodip,]
sel.diffdrthresh<-rank_vec%in%goodip
rank_vec<-sort(featureRankedList,index.return=TRUE)$ix
weight_vec<-featureWeights #[rank_vec]
data_limma_fdrall_withfeats<-cbind(featureWeights,data_m_fc_withfeats)
}
f1={}
corfit={}
if(featselmethod=="limma" | featselmethod=="limma1way")
{
# cat("Performing limma analysis",sep="\n")
# save(classlabels,classlabels_orig,classlabels_dataframe,classlabels_response_mat,file="cldebug.Rda")
classlabels_temp1<-classlabels
classlabels<-classlabels_dataframe #classlabels_orig
colnames(classlabels)<-c("SampleID","Factor1")
if(alphabetical.order==FALSE){
classlabels$Factor1<-factor(classlabels$Factor1,levels=unique(classlabels$Factor1))
Factor1<-factor(classlabels$Factor1,levels=unique(classlabels$Factor1))
}else{
Factor1<-factor(classlabels$Factor1)
}
if(limma.contrasts.type=="contr.sum"){
contrasts_factor1<-contr.sum(length(levels(factor(Factor1))))
rownames(contrasts_factor1)<-levels(factor(Factor1))
cnames_contr_factor1<-apply(contrasts_factor1,2,function(x){paste(names(x[which(abs(x)==1)]),collapse = "-")})
}else{
contrasts_factor1<-contr.treatment(length(levels(factor(Factor1))))
rownames(contrasts_factor1)<-levels(factor(Factor1))
cnames_contr_factor1<-apply(contrasts_factor1,2,function(x){paste(names(x[1]),names(x[which(abs(x)==1)]),sep = "-")})
}
colnames(contrasts_factor1)<-cnames_contr_factor1
contrasts(Factor1) <- contrasts_factor1
design <- model.matrix(~Factor1)
classlabels<-classlabels_temp1
# design <- model.matrix(~ -1+f)
#colnames(design) <- levels(f)
options(digit=3)
#parameterNames<-colnames(design)
design_mat_names=colnames(design)
design_mat_names<-design_mat_names[-c(1)]
# limma paired analysis
if(pairedanalysis==TRUE){
f1<-{}
for(c in 1:length(class_labels_levels)){
f1<-c(f1,seq(1,num_samps_group[[c]]))
}
#print("Paired samples order")
f1<-subject_inf
# print(subject_inf)
# print("Design matrix")
# print(design)
####savelist=ls(),file="limma.Rda")
##save(subject_inf,file="subject_inf.Rda")
corfit<-duplicateCorrelation(data_m_fc,design=design,block=subject_inf,ndups=1)
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus,method="robust")
}else{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus)
}
}else{
#not paired analysis
if(limmarobust==TRUE)
{
fit <- lmFit(data_m_fc,design,method="robust")
}else{
fit <- lmFit(data_m_fc,design)
}
#fit<-treat(fit,lfc=lf)
}
cont.matrix=attributes(design)$contrasts
#print(data_m_fc[1:3,])
#fit2 <- contrasts.fit(fit, cont.matrix)
#remove the intercept coefficient
fit<-fit[,-1]
fit2 <- eBayes(fit)
# save(fit2,fit,data_m_fc,design,f1,corfit,classlabels,Factor1,cnames_contr_factor1,file="limma.eBayes.fit.Rda")
# Various ways of summarising or plotting the results
#topTable(fit,coef=2)
#write.table(t1,file="topTable_limma.txt",sep="\t")
if(dim(design)[2]>2){
pvalues<-fit2$F.p.value
p.value<-fit2$F.p.value
}else{
pvalues<-fit2$p.value
p.value<-fit2$p.value
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue<-pvalues
#fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(dim(design)[2]<3){
if(fdrmethod=="none"){
filename<-paste("Tables/",parentfeatselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste("Tables/",parentfeatselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
data_limma_fdrall_withfeats<-cbind(p.value,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
pvalues<-p.value
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats,file=filename,sep="\t",row.names=FALSE)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
d4<-as.data.frame(data_limma_fdrall_withfeats)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
#tiff("pval_dist.tiff",compression="lzw")
#hist(d4[,1],xlab="p",main="Distribution of p-values")
#dev.off()
}else{
adjusted.P.value<-fdr_adjust_pvalue
if(limmadecideTests==TRUE){
results2<-decideTests(fit2,method="nestedF",adjust.method="BH",p.value=fdrthresh)
#tiff("comparison_contrast_overlap.tiff",width=plots.width,height=plots.height,res=plots.res, compression="lzw")
#if(length(class_labels_levels)<4){
if(ncol(results2)<5){
if(output.device.type!="pdf"){
temp_filename_5<-"Figures/LIMMA_venn_diagram.png"
png(temp_filename_5,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
vennDiagram(results2,cex=0.8)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}else{
#dev.off()
results2<-fit2$p.value[,-c(1)]
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab2<-colnames(results2)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab2,cnames_tab)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.P.value,results2,data_m_fc_withfeats)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
if(limmarobust==FALSE){
filename<-"Tables/limma_posthoc1wayanova_results.txt"
}else{
filename<-"Tables/limmarobust_posthoc1wayanova_results.txt"
}
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.P.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file=filename,sep="\t",row.names=FALSE)
}else{
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.P.value,data_m_fc_withfeats)
if(fdrmethod=="none"){
filename<-paste("limma_posthoc1wayanova_pval",fdrthresh,"_results.txt",sep="")
}else{
filename<-paste("limma_posthoc1wayanova_fdr",fdrthresh,"_results.txt",sep="")
}
if(length(which(data_limma_fdrall_withfeats$adjusted.P.value<fdrthresh & data_limma_fdrall_withfeats$p.value<pvalue.thresh))>0){
data_limma_sig_withfeats<-data_limma_fdrall_withfeats[data_limma_fdrall_withfeats$adjusted.P.value<fdrthresh & data_limma_fdrall_withfeats$p.value<pvalue.thresh,]
#write.table(data_limma_sig_withfeats, file=filename,sep="\t",row.names=FALSE)
}
# data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,data_m_fc_withfeats)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
final.pvalues<-pvalues
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
}
#pvalues<-data_limma_fdrall_withfeats$p.value
#final.pvalues<-pvalues
# print("checking here")
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
d4<-as.data.frame(data_limma_fdrall_withfeats)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
#tiff("pval_dist.tiff",compression="lzw")
#hist(d4[,1],xlab="p",main="Distribution of p-values")
#dev.off()
if(length(goodip)<1){
print("No features selected.")
}
}
if(featselmethod=="limma2way")
{
# cat("Performing limma2way analysis",sep="\n")
#design <- cbind(Grp1vs2=c(rep(1,num_samps_group[[1]]),rep(0,num_samps_group[[2]])),Grp2vs1=c(rep(0,num_samps_group[[1]]),rep(1,num_samps_group[[2]])))
# print("here")
# save(f,sampleclass,data_m_fc,classlabels,classlabels_orig,file="limma2way.Rda")
classlabels_temp<-classlabels
colnames(classlabels_orig)<-c("SampleID","Factor1","Factor2")
classlabels<- classlabels_orig #classlabels_dataframe #
colnames(classlabels)<-c("SampleID","Factor1","Factor2")
#design <- model.matrix(~ -1+f)
#classlabels<-read.table("/Users/karanuppal/Documents/Emory/JonesLab/Projects/DifferentialExpression/xmsPaNDA/examples_and_manual/Example_feature_table_and_classlabels/classlabels_two_way_anova.txt",sep="\t",header=TRUE)
#classlabels<-classlabels[order(classlabels$Factor2,decreasing = T),]
if(alphabetical.order==FALSE){
classlabels$Factor1<-factor(classlabels$Factor1,levels=unique(classlabels$Factor1))
classlabels$Factor2<-factor(classlabels$Factor2,levels=unique(classlabels$Factor2))
Factor1<-factor(classlabels$Factor1,levels=unique(classlabels$Factor1))
Factor2<-factor(classlabels$Factor2,levels=unique(classlabels$Factor2))
}else{
Factor1<-factor(classlabels$Factor1)
Factor2<-factor(classlabels$Factor2)
}
#this will create sum to zero parametrization. Coefficient Comparison Interpretation
#contrasts(Strain) <- contr.sum(2)
#contrasts(Treatment) <- contr.sum(2)
#design <- model.matrix(~Strain*Treatment)
#Intercept (WT.U+WT.S+Mu.U+Mu.S)/4; Grand mean
#Strain1 (WT.U+WT.S-Mu.U-Mu.S)/4; strain main effect
#Treatment1 (WT.U-WT.S+Mu.U-Mu.S)/4; treatment main effect
#Strain1:Treatment1 (WT.U-WT.S-Mu.U+Mu.S)/4; Interaction
if(limma.contrasts.type=="contr.sum"){
contrasts_factor1<-contr.sum(length(levels(factor(Factor1))))
contrasts_factor2<-contr.sum(length(levels(factor(Factor2))))
rownames(contrasts_factor1)<-levels(factor(Factor1))
rownames(contrasts_factor2)<-levels(factor(Factor2))
cnames_contr_factor1<-apply(contrasts_factor1,2,function(x){paste(names(x[which(abs(x)==1)]),collapse = "-")})
cnames_contr_factor2<-apply(contrasts_factor2,2,function(x){paste(names(x[which(abs(x)==1)]),collapse = "-")})
}else{
contrasts_factor1<-contr.treatment(length(levels(factor(Factor1))))
contrasts_factor2<-contr.treatment(length(levels(factor(Factor2))))
rownames(contrasts_factor1)<-levels(factor(Factor1))
rownames(contrasts_factor2)<-levels(factor(Factor2))
cnames_contr_factor1<-apply(contrasts_factor1,2,function(x){paste(names(x[1]),names(x[which(abs(x)==1)]),sep = "-")})
cnames_contr_factor2<-apply(contrasts_factor2,2,function(x){paste(names(x[1]),names(x[which(abs(x)==1)]),sep= "-")})
}
colnames(contrasts_factor1)<-cnames_contr_factor1
colnames(contrasts_factor2)<-cnames_contr_factor2
contrasts(Factor1) <- contrasts_factor1
contrasts(Factor2) <- contrasts_factor2
design <- model.matrix(~Factor1*Factor2)
# fit<-lmFit(data_m_fc,design=design)
#2. this will create contrasts with respect to the reference group (first level in each factor)
if(FALSE){
contrasts(Factor1) <- contr.treatment(length(levels(factor(Factor1))))
contrasts(Factor2) <- contr.treatment(length(levels(factor(Factor2))))
design.trt <- model.matrix(~Factor1*Factor2)
fit.trt<-lmFit(data_m_fc,design=design.trt)
s1=apply(fit.trt$coefficients,2,function(x){
length(which(is.na(x))==TRUE)/length(x)
})
}
#3. this will create design matrix with all factors
call<-lapply(classlabels[,c(2:3)],contrasts,contrasts=FALSE)
design.all<-model.matrix(~Factor1*Factor2,data=classlabels,contrasts.arg=call)
#grand mean: mean of means (mean of each level)
#mean_per_level<-lapply(2:ncol(design.all),function(x){mean(data_m_fc[1,which(design.all[,x]==1)])})
#mean_per_level<-unlist(mean_per_level)
#names(mean_per_level)<-colnames(design.all[,-1])
#grand_mean<-mean(mean_per_level,na.rm=TRUE)
#grand_mean<-with(d,tapply(data_m_fc[1,],list(Factor1,Factor2),mean))
# colnames(design)<-gsub(colnames(design),pattern="Factor1",replacement="")
#colnames(design)<-gsub(colnames(design),pattern="Factor2",replacement="")
# save(design,f,sampleclass,data_m_fc,classlabels,classlabels_orig,file="limma2way.Rda")
classlabels<-classlabels_temp
# print(data_m_fc[1:4,])
#colnames(design) <- levels(f)
#colnames(design)<-levels(factor(sampleclass))
options(digit=3)
parameterNames<-colnames(design)
# print("Design matrix")
# print(design)
if(pairedanalysis==TRUE)
{
f1<-subject_inf
#print(data_m_fc[1:10,1:10])
#save(design,subject_inf,file="limmadesign.Rda")
}
if(dim(design)[2]>=1){
#cont.matrix <- makeContrasts(Grp1vs2="ClassA-ClassB",Grp1vs3="ClassC-ClassD",Grp2vs3=("ClassA-ClassB")-("ClassC-ClassD"),levels=design)
#cont.matrix <- makeContrasts(Grp1vs2=ClassA-ClassB,Grp1vs3=ClassC-ClassD,Grp2vs3=(ClassA-ClassB)-(ClassC-ClassD),Grp3vs4=ClassA-ClassC,Group2vs4=ClassB-ClassD,levels=design)
#cont.matrix <- makeContrasts(Factor1=(ClassA+ClassB)-(ClassC+ClassD),Factor2=(ClassA+ClassC)-(ClassB+ClassD),Factor1x2=(ClassA-ClassB)-(ClassC-ClassD),levels=design)
design.pairs <-
function(levels) {
n <- length(levels)
design <- matrix(0,n,choose(n,2))
rownames(design) <- levels
colnames(design) <- 1:choose(n,2)
k <- 0
for (i in 1:(n-1))
for (j in (i+1):n) {
k <- k+1
design[i,k] <- 1
design[j,k] <- -1
colnames(design)[k] <- paste(levels[i],"-",levels[j],sep="")
}
design
}
#levels_1<-levels(factor(classlabels[,2]))
#levels_2<-levels(factor(classlabels[,3]))
#design2<-design.pairs(c(as.character(levels_1),as.character(levels_2)))
#cont.matrix<-makeContrasts(contrasts=colnames(design2),levels=c(as.character(levels_1),as.character(levels_2)))
if(pairedanalysis==TRUE){
#class_table_facts<-table(classlabels)
#f1<-c(seq(1,num_samps_group[[1]]),seq(1,num_samps_group[[2]]),seq(1,num_samps_group[[1]]),seq(1,num_samps_group[[2]]))
corfit<-duplicateCorrelation(data_m_fc,design=design,block=subject_inf,ndups=1)
#print(f1)
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus,method="robust")
}else
{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus)
}
s1=apply(fit$coefficients,2,function(x){
length(which(is.na(x))==TRUE)/length(x)
})
if(length(which(s1==1))>0){
design<-design[,-which(s1==1)]
#fit <- lmFit(data_m_fc,design)
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus,method="robust")
}else{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus)
}
}
}
else{
# fit <- lmFit(data_m_fc,design)
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,method="robust")
}else{
fit <- lmFit(data_m_fc,design)
}
s1=apply(fit$coefficients,2,function(x){
return(length(which(is.na(x))==TRUE)/length(x))
})
if(length(which(s1==1))>0){
design<-design[,-which(s1==1)]
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,method="robust")
}else{
fit<-lmFit(data_m_fc,design)
}
}
}
}
fit<-fit[,-1]
fit2=eBayes(fit)
results <- topTableF(fit2, n=Inf)
# decideresults<-decideTests(fit2)
# Ordinary fit
# save(fit2,fit,results,file="limma.eBayes.fit.Rda")
#fit2 <- contrasts.fit(fit, cont.matrix)
#fit2 <- eBayes(fit2)
#as.data.frame(fit2[1:10,])
# Various ways of summarising or plotting the results
#topTable(fit2,coef=2)
# ##save(fit2,file="fit2.Rda")
if(dim(design)[2]>2){
pvalues<-fit2$F.p.value
p.value<-fit2$F.p.value
}else{
pvalues<-fit2$p.value
p.value<-fit2$p.value
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
# fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
#print("Doing this:")
adjusted.p.value<-fdr_adjust_pvalue
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,data_m_fc_withfeats)
if(limmadecideTests==TRUE){
results2<-decideTests(fit2,adjust.method="BH",method="nestedF",p.value=fdrthresh) #
#tiff("comparison_contrast_overlap.tiff",width=plots.width,height=plots.height,res=plots.res, compression="lzw")
# save(results2,file="results2.Rda")
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab2<-colnames(results2)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab2,cnames_tab)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_withfeats)
if(limmarobust==FALSE){
filename<-"Tables/limma_2wayposthoc_decideresults.txt"
}else{
filename<-"Tables/limmarobust_2wayposthoc_decideresults.txt"
}
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
#if(length(class_labels_levels)<5){
if(ncol(results2)<6){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/LIMMA_venn_diagram.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
vennDiagram(results2,cex=0.8)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
else{
#dev.off()
results2<-fit2$p.value[,-c(1)]
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab2<-colnames(results2)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab2,cnames_tab)
#save(data_m_fc_withfeats,names)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_withfeats)
if(limmarobust==FALSE){
filename<-"Tables/limma_2wayposthoc_pvalues.txt"
}else{
filename<-"Tables/limmarobust_2wayposthoc_pvalues.txt"
}
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file=filename,sep="\t",row.names=FALSE)
}else{
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
#tiff("comparison_contrast_overlap.tiff",width=plots.width,height=plots.height,res=plots.res, compression="lzw")
#dev.off()
#results2<-fit2$p.value
classlabels_orig<-as.data.frame(classlabels_orig)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,data_m_fc_withfeats)
# data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#write.table(data_limma_fdrall_withfeats,file="Limma_posthoc2wayanova_results.txt",sep="\t",row.names=FALSE)
#print("checking here")
pvalues<-p.value
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
d4<-as.data.frame(data_limma_fdrall_withfeats)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
#results2<-decideTests(fit2,method="nestedF",adjust.method=fdrmethod,p.value=fdrthresh)
if(length(goodip)<1){
print("No features selected.")
}
}
if(featselmethod=="RF")
{
# cat("Performing RF analysis",sep="\n")
maxint<-apply(data_m_fc,1,max)
data_m_fc_withfeats<-as.data.frame(data_m_fc_withfeats)
data_m_fc<-as.data.frame(data_m_fc)
#write.table(classlabels,file="classlabels_rf.txt",sep="\t",row.names=FALSE)
#save(data_m_fc,classlabels,numtrees,analysismode,file="rfdebug.Rda")
if(rfconditional==TRUE){
cat("Performing random forest analysis using the cforest",sep="\n")
#rfcondres1<-do_rf_conditional(X=data_m_fc,rf_classlabels,ntrees=numtrees,analysismode) #,silent=TRUE)
filename<-"RFconditional_VIM_allfeats.txt"
}else{
#varimp_res2<-do_rf(X=data_m_fc,classlabels=rf_classlabels,ntrees=numtrees,analysismode)
if(analysismode=="classification"){
rf_classlabels<-classlabels[,1]
#print("Performing random forest analysis using the randomForest and Boruta functions")
varimp_res2<-do_rf_boruta(X=data_m_fc,classlabels=rf_classlabels) #,ntrees=numtrees,analysismode)
filename<-"RF_VIM_Boruta_allfeats.txt"
varimp_rf_thresh=0
}else{
rf_classlabels<-classlabels
#print("Performing random forest analysis using the randomForest function")
varimp_res2<-do_rf(X=data_m_fc,classlabels=rf_classlabels,ntrees=numtrees,analysismode)
# save(varimp_res2,data_m_fc,rf_classlabels,numtrees,analysismode,file="varimp_res2.Rda")
filename<-"RF_VIM_regression_allfeats.txt"
varimp_res2<-varimp_res2$rf_varimp #rf_varimp_scaled
#find the lowest value within the top max_varsel features to use as threshold
varimp_rf_thresh<-min(varimp_res2[order(varimp_res2,decreasing=TRUE)[1:(max_varsel+1)]],na.rm=TRUE)
}
}
names(varimp_res2)<-rownames(data_m_fc)
varimp_res3<-cbind(data_m_fc_withfeats[,c(1:2)],varimp_res2)
rownames(varimp_res3)<-rownames(data_m_fc)
filename<-paste("Tables/",filename,sep="")
write.table(varimp_res3, file=filename,sep="\t",row.names=TRUE)
goodip<-which(varimp_res2>varimp_rf_thresh)
if(length(goodip)<1){
print("No features were selected using the selection criteria.")
}
var_names<-rownames(data_m_fc) #paste(sprintf("%.3f",data_m_fc_withfeats[,1]),sprintf("%.1f",data_m_fc_withfeats[,2]),sep="_")
names(varimp_res2)<-as.character(var_names)
sel.diffdrthresh<-varimp_res2>varimp_rf_thresh
if(length(which(sel.diffdrthresh==TRUE))<1){
print("No features were selected using the selection criteria")
}
num_var_rf<-length(which(sel.diffdrthresh==TRUE))
if(num_var_rf>10){
num_var_rf=10
}
sorted_varimp_res<-varimp_res2[order(varimp_res2,decreasing=TRUE)[1:(num_var_rf)]]
sorted_varimp_res<-rev(sort(sorted_varimp_res))
barplot_text=paste("Variable Importance Measure (VIM) \n(top ",length(sorted_varimp_res)," shown)\n",sep="")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/RF_selectfeats_VIMbarplot.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
par(mar=c(10,7,4,2))
# ##save(varimp_res2,data_m_fc,rf_classlabels,sorted_varimp_res,file="test_rf.Rda")
#xaxt="n",
x=barplot(sorted_varimp_res, xlab="", main=barplot_text,cex.axis=0.9,
cex.names=0.9, ylab="",las=2,ylim=range(pretty(c(0,sorted_varimp_res))))
title(ylab = "VIM", cex.lab = 1.5,
line = 4.5)
#x <- barplot(table(mtcars$cyl), xaxt="n")
# labs <- names(sorted_varimp_res)
# text(cex=0.7, labs, xpd=FALSE, srt=45) #,x=x-.25, y=-1.25)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
par(mfrow = c(1,1))
rank_num<-rank(-varimp_res2)
data_limma_fdrall_withfeats<-cbind(varimp_res2,rank_num,data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("VIM","Rank",cnames_tab)
goodip<-which(sel.diffdrthresh==TRUE)
feat_sigfdrthresh[lf]<-length(which(sel.diffdrthresh==TRUE))
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
#write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
if(featselmethod=="MARS"){
# cat("Performing MARS analysis",sep="\n")
#print(head(classlabels))
mars_classlabels<-classlabels #[,1]
marsres1<-do_mars(X=data_m_fc,mars_classlabels, analysismode,kfold)
#save(data_m_fc,mars_classlabels, analysismode,kfold,marsres1,file="mars.Rda")
varimp_marsres1<-marsres1$mars_varimp
rownames(varimp_marsres1)<-rownames(data_m_fc)
mars_mznames<-rownames(varimp_marsres1)
#all_names<-paste("mz",seq(1,dim(data_m_fc)[1]),sep="")
#com1<-match(all_names,mars_mznames)
filename<-"MARS_variable_importance.txt"
if(is.na(max_varsel)==FALSE){
if(max_varsel>dim(data_m_fc)[1]){
max_varsel=dim(data_m_fc)[1]
}
varimp_res2<-varimp_marsres1[,4]
#sort by VIM; and keep the top max_varsel scores
sorted_varimp_res<-varimp_res2[order(varimp_res2,decreasing=TRUE)[1:(max_varsel)]]
#get the minimum VIM from the top max_varsel scores
min_thresh<-min(sorted_varimp_res[which(sorted_varimp_res>=mars.gcv.thresh)],na.rm=TRUE)
row_num_vec<-seq(1,length(varimp_res2))
#only use the top max_varsel scores
#goodip<-order(varimp_res2,decreasing=TRUE)[1:(max_varsel)]
#sel.diffdrthresh<-row_num_vec%in%goodip
#use a threshold of mars.gcv.thresh
sel.diffdrthresh<-varimp_marsres1[,4]>=min_thresh
goodip<-which(sel.diffdrthresh==TRUE)
}else{
#use a threshold of mars.gcv.thresh
sel.diffdrthresh<-varimp_marsres1[,4]>=mars.gcv.thresh
goodip<-which(sel.diffdrthresh==TRUE)
}
num_var_rf<-length(which(sel.diffdrthresh==TRUE))
if(num_var_rf>10){
num_var_rf=10
}
sorted_varimp_res<-varimp_res2[order(varimp_res2,decreasing=TRUE)[1:(num_var_rf)]]
sorted_varimp_res<-sort(sorted_varimp_res)
barplot_text=paste("Generalized cross validation (top ",length(sorted_varimp_res)," shown)\n",sep="")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/MARS_selectfeats_GCVbarplot.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
# barplot(sorted_varimp_res, xlab="Selected features", main=barplot_text,cex.axis=0.5,cex.names=0.4, ylab="GCV",range(pretty(c(0,sorted_varimp_res))),space=0.1)
par(mar=c(10,7,4,2))
# ##save(varimp_res2,data_m_fc,rf_classlabels,sorted_varimp_res,file="test_rf.Rda")
#xaxt="n",
x=barplot(sorted_varimp_res, xlab="", main=barplot_text,cex.axis=0.9,
cex.names=0.9, ylab="",las=2,ylim=range(pretty(c(0,sorted_varimp_res))))
title(ylab = "GCV", cex.lab = 1.5,
line = 4.5)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
data_limma_fdrall_withfeats<-cbind(varimp_marsres1[,c(4,6)],data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("GCV importance","RSS importance",cnames_tab)
feat_sigfdrthresh[lf]<-length(which(sel.diffdrthresh==TRUE))
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
goodip<-which(sel.diffdrthresh==TRUE)
}
if(featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls" | featselmethod=="spls" | featselmethod=="o1spls" | featselmethod=="o2spls")
{
cat(paste("Performing ",featselmethod," analysis",sep=""),sep="\n")
classlabels<-as.data.frame(classlabels)
if(is.na(max_comp_sel)==TRUE){
max_comp_sel=pls_ncomp
}
rand_pls_sel<-{} #new("list")
if(featselmethod=="spls" | featselmethod=="o1spls" | featselmethod=="o2spls"){
if(featselmethod=="o1spls"){
featselmethod="o1pls"
}else{
if(featselmethod=="o2spls"){
featselmethod="o2pls"
}
}
if(pairedanalysis==TRUE){
classlabels_temp<-cbind(classlabels_sub[,2],classlabels)
set.seed(999)
plsres1<-do_plsda(X=data_m_fc,Y=classlabels_sub,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,keepX=max_varsel,sparseselect=TRUE,
analysismode,sample.col.opt=sample.col.opt,sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,
optselect=optselect,class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,output.device.type=output.device.type,
plots.res=plots.res,plots.width=plots.width,plots.height=plots.height,plots.type=plots.type,pls.ellipse=pca.ellipse,alphabetical.order=alphabetical.order)
if (is(plsres1, "try-error")){
print(paste("sPLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
#break;
}
opt_comp<-plsres1$opt_comp
#for(randindex in 1:100)
#save(plsres1,file="plsres1.Rda")
if(is.na(pls.permut.count)==FALSE){
set.seed(999)
seedvec<-runif(pls.permut.count,10,10*pls.permut.count)
if(pls.permut.count>0){
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterEvalQ(cl,library(plsgenomics))
clusterEvalQ(cl,library(dplyr))
clusterEvalQ(cl,library(plyr))
clusterExport(cl,"pls.lda.cv",envir = .GlobalEnv)
clusterExport(cl,"plsda_cv",envir = .GlobalEnv)
#clusterExport(cl,"%>%",envir = .GlobalEnv) #%>%
clusterExport(cl,"do_plsda_rand",envir = .GlobalEnv)
clusterEvalQ(cl,library(mixOmics))
clusterEvalQ(cl,library(pls))
rand_pls_sel<-parLapply(cl,1:pls.permut.count,function(x)
{
set.seed(seedvec[x])
plsresrand<-do_plsda_rand(X=data_m_fc,Y=classlabels_sub[sample(x=seq(1,dim(classlabels_sub)[1]),
size=dim(classlabels_sub)[1]),],oscmode=featselmethod,
numcomp=opt_comp,kfold=kfold,evalmethod=pred.eval.method,keepX=max_varsel,sparseselect=TRUE,
analysismode,sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,
optselect=FALSE,class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,plotindiv=FALSE,alphabetical.order=alphabetical.order) #,silent=TRUE)
#rand_pls_sel<-cbind(rand_pls_sel,plsresrand$vip_res[,1])
if (is(plsresrand, "try-error")){
return(rep(0,dim(data_m_fc)[1]))
}else{
return(plsresrand$vip_res[,1])
}
})
stopCluster(cl)
}
}
}else{
#plsres1<-try(do_plsda(X=data_m_fc,Y=classlabels,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,keepX=max_varsel,sparseselect=TRUE,analysismode,sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=optselect,class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,pls.vip.selection=pls.vip.selection),silent=TRUE)
# ##save(data_m_fc,classlabels,pls_ncomp,kfold,file="pls1.Rda")
set.seed(999)
plsres1<-do_plsda(X=data_m_fc,Y=classlabels,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,
keepX=max_varsel,sparseselect=TRUE,analysismode,sample.col.opt=sample.col.opt,sample.col.vec=col_vec,
scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=optselect,
class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,
pls.vip.selection=pls.vip.selection,output.device.type=output.device.type,
plots.res=plots.res,plots.width=plots.width,plots.height=plots.height,plots.type=plots.type,pls.ellipse=pca.ellipse,alphabetical.order=alphabetical.order)
opt_comp<-plsres1$opt_comp
if (is(plsres1, "try-error")){
print(paste("sPLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
break;
}
#for(randindex in 1:100)
if(is.na(pls.permut.count)==FALSE){
set.seed(999)
seedvec<-runif(pls.permut.count,10,10*pls.permut.count)
if(pls.permut.count>0){
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterEvalQ(cl,library(plsgenomics))
clusterEvalQ(cl,library(dplyr))
clusterEvalQ(cl,library(plyr))
clusterExport(cl,"pls.lda.cv",envir = .GlobalEnv)
clusterExport(cl,"plsda_cv",envir = .GlobalEnv)
#clusterExport(cl,"%>%",envir = .GlobalEnv) #%>%
clusterExport(cl,"do_plsda_rand",envir = .GlobalEnv)
clusterEvalQ(cl,library(mixOmics))
clusterEvalQ(cl,library(pls))
rand_pls_sel<-parLapply(cl,1:pls.permut.count,function(x)
{
set.seed(seedvec[x])
plsresrand<-do_plsda_rand(X=data_m_fc,Y=classlabels[sample(x=seq(1,dim(classlabels)[1]),size=dim(classlabels)[1]),],oscmode=featselmethod,numcomp=opt_comp,kfold=kfold,
evalmethod=pred.eval.method,keepX=max_varsel,sparseselect=TRUE,analysismode,sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,
pairedanalysis=pairedanalysis,optselect=FALSE,class_labels_levels_main=class_labels_levels_main,
legendlocation=legendlocation,plotindiv=FALSE,alphabetical.order=alphabetical.order)
#rand_pls_sel<-cbind(rand_pls_sel,plsresrand$vip_res[,1])
#return(plsresrand$vip_res[,1])
if (is(plsresrand, "try-error")){
return(rep(0,dim(data_m_fc)[1]))
}else{
return(plsresrand$vip_res[,1])
}
})
stopCluster(cl)
}
}
}
pls_vip_thresh<-0
if (is(plsres1, "try-error")){
print(paste("sPLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
break;
}else{
opt_comp<-plsres1$opt_comp
}
}else{
#PLS
if(pairedanalysis==TRUE){
classlabels_temp<-cbind(classlabels_sub[,2],classlabels)
plsres1<-do_plsda(X=data_m_fc,Y=classlabels_temp,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,
keepX=max_varsel,sparseselect=FALSE,analysismode=analysismode,vip.thresh=pls_vip_thresh,sample.col.opt=sample.col.opt,
sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=optselect,
class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,pls.vip.selection=pls.vip.selection,
output.device.type=output.device.type,plots.res=plots.res,plots.width=plots.width,
plots.height=plots.height,plots.type=plots.type,pls.ellipse=pca.ellipse,alphabetical.order=alphabetical.order)
if (is(plsres1, "try-error")){
print(paste("PLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
break;
}else{
opt_comp<-plsres1$opt_comp
}
}else{
plsres1<-do_plsda(X=data_m_fc,Y=classlabels,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,keepX=max_varsel,
sparseselect=FALSE,analysismode=analysismode,vip.thresh=pls_vip_thresh,sample.col.opt=sample.col.opt,
sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=optselect,
class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,pls.vip.selection=pls.vip.selection,
output.device.type=output.device.type,plots.res=plots.res,plots.width=plots.width,plots.height=plots.height,
plots.type=plots.type,pls.ellipse=pca.ellipse,alphabetical.order=alphabetical.order)
if (is(plsres1, "try-error")){
print(paste("PLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
break;
}else{
opt_comp<-plsres1$opt_comp
}
#for(randindex in 1:100){
if(is.na(pls.permut.count)==FALSE){
set.seed(999)
seedvec<-runif(pls.permut.count,10,10*pls.permut.count)
if(pls.permut.count>0){
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterEvalQ(cl,library(plsgenomics))
clusterEvalQ(cl,library(dplyr))
clusterEvalQ(cl,library(plyr))
clusterExport(cl,"pls.lda.cv",envir = .GlobalEnv)
clusterExport(cl,"plsda_cv",envir = .GlobalEnv)
#clusterExport(cl,"%>%",envir = .GlobalEnv) #%>%
clusterExport(cl,"do_plsda_rand",envir = .GlobalEnv)
clusterEvalQ(cl,library(mixOmics))
clusterEvalQ(cl,library(pls))
#here
rand_pls_sel<-parLapply(cl,1:pls.permut.count,function(x)
{
set.seed(seedvec[x])
#t1fname<-paste("ranpls",x,".Rda",sep="")
####savelist=ls(),file=t1fname)
print(paste("PLSDA permutation number: ",x,sep=""))
plsresrand<-do_plsda_rand(X=data_m_fc,Y=classlabels[sample(x=seq(1,dim(classlabels)[1]),size=dim(classlabels)[1]),],
oscmode=featselmethod,numcomp=opt_comp,kfold=kfold,evalmethod=pred.eval.method,
keepX=max_varsel,sparseselect=FALSE,analysismode,sample.col.vec=col_vec,
scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=FALSE,
class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,plotindiv=FALSE,alphabetical.order=alphabetical.order) #,silent=TRUE)
if (is(plsresrand, "try-error")){
return(1)
}else{
return(plsresrand$vip_res[,1])
}
})
stopCluster(cl)
}
####saverand_pls_sel,file="rand_pls_sel1.Rda")
}
}
opt_comp<-plsres1$opt_comp
}
if(length(plsres1$bad_variables)>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-c(plsres1$bad_variables),]
data_m_fc<-data_m_fc[-c(plsres1$bad_variables),]
}
if(is.na(pls.permut.count)==FALSE){
if(pls.permut.count>0){
###saverand_pls_sel,file="rand_pls_sel.Rda")
#rand_pls_sel<-ldply(rand_pls_sel,rbind) #unlist(rand_pls_sel)
rand_pls_sel<-as.data.frame(rand_pls_sel)
rand_pls_sel<-t(rand_pls_sel)
rand_pls_sel<-as.data.frame(rand_pls_sel)
if(featselmethod=="spls"){
rand_pls_sel[rand_pls_sel!=0]<-1
}else{
rand_pls_sel[rand_pls_sel<pls_vip_thresh]<-0
rand_pls_sel[rand_pls_sel>=pls_vip_thresh]<-1
}
####saverand_pls_sel,file="rand_pls_sel2.Rda")
rand_pls_sel_prob<-apply(rand_pls_sel,2,sum)/pls.permut.count
#rand_pls_sel_fdr<-p.adjust(rand_pls_sel_prob,method=fdrmethod)
pvalues<-rand_pls_sel_prob
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue<-pvalues
#fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
rand_pls_sel_fdr<-fdr_adjust_pvalue
vip_res<-cbind(data_m_fc_withfeats[,c(1:2)],plsres1$vip_res,rand_pls_sel_prob,rand_pls_sel_fdr)
}else{
vip_res<-cbind(data_m_fc_withfeats[,c(1:2)],plsres1$vip_res)
rand_pls_sel_fdr<-rep(0,dim(data_m_fc_withfeats[,c(1:2)])[1])
rand_pls_sel_prob<-rep(0,dim(data_m_fc_withfeats[,c(1:2)])[1])
}
}else{
vip_res<-cbind(data_m_fc_withfeats[,c(1:2)],plsres1$vip_res)
rand_pls_sel_fdr<-rep(0,dim(data_m_fc_withfeats[,c(1:2)])[1])
rand_pls_sel_prob<-rep(0,dim(data_m_fc_withfeats[,c(1:2)])[1])
}
write.table(vip_res,file="Tables/vip_res.txt",sep="\t",row.names=FALSE)
# write.table(r2_q2_valid_res,file="pls_r2_q2_res.txt",sep="\t",row.names=TRUE)
varimp_plsres1<-plsres1$selected_variables
opt_comp<-plsres1$opt_comp
if(max_comp_sel>opt_comp){
max_comp_sel<-opt_comp
}
# print("opt comp is")
#print(opt_comp)
if(featselmethod=="spls"){
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("Loading (absolute)","Rank",cnames_tab)
#
if(opt_comp>1){
#abs
vip_res1<-abs(plsres1$vip_res)
if(max_comp_sel>1){
vip_res1<-apply(vip_res1[,c(1:max_comp_sel)],1,max)
}else{
vip_res1<-vip_res1[,c(1)]
}
}else{
vip_res1<-abs(plsres1$vip_res)
}
pls_vip<-vip_res1 #(plsres1$vip_res)
if(is.na(pls.permut.count)==FALSE){
#based on loadings for sPLS
sel.diffdrthresh<-pls_vip!=0 & rand_pls_sel_fdr<fdrthresh & rand_pls_sel_prob<pvalue.thresh
}else{
# print("DOING SPLS #here999")
sel.diffdrthresh<-pls_vip!=0
}
goodip<-which(sel.diffdrthresh==TRUE)
# save(goodip,pls_vip,rand_pls_sel_fdr,rand_pls_sel_prob,sel.diffdrthresh,file="splsdebug1.Rda")
}else{
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("VIP","Rank",cnames_tab)
if(max_comp_sel>opt_comp){
max_comp_sel<-opt_comp
}
#pls_vip<-plsres1$vip_res[,c(1:max_comp_sel)]
if(opt_comp>1){
vip_res1<-(plsres1$vip_res)
if(max_comp_sel>1){
if(pls.vip.selection=="mean"){
vip_res1<-apply(vip_res1[,c(1:max_comp_sel)],1,mean)
}else{
vip_res1<-apply(vip_res1[,c(1:max_comp_sel)],1,max)
}
}else{
vip_res1<-vip_res1[,c(1)]
}
}else{
vip_res1<-plsres1$vip_res
}
#vip_res1<-plsres1$vip_res
pls_vip<-vip_res1
#pls
sel.diffdrthresh<-pls_vip>=pls_vip_thresh & rand_pls_sel_fdr<fdrthresh & rand_pls_sel_prob<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
}
rank_vec<-order(pls_vip,decreasing=TRUE)
rank_vec2<-seq(1,length(rank_vec))
ranked_vec<-pls_vip[rank_vec]
rank_num<-match(pls_vip,ranked_vec)
data_limma_fdrall_withfeats<-cbind(pls_vip,rank_num,data_m_fc_withfeats)
feat_sigfdrthresh[lf]<-length(which(sel.diffdrthresh==TRUE)) #length(plsres1$selected_variables) #length(which(sel.diffdrthresh==TRUE))
filename<-paste("Tables/",parentfeatselmethod,"_variable_importance.txt",sep="")
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
#stop("Please choose limma, RF, RFcond, or MARS for featselmethod.")
if(featselmethod=="lmreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat"| featselmethod=="logitreg" | featselmethod=="wilcox" | featselmethod=="ttest" |
featselmethod=="ttestrepeat" | featselmethod=="poissonreg" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat")
{
pvalues<-{}
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
if(featselmethod=="ttestrepeat"){
featselmethod="ttest"
pairedanalysis=TRUE
}
if(featselmethod=="wilcoxrepeat"){
featselmethod="wilcox"
pairedanalysis=TRUE
}
if(featselmethod=="lm1wayanova")
{
# cat("Performing one-way ANOVA analysis",sep="\n")
#print(dim(data_m_fc))
#print(dim(classlabels_response_mat))
#print(dim(classlabels))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
numcores<-round(detectCores()*0.6)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexponewayanova",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
clusterExport(cl,"TukeyHSD",envir = .GlobalEnv)
clusterExport(cl,"aov",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
data_mat_anova<-as.data.frame(data_mat_anova)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-c("Response","Factor1")
data_mat_anova$Factor1<-as.factor(data_mat_anova$Factor1)
anova_res<-diffexponewayanova(dataA=data_mat_anova)
return(anova_res)
},classlabels_response_mat)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
#print(head(res1))
for(i in 1:length(res1)){
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues<-c(pvalues,res1[[i]]$mainpvalues[1])
posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthocfactor1)
}
pvalues<-unlist(pvalues)
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-"lm1wayanova_pvalall_posthoc.txt"
}else{
filename<-"lm1wayanova_fdrall_posthoc.txt"
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
if(length(posthoc_names)<1){
posthoc_names<-c("Factor1vs2")
}
cnames_tab<-c("P.value","adjusted.P.value",posthoc_names,cnames_tab)
#cnames_tab<-c("P.value","adjusted.P.value","posthoc.pvalue",cnames_tab)
pvalues<-as.data.frame(pvalues)
#pvalues<-t(pvalues)
pvalues<-as.data.frame(pvalues)
final.pvalues<-pvalues
#final.pvalues<-pvalues
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,posthoc_pval_mat,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#gohere
if(length(check_names)>0){
# data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
#colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
filename<-paste("Tables/",filename,sep="")
if(length(rem_col_ind)>0){
#write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file=filename,sep="\t",row.names=FALSE)
}else{
#write.table(data_limma_fdrall_withfeats,file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
if(featselmethod=="ttest" && pairedanalysis==TRUE)
{
# cat("Performing paired t-test analysis",sep="\n")
#print(dim(data_m_fc))
#print(dim(classlabels_response_mat))
#print(dim(classlabels))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
numcores<-round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"t.test",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
data_mat_anova<-as.data.frame(data_mat_anova)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-c("Response","Factor1")
#print(data_mat_anova)
data_mat_anova$Factor1<-as.factor(data_mat_anova$Factor1)
#anova_res<-diffexponewayanova(dataA=data_mat_anova)
x1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[1])]
y1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[2])]
w1<-t.test(x=x1,y=y1,alternative="two.sided",paired=TRUE)
return(w1$p.value)
},classlabels_response_mat)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
pvalues<-unlist(res1)
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-"pairedttest_pvalall_withfeats.txt"
}else{
filename<-"pairedttest_fdrall_withfeats.txt"
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
if(length(posthoc_names)<1){
posthoc_names<-c("Factor1vs2")
}
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
#cnames_tab<-c("P.value","adjusted.P.value","posthoc.pvalue",cnames_tab)
pvalues<-as.data.frame(pvalues)
#pvalues<-t(pvalues)
# print(dim(pvalues))
#print(dim(data_m_fc_withfeats))
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
if(featselmethod=="ttest" && pairedanalysis==FALSE)
{
#cat("Performing t-test analysis",sep="\n")
#print(dim(data_m_fc))
#print(dim(classlabels_response_mat))
#print(dim(classlabels))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
numcores<-round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"t.test",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
data_mat_anova<-as.data.frame(data_mat_anova)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-c("Response","Factor1")
#print(data_mat_anova)
data_mat_anova$Factor1<-as.factor(data_mat_anova$Factor1)
#anova_res<-diffexponewayanova(dataA=data_mat_anova)
x1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[1])]
y1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[2])]
w1<-t.test(x=x1,y=y1,alternative="two.sided")
return(w1$p.value)
},classlabels_response_mat)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
pvalues<-unlist(res1)
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-"ttest_pvalall_withfeats.txt"
}else{
filename<-"ttest_fdrall_withfeats.txt"
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
if(length(posthoc_names)<1){
posthoc_names<-c("Factor1vs2")
}
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
#cnames_tab<-c("P.value","adjusted.P.value","posthoc.pvalue",cnames_tab)
pvalues<-as.data.frame(pvalues)
#pvalues<-t(pvalues)
# print(dim(pvalues))
#print(dim(data_m_fc_withfeats))
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
if(featselmethod=="wilcox")
{
# cat("Performing Wilcox rank-sum analysis",sep="\n")
#print(dim(data_m_fc))
#print(dim(classlabels_response_mat))
#print(dim(classlabels))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
numcores<-round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"wilcox.test",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
data_mat_anova<-as.data.frame(data_mat_anova)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-c("Response","Factor1")
#print(data_mat_anova)
data_mat_anova$Factor1<-as.factor(data_mat_anova$Factor1)
#anova_res<-diffexponewayanova(dataA=data_mat_anova)
x1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[1])]
y1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[2])]
w1<-wilcox.test(x=x1,y=y1,alternative="two.sided")
return(w1$p.value)
},classlabels_response_mat)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
pvalues<-unlist(res1)
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-"wilcox_pvalall_withfeats.txt"
}else{
filename<-"wilcox_fdrall_withfeats.txt"
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
if(length(posthoc_names)<1){
posthoc_names<-c("Factor1vs2")
}
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
#cnames_tab<-c("P.value","adjusted.P.value","posthoc.pvalue",cnames_tab)
pvalues<-as.data.frame(pvalues)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
#lmreg:feature selections
if(featselmethod=="lmreg")
{
if(logistic_reg==TRUE){
if(length(levels(classlabels_response_mat[,1]))>2){
print("More than 2 classes found. Skipping logistic regression analysis.")
next;
}
# cat("Performing logistic regression analysis",sep="\n")
classlabels_response_mat[,1]<-as.numeric((classlabels_response_mat[,1]))-1
fileheader="logitreg"
}else{
if(poisson_reg==TRUE){
# cat("Performing poisson regression analysis",sep="\n")
fileheader="poissonreg"
classlabels_response_mat[,1]<-as.numeric((classlabels_response_mat[,1]))
}else{
# cat("Performing linear regression analysis",sep="\n")
fileheader="lmreg"
}
}
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmreg",envir = .GlobalEnv)
clusterExport(cl,"lm",envir = .GlobalEnv)
clusterExport(cl,"glm",envir = .GlobalEnv)
clusterExport(cl,"summary",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
clusterEvalQ(cl,library(sandwich))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat,logistic_reg,poisson_reg,robust.estimate,vcovHC.type){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
#lmreg feature selection
anova_res<-diffexplmreg(dataA=data_mat_anova,logistic_reg,poisson_reg,robust.estimate,vcovHC.type)
return(anova_res)
},classlabels_response_mat,logistic_reg,poisson_reg,robust.estimate,vcovHC.type)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
#save(res1,file="res1.Rda")
all_inf_mat<-{}
for(i in 1:length(res1)){
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues<-c(pvalues,res1[[i]]$mainpvalues[1])
#posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthocfactor1)
cur_pvals<-t(res1[[i]]$mainpvalues)
cur_est<-t(res1[[i]]$estimates)
cur_stderr<-t(res1[[i]]$stderr)
cur_tstat<-t(res1[[i]]$statistic)
#cur_pvals<-as.data.frame(cur_pvals)
cur_res<-cbind(cur_pvals,cur_est,cur_stderr,cur_tstat)
all_inf_mat<-rbind(all_inf_mat,cur_res)
}
cnames_1<-c(paste("P.value_",colnames(cur_pvals),sep=""),paste("Estimate_",colnames(cur_pvals),sep=""),paste("StdError_var_",colnames(cur_pvals),sep=""),paste("t-statistic_",colnames(cur_pvals),sep=""))
# print("here after lm reg")
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste(fileheader,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste(fileheader,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
pvalues<-as.data.frame(pvalues)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
#write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
if(analysismode=="regression"){
filename<-paste(fileheader,"_results_allfeatures.txt",sep="")
filename<-paste("Tables/",filename,sep="")
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
filename<-paste(fileheader,"_pval_coef_stderr.txt",sep="")
data_allinf_withfeats<-cbind(all_inf_mat,data_m_fc_withfeats)
filename<-paste("Tables/",filename,sep="")
# write.table(data_allinf_withfeats, file=filename,sep="\t",row.names=FALSE)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c(cnames_1,cnames_tab)
class_column_names<-colnames(classlabels_response_mat)
colnames(data_allinf_withfeats)<-as.character(cnames_tab)
###save(data_allinf_withfeats,cnames_tab,cnames_1,file="data_allinf_withfeats.Rda")
pval_columns<-grep(colnames(data_allinf_withfeats),pattern="P.value")
fdr_adjusted_pvalue<-get_fdr_adjusted_pvalue(data_matrix=data_allinf_withfeats,fdrmethod=fdrmethod)
# data_allinf_withfeats1<-cbind(data_allinf_withfeats[,pval_columns],fdr_adjusted_pvalue,data_allinf_withfeats[,-c(pval_columns)])
cnames_tab1<-c(cnames_tab[pval_columns],colnames(fdr_adjusted_pvalue),cnames_tab[-pval_columns])
pval_columns<-grep(colnames(data_allinf_withfeats),pattern="P.value")
fdr_adjusted_pvalue<-get_fdr_adjusted_pvalue(data_matrix=data_allinf_withfeats,fdrmethod=fdrmethod)
data_allinf_withfeats<-cbind(data_allinf_withfeats[,pval_columns],fdr_adjusted_pvalue,data_allinf_withfeats[,-c(pval_columns)])
cnames_tab1<-c(cnames_tab[pval_columns],colnames(fdr_adjusted_pvalue),cnames_tab[-pval_columns])
filename<-paste(fileheader,"_pval_coef_stderr.txt",sep="")
filename<-paste("Tables/",filename,sep="")
colnames(data_allinf_withfeats)<-cnames_tab1
###save(data_allinf_withfeats,file="d2.Rda")
write.table(data_allinf_withfeats, file=filename,sep="\t",row.names=FALSE)
}
if(featselmethod=="lm2wayanova")
{
cat("Performing two-way ANOVA analysis with Tukey post hoc comparisons",sep="\n")
#print(dim(data_m_fc))
# print(dim(classlabels_response_mat))
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmtwowayanova",envir = .GlobalEnv)
clusterExport(cl,"TukeyHSD",envir = .GlobalEnv)
clusterExport(cl,"plotTukeyHSD1",envir = .GlobalEnv)
clusterExport(cl,"aov",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
clusterEvalQ(cl,library(ggpubr))
clusterEvalQ(cl,library(ggplot2))
# clusterEvalQ(cl,library(cowplot))
#res1<-apply(data_m_fc,1,function(x){
res1<-parRapply(cl,data_m_fc,function(x,classlabels_response_mat){
xvec<-x
colnames(classlabels_response_mat)<-paste("Factor",seq(1,dim(classlabels_response_mat)[2]),sep="")
data_mat_anova<-cbind(xvec,classlabels_response_mat)
#print("2way anova")
# print(data_mat_anova[1:2,])
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
####savedata_mat_anova,file="data_mat_anova.Rda")
#diffexplmtwowayanova
anova_res<-diffexplmtwowayanova(dataA=data_mat_anova)
return(anova_res)
},classlabels_response_mat)
stopCluster(cl)
# print("done")
####saveres1,file="res1.Rda")
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues1<-{}
pvalues2<-{}
pvalues3<-{}
save(res1,file="tukeyhsd_plots.Rda")
for(i in 1:length(res1)){
#print(i)
#print(res1[[i]]$mainpvalues)
#print(res1[[i]]$posthoc)
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues1<-c(pvalues1,as.vector(res1[[i]]$mainpvalues[1]))
pvalues2<-c(pvalues2,as.vector(res1[[i]]$mainpvalues[2]))
pvalues3<-c(pvalues3,as.vector(res1[[i]]$mainpvalues[3]))
posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthoc)
}
twoanova_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
#write.table(twoanova_res,file="Tables/twoanova_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
pvalues1<-main_pval_mat[,1]
pvalues2<-main_pval_mat[,2]
pvalues3<-main_pval_mat[,3]
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="none")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="none")
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BH")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="BH")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="BH")
}else{
if(fdrmethod=="ST"){
fdr_adjust_pvalue1<-try(qvalue(pvalues1),silent=TRUE)
if(is(fdr_adjust_pvalue1,"try-error")){
fdr_adjust_pvalue1<-qvalue(pvalues1,lambda=max(pvalues1,na.rm=TRUE))
}
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qvalues
fdr_adjust_pvalue2<-try(qvalue(pvalues2),silent=TRUE)
if(is(fdr_adjust_pvalue2,"try-error")){
fdr_adjust_pvalue2<-qvalue(pvalues2,lambda=max(pvalues2,na.rm=TRUE))
}
fdr_adjust_pvalue2<-fdr_adjust_pvalue2$qvalues
fdr_adjust_pvalue3<-try(qvalue(pvalues3),silent=TRUE)
if(is(fdr_adjust_pvalue3,"try-error")){
fdr_adjust_pvalue3<-qvalue(pvalues3,lambda=max(pvalues3,na.rm=TRUE))
}
fdr_adjust_pvalue3<-fdr_adjust_pvalue3$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue1<-fdrtool(as.vector(pvalues1),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qval
fdr_adjust_pvalue2<-fdrtool(as.vector(pvalues2),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue2<-fdr_adjust_pvalue2$qval
fdr_adjust_pvalue3<-fdrtool(as.vector(pvalues3),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue3<-fdr_adjust_pvalue3$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="none")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BY")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="BY")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="BY")
}else{
if(fdrmethod=="bonferroni"){
# fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="bonferroni")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="bonferroni")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste(featselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste(featselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
cnames_tab<-c("Factor1.P.value","Factor1.adjusted.P.value","Factor2.P.value","Factor2.adjusted.P.value","Interact.P.value","Interact.adjusted.P.value",posthoc_names,cnames_tab)
if(FALSE)
{
pvalues1<-as.data.frame(pvalues1)
pvalues1<-t(pvalues1)
fdr_adjust_pvalue1<-as.data.frame(fdr_adjust_pvalue1)
pvalues2<-as.data.frame(pvalues2)
pvalues2<-t(pvalues2)
fdr_adjust_pvalue2<-as.data.frame(fdr_adjust_pvalue2)
pvalues3<-as.data.frame(pvalues3)
pvalues3<-t(pvalues3)
fdr_adjust_pvalue3<-as.data.frame(fdr_adjust_pvalue3)
posthoc_pval_mat<-as.data.frame(posthoc_pval_mat)
}
# ###savedata_m_fc_withfeats,file="data_m_fc_withfeats.Rda")
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_withfeats)
fdr_adjust_pvalue<-cbind(fdr_adjust_pvalue1,fdr_adjust_pvalue2,fdr_adjust_pvalue3)
fdr_adjust_pvalue<-apply(fdr_adjust_pvalue,1,function(x){min(x,na.rm=TRUE)})
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}else{
write.table(data_limma_fdrall_withfeats,file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}
filename<-paste("Tables/",filename,sep="")
#write.table(data_limma_fdrall_withfeats,file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
#write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
fdr_matrix<-cbind(fdr_adjust_pvalue1,fdr_adjust_pvalue2,fdr_adjust_pvalue3)
fdr_matrix<-as.data.frame(fdr_matrix)
fdr_adjust_pvalue_all<-apply(fdr_matrix,1,function(x){return(min(x,na.rm=TRUE)[1])})
pvalues<-cbind(pvalues1,pvalues2,pvalues3)
pvalues<-apply(pvalues,1,function(x){min(x,na.rm=TRUE)[1]})
#pvalues1<-t(pvalues1)
#print("here")
pvalues1<-as.data.frame(pvalues1)
pvalues1<-t(pvalues1)
#print(dim(pvalues1))
#pvalues2<-t(pvalues2)
pvalues2<-as.data.frame(pvalues2)
pvalues2<-t(pvalues2)
#pvalues3<-t(pvalues3)
pvalues3<-as.data.frame(pvalues3)
pvalues3<-t(pvalues3)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue_all<fdrthresh & final.pvalues<pvalue.thresh
if(length(which(fdr_adjust_pvalue1<fdrthresh))>0){
X1=data_m_fc_withfeats[which(fdr_adjust_pvalue1<fdrthresh),]
Y1=cbind(classlabels_orig[,1],as.character(classlabels_response_mat[,1]))
Y1<-as.data.frame(Y1)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor1selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
hca_f1<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X1,Y=Y1,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,
analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300,
alphacol=0.3, hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor1",
alphabetical.order=alphabetical.order,study.design=analysistype,labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 1.")
}
if(length(which(fdr_adjust_pvalue2<fdrthresh))>0){
X2=data_m_fc_withfeats[which(fdr_adjust_pvalue2<fdrthresh),]
Y2=cbind(classlabels_orig[,1],as.character(classlabels_response_mat[,2]))
Y2<-as.data.frame(Y2)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor2selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
hca_f2<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X2,Y=Y2,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,
hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor2",
alphabetical.order=alphabetical.order,study.design=analysistype,labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 2.")
}
class_interact<-paste(classlabels_response_mat[,1],":",classlabels_response_mat[,2],sep="")
#classlabels_response_mat[,1]:classlabels_response_mat[,2]
if(length(which(fdr_adjust_pvalue3<fdrthresh))>0){
X3=data_m_fc_withfeats[which(fdr_adjust_pvalue3<fdrthresh),]
Y3=cbind(classlabels_orig[,1],class_interact)
Y3<-as.data.frame(Y3)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor1xFactor2selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
hca_f3<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X3,Y=Y3,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,
hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor1 x Factor2",
alphabetical.order=alphabetical.order,study.design=analysistype,labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for the interaction.")
}
data_limma_fdrall_withfeats<-cbind(final.pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value.Min(Factor1,Factor2,Interaction)","adjusted.P.value.Min(Factor1,Factor2,Interaction)",cnames_tab)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#filename2<-"test2.txt"
#data_limma_fdrsig_withfeats<-data_limma_fdrall_withfeats[sel.diffdrthresh==TRUE,]
#write.table(data_limma_fdrsig_withfeats, file=filename2,sep="\t",row.names=FALSE)
fdr_adjust_pvalue<-fdr_adjust_pvalue_all
}
if(featselmethod=="lm1wayanovarepeat"| featselmethod=="lmregrepeat"){
# save(data_m_fc,classlabels_response_mat,subject_inf,modeltype,file="1waydebug.Rda")
#clusterExport(cl,"classlabels_response_mat",envir = .GlobalEnv)
#clusterExport(cl,"subject_inf",envir = .GlobalEnv)
#res1<-apply(data_m_fc,1,function(x){
if(featselmethod=="lm1wayanovarepeat"){
cat("Performing one-way ANOVA with repeated measurements analysis using nlme::lme()",,sep="\n")
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmonewayanovarepeat",envir = .GlobalEnv)
clusterEvalQ(cl,library(nlme))
clusterEvalQ(cl,library(multcomp))
clusterEvalQ(cl,library(lsmeans))
clusterExport(cl,"lme",envir = .GlobalEnv)
clusterExport(cl,"interaction",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat,subject_inf,modeltype){
#res1<-apply(data_m_fc,1,function(x){
xvec<-x
colnames(classlabels_response_mat)<-paste("Factor",seq(1,dim(classlabels_response_mat)[2]),sep="")
data_mat_anova<-cbind(xvec,classlabels_response_mat)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
anova_res<-diffexplmonewayanovarepeat(dataA=data_mat_anova,subject_inf=subject_inf,modeltype=modeltype)
return(anova_res)
},classlabels_response_mat,subject_inf,modeltype)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
bad_lm1feats<-{}
###saveres1,file="res1.Rda")
for(i in 1:length(res1)){
if(is.na(res1[[i]]$mainpvalues)==FALSE){
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues<-c(pvalues,res1[[i]]$mainpvalues[1])
posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthoc)
}else{
bad_lm1feats<-c(bad_lm1feats,i)
}
}
if(length(bad_lm1feats)>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-c(bad_lm1feats),]
data_m_fc<-data_m_fc[-c(bad_lm1feats),]
}
#twoanovarepeat_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
#write.table(twoanovarepeat_res,file="Tables/lm2wayanovarepeat_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
pvalues1<-main_pval_mat[,1]
onewayanova_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
# write.table(twoanova_res,file="twoanova_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BH")
}else{
if(fdrmethod=="ST"){
#print(head(pvalues1))
#print(head(pvalues2))
#print(head(pvalues3))
#print(summary(pvalues1))
#print(summary(pvalues2))
#print(summary(pvalues3))
fdr_adjust_pvalue1<-try(qvalue(pvalues1),silent=TRUE)
if(is(fdr_adjust_pvalue1,"try-error")){
fdr_adjust_pvalue1<-qvalue(pvalues1,lambda=max(pvalues1,na.rm=TRUE))
}
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue1<-fdrtool(as.vector(pvalues1),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BY")
}else{
if(fdrmethod=="bonferroni"){
# fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste("Tables/",featselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste("Tables/",featselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
#
cnames_tab<-c("Factor1.P.value","Factor1.adjusted.P.value",posthoc_names,cnames_tab)
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,posthoc_pval_mat,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#gohere
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)],file="Tables/onewayanovarepeat_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}else{
write.table(data_limma_fdrall_withfeats,file="Tables/onewayanovarepeat_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
filename<-paste("Tables/",filename,sep="")
fdr_adjust_pvalue<-fdr_adjust_pvalue1
final.pvalues<-pvalues1
sel.diffdrthresh<-fdr_adjust_pvalue1<fdrthresh & final.pvalues<pvalue.thresh
}else{
cat("Performing linear regression with repeated measurements analysis using nlme::lme()",sep="\n")
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmregrepeat",envir = .GlobalEnv)
clusterEvalQ(cl,library(nlme))
clusterEvalQ(cl,library(multcomp))
clusterEvalQ(cl,library(lsmeans))
clusterExport(cl,"lme",envir = .GlobalEnv)
clusterExport(cl,"interaction",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat,subject_inf,modeltype){
#res1<-apply(data_m_fc,1,function(x){
xvec<-x
colnames(classlabels_response_mat)<-paste("Factor",seq(1,dim(classlabels_response_mat)[2]),sep="")
data_mat_anova<-cbind(xvec,classlabels_response_mat)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
# save(data_mat_anova,subject_inf,modeltype,file="lmregdebug.Rda")
if(ncol(data_mat_anova)>2){
covar.matrix=classlabels_response_mat[,-c(1)]
}else{
covar.matrix=NA
}
anova_res<-diffexplmregrepeat(dataA=data_mat_anova,subject_inf=subject_inf,modeltype=modeltype,covar.matrix = covar.matrix)
return(anova_res)
},classlabels_response_mat,subject_inf,modeltype)
stopCluster(cl)
main_pval_mat<-{}
pvalues<-{}
# save(res1,file="lmres1.Rda")
posthoc_pval_mat<-{}
bad_lm1feats<-{}
res2<-t(res1)
res2<-as.data.frame(res2)
colnames(res2)<-c("pvalue","coefficient","std.error","t.value")
pvalues<-res2$pvalue
pvalues<-unlist(pvalues)
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue<-pvalues
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste(featselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste(featselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",c("coefficient","std.error","t.value"),cnames_tab)
pvalues<-as.data.frame(pvalues)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
#pvalues<-t(pvalues)
#print(dim(pvalues))
#print(dim(data_m_fc_withfeats))
if(length(bad_lm1feats)>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-c(bad_lm1feats),]
data_m_fc<-data_m_fc[-c(bad_lm1feats),]
}
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,res2[,-c(1)],data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
filename<-paste("Tables/",filename,sep="")
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
}
if(featselmethod=="lm2wayanovarepeat"){
cat("Performing two-way ANOVA with repeated measurements analysis using nlme::lme()",sep="\n")
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmtwowayanovarepeat",envir = .GlobalEnv)
clusterEvalQ(cl,library(nlme))
clusterEvalQ(cl,library(multcomp))
clusterEvalQ(cl,library(lsmeans))
clusterExport(cl,"lme",envir = .GlobalEnv)
clusterExport(cl,"interaction",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
#clusterExport(cl,"classlabels_response_mat",envir = .GlobalEnv)
#clusterExport(cl,"subject_inf",envir = .GlobalEnv)
#res1<-apply(data_m_fc,1,function(x){
# print(dim(data_m_fc))
# print(dim(classlabels_response_mat))
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat,subject_inf,modeltype){
# res1<-apply(data_m_fc,1,function(x){
# ###saveclasslabels_response_mat,file="classlabels_response_mat.Rda")
# ###savesubject_inf,file="subject_inf.Rda")
xvec<-x
####savexvec,file="xvec.Rda")
colnames(classlabels_response_mat)<-paste("Factor",seq(1,dim(classlabels_response_mat)[2]),sep="")
data_mat_anova<-cbind(xvec,classlabels_response_mat)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
#print(subject_inf)
#print(dim(data_mat_anova))
subject_inf<-as.data.frame(subject_inf)
#print(dim(subject_inf))
anova_res<-diffexplmtwowayanovarepeat(dataA=data_mat_anova,subject_inf=subject_inf[,1],modeltype=modeltype)
return(anova_res)
},classlabels_response_mat,subject_inf,modeltype)
main_pval_mat<-{}
stopCluster(cl)
posthoc_pval_mat<-{}
#print(head(res1))
# print("here")
pvalues<-{}
bad_lm1feats<-{}
###saveres1,file="res1.Rda")
for(i in 1:length(res1)){
if(is.na(res1[[i]]$mainpvalues)==FALSE){
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues<-c(pvalues,res1[[i]]$mainpvalues[1])
posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthoc)
}else{
bad_lm1feats<-c(bad_lm1feats,i)
}
}
if(length(bad_lm1feats)>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-c(bad_lm1feats),]
data_m_fc<-data_m_fc[-c(bad_lm1feats),]
}
twoanovarepeat_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
#write.table(twoanovarepeat_res,file="Tables/lm2wayanovarepeat_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
pvalues1<-main_pval_mat[,1]
pvalues2<-main_pval_mat[,2]
pvalues3<-main_pval_mat[,3]
twoanova_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
# write.table(twoanova_res,file="twoanova_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="none")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="none")
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BH")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="BH")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="BH")
}else{
if(fdrmethod=="ST"){
#print(head(pvalues1))
#print(head(pvalues2))
#print(head(pvalues3))
#print(summary(pvalues1))
#print(summary(pvalues2))
#print(summary(pvalues3))
fdr_adjust_pvalue1<-try(qvalue(pvalues1),silent=TRUE)
fdr_adjust_pvalue2<-try(qvalue(pvalues2),silent=TRUE)
fdr_adjust_pvalue3<-try(qvalue(pvalues3),silent=TRUE)
if(is(fdr_adjust_pvalue1,"try-error")){
fdr_adjust_pvalue1<-qvalue(pvalues1,lambda=max(pvalues1,na.rm=TRUE))
}
if(is(fdr_adjust_pvalue2,"try-error")){
fdr_adjust_pvalue2<-qvalue(pvalues2,lambda=max(pvalues2,na.rm=TRUE))
}
if(is(fdr_adjust_pvalue3,"try-error")){
fdr_adjust_pvalue3<-qvalue(pvalues3,lambda=max(pvalues3,na.rm=TRUE))
}
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qvalues
fdr_adjust_pvalue2<-fdr_adjust_pvalue2$qvalues
fdr_adjust_pvalue3<-fdr_adjust_pvalue3$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue1<-fdrtool(as.vector(pvalues1),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qval
fdr_adjust_pvalue2<-fdrtool(as.vector(pvalues2),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue2<-fdr_adjust_pvalue2$qval
fdr_adjust_pvalue3<-fdrtool(as.vector(pvalues3),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue3<-fdr_adjust_pvalue3$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="none")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BY")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="BY")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="BY")
}else{
if(fdrmethod=="bonferroni"){
# fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="bonferroni")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="bonferroni")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste("Tables/",featselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste("Tables/",featselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
#
cnames_tab<-c("Factor1.P.value","Factor1.adjusted.P.value","Factor2.P.value","Factor2.adjusted.P.value","Interact.P.value","Interact.adjusted.P.value",posthoc_names,cnames_tab)
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file="Tables/twowayanovarepeat_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}else{
#write.table(data_limma_fdrall_withfeats,file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
write.table(data_limma_fdrall_withfeats,file="Tables/twowayanovarepeat_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}
#filename<-paste("Tables/",filename,sep="")
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
filename<-paste("Tables/",filename,sep="")
fdr_matrix<-cbind(fdr_adjust_pvalue1,fdr_adjust_pvalue2,fdr_adjust_pvalue3)
fdr_matrix<-as.data.frame(fdr_matrix)
fdr_adjust_pvalue_all<-apply(fdr_matrix,1,function(x){return(min(x,na.rm=TRUE))})
pvalues_all<-cbind(pvalues1,pvalues2,pvalues3)
pvalue_matrix<-as.data.frame(pvalues_all)
pvalue_all<-apply(pvalue_matrix,1,function(x){return(min(x,na.rm=TRUE)[1])})
#pvalues1<-t(pvalues1)
#print("here")
#pvalues1<-as.data.frame(pvalues1)
#pvalues1<-t(pvalues1)
#print(dim(pvalues1))
#pvalues2<-t(pvalues2)
#pvalues2<-as.data.frame(pvalues2)
#pvalues2<-t(pvalues2)
#pvalues3<-t(pvalues3)
#pvalues3<-as.data.frame(pvalues3)
#pvalues3<-t(pvalues3)
#pvalues<-t(pvalues)
#print(dim(pvalues1))
#print(dim(pvalues2))
#print(dim(pvalues3))
#print(dim(data_m_fc_withfeats))
pvalues<-pvalue_all
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue_all<fdrthresh & final.pvalues<pvalue.thresh
if(length(which(fdr_adjust_pvalue1<fdrthresh))>0){
X1=data_m_fc_withfeats[which(fdr_adjust_pvalue1<fdrthresh),]
Y1=cbind(classlabels_orig[,1],as.character(classlabels_response_mat[,1]))
Y1<-as.data.frame(Y1)
###saveclasslabels_orig,file="classlabels_orig.Rda")
###saveclasslabels_response_mat,file="classlabels_response_mat.Rda")
#print("Performing HCA using features selected for Factor1")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor1selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
hca_f1<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X1,Y=Y1,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300,
alphacol=0.3, hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor 1",
alphabetical.order=alphabetical.order,study.design="oneway",labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 1.")
}
if(length(which(fdr_adjust_pvalue2<fdrthresh))>0){
X2=data_m_fc_withfeats[which(fdr_adjust_pvalue2<fdrthresh),]
Y2=cbind(classlabels_orig[,1],as.character(classlabels_response_mat[,2]))
Y2<-as.data.frame(Y2)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor2selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
# print("Performing HCA using features selected for Factor2")
hca_f2<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X2,Y=Y2,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300,
alphacol=alphacol, hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor 2",
alphabetical.order=alphabetical.order,study.design="oneway",labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 2.")
}
class_interact<-paste(classlabels_response_mat[,1],":",classlabels_response_mat[,2],sep="") #classlabels_response_mat[,1]:classlabels_response_mat[,2]
if(length(which(fdr_adjust_pvalue3<fdrthresh))>0){
X3=data_m_fc_withfeats[which(fdr_adjust_pvalue3<fdrthresh),]
Y3=cbind(classlabels_orig[,1],class_interact)
Y3<-as.data.frame(Y3)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor1xFactor2selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
#print("Performing HCA using features selected for Factor1x2")
hca_f3<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X3,Y=Y3,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300,
alphacol=0.3, hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor 1 x Factor 2",
alphabetical.order=alphabetical.order,study.design="oneway",labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 1x2 interaction.")
}
#data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,posthoc_pval_mat,data_m_fc_withfeats)
#
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_withfeats)
fdr_adjust_pvalue<-cbind(fdr_adjust_pvalue1,fdr_adjust_pvalue2,fdr_adjust_pvalue3)
fdr_adjust_pvalue<-apply(fdr_adjust_pvalue,1,function(x){min(x,na.rm=TRUE)})
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
#write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(final.pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value.Min(Factor1,Factor2,Interaction)","adjusted.P.value.Min(Factor1,Factor2,Interaction)",cnames_tab)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#filename2<-"test2.txt"
#data_limma_fdrsig_withfeats<-data_limma_fdrall_withfeats[sel.diffdrthresh==TRUE,]
#write.table(data_limma_fdrsig_withfeats, file=filename2,sep="\t",row.names=FALSE)
fdr_adjust_pvalue<-fdr_adjust_pvalue_all
}
}
#end of feature selection methods
if(featselmethod=="lmreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat"
| featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="logitreg" | featselmethod=="limma2wayrepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
classlabels<-as.data.frame(classlabels)
# if(featselmethod=="limma2way"){
# vennDiagram(results2,cex=0.8)
# }
#print(summary(fdr_adjust_pvalue))
#pheadrint(summary(final.pvalues))
}
pred_acc<-0 #("NA")
#print("here")
feat_sigfdrthresh[lf]<-length(goodip) #which(sel.diffdrthresh==TRUE))
if(kfold>dim(data_m_fc)[2]){
kfold=dim(data_m_fc)[2]
}
if(analysismode=="classification"){
#print("classification")
if(length(goodip)>0 & dim(data_m_fc)[2]>=kfold){
#save(classlabels,classlabels_orig, data_m_fc,file="debug2.rda")
if(alphabetical.order==FALSE){
Targetvar <- factor(classlabels[,1], levels=unique(classlabels[,1]))
}else{
Targetvar<-factor(classlabels[,1])
}
dataA<-cbind(Targetvar,t(data_m_fc))
dataA<-as.data.frame(dataA)
dataA$Targetvar<-factor(Targetvar)
#df.summary <- dataA %>% group_by(Targetvar) %>% summarize_all(funs(mean))
# df.summary <- dataA %>% group_by(Targetvar) %>% summarize_all(funs(mean))
dataA[,-c(1)]<-apply(dataA[,-c(1)],2,function(x){as.numeric(as.character(x))})
if(alphabetical.order==FALSE){
dataA$Targetvar <- factor(dataA$Targetvar, levels=unique(dataA$Targetvar))
}
df.summary <-aggregate(x=dataA,by=list(as.factor(dataA$Targetvar)),function(x){mean(x,na.rm=TRUE)})
#save(dataA,file="errordataA.Rda")
df.summary.sd <-aggregate(x=dataA[,-c(1)],by=list(as.factor(dataA$Targetvar)),function(x){sd(x,na.rm=TRUE)})
df2<-as.data.frame(df.summary[,-c(1:2)])
group_means<-t(df.summary)
# save(classlabels,classlabels_orig, classlabels_class,Targetvar,dataA,data_m_fc,df.summary,df2,group_means,file="debugfoldchange.Rda")
colnames(group_means)<-paste("mean",levels(as.factor(dataA$Targetvar)),sep="") #paste("Group",seq(1,length(unique(dataA$Targetvar))),sep="")
group_means<-cbind(data_m_fc_withfeats[,c(1:2)],group_means[-c(1:2),])
group_sd<-t(df.summary.sd)
colnames(group_sd)<-paste("std.dev",levels(as.factor(dataA$Targetvar)),sep="") #paste("Group",seq(1,length(unique(dataA$Targetvar))),sep="")
group_sd<-cbind(data_m_fc_withfeats[,c(1:2)],group_sd[-c(1),])
# write.table(group_means,file="group_means.txt",sep="\t",row.names=FALSE)
# ###savedf2,file="df2.Rda")
# ###savedataA,file="dataA.Rda")
# ###saveTargetvar,file="Targetvar.Rda")
if(log2transform==TRUE || input.intensity.scale=="log2"){
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
foldchangeres<-parApply(cl,df2,2,function(x){
res<-lapply(1:length(x),function(i){
return((x[i]-x[-i]))
})
res<-unlist(res)
tempres<-abs(res)
res_ind<-which(tempres==max(tempres,na.rm=TRUE))
return(res[res_ind[1]])
})
stopCluster(cl)
# print("Using log2 fold change threshold of")
# print(foldchangethresh)
}else{
#raw intensities
if(znormtransform==FALSE)
{
# foldchangeres<-apply(log2(df2+1),2,function(x){res<-{};for(i in 1:length(x)){res<-c(res,(x[i]-x[-i]));};tempres<-abs(res);res_ind<-which(tempres==max(tempres,na.rm=TRUE));return(res[res_ind[1]]);})
if(FALSE){
foldchangeres<-apply(log2(df2+log2.transform.constant),2,dist)
if(length(nrow(foldchangeres))>0){
foldchangeres<-apply(foldchangeres,2,function(x)
{
max_ind<-which(x==max(abs(x)))[1];
return(x[max_ind])
}
)
}
}
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
foldchangeres<-parApply(cl,log2(df2+0.0000001),2,function(x){
res<-lapply(1:length(x),function(i){
return((x[i]-x[-i]))
})
res<-unlist(res)
tempres<-abs(res)
res_ind<-which(tempres==max(tempres,na.rm=TRUE))
return(res[res_ind[1]])
})
stopCluster(cl)
foldchangethresh=foldchangethresh
# print("Using raw fold change threshold of")
# print(foldchangethresh)
}else{
# foldchangeres<-apply(df2,2,function(x){res<-{};for(i in 1:length(x)){res<-c(res,(x[i]-(x[-i])));};tempres<-abs(res);res_ind<-which(tempres==max(tempres,na.rm=TRUE));return(res[res_ind[1]]);})
if(FALSE){
foldchangeres<-apply(df2,2,dist)
if(length(nrow(foldchangeres))>0){
foldchangeres<-apply(foldchangeres,2,function(x)
{
max_ind<-which(x==max(abs(x)))[1];
return(x[max_ind])
}
)
}
}
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
foldchangeres<-parApply(cl,df2,2,function(x){
res<-lapply(1:length(x),function(i){
return((x[i]-x[-i]))
})
res<-unlist(res)
tempres<-abs(res)
res_ind<-which(tempres==max(tempres,na.rm=TRUE))
return(res[res_ind[1]])
})
stopCluster(cl)
#print(summary(foldchangeres))
#foldchangethresh=2^foldchangethresh
print("Using Z-score change threshold of")
print(foldchangethresh)
}
}
if(length(class_labels_levels)==2){
zvec=foldchangeres
}else{
zvec=NA
if(featselmethod=="lmreg" && analysismode=="regression"){
cnames_matrix<-colnames(data_limma_fdrall_withfeats)
cnames_colindex<-grep("Estimate_",cnames_matrix)
zvec<-data_limma_fdrall_withfeats[,c(cnames_colindex[1])]
}
}
maxfoldchange<-foldchangeres
goodipfoldchange<-which(abs(maxfoldchange)>foldchangethresh)
#if(FALSE)
{
if(input.intensity.scale=="raw" && log2transform==FALSE && znormtransform==FALSE){
foldchangeres<-2^((foldchangeres))
}
}
maxfoldchange1<-foldchangeres
roundUpNice <- function(x, nice=c(1,2,4,5,6,8,10))
{
if(length(x) != 1) stop("'x' must be of length 1")
10^floor(log10(x)) * nice[[which(x <= 10^floor(log10(x)) * nice)[[1]]]]
}
d4<-as.data.frame(data_limma_fdrall_withfeats)
max_mz_val<-roundUpNice(max(d4$mz)[1])
max_time_val<-roundUpNice(max(d4$time)[1])
x1increment=round_any(max_mz_val/10,10,f=floor)
x2increment=round_any(max_time_val/10,10,f=floor)
if(x2increment<1){
x2increment=0.5
}
if(x1increment<1){
x1increment=0.5
}
if(featselmethod=="lmreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat"
| featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="logitreg" | featselmethod=="limma2wayrepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
# print("Plotting manhattan plots")
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
classlabels<-as.data.frame(classlabels)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
if(fdrmethod=="none"){
ythresh<-(-1)*log10(pvalue.thresh)
}else{
ythresh<-min(logp[goodip],na.rm=TRUE)
}
maintext1="Type 1 manhattan plot (-logp vs mz) \n m/z features above the dashed horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (-logp vs time) \n m/z features above the dashed horizontal line meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
yvec_val=logp
ylabel="(-)log10p"
yincrement=1
y2thresh=(-1)*log10(pvalue.thresh)
# save(list=c("d4","logp","yvec_val","ythresh","zvec","x1increment","yincrement","maintext1","x2increment","maintext2","ylabel","y2thresh"),file="manhattanplot_objects.Rda")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
# get_manhattanplots(xvec=d4$mz,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab=ylabel,xincrement=x1increment,yincrement=yincrement,maintext=maintext1,col_seq=c("black"),y2thresh=y2thresh,colorvec=manhattanplot.col.opt)
####savelist=ls(),file="m1.Rda")
try(get_manhattanplots(xvec=d4$mz,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab=ylabel,
xincrement=x1increment,yincrement=yincrement,maintext=maintext1,col_seq=c("black"),y2thresh=y2thresh,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="Retention time",ylab="-log10p",xincrement=x2increment,yincrement=1,maintext=maintext2,col_seq=c("black"),y2thresh=y2thresh,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(length(class_labels_levels)==2){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/VolcanoPlot.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
maintext1="Volcano plot (-logp vs log2(fold change)) \n colored m/z features meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
##save(maxfoldchange,logp,zvec,ythresh,y2thresh,foldchangethresh,manhattanplot.col.opt,d4,file="debugvolcano.Rda")
try(get_volcanoplots(xvec=maxfoldchange,yvec=logp,up_or_down=zvec,ythresh=ythresh,y2thresh=y2thresh,xthresh=foldchangethresh,maintext=maintext1,ylab="-log10(p-value)",xlab="log2(fold change)",colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}else{
if(featselmethod=="pls" | featselmethod=="o1pls"){
# print("Time 2")
#print(Sys.time())
maintext1="Type 1 manhattan plot (VIP vs mz) \n m/z features above the dashed horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (VIP vs time) \n m/z features above the dashed horizontal line meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
yvec_val<-data_limma_fdrall_withfeats[,1]
ythresh=pls_vip_thresh
vip_res<-as.data.frame(vip_res)
bad.feature.index={}
if(is.na(pls.permut.count)==FALSE){
#yvec_val[which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)]<-0 #(ythresh)*0.5
bad.feature.index=which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)
}
ylabel="VIP"
yincrement=0.5
y2thresh=NA
# ###savelist=ls(),file="manhattandebug.Rda")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=yvec_val,ythresh=pls_vip_thresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="VIP",xincrement=x1increment,yincrement=0.5,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=bad.feature.index),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=yvec_val,ythresh=pls_vip_thresh,up_or_down=zvec,xlab="Retention time",ylab="VIP",xincrement=x2increment,yincrement=0.5,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=bad.feature.index),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(length(class_labels_levels)==2){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/VolcanoPlot_VIP_vs_foldchange.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
maintext1="Volcano plot (VIP vs log2(fold change)) \n colored m/z features meet the selection criteria"
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
# ###savelist=ls(),file="volcanodebug.Rda")
try(get_volcanoplots(xvec=maxfoldchange,yvec=yvec_val,up_or_down=maxfoldchange,ythresh=ythresh,xthresh=foldchangethresh,maintext=maintext1,ylab="VIP",xlab="log2(fold change)",bad.feature.index=bad.feature.index,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}else{
if(featselmethod=="spls" | featselmethod=="o1spls"){
maintext1="Type 1 manhattan plot (|loading| vs mz) \n m/z features with non-zero loadings meet the selection criteria"
maintext2="Type 2 manhattan plot (|loading| vs time) \n m/z features with non-zero loadings meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
yvec_val<-data_limma_fdrall_withfeats[,1]
vip_res<-as.data.frame(vip_res)
bad.feature.index={}
if(is.na(pls.permut.count)==FALSE){
# yvec_val[which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)]<-0
bad.feature.index=which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)
}
ythresh=0
ylabel="Loading (absolute)"
yincrement=0.1
y2thresh=NA
####savelist=c("d4","yvec_val","ythresh","zvec","x1increment","yincrement","maintext1","x2increment","maintext2","ylabel","y2thresh"),file="manhattanplot_objects.Rda")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=yvec_val,ythresh=0,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="Loading (absolute)",xincrement=x1increment,yincrement=0.1,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=bad.feature.index),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=yvec_val,ythresh=0,up_or_down=zvec,xlab="Retention time",ylab="Loading (absolute)",xincrement=x2increment,yincrement=0.1,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=bad.feature.index),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
#volcanoplot
if(length(class_labels_levels)==2){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/VolcanoPlot_Loading_vs_foldchange.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
maintext1="Volcano plot (absolute) Loading vs log2(fold change)) \n colored m/z features meet the selection criteria"
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
try(get_volcanoplots(xvec=maxfoldchange,yvec=yvec_val,up_or_down=maxfoldchange,ythresh=ythresh,xthresh=foldchangethresh,maintext=maintext1,ylab="(absolute) Loading",xlab="log2(fold change)",yincrement=0.1,bad.feature.index=bad.feature.index,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}else{
if(featselmethod=="pamr"){
maintext1="Type 1 manhattan plot (max |standardized centroids (d-statistic)| vs mz) \n m/z features with above the horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (max |standardized centroids (d-statistic)| vs time) \n m/z features with above the horizontal line meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
yvec_val<-data_limma_fdrall_withfeats[,1]
##error point
#vip_res<-as.data.frame(vip_res)
discore<-as.data.frame(discore)
bad.feature.index={}
if(is.na(pls.permut.count)==FALSE){
# yvec_val[which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)]<-0
# bad.feature.index=which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)
}
ythresh=pamr_ythresh
ylabel="d-statistic (absolute)"
yincrement=0.1
y2thresh=NA
####savelist=c("d4","yvec_val","ythresh","zvec","x1increment","yincrement","maintext1","x2increment","maintext2","ylabel","y2thresh"),file="manhattanplot_objects.Rda")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=yvec_val,ythresh=pamr_ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="d-statistic (absolute) at threshold=0",xincrement=x1increment,yincrement=0.1,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=NA),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=yvec_val,ythresh=pamr_ythresh,up_or_down=zvec,xlab="Retention time",ylab="d-statistic (absolute) at threshold=0",xincrement=x2increment,yincrement=0.1,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=NA),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
#volcanoplot
if(length(class_labels_levels)==2){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/VolcanoPlot_Dstatistic_vs_foldchange.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
maintext1="Volcano plot (absolute) max standardized centroid (d-statistic) vs log2(fold change)) \n colored m/z features meet the selection criteria"
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
try(get_volcanoplots(xvec=maxfoldchange,yvec=yvec_val,up_or_down=maxfoldchange,ythresh=pamr_ythresh,xthresh=foldchangethresh,maintext=maintext1,ylab="(absolute) d-statistic at threshold=0",xlab="log2(fold change)",yincrement=0.1,bad.feature.index=NA,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
}
}
}
goodip<-intersect(goodip,goodipfoldchange)
dataA<-cbind(maxfoldchange,data_m_fc_withfeats)
#write.table(dataA,file="foldchange.txt",sep="\t",row.names=FALSE)
goodfeats_allfields<-{}
if(length(goodip)>0){
feat_sigfdrthresh[lf]<-length(goodip)
subdata<-t(data_m_fc[goodip,])
#save(parent_data_m,file="parent_data_m.Rda")
data_minval<-min(parent_data_m[,-c(1:2)],na.rm=TRUE)*0.5
#svm_model<-svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95)
exp_fp<-1
best_feats<-goodip
}else{
print("No features meet the fold change criteria.")
}
}else{
if(dim(data_m_fc)[2]<kfold){
print("Number of samples is too small to calculate cross-validation accuracy.")
}
}
#feat_sigfdrthresh_cv<-c(feat_sigfdrthresh_cv,pred_acc)
if(length(goodip)<1){
# print("########################################")
# print(paste("Relative standard deviation (RSD) threshold: ", log2.fold.change.thresh," %",sep=""))
#print(paste("FDR threshold: ", fdrthresh,sep=""))
print(paste("Number of features left after RSD filtering: ", dim(data_m_fc)[1],sep=""))
print(paste("Number of selected features: ", length(goodip),sep=""))
try(dev.off(),silent=TRUE)
next
}
# save(data_m_fc_withfeats,data_matrix,data_m,goodip,names_with_mz_time,file="gdebug.Rda")
#print("######################################")
suppressMessages(library(cluster))
t1<-table(classlabels)
if(is.na(names_with_mz_time)==FALSE){
data_m_fc_withfeats_A1<-merge(names_with_mz_time,data_m_fc_withfeats,by=c("mz","time"))
rownames(data_m_fc_withfeats)<-as.character(data_m_fc_withfeats_A1$Name)
}else{
rownames(data_m_fc_withfeats)<-as.character(paste(data_m_fc_withfeats[,1],data_m_fc_withfeats[,2],sep="_"))
}
#patientcolors <- unlist(lapply(sampleclass, color.map))
if(length(goodip)>2){
goodfeats<-as.data.frame(data_m_fc_withfeats[goodip,]) #[sel.diffdrthresh==TRUE,])
goodfeats<-unique(goodfeats)
rnames_goodfeats<-rownames(goodfeats) #as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
if(length(which(duplicated(rnames_goodfeats)==TRUE))>0){
print("WARNING: Duplicated features found. Removing duplicate entries.")
goodfeats<-goodfeats[-which(duplicated(rnames_goodfeats)==TRUE),]
rnames_goodfeats<-rnames_goodfeats[-which(duplicated(rnames_goodfeats)==TRUE)]
}
#rownames(goodfeats)<-as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
data_m<-as.matrix(goodfeats[,-c(1:2)])
rownames(data_m)<-rownames(goodfeats) #as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
data_m<-unique(data_m)
X<-t(data_m)
{
heatmap_file<-paste("heatmap_",featselmethod,".tiff",sep="")
heatmap_mainlabel="" #2-way HCA using all significant features"
if(FALSE)
{
# print("this step")
# save(hc,file="hc.Rda")
# save(hr,file="hr.Rda")
#save(distc,file="distc.Rda")
#save(distr,file="distr.Rda")
# save(data_m,heatmap.col.opt,hca_type,classlabels,classlabels_orig,outloc,goodfeats,data_m_fc_withfeats,goodip,names_with_mz_time,plots.height,plots.width,plots.res,file="hcadata_m.Rda")
#save(classlabels,file="classlabels.Rda")
}
# pdf("Testhca.pdf")
#try(
#
#dev.off()
if(is.na(names_with_mz_time)==FALSE){
goodfeats_with_names<-merge(names_with_mz_time,goodfeats,by=c("mz","time"))
goodfeats_with_names<-goodfeats_with_names[match(paste(goodfeats$mz,"_",goodfeats$time,sep=""),paste(goodfeats_with_names$mz,"_",goodfeats_with_names$time,sep="")),]
# save(names_with_mz_time,goodfeats,goodfeats_with_names,file="goodfeats_with_names.Rda")
goodfeats_name<-goodfeats_with_names$Name
rownames(goodfeats)<-goodfeats_name
}else{
#print(head(names_with_mz_time))
# print(head(goodfeats))
#goodfeats_name<-NA
}
if(output.device.type!="pdf"){
# print(getwd())
# save(data_m,heatmap.col.opt,hca_type,classlabels,classlabels_orig,output_dir,goodfeats,names_with_mz_time,data_m_fc_withfeats,goodip,goodfeats_name,names_with_mz_time,
# plots.height,plots.width,plots.res,alphabetical.order,analysistype,labRow.value, labCol.value,hca.cex.legend,file="hcadata_mD.Rda")
temp_filename_1<-"Figures/HCA_All_selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type="cairo",units="in")
#Generate HCA for selected features
hca_res<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=goodfeats,Y=classlabels_orig,heatmap.col.opt=heatmap.col.opt,
cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3, hca_type=hca_type,newdevice=FALSE,
input.type="intensity",mainlab="",alphabetical.order=alphabetical.order,study.design=analysistype,
labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
dev.off()
}else{
#Generate HCA for selected features
hca_res<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=goodfeats,Y=classlabels_orig,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3, hca_type=hca_type,newdevice=FALSE,
input.type="intensity",mainlab="",alphabetical.order=alphabetical.order,study.design=analysistype,
labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
# get_hca(parentoutput_dir=getwd(),X=goodfeats,Y=classlabels_orig,heatmap.col.opt=heatmap.col.opt,cor.method="spearman",is.data.znorm=FALSE,analysismode="classification",
# sample.col.opt="rainbow",plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3, hca_type=hca_type,newdevice=FALSE) #,silent=TRUE)
}
}
# print("Done with HCA.")
}
}
else
{
#print("regression")
# print("########################################")
# print(paste("RSD threshold: ", log2.fold.change.thresh,sep=""))
#print(paste("FDR threshold: ", fdrthresh,sep=""))
#print(paste("Number of metabolites left after RSD filtering: ", dim(data_m_fc)[1],sep=""))
#print(paste("Number of sig metabolites: ", length(goodip),sep=""))
#print for regression
#print(paste("Summary for method: ",featselmethod,sep=""))
#print(paste("Relative standard deviation (RSD) threshold: ", log2.fold.change.thresh," %",sep=""))
cat("Analysis summary:",sep="\n")
cat(paste("Number of samples: ", dim(data_m_fc)[2],sep=""),sep="\n")
cat(paste("Number of features in the original dataset: ", num_features_total,sep=""),sep="\n")
# cat(rsd_filt_msg,sep="\n")
cat(paste("Number of features left after preprocessing: ", dim(data_m_fc)[1],sep=""),sep="\n")
cat(paste("Number of selected features: ", length(goodip),sep=""),sep="\n")
#cat("", sep="\n")
if(featselmethod=="lmreg"){
#d4<-read.table(paste(parentoutput_dir,"/Stage2/lmreg_pval_coef_stderr.txt",sep=""),sep="\t",header=TRUE,quote = "")
d4<-read.table("Tables/lmreg_pval_coef_stderr.txt",sep="\t",header=TRUE)
}
if(length(goodip)>=1){
subdata<-t(data_m_fc[goodip,])
if(length(class_labels_levels)==2){
#zvec=foldchangeres
}else{
zvec=NA
if(featselmethod=="lmreg" && analysismode=="regression"){
cnames_matrix<-colnames(d4)
cnames_colindex<-grep("Estimate_",cnames_matrix)
zvec<-d4[,c(cnames_colindex[1])]
#zvec<-d4$Estimate_var1
#if(length(zvec)<1){
# zvec<-d4$X.Estimate_var1.
#}
}
}
roundUpNice <- function(x, nice=c(1,2,4,5,6,8,10)) {
if(length(x) != 1) stop("'x' must be of length 1")
10^floor(log10(x)) * nice[[which(x <= 10^floor(log10(x)) * nice)[[1]]]]
}
d4<-as.data.frame(data_limma_fdrall_withfeats)
# d4<-as.data.frame(d1)
# save(d4,file="mtype1.rda")
x1increment=round_any(max(d4$mz)/10,10,f=floor)
x2increment=round_any(max(d4$time)/10,10,f=floor)
#manplots
if(featselmethod=="lmreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat"
| featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="logitreg" | featselmethod=="limma2wayrepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
#print("Plotting manhattan plots")
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
classlabels<-as.data.frame(classlabels)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
ythresh<-min(logp[goodip],na.rm=TRUE)
maintext1="Type 1 manhattan plot (-logp vs mz) \n m/z features above the dashed horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (-logp vs time) \n m/z features above the dashed horizontal line meet the selection criteria"
# print("here1 A")
#print(zvec)
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="-logP",xincrement=x1increment,yincrement=1,
maintext=maintext1,col_seq=c("black"),y2thresh=(-1)*log10(pvalue.thresh),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="Retention time",ylab="-logP",xincrement=x2increment,yincrement=1,
maintext=maintext2,col_seq=c("black"),y2thresh=(-1)*log10(pvalue.thresh),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
#print("Plotting manhattan plots")
#get_manhattanplots(xvec=d4$mz,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="-logP",xincrement=x1increment,yincrement=1,maintext=maintext1)
#get_manhattanplots(xvec=d4$time,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="Retention time",ylab="-logP",xincrement=x2increment,yincrement=1,maintext=maintext2)
}else{
if(featselmethod=="pls" | featselmethod=="o1pls"){
maintext1="Type 1 manhattan plot (VIP vs mz) \n m/z features above the dashed horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (VIP vs time) \n m/z features above the dashed horizontal line meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=data_limma_fdrall_withfeats[,1],ythresh=pls_vip_thresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="VIP",xincrement=x1increment,yincrement=0.5,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=data_limma_fdrall_withfeats[,1],ythresh=pls_vip_thresh,up_or_down=zvec,xlab="Retention time",ylab="VIP",xincrement=x2increment,yincrement=0.5,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
else{
if(featselmethod=="spls" | featselmethod=="o1spls"){
maintext1="Type 1 manhattan plot (|loading| vs mz) \n m/z features with non-zero loadings meet the selection criteria"
maintext2="Type 2 manhattan plot (|loading| vs time) \n m/z features with non-zero loadings meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=data_limma_fdrall_withfeats[,1],ythresh=0,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="Loading",xincrement=x1increment,yincrement=0.1,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=data_limma_fdrall_withfeats[,1],ythresh=0,up_or_down=zvec,xlab="Retention time",ylab="Loading",xincrement=x2increment,yincrement=0.1,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
}
data_minval<-min(parent_data_m[,-c(1:2)],na.rm=TRUE)*0.5
#subdata<-apply(subdata,2,function(x){naind<-which(is.na(x)==TRUE);if(length(naind)>0){ x[naind]<-median(x,na.rm=TRUE)};return(x)})
subdata<-apply(subdata,2,function(x){naind<-which(is.na(x)==TRUE);if(length(naind)>0){ x[naind]<-data_minval};return(x)})
#print(head(subdata))
#print(dim(subdata))
#print(dim(classlabels))
#print(dim(classlabels))
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
if(length(classlabels)>dim(parent_data_m)[2]){
#classlabels<-as.data.frame(classlabels[,1])
classlabels_response_mat<-as.data.frame(classlabels_response_mat[,1])
}
if(FALSE){
svm_model_reg<-try(svm(x=subdata,y=(classlabels_response_mat[,1]),type="eps",cross=kfold),silent=TRUE)
if(is(svm_model_reg,"try-error")){
print("SVM could not be performed. Skipping to the next step.")
termA<-(-1)
pred_acc<-termA
}else{
termA<-svm_model_reg$tot.MSE
pred_acc<-termA
print(paste(kfold,"-fold mean squared error: ", pred_acc,sep=""))
}
}
termA<-(-1)
pred_acc<-termA
# print("######################################")
}else{
#print("Number of selected variables is too small to perform CV.")
}
#print("termA is ")
#print(termA)
# print("dim of goodfeats")
goodfeats<-as.data.frame(data_m_fc_withfeats[sel.diffdrthresh==TRUE,])
goodip<-which(sel.diffdrthresh==TRUE)
#print(length(goodip))
res_score<-termA
#if(res_score<best_cv_res){
if(length(which(sel.diffdrthresh==TRUE))>0){
if(res_score<best_cv_res){
best_logfc_ind<-lf
best_feats<-goodip
best_cv_res<-res_score
best_acc<-pred_acc
best_limma_res<-data_limma_fdrall_withfeats[sel.diffdrthresh==TRUE,]
}
}else{
res_score<-(9999999)
}
res_score_vec[lf]<-res_score
goodfeats<-unique(goodfeats)
# save(names_with_mz_time,goodfeats,file="goodfeats_1.Rda")
if(length(which(is.na(goodfeats$mz)==TRUE))>0){
goodfeats<-goodfeats[-which(is.na(goodfeats$mz)==TRUE),]
}
if(is.na(names_with_mz_time)==FALSE){
goodfeats_with_names<-merge(names_with_mz_time,goodfeats,by=c("mz","time"))
goodfeats_with_names<-goodfeats_with_names[match(goodfeats$mz,goodfeats_with_names$mz),]
#
goodfeats_name<-goodfeats_with_names$Name
#}
}else{
goodfeats_name<-as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
}
if(length(which(sel.diffdrthresh==TRUE))>2){
##save(goodfeats,file="goodfeats.Rda")
#rownames(goodfeats)<-as.character(goodfeats[,1])
rownames(goodfeats)<-goodfeats_name #as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
data_m<-as.matrix(goodfeats[,-c(1:2)])
rownames(data_m)<-rownames(goodfeats) #as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
X<-t(data_m)
pca_comp<-min(dim(X)[1],dim(X)[2])
t1<-seq(1,dim(data_m)[2])
col <-col_vec[1:length(t1)]
hr <- try(hclust(as.dist(1-cor(t(data_m),method=cor.method,use="pairwise.complete.obs"))),silent=TRUE) #metabolites
hc <- try(hclust(as.dist(1-cor(data_m,method=cor.method,use="pairwise.complete.obs"))),silent=TRUE) #samples
if(heatmap.col.opt=="RdBu"){
heatmap.col.opt="redblue"
}
heatmap_cols <- colorRampPalette(brewer.pal(10, "RdBu"))(256)
heatmap_cols<-rev(heatmap_cols)
if(heatmap.col.opt=="topo"){
heatmap_cols<-topo.colors(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="heat"){
heatmap_cols<-heat.colors(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="yellowblue"){
heatmap_cols<-colorRampPalette(c("yellow","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
#heatmap_cols<-blue2yellow(256) #colorRampPalette(c("yellow","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="redblue"){
heatmap_cols <- colorRampPalette(brewer.pal(10, "RdBu"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
#my_palette <- colorRampPalette(c("red", "yellow", "green"))(n = 299)
if(heatmap.col.opt=="redyellowgreen"){
heatmap_cols <- colorRampPalette(c("red", "yellow", "green"))(n = 299)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="yellowwhiteblue"){
heatmap_cols<-colorRampPalette(c("yellow2","white","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="redwhiteblue"){
heatmap_cols<-colorRampPalette(c("red","white","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
heatmap_cols <- colorRampPalette(brewer.pal(10, heatmap.col.opt))(256)
heatmap_cols<-rev(heatmap_cols)
}
}
}
}
}
}
}
if(is(hr,"try-error") || is(hc,"try-error")){
print("Hierarchical clustering can not be performed. ")
}else{
mycl_samples <- cutree(hc, h=max(hc$height)/2)
t1<-table(mycl_samples)
col_clust<-topo.colors(length(t1))
patientcolors=rep(col_clust,t1) #mycl_samples[col_clust]
heatmap_file<-paste("heatmap_",featselmethod,"_imp_features.tiff",sep="")
#tiff(heatmap_file,width=plots.width,height=plots.height,res=plots.res, compression="lzw")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_all_selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
if(znormtransform==FALSE){
h73<-heatmap.2(data_m, Rowv=as.dendrogram(hr), Colv=as.dendrogram(hc), col=heatmap_cols, scale="row",key=TRUE, symkey=FALSE,
density.info="none", trace="none", cexRow=1, cexCol=1,xlab="",ylab="", main="Using all selected features",labRow = hca.labRow.value, labCol = hca.labCol.value)
}else{
h73<-heatmap.2(data_m, Rowv=as.dendrogram(hr), Colv=as.dendrogram(hc), col=heatmap_cols, scale="none",key=TRUE,
symkey=FALSE, density.info="none", trace="none", cexRow=1, cexCol=1,xlab="",ylab="", main="Using all selected features",labRow = hca.labRow.value, labCol = hca.labCol.value)
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
mycl_samples <- cutree(hc, h=max(hc$height)/2)
mycl_metabs <- cutree(hr, h=max(hr$height)/2)
ord_data<-cbind(mycl_metabs[rev(h73$rowInd)],goodfeats[rev(h73$rowInd),c(1:2)],data_m[rev(h73$rowInd),h73$colInd])
cnames1<-colnames(ord_data)
cnames1[1]<-"mz_cluster_label"
colnames(ord_data)<-cnames1
fname1<-paste("Tables/Clustering_based_sorted_intensity_data.txt",sep="")
write.table(ord_data,file=fname1,sep="\t",row.names=FALSE)
fname2<-paste("Tables/Sample_clusterlabels.txt",sep="")
sample_clust_num<-mycl_samples[h73$colInd]
classlabels<-as.data.frame(classlabels)
temp1<-classlabels[h73$colInd,]
temp3<-cbind(temp1,sample_clust_num)
rnames1<-rownames(temp3)
temp4<-cbind(rnames1,temp3)
temp4<-as.data.frame(temp4)
if(analysismode=="regression"){
#names(temp3[,1)<-as.character(temp4[,1])
temp3<-temp4[,-c(1)]
temp3<-as.data.frame(temp3)
temp3<-apply(temp3,2,as.numeric)
temp_vec<-as.vector(temp3[,1])
names(temp_vec)<-as.character(temp4[,1])
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Barplot_dependent_variable_ordered_by_HCA.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
#tiff("Barplot_sample_cluster_ymat.tiff", width=plots.width,height=plots.height,res=plots.res, compression="lzw")
barplot(temp_vec,col="brown",ylab="Y",cex.axis=0.5,cex.names=0.5,main="Dependent variable levels in samples; \n ordered based on hierarchical clustering")
#dev.off()
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
# print(head(temp_vec))
#temp4<-temp4[,-c(2)]
write.table(temp4,file=fname2,sep="\t",row.names=FALSE)
fname3<-paste("Metabolite_clusterlabels.txt",sep="")
mycl_metabs_ord<-mycl_metabs[rev(h73$rowInd)]
}
}
}
classlabels_orig<-classlabels_orig_parent
if(pairedanalysis==TRUE){
classlabels_orig<-classlabels_orig[,-c(2)]
}else{
if(featselmethod=="lmreg" || featselmethod=="logitreg" || featselmethod=="poissonreg"){
classlabels_orig<-classlabels_orig[,c(1:2)]
classlabels_orig<-as.data.frame(classlabels_orig)
}
}
node_names=rownames(data_m_fc_withfeats)
#save(data_limma_fdrall_withfeats,goodip,data_m_fc_withfeats,data_matrix,names_with_mz_time,file="data_limma_fdrall_withfeats1.Rda")
classlabels_orig_wgcna<-classlabels_orig
if(analysismode=="classification"){
classlabels_temp<-classlabels_orig_wgcna #cbind(classlabels_sub[,1],classlabels)
sigfeats=data_m_fc_withfeats[goodip,c(1:2)]
# save(data_m_fc_withfeats,classlabels_temp,sigfeats,goodip,num_nodes,abs.cor.thresh,cor.fdrthresh,alphabetical.order,
# plot_DiNa_graph,degree.centrality.method,node_names,networktype,file="debugdiffrank_eval.Rda")
if(degree_rank_method=="diffrank"){
# degree_eval_res<-try(diffrank_eval(X=data_m_fc_withfeats,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)],sigfeatsind=goodip,
# num_nodes=num_nodes,abs.cor.thresh=abs.cor.thresh,cor.fdrthresh=cor.fdrthresh,alphabetical.order=alphabetical.order),silent=TRUE)
degree_eval_res<-diffrank_eval(X=data_m_fc_withfeats,Y=classlabels_temp,sigfeats=sigfeats,sigfeatsind=goodip,
num_nodes=num_nodes,abs.cor.thresh=abs.cor.thresh,cor.fdrthresh=cor.fdrthresh,alphabetical.order=alphabetical.order,
node_names=node_names,plot_graph_bool=plot_DiNa_graph,
degree.centrality.method = degree.centrality.method,networktype=networktype) #,silent=TRUE)
}else{
degree_eval_res<-{}
}
}
sample_names_vec<-colnames(data_m_fc_withfeats[,-c(1:2)])
# save(degree_eval_res,file="DiNa_results.Rda")
# save(data_limma_fdrall_withfeats,goodip,sample_names_vec,data_m_fc_withfeats,data_matrix,names_with_mz_time,file="data_limma_fdrall_withfeats.Rda")
if(analysismode=="classification")
{
degree_rank<-rep(1,dim(data_m_fc_withfeats)[1])
if(is(degree_eval_res,"try-error")){
degree_rank<-rep(1,dim(data_m_fc_withfeats)[1])
}else{
if(degree_rank_method=="diffrank"){
diff_degree_measure<-degree_eval_res$all
degree_rank<-diff_degree_measure$DiffRank #rank((1)*diff_degree_measure)
}
}
# save(degree_rank,file="degree_rank.Rda")
if(featselmethod=="lmreg" | featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma1way" | featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="limma1wayrepeat" | featselmethod=="limma2wayrepeat" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
diffexp_rank<-rank(data_limma_fdrall_withfeats[,2]) #order(data_limma_fdrall_withfeats[,2],decreasing=FALSE)
type.statistic="pvalue"
if(pvalue.dist.plot==TRUE){
x1=Sys.time()
stat_val<-(-1)*log10(data_limma_fdrall_withfeats[,2])
if(output.device.type!="pdf"){
pdf("Figures/pvalue.distribution.pdf",width=10,height=8)
}
par(mfrow=c(1,2))
kstest_res<-ks.test(data_limma_fdrall_withfeats[,2],"punif",0,1)
kstest_res<-round(kstest_res$p.value,3)
hist(as.numeric(data_limma_fdrall_withfeats[,2]),main=paste("Distribution of pvalues\n","Kolmogorov-Smirnov test for uniform distribution, p=",kstest_res,sep=""),cex.main=0.75,xlab="pvalues")
simpleQQPlot = function (observedPValues,mainlab) {
plot(-log10(1:length(observedPValues)/length(observedPValues)),
-log10(sort(observedPValues)),main=mainlab,xlab=paste("Expected -log10pvalue",sep=""),ylab=paste("Observed -logpvalue",sep=""),cex.main=0.75)
abline(0, 1, col = "brown")
}
inflation <- function(pvalue) {
chisq <- qchisq(1 - pvalue, 1)
lambda <- median(chisq) / qchisq(0.5, 1)
lambda
}
inflation_res<-round(inflation(data_limma_fdrall_withfeats[,2]),2)
simpleQQPlot(data_limma_fdrall_withfeats[,2],mainlab=paste("QQplot pvalues","\np-value inflation factor: ",inflation_res," (no inflation: close to 1; bias: greater than 1)",sep=""))
x2=Sys.time()
#print(x2-x1)
if(output.device.type!="pdf"){
dev.off()
}
}
par(mfrow=c(1,1))
}else{
if(featselmethod=="rfesvm"){
diffexp_rank<-rank((1)*abs(data_limma_fdrall_withfeats[,2]))
#diffexp_rank<-rank_vec
#data_limma_fdrall_withfeats<-cbind(rank_vec,data_limma_fdrall_withfeats)
}else{
if(featselmethod=="pamr"){
diffexp_rank<-rank_vec
#data_limma_fdrall_withfeats<-cbind(rank_vec,data_limma_fdrall_withfeats[,-c(1)])
}else{
if(featselmethod=="MARS"){
diffexp_rank<-rank((-1)*data_limma_fdrall_withfeats[,2])
}else{
diffexp_rank<-rank((1)*data_limma_fdrall_withfeats[,2])
}
}
}
}
if(input.intensity.scale=="raw" && log2transform==FALSE){
fold.change.log2<-maxfoldchange
data_limma_fdrall_withfeats_2<-cbind(fold.change.log2,degree_rank,diffexp_rank,data_limma_fdrall_withfeats)
}else{
if(input.intensity.scale=="log2" || log2transform==TRUE){
fold.change.log2<-maxfoldchange
data_limma_fdrall_withfeats_2<-cbind(fold.change.log2,degree_rank,diffexp_rank,data_limma_fdrall_withfeats)
}
}
# save(data_limma_fdrall_withfeats_2,file="data_limma_fdrall_withfeats_2.Rda")
allmetabs_res<-data_limma_fdrall_withfeats_2
if(analysismode=="classification"){
if(logistic_reg==TRUE){
fname4<-paste("logitreg","results_allfeatures.txt",sep="")
}else{
if(poisson_reg==TRUE){
fname4<-paste("poissonreg","results_allfeatures.txt",sep="")
}else{
fname4<-paste(parentfeatselmethod,"results_allfeatures.txt",sep="")
}
}
fname4<-paste("Tables/",fname4,sep="")
if(is.na(names_with_mz_time)==FALSE){
group_means1<-merge(group_means,group_sd,by=c("mz","time"))
allmetabs_res_temp<-merge(group_means1,allmetabs_res,by=c("mz","time"))
allmetabs_res_withnames<-merge(names_with_mz_time,allmetabs_res_temp,by=c("mz","time"))
# allmetabs_res_withnames<-merge(diff_degree_measure[,c("mz","time","DiffRank")],allmetabs_res_withnames,by=c("mz","time"))
#allmetabs_res_withnames<-cbind(degree_rank,diffexp_rank,allmetabs_res_withnames)
# allmetabs_res_withnames<-allmetabs_res_withnames[,c("DiffRank")]
# save(allmetabs_res_withnames,file="allmetabs_res_withnames.Rda")
# allmetabs_res_withnames<-allmetabs_res_withnames[order(allmetabs_res_withnames$mz,allmetabs_res_withnames$time),]
allmetabs_res_withnames<-allmetabs_res_withnames[order(as.numeric(as.character(allmetabs_res_withnames$mz)),as.numeric(as.character(allmetabs_res_withnames$time))),]
if(length(check_names)>0){
rem_col_ind1<-grep(colnames(allmetabs_res_withnames),pattern=c("mz"))
rem_col_ind2<-grep(colnames(allmetabs_res_withnames),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(allmetabs_res_withnames[,-c(rem_col_ind)], file=fname4,sep="\t",row.names=FALSE)
}else{
write.table(allmetabs_res_withnames, file=fname4,sep="\t",row.names=FALSE)
}
#rm(data_allinf_withfeats_withnames)
#}
}else{
group_means1<-merge(group_means,group_sd,by=c("mz","time"))
allmetabs_res_temp<-merge(group_means1,allmetabs_res,by=c("mz","time"))
#allmetabs_res_temp<-merge(group_means,allmetabs_res,by=c("mz","time"))
# allmetabs_res_temp<-cbind(degree_rank,diffexp_rank,allmetabs_res_temp)
Name<-paste(allmetabs_res_temp$mz,allmetabs_res_temp$time,sep="_")
allmetabs_res_withnames<-cbind(Name,allmetabs_res_temp)
allmetabs_res_withnames<-as.data.frame(allmetabs_res_withnames)
# allmetabs_res_withnames<-allmetabs_res_withnames[order(allmetabs_res_withnames$mz,allmetabs_res_withnames$time),]
allmetabs_res_withnames<-allmetabs_res_withnames[order(as.numeric(as.character(allmetabs_res_withnames$mz)),as.numeric(as.character(allmetabs_res_withnames$time))),]
write.table(allmetabs_res_withnames,file=fname4,sep="\t",row.names=FALSE)
}
rm(allmetabs_res_temp)
}else{
}
#rm(allmetabs_res)
if(length(goodip)>=1){
# data_limma_fdrall_withfeats_2<-data_limma_fdrall_withfeats_2[goodip,]
#data_limma_fdrall_withfeats_2<-as.data.frame(data_limma_fdrall_withfeats_2)
# save(allmetabs_res_withnames,goodip,file="allmetabs_res_withnames.Rda")
allmetabs_res_withnames<-allmetabs_res_withnames[order(as.numeric(as.character(allmetabs_res_withnames$mz)),as.numeric(as.character(allmetabs_res_withnames$time))),]
goodfeats<-as.data.frame(allmetabs_res_withnames[goodip,]) #data_limma_fdrall_withfeats_2)
goodfeats_allfields<-goodfeats
# write.table(allmetabs_res_withnames,file=fname4,sep="\t",row.names=FALSE)
if(logistic_reg==TRUE){
fname4<-paste("logitreg","results_selectedfeatures.txt",sep="")
}else{
if(poisson_reg==TRUE){
fname4<-paste("poissonreg","results_selectedfeatures.txt",sep="")
}else{
fname4<-paste(featselmethod,"results_selectedfeatures.txt",sep="")
}
}
#fname4<-paste("Tables/",fname4,sep="")
write.table(goodfeats,file=fname4,sep="\t",row.names=FALSE)
if(length(rocfeatlist)>length(goodip)){
rocfeatlist<-rocfeatlist[-which(rocfeatlist>length(goodip))] #seq(1,(length(goodip)))
numselect<-length(goodip)
#rocfeatlist<-rocfeatlist+1
}else{
numselect<-length(rocfeatlist)
}
}
}else{
#analysismode=="regression"
if(featselmethod=="lmreg" | featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma1way" | featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="limma1wayrepeat" | featselmethod=="limma2wayrepeat" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
diffexp_rank<-rank(data_limma_fdrall_withfeats[,1]) #order(data_limma_fdrall_withfeats[,2],decreasing=FALSE)
}else{
if(featselmethod=="rfesvm"){
diffexp_rank<-rank_vec
data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats
}else{
if(featselmethod=="pamr"){
diffexp_rank<-rank_vec
# data_limma_fdrall_withfeats<-cbind(rank_vec,data_limma_fdrall_withfeats)
}else{
if(featselmethod=="MARS"){
diffexp_rank<-rank((-1)*data_limma_fdrall_withfeats[,1])
}else{
diffexp_rank<-rank((1)*data_limma_fdrall_withfeats[,2])
}
}
}
}
#save(goodfeats,diffexp_rank,data_limma_fdrall_withfeats,file="t3.Rda")
data_limma_fdrall_withfeats_2<-cbind(diffexp_rank,data_limma_fdrall_withfeats)
# fname4<-paste(featselmethod,"_sigfeats.txt",sep="")
if(logistic_reg==TRUE){
fname4<-paste("logitreg","results_allfeatures.txt",sep="")
}else{
if(poisson_reg==TRUE){
fname4<-paste("poissonreg","results_allfeatures.txt",sep="")
}else{
fname4<-paste(parentfeatselmethod,"results_allfeatures.txt",sep="")
}
}
fname4<-paste("Tables/",fname4,sep="")
allmetabs_res<-data_limma_fdrall_withfeats_2
if(is.na(names_with_mz_time)==FALSE){
allmetabs_res_withnames<-merge(names_with_mz_time,data_limma_fdrall_withfeats_2,by=c("mz","time"))
# allmetabs_res_withnames<-cbind(degree_rank,diffexp_rank,allmetabs_res_withnames)
allmetabs_res_withnames<-allmetabs_res_withnames[order(as.numeric(as.character(allmetabs_res_withnames$mz)),as.numeric(as.character(allmetabs_res_withnames$time))),]
# allmetabs_res_withnames<-allmetabs_res_withnames[order(allmetabs_res_withnames$mz,allmetabs_res_withnames$time),]
#write.table(allmetabs_res_withnames[,-c("mz","time")], file=fname4,sep="\t",row.names=FALSE)
# save(allmetabs_res_withnames,file="allmetabs_res_withnames.Rda")
#rem_col_ind<-grep(colnames(allmetabs_res_withnames),pattern=c("mz","time"))
if(length(check_names)>0){
rem_col_ind1<-grep(colnames(allmetabs_res_withnames),pattern=c("mz"))
rem_col_ind2<-grep(colnames(allmetabs_res_withnames),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(allmetabs_res_withnames[,-c(rem_col_ind)], file=fname4,sep="\t",row.names=FALSE)
}else{
write.table(allmetabs_res_withnames, file=fname4,sep="\t",row.names=FALSE)
}
# rm(data_allinf_withfeats_withnames)
}else{
# allmetabs_res_temp<-cbind(degree_rank,diffexp_rank,allmetabs_res)
allmetabs_res_withnames<-allmetabs_res
write.table(allmetabs_res,file=fname4,sep="\t",row.names=FALSE)
}
goodfeats<-allmetabs_res_withnames[goodip,] #data_limma_fdrall_withfeats_2[goodip,] #[sel.diffdrthresh==TRUE,]
# save(allmetabs_res_withnames,goodip,file="allmetabs_res_withnames.Rda")
goodfeats<-as.data.frame(allmetabs_res_withnames[goodip,]) #data_limma_fdrall_withfeats_2)
goodfeats_allfields<-goodfeats
if(logistic_reg==TRUE){
fname4<-paste("logitreg","results_selectedfeatures.txt",sep="")
}else{
if(poisson_reg==TRUE){
fname4<-paste("poissonreg","results_selectedfeatures.txt",sep="")
}else{
fname4<-paste(featselmethod,"results_selectedfeatures.txt",sep="")
}
}
# fname4<-paste("Tables/",fname4,sep="")
write.table(goodfeats,file=fname4,sep="\t",row.names=FALSE)
fname4<-paste("Tables/",parentfeatselmethod,"results_allfeatures.txt",sep="")
#allmetabs_res<-goodfeats #data_limma_fdrall_withfeats_2
}
}
# save(goodfeats,file="goodfeats455.Rda")
if(length(goodip)>1){
goodfeats_by_DICErank<-{}
if(analysismode=="classification"){
if(featselmethod=="lmreg" | featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma1way" | featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="limma1wayrepeat" | featselmethod=="limma2wayrepeat" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg")
{
goodfeats<-goodfeats[order(goodfeats$diffexp_rank,decreasing=FALSE),]
if(length(goodip)>1){
# goodfeats_by_DICErank<-data_limma_fdrall_withfeats_2[r1$top.list,]
}
}else{
goodfeats<-goodfeats[order(goodfeats$diffexp_rank,decreasing=FALSE),]
if(length(goodip)>1){
#goodfeats_by_DICErank<-data_limma_fdrall_withfeats_2[r1$top.list,]
}
}
cnamesd1<-colnames(goodfeats)
time_ind<-which(cnamesd1=="time")
mz_ind<-which(cnamesd1=="mz")
goodfeats_name<-goodfeats$Name
goodfeats_temp<-cbind(goodfeats[,mz_ind],goodfeats[,time_ind],goodfeats[,which(colnames(goodfeats)%in%sample_names_vec)]) #goodfeats[,-c(1:time_ind)])
# save(goodfeats_temp,file="goodfeats_temp.Rda")
cnames_temp<-colnames(goodfeats_temp)
cnames_temp<-c("mz","time",cnames_temp[-c(1:2)])
colnames(goodfeats_temp)<-cnames_temp
goodfeats<-goodfeats_temp
}else{
if(analysismode=="regression"){
# save(goodfeats,file="goodfeats455.Rda")
try(dev.off(),silent=TRUE)
if(featselmethod=="lmreg" | featselmethod=="pls" | featselmethod=="spls" | featselmethod=="o1pls" | featselmethod=="RF" | featselmethod=="MARS"){
####savegoodfeats,file="goodfeats.Rda")
goodfeats<-goodfeats[order(goodfeats$diffexp_rank,decreasing=FALSE),]
}else{
#goodfeats<-goodfeats[order(goodfeats[,1],decreasing=TRUE),]
}
goodfeats<-as.data.frame(goodfeats)
cnamesd1<-colnames(goodfeats)
time_ind<-which(cnamesd1=="time")
mz_ind<-which(cnamesd1=="mz")
goodfeats_name<-goodfeats$Name
goodfeats_temp<-cbind(goodfeats[,mz_ind],goodfeats[,time_ind],goodfeats[,which(colnames(goodfeats)%in%sample_names_vec)]) #goodfeats[,-c(1:time_ind)])
#save(goodfeats_temp,goodfeats,goodfeats_name,file="goodfeats_temp.Rda")
cnames_temp<-colnames(goodfeats_temp)
cnames_temp<-c("mz","time",cnames_temp[-c(1:2)])
colnames(goodfeats_temp)<-cnames_temp
goodfeats<-goodfeats_temp
rm(goodfeats_temp)
# #save(goodfeats,goodfeats_temp,mz_ind,time_ind,classlabels_orig,analysistype,alphabetical.order,col_vec,file="pca1.Rda")
num_sig_feats<-nrow(goodfeats)
if(num_sig_feats>=3 & pca.stage2.eval==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAplots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "PCA using selected features after feature selection")
text(5, 7, "The figures include: ")
text(5, 6, "a. pairwise PC score plots ")
text(5, 5, "b. scores for individual samples on each PC")
text(5, 4, "c. Lineplots using PC scores for data with repeated measurements")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
rownames(goodfeats)<-goodfeats$Name
get_pcascoredistplots(X=goodfeats,Y=classlabels_orig,feature_table_file=NA,parentoutput_dir=getwd(),
class_labels_file=NA,sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,
plots.res=300, alphacol=0.3,col_vec=col_vec,pairedanalysis=pairedanalysis,
pca.cex.val=pca.cex.val,legendlocation=legendlocation,pca.ellipse=pca.ellipse,
ellipse.conf.level=ellipse.conf.level,filename="selected",paireddesign=paireddesign,
lineplot.col.opt=lineplot.col.opt,lineplot.lty.option=lineplot.lty.option,
timeseries.lineplots=timeseries.lineplots,pcacenter=pcacenter,pcascale=pcascale,
alphabetical.order=alphabetical.order,study.design=analysistype,lme.modeltype=modeltype) #,silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
}
}
class_label_A<-class_labels_levels[1]
class_label_B<-class_labels_levels[2]
#goodfeats_allfields<-{}
if(length(which(sel.diffdrthresh==TRUE))>1){
goodfeats<-as.data.frame(goodfeats)
mzvec<-goodfeats$mz
timevec<-goodfeats$time
if(length(mzvec)>4){
max_per_row<-3
par_rows<-ceiling(9/max_per_row)
}else{
max_per_row<-length(mzvec)
par_rows<-1
}
goodfeats<-as.data.frame(goodfeats)
cnamesd1<-colnames(goodfeats)
time_ind<-which(cnamesd1=="time")
# goodfeats_allfields<-as.data.frame(goodfeats)
file_ind<-1
mz_ind<-which(cnamesd1=="mz")
goodfeats_temp<-cbind(goodfeats[,mz_ind],goodfeats[,time_ind],goodfeats[,-c(1:time_ind)])
cnames_temp<-colnames(goodfeats_temp)
cnames_temp[1]<-"mz"
cnames_temp[2]<-"time"
colnames(goodfeats_temp)<-cnames_temp
#if(length(class_labels_levels)<10)
if(analysismode=="classification" && nrow(goodfeats)>=1 && length(goodip)>=1)
{
if(is.na(rocclassifier)==FALSE){
if(length(class_labels_levels)==2){
#print("Generating ROC curve using top features on training set")
# save(kfold,goodfeats_temp,classlabels,svm_kernel,pred.eval.method,match_class_dist,rocfeatlist,rocfeatincrement,file="rocdebug.Rda")
# roc_res<-try(get_roc(dataA=goodfeats_temp,classlabels=classlabels,classifier=rocclassifier,kname="radial",
# rocfeatlist=rocfeatlist,rocfeatincrement=rocfeatincrement,mainlabel="Training set ROC curve using top features"),silent=TRUE)
if(output.device.type=="pdf"){
roc_newdevice=FALSE
}else{
roc_newdevice=TRUE
}
roc_res<-try(get_roc(dataA=goodfeats_temp,classlabels=classlabels,classifier=rocclassifier,kname="radial",
rocfeatlist=rocfeatlist,rocfeatincrement=rocfeatincrement,
mainlabel="Training set ROC curve using top features",newdevice=roc_newdevice),silent=TRUE)
# print(roc_res)
}
subdata=t(goodfeats[,-c(1:time_ind)])
# save(kfold,subdata,goodfeats,classlabels,svm_kernel,pred.eval.method,match_class_dist,file="svmdebug.Rda")
svm_model<-try(svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
#svm_model<-try(svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
#svm_model<-try(svm(x=subdata,y=(classlabels),type="nu-classification",cross=kfold,kernel=svm_kernel),silent=TRUE)
#svm_model<-try(svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
classlabels<-as.data.frame(classlabels)
if(is(svm_model,"try-error")){
print("SVM could not be performed. Please try lowering the kfold or set kfold=total number of samples for Leave-one-out CV. Skipping to the next step.")
print(svm_model)
termA<-(-1)
pred_acc<-termA
permut_acc<-(-1)
}else{
pred_acc<-svm_model$avg_acc
#print("Accuracy is:")
#print(pred_acc)
if(is.na(cv.perm.count)==FALSE){
print("Calculating permuted CV accuracy")
permut_acc<-{}
#permut_acc<-lapply(1:100,function(j){
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterEvalQ(cl,library(e1071))
clusterEvalQ(cl,library(pROC))
clusterEvalQ(cl,library(ROCR))
clusterEvalQ(cl,library(CMA))
clusterExport(cl,"svm_cv",envir = .GlobalEnv)
permut_acc<-parLapply(cl,1:cv.perm.count,function(p1){
rand_order<-sample(1:dim(classlabels)[1],size=dim(classlabels)[1])
classlabels_permut<-classlabels[rand_order,]
classlabels_permut<-as.data.frame(classlabels_permut)
svm_permut_res<-try(svm_cv(v=kfold,x=subdata,y=classlabels_permut,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
#svm_permut_res<-try(svm(x=subdata,y=(classlabels_permut),type="nu-classification",cross=kfold,kernel=svm_kernel),silent=TRUE)
#svm_permut_res<-svm_cv(v=kfold,x=subdata,y=classlabels_permut,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist)
if(is(svm_permut_res,"try-error")){
cur_perm_acc<-NA
}else{
cur_perm_acc<-svm_permut_res$avg_acc #tot.accuracy #
}
return(cur_perm_acc)
})
stopCluster(cl)
permut_acc<-unlist(permut_acc)
permut_acc<-mean(permut_acc,na.rm=TRUE)
permut_acc<-round(permut_acc,2)
print("mean Permuted accuracy is:")
print(permut_acc)
}else{
permut_acc<-(-1)
}
}
}else{
termA<-(-1)
pred_acc<-termA
permut_acc<-(-1)
}
termA<-100*pred_acc
if(featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="lmreg" | featselmethod=="logitreg"
| featselmethod=="lm2wayanova" | featselmethod=="lm1wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
if(fdrmethod=="none"){
exp_fp<-(dim(data_m_fc)[1]*fdrthresh)+1
}else{
exp_fp<-(feat_sigfdrthresh[lf]*fdrthresh)+1
}
}
termB<-(dim(parent_data_m)[1]*dim(parent_data_m)[1])/(dim(data_m_fc)[1]*dim(data_m_fc)[1]*100)
res_score<-(100*(termA-permut_acc))-(feat_weight*termB*exp_fp)
res_score<-round(res_score,2)
if(lf==0)
{
best_logfc_ind<-lf
best_feats<-goodip
best_cv_res<-res_score
best_acc<-pred_acc
best_limma_res<-data_limma_fdrall_withfeats[goodip,] #[sel.diffdrthresh==TRUE,]
}else{
if(res_score>best_cv_res){
best_logfc_ind<-lf
best_feats<-goodip
best_cv_res<-res_score
best_acc<-pred_acc
best_limma_res<-data_limma_fdrall_withfeats[goodip,] #[sel.diffdrthresh==TRUE,]
}
}
pred_acc=round(pred_acc,2)
res_score_vec[lf]<-res_score
if(pred.eval.method=="CV"){
feat_sigfdrthresh_cv[lf]<-pred_acc
feat_sigfdrthresh_permut[lf]<-permut_acc
acc_message=(paste(kfold,"-fold CV accuracy: ", pred_acc,sep=""))
if(is.na(cv.perm.count)==FALSE){
perm_acc_message=(paste("Permuted ",kfold,"-fold CV accuracy: ", permut_acc,sep=""))
}
}else{
if(pred.eval.method=="AUC"){
feat_sigfdrthresh_cv[lf]<-pred_acc
feat_sigfdrthresh_permut[lf]<-permut_acc
acc_message=(paste("ROC area under the curve (AUC) is : ", pred_acc,sep=""))
if(is.na(cv.perm.count)==FALSE){
perm_acc_message=(paste("Permuted ROC area under the curve (AUC) is : ", permut_acc,sep=""))
}
}else{
if(pred.eval.method=="BER"){
feat_sigfdrthresh_cv[lf]<-pred_acc
feat_sigfdrthresh_permut[lf]<-permut_acc
acc_message=(paste(kfold, "-fold CV balanced accuracy rate is: ", pred_acc,sep=""))
if(is.na(cv.perm.count)==FALSE){
perm_acc_message=(paste("Permuted balanced accuracy rate is : ", permut_acc,sep=""))
}
}
}
}
# print("########################################")
# cat("", sep="\n\n")
#print(paste("Summary for method: ",featselmethod,sep=""))
#print(paste("Relative standard deviation (RSD) threshold: ", log2.fold.change.thresh," %",sep=""))
cat("Analysis summary:",sep="\n")
if(is.na(factor1_msg)==FALSE){
cat(factor1_msg,sep="\n")
}
if(is.na(factor2_msg)==FALSE){
cat(factor2_msg,sep="\n")
}
cat(paste("Number of samples: ", dim(data_m_fc)[2],sep=""),sep="\n")
cat(paste("Number of features in the original dataset: ", num_features_total,sep=""),sep="\n")
# cat(rsd_filt_msg,sep="\n")
cat(paste("Number of features left after preprocessing: ", dim(data_m_fc)[1],sep=""),sep="\n")
cat(paste("Number of selected features: ", length(goodip),sep=""),sep="\n")
if(is.na(rocclassifier)==FALSE){
cat(acc_message,sep="\n")
if(is.na(cv.perm.count)==FALSE){
cat(perm_acc_message,sep="\n")
}
}
# cat("", sep="\n\n")
#print("ROC done")
best_subset<-{}
best_acc<-0
xvec<-{}
yvec<-{}
#for(i in 2:max_varsel)
if(is.na(rocclassifier)==FALSE){
if(nrow(goodfeats_temp)<length(rocfeatlist)){
max_cv_varsel<-1:nrow(goodfeats_temp)
}else{
max_cv_varsel<-rocfeatlist #nrow(goodfeats_temp)
}
cv_yvec<-lapply(max_cv_varsel,function(i)
{
subdata<-t(goodfeats_temp[1:i,-c(1:2)])
svm_model<-try(svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
#svm_model<-svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist)
if(is(svm_model,"try-error")){
res1<-NA
}else{
res1<-svm_model$avg_acc
}
return(res1)
})
xvec<-max_cv_varsel
yvec<-unlist(cv_yvec)
if(pred.eval.method=="CV"){
ylab_text=paste(pred.eval.method," accuracy (%)",sep="")
}else{
if(pred.eval.method=="BER"){
ylab_text=paste("Balanced accuracy"," (%)",sep="")
}else{
ylab_text=paste("AUC"," (%)",sep="")
}
}
if(length(yvec)>0){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/kfoldCV_forward_selection.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}else{
# temp_filename_1<-"Figures/kfoldCV_forward_selection.pdf"
#pdf(temp_filename_1)
}
try(plot(x=xvec,y=yvec,main="k-fold CV classification accuracy based on forward selection of top features",xlab="Feature index",ylab=ylab_text,type="b",col="#0072B2",cex.main=0.7),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}else{
# try(dev.off(),silent=TRUE)
}
cv_mat<-cbind(xvec,yvec)
colnames(cv_mat)<-c("Feature Index",ylab_text)
write.table(cv_mat,file="Tables/kfold_cv_mat.txt",sep="\t")
}
}
if(pairedanalysis==TRUE)
{
if(featselmethod=="pls" | featselmethod=="spls"){
classlabels_sub<-classlabels_sub[,-c(1)]
classlabels_temp<-cbind(classlabels_sub)
}else{
classlabels_sub<-classlabels_sub[,-c(1)]
classlabels_temp<-cbind(classlabels_sub)
}
}else{
classlabels_temp<-cbind(classlabels_sub,classlabels)
}
num_sig_feats<-nrow(goodfeats)
if(num_sig_feats<3){
pca.stage2.eval=FALSE
}
if(pca.stage2.eval==TRUE)
{
pca_comp<-min(10,dim(X)[2])
#dev.off()
# print("plotting")
#pdf("sig_features_evaluation.pdf", height=2000,width=2000)
library(pcaMethods)
p1<-pcaMethods::pca(X,method="rnipals",center=TRUE,scale="uv",cv="q2",nPcs=pca_comp)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAdiagnostics_selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
p2<-plot(p1,col=c("darkgrey","grey"),main="PCA diagnostics after variable selection")
print(p2)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
#dev.off()
}
classlabels_orig<-classlabels_orig_parent
if(pairedanalysis==TRUE){
classlabels_orig<-classlabels_orig[,-c(2)]
}else{
if(featselmethod=="lmreg" || featselmethod=="logitreg" || featselmethod=="poissonreg"){
classlabels_orig<-classlabels_orig[,c(1:2)]
classlabels_orig<-as.data.frame(classlabels_orig)
}
}
classlabels_orig_wgcna<-classlabels_orig
goodfeats_temp<-cbind(goodfeats[,mz_ind],goodfeats[,time_ind],goodfeats[,-c(1:time_ind)])
cnames_temp<-colnames(goodfeats_temp)
cnames_temp<-c("mz","time",cnames_temp[-c(1:2)])
colnames(goodfeats_temp)<-cnames_temp
goodfeats_temp_with_names<-merge(names_with_mz_time,goodfeats_temp,by=c("mz","time"))
goodfeats_temp_with_names<-goodfeats_temp_with_names[match(paste(goodfeats_temp$mz,"_",goodfeats_temp$time,sep=""),paste(goodfeats_temp_with_names$mz,"_",goodfeats_temp_with_names$time,sep="")),]
# save(goodfeats,goodfeats_temp,names_with_mz_time,goodfeats_temp_with_names,file="goodfeats_pca.Rda")
rownames(goodfeats_temp)<-goodfeats_temp_with_names$Name
if(num_sig_feats>=3 & pca.stage2.eval==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAplots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "PCA using selected features after feature selection")
text(5, 7, "The figures include: ")
text(5, 6, "a. pairwise PC score plots ")
text(5, 5, "b. scores for individual samples on each PC")
text(5, 4, "c. Lineplots using PC scores for data with repeated measurements")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
get_pcascoredistplots(X=goodfeats_temp,Y=classlabels_orig_pca,
feature_table_file=NA,parentoutput_dir=getwd(),class_labels_file=NA,
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,col_vec=col_vec,pairedanalysis=pairedanalysis,pca.cex.val=pca.cex.val,legendlocation=legendlocation,pca.ellipse=pca.ellipse,ellipse.conf.level=ellipse.conf.level,filename="selected",paireddesign=paireddesign,
lineplot.col.opt=lineplot.col.opt,lineplot.lty.option=lineplot.lty.option,timeseries.lineplots=timeseries.lineplots,pcacenter=pcacenter,pcascale=pcascale,alphabetical.order=alphabetical.order,study.design=analysistype,lme.modeltype=modeltype) #,silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
####savelist=ls(),file="timeseries.Rda")
#if(FALSE)
{
#if(FALSE)
{
if(log2transform==TRUE || input.intensity.scale=="log2"){
if(znormtransform==TRUE){
ylab_text_2="scale normalized"
}else{
if(quantile_norm==TRUE){
ylab_text_2="quantile normalized"
}else{
if(eigenms_norm==TRUE){
ylab_text_2="EigenMS normalized"
}else{
if(sva_norm==TRUE){
ylab_text_2="SVA normalized"
}else{
ylab_text_2=""
}
}
}
}
ylab_text=paste("log2 intensity ",ylab_text_2,sep="")
}else{
if(znormtransform==TRUE){
ylab_text_2="scale normalized"
}else{
if(quantile_norm==TRUE){
ylab_text_2="quantile normalized"
}else{
#ylab_text_2=""
if(medcenter==TRUE){
ylab_text_2="median centered"
}else{
if(lowess_norm==TRUE){
ylab_text_2="LOWESS normalized"
}else{
if(rangescaling==TRUE){
ylab_text_2="range scaling normalized"
}else{
if(paretoscaling==TRUE){
ylab_text_2="pareto scaling normalized"
}else{
if(mstus==TRUE){
ylab_text_2="MSTUS normalized"
}else{
if(vsn_norm==TRUE){
ylab_text_2="VSN normalized"
}else{
ylab_text_2=""
}
}
}
}
}
}
}
}
ylab_text=paste("Intensity ",ylab_text_2,sep="")
}
}
#ylab_text_2=""
#ylab_text=paste("Abundance",ylab_text_2,sep="")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
if(pairedanalysis==TRUE || timeseries.lineplots==TRUE)
{
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Lineplots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
# par(mfrow=c(1,1))
par(mfrow=c(1,1),family="sans",cex=cex.plots)
}
#plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
#text(5, 8, "Lineplots using selected features")
# text(5, 7, "The error bars represent the 95% \nconfidence interval in each group (or timepoint)")
# save(goodfeats_temp,classlabels_orig,lineplot.col.opt,col_vec,pairedanalysis,
# pca.cex.val,pca.ellipse,ellipse.conf.level,legendlocation,ylab_text,error.bar,
# cex.plots,lineplot.lty.option,timeseries.lineplots,analysistype,goodfeats_name,alphabetical.order,
# multiple.figures.perpanel,plot.ylab_text,plots.height,plots.width,file="debuga_lineplots.Rda")
#try(
var_sum_list<-get_lineplots(X=goodfeats_temp,Y=classlabels_orig,feature_table_file=NA,
parentoutput_dir=getwd(),class_labels_file=NA,
lineplot.col.opt=lineplot.col.opt,alphacol=alphacol,col_vec=col_vec,
pairedanalysis=pairedanalysis,point.cex.val=pca.cex.val,
legendlocation=legendlocation,pca.ellipse=pca.ellipse,
ellipse.conf.level=ellipse.conf.level,filename="selected",
ylabel=plot.ylab_text,error.bar=error.bar,cex.plots=cex.plots,
lineplot.lty.option=lineplot.lty.option,timeseries.lineplots=timeseries.lineplots,
name=goodfeats_name,study.design=analysistype,
alphabetical.order=alphabetical.order,multiple.figures.perpanel=multiple.figures.perpanel,
plot.height = plots.height,plot.width=plots.width)
#,silent=TRUE) #,silent=TRUE)
#save(var_sum_list,file="var_sum_list.Rda")
var_sum_mat<-{}
# for(i in 1:length(var_sum_list))
#{
# var_sum_mat<-rbind(var_sum_mat,var_sum_list[[i]]$df_write_temp)
#}
# var_sum_mat<-ldply(var_sum_list,rbind)
# write.table(var_sum_mat,file="Tables/data_summary.txt",sep="\t",row.names=FALSE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
# save(goodfeats_temp,classlabels_orig,lineplot.col.opt,alphacol,col_vec,pairedanalysis,pca.cex.val,legendlocation,pca.ellipse,ellipse.conf.level,plot.ylab_text,error.bar,cex.plots,
# lineplot.lty.option,timeseries.lineplots,goodfeats_name,analysistype,alphabetical.order,multiple.figures.perpanel,plots.height,plots.width,file="var_sum.Rda")
var_sum_list<-get_data_summary(X=goodfeats_temp,Y=classlabels_orig,feature_table_file=NA,
parentoutput_dir=getwd(),class_labels_file=NA,
lineplot.col.opt=lineplot.col.opt,alphacol=alphacol,col_vec=col_vec,
pairedanalysis=pairedanalysis,point.cex.val=pca.cex.val,
legendlocation=legendlocation,pca.ellipse=pca.ellipse,
ellipse.conf.level=ellipse.conf.level,filename="selected",
ylabel=plot.ylab_text,error.bar=error.bar,cex.plots=cex.plots,
lineplot.lty.option=lineplot.lty.option,timeseries.lineplots=timeseries.lineplots,
name=goodfeats_name,study.design=analysistype,
alphabetical.order=alphabetical.order,multiple.figures.perpanel=multiple.figures.perpanel,plot.height = plots.height,plot.width=plots.width)
if(nrow(goodfeats)<1){
print(paste("No features selected for ",featselmethod,sep=""))
}
#else
{
#write.table(goodfeats_temp,file="Tables/boxplots_file.normalized.txt",sep="\t",row.names=FALSE)
goodfeats<-goodfeats[,-c(1:time_ind)]
goodfeats_raw<-data_matrix_beforescaling_rsd[goodip,]
#write.table(goodfeats_raw,file="Tables/boxplots_file.raw.txt",sep="\t",row.names=FALSE)
goodfeats_raw<-goodfeats_raw[match(paste(goodfeats_temp$mz,"_",goodfeats_temp$time,sep=""),paste(goodfeats_raw$mz,"_",goodfeats_raw$time,sep="")),]
goodfeats_name<-as.character(goodfeats_name)
# save(goodfeats_name,goodfeats_temp,classlabels_orig,output_dir,boxplot.col.opt,cex.plots,ylab_text,file="boxplotdebug.Rda")
if(pairwise.correlation.analysis==TRUE)
{
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
temp_filename_1<-"Figures/Pairwise.correlation.plots.pdf"
# pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
par(mfrow=c(1,1),family="sans",cex=cex.plots,cex.main=0.7)
# cor1<-WGCNA::cor(t(goodfeats_temp[,-c(1:2)]))
rownames(goodfeats_temp)<-goodfeats_name
#Pairwise correlations between selected features
cor1<-WGCNA::cor(t(goodfeats_temp[,-c(1:2)]),nThreads=num_nodes,method=cor.method,use = 'p')
corpval1=apply(cor1,2,function(x){corPvalueStudent(x,n=ncol(goodfeats_temp[,-c(1:2)]))})
fdr_adjust_pvalue<-try(suppressWarnings(fdrtool(as.vector(cor1[upper.tri(cor1)]),statistic="correlation",verbose=FALSE,plot=FALSE)),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
print(fdr_adjust_pvalue)
}
cor1[(abs(cor1)<abs.cor.thresh)]<-0
newnet <- cor1
newnet[upper.tri(newnet)][fdr_adjust_pvalue$qval > cor.fdrthresh] <- 0
newnet[lower.tri(newnet)] <- t(newnet)[lower.tri(newnet)]
newnet <- as.matrix(newnet)
corqval1=newnet
diag(corqval1)<-0
upperTriangle<-upper.tri(cor1, diag=F)
lowerTriangle<-lower.tri(cor1, diag=F)
corqval1[upperTriangle]<-fdr_adjust_pvalue$qval
corqval1[lowerTriangle]<-corqval1[upperTriangle]
cor1=newnet
rm(newnet)
# rownames(cor1)<-paste(goodfeats_temp[,c(1)],goodfeats_temp[,c(2)],sep="_")
# colnames(cor1)<-rownames(cor1)
#dendrogram="none",
h1<-heatmap.2(cor1,col=rev(brewer.pal(11,"RdBu")),Rowv=TRUE,Colv=TRUE,scale="none",key=TRUE, symkey=FALSE, density.info="none", trace="none",main="Pairwise correlations between selected features",cexRow = 0.5,cexCol = 0.5,cex.main=0.7)
upperTriangle<-upper.tri(cor1, diag=F) #turn into a upper triangle
cor1.upperTriangle<-cor1 #take a copy of the original cor-mat
cor1.upperTriangle[!upperTriangle]<-NA#set everything not in upper triangle o NA
correlations_melted<-na.omit(melt(cor1.upperTriangle, value.name ="correlationCoef")) #use melt to reshape the matrix into triplets, na.omit to get rid of the NA rows
colnames(correlations_melted)<-c("from", "to", "weight")
# save(correlations_melted,cor1,file="correlations_melted.Rda")
correlations_melted<-as.data.frame(correlations_melted)
correlations_melted$from<-paste("X",correlations_melted$from,sep="")
correlations_melted$to<-paste("Y",correlations_melted$to,sep="")
write.table(correlations_melted,file="Tables/pairwise.correlations.selectedfeatures.linkmatrix.txt",sep="\t",row.names=FALSE)
if(ncol(cor1)>1000){
netres<-plot_graph(correlations_melted,filename="sigfeats_top1000pairwisecor",interactive=FALSE,maxnodesperclass=1000,label.cex=network.label.cex,mtext.val="Top 1000 pairwise correlations between selected features")
}
netres<-try(plot_graph(correlations_melted,filename="sigfeats_pairwisecorrelations",interactive=FALSE,maxnodesperclass=NA,label.cex=network.label.cex,mtext.val="Pairwise correlations between selected features"),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
temp_filename_1<-"Figures/Boxplots.selectedfeats.normalized.pdf"
if(boxplot.type=="simple"){
pdf(temp_filename_1,height=plots.height,width=plots.width)
}
}
goodfeats_name<-as.character(goodfeats_name)
# save(goodfeats_name,goodfeats_temp,classlabels_orig,output_dir,boxplot.col.opt,cex.plots,ylab_text,plot.ylab_text,
# analysistype,boxplot.type,alphabetical.order,goodfeats_name,add.pvalues,add.jitter,file="boxplotdebug.Rda")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
# plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
# text(5, 8, "Boxplots of selected features using the\n normalized intensities/abundance levels",cex=1.5,font=2)
#plot.ylab_text1=paste("(Normalized) ",ylab_text,sep="")
#classlabels_paired<-cbind(as.character(classlabels[,1]),as.character(subject_inf),as.character(classlabels[,2]))
#classlabels_paired<-as.data.frame(classlabels_paired)
if(generate.boxplots==TRUE){
# print("Generating boxplots")
if(normalization.method!="none"){
plot.ylab_text1=paste("(Normalized) ",ylab_text,sep="")
if(pairedanalysis==TRUE){
#classlabels_paired<-cbind(classlabels[,1],subject_inf,classlabels[,2])
res<-get_boxplots(X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,
newdevice=FALSE,cex.plots=cex.plots,ylabel=plot.ylab_text1,name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,
alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,study.design=gsub(analysistype,pattern="repeat",replacement=""),
multiple.figures.perpanel=multiple.figures.perpanel,numnodes=num_nodes,
plot.height = plots.height,plot.width=plots.width,
filename="Figures/Boxplots.selectedfeats.normalized",alphacol = alpha.col,ggplot.type1=ggplot.type1,facet.nrow=facet.nrow)
}else{
res<-get_boxplots(X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,
newdevice=FALSE,cex.plots=cex.plots,ylabel=plot.ylab_text1,name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,
alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,study.design=analysistype,
multiple.figures.perpanel=multiple.figures.perpanel,numnodes=num_nodes,
plot.height = plots.height,plot.width=plots.width,
filename="Figures/Boxplots.selectedfeats.normalized",alphacol = alpha.col,ggplot.type1=ggplot.type1,facet.nrow=facet.nrow)
}
}else{
plot.boxplots.raw=TRUE
goodfeats_raw=goodfeats_temp
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(plot.boxplots.raw==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Boxplots.selectedfeats.raw.pdf"
if(boxplot.type=="simple"){
pdf(temp_filename_1,height=plots.height,width=plots.width)
}
}
# save(goodfeats_raw,goodfeats_temp,classlabels_raw_boxplots,classlabels_orig,
# output_dir,boxplot.col.opt,cex.plots,ylab_text,boxplot.type,ylab_text_raw,
# analysistype,multiple.figures.perpanel,alphabetical.order,goodfeats_name,plots.height,plots.width,file="boxplotrawdebug.Rda")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
par(mfrow=c(1,1),family="sans",cex=cex.plots)
#get_boxplots(X=goodfeats_raw,Y=classlabels_raw_boxplots,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,alphacol=0.3,newdevice=FALSE,cex.plots=cex.plots,ylabel=" Intensity",name=goodfeats_name,add.pvalues=add.pvalues,
# add.jitter=add.jitter,alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,study.design=analysistype)
plot.ylab_text1=paste("",ylab_text,sep="")
if(pairedanalysis==TRUE){
#classlabels_paired<-cbind(classlabels[,1],subject_inf,classlabels[,2])
get_boxplots(X=goodfeats_raw,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,
newdevice=FALSE,cex.plots=cex.plots,ylabel=ylab_text_raw,name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,
alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,
study.design=gsub(analysistype,pattern="repeat",replacement=""),multiple.figures.perpanel=multiple.figures.perpanel,numnodes=num_nodes,
plot.height = plots.height,plot.width=plots.width,
filename="Figures/Boxplots.selectedfeats.raw",alphacol = alpha.col,ggplot.type1=ggplot.type1,facet.nrow=facet.nrow)
}else{
get_boxplots(X=goodfeats_raw,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,
newdevice=FALSE,cex.plots=cex.plots,ylabel=ylab_text_raw,name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,
alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,
study.design=analysistype,multiple.figures.perpanel=multiple.figures.perpanel,numnodes=num_nodes,plot.height = plots.height,plot.width=plots.width,
filename="Figures/Boxplots.selectedfeats.raw",alphacol = alpha.col,ggplot.type1=ggplot.type1,facet.nrow=facet.nrow)
}
#try(dev.off(),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
if(FALSE)
{
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Barplots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1,bg="transparent") #, height = 5.5, width = 3)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "Barplots of selected features using the\n normalized intensities/adundance levels")
par(mfrow=c(1,1),family="sans",cex=cex.plots,pty="s")
try(get_barplots(feature_table_file,class_labels_file,X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir
,newdevice=FALSE,ylabel=ylab_text,cex.val=cex.plots,barplot.col.opt=barplot.col.opt,error.bar=error.bar),silent=TRUE)
###savelist=ls(),file="getbarplots.Rda")
if(featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="pls2wayrepeat" | featselmethod=="spls2wayrepeat" | featselmethod=="pls2way" | featselmethod=="spls2way" | featselmethod=="lm2wayanova" | featselmethod=="lm2wayanovarepeat")
{
#if(ggplot.type1==TRUE){
barplot.xaxis="Factor2"
# }else{
# }
}
get_barplots(feature_table_file,class_labels_file,X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,
newdevice=FALSE,ylabel=plot.ylab_text,cex.plots=cex.plots,barplot.col.opt=barplot.col.opt,error.bar=error.bar,
barplot.xaxis=barplot.xaxis,alphabetical.order=alphabetical.order,name=goodfeats_name,study.design=analysistype)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(FALSE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Individual_sample_plots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
# par(mfrow=c(2,2))
par(mfrow=c(1,1),family="sans",cex=cex.plots)
#try(get_individualsampleplots(feature_table_file,class_labels_file,X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,newdevice=FALSE,ylabel=ylab_text,cex.val=cex.plots,sample.col.opt=sample.col.opt),silent=TRUE)
get_individualsampleplots(feature_table_file,class_labels_file,X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,newdevice=FALSE,ylabel=ylab_text,cex.plots=cex.plots,sample.col.opt=individualsampleplot.col.opt,alphabetical.order=alphabetical.order,name=goodfeats_name)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
if(globalclustering==TRUE){
print("Performing global clustering using EM")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/GlobalclusteringEM.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
m1<-Mclust(t(data_m_fc_withfeats[,-c(1:2)]))
s1<-m1$classification #summary(m1)
EMcluster<-m1$classification
col_vec <- colorRampPalette(brewer.pal(10, "RdBu"))(length(levels(as.factor(classlabels_orig[,2]))))
#col_vec<-topo.colors(length(levels(as.factor(classlabels_orig[,2])))) #patientcolors #heatmap_cols[1:length(levels(classlabels_orig[,2]))]
t1<-table(EMcluster,classlabels_orig[,2])
par(mfrow=c(1,1))
plot(t1,col=col_vec,main="EM cluster labels\n using all features",cex.axis=1,ylab="Class",xlab="Cluster number")
par(xpd=TRUE)
try(legend("bottomright",legend=levels(classlabels_orig[,2]),text.col=col_vec,pch=13,cex=0.4),silent=TRUE)
par(xpd=FALSE)
# save(m1,EMcluster,classlabels_orig,file="EMres.Rda")
t1<-cbind(EMcluster,classlabels_orig[,2])
write.table(t1,file="Tables/EM_clustering_labels_using_allfeatures.txt",sep="\t")
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
print("Performing global clustering using HCA")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/GlobalclusteringHCA.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
#if(FALSE)
{
#p1<-heatmap.2(as.matrix(data_m_fc_withfeats[,-c(1:2)]),scale="row",symkey=FALSE,col=topo.colors(n=256))
if(heatmap.col.opt=="RdBu"){
heatmap.col.opt="redblue"
}
heatmap_cols <- colorRampPalette(brewer.pal(10, "RdBu"))(256)
heatmap_cols<-rev(heatmap_cols)
if(heatmap.col.opt=="topo"){
heatmap_cols<-topo.colors(256)
heatmap_cols<-rev(heatmap_cols)
}else
{
if(heatmap.col.opt=="heat"){
heatmap_cols<-heat.colors(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="yellowblue"){
heatmap_cols<-colorRampPalette(c("yellow","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
#heatmap_cols<-blue2yellow(256) #colorRampPalette(c("yellow","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="redblue"){
heatmap_cols <- colorRampPalette(brewer.pal(10, "RdBu"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
#my_palette <- colorRampPalette(c("red", "yellow", "green"))(n = 299)
if(heatmap.col.opt=="redyellowgreen"){
heatmap_cols <- colorRampPalette(c("red", "yellow", "green"))(n = 299)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="yellowwhiteblue"){
heatmap_cols<-colorRampPalette(c("yellow2","white","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="redwhiteblue"){
heatmap_cols<-colorRampPalette(c("red","white","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
heatmap_cols <- colorRampPalette(brewer.pal(10, heatmap.col.opt))(256)
heatmap_cols<-rev(heatmap_cols)
}
}
}
}
}
}
}
#col_vec<-heatmap_cols[1:length(levels(classlabels_orig[,2]))]
c1<-WGCNA::cor(as.matrix(data_m_fc_withfeats[,-c(1:2)]),method=cor.method,use="pairwise.complete.obs") #cor(d1[,-c(1:2)])
d2<-as.dist(1-c1)
clust1<-hclust(d2)
hr <- try(hclust(as.dist(1-WGCNA::cor(t(data_m_fc_withfeats),method=cor.method,use="pairwise.complete.obs"))),silent=TRUE) #metabolites
#hc <- try(hclust(as.dist(1-WGCNA::cor(data_m,method=cor.method,use="pairwise.complete.obs"))),silent=TRUE) #samples
h73<-heatmap.2(as.matrix(data_m_fc_withfeats[,-c(1:2)]), Rowv=as.dendrogram(hr), Colv=as.dendrogram(clust1),
col=heatmap_cols, scale="row",key=TRUE, symkey=FALSE, density.info="none", trace="none",
cexRow=1, cexCol=1,xlab="",ylab="", main="Global clustering\n using all features",
ColSideColors=patientcolors,labRow = FALSE, labCol = FALSE)
# par(xpd=TRUE)
#legend("bottomleft",legend=levels(classlabels_orig[,2]),text.col=unique(patientcolors),pch=13,cex=0.4)
#par(xpd=FALSE)
clust_res<-cutreeDynamic(distM=as.matrix(d2),dendro=clust1,cutHeight = 0.95,minClusterSize = 2,deepSplit = 4,verbose = FALSE)
#mycl_samples <- cutree(clust1, h=max(clust1$height)/2)
HCAcluster<-clust_res
c2<-cbind(clust1$labels,HCAcluster)
rownames(c2)<-c2[,1]
c2<-as.data.frame(c2)
t1<-table(HCAcluster,classlabels_orig[,2])
plot(t1,col=col_vec,main="HCA (Cutree Dynamic) cluster labels\n using all features",cex.axis=1,ylab="Class",xlab="Cluster number")
par(xpd=TRUE)
try(legend("bottomright",legend=levels(classlabels_orig[,2]),text.col=col_vec,pch=13,cex=0.4),silent=TRUE)
par(xpd=FALSE)
t1<-cbind(HCAcluster,classlabels_orig[,2])
write.table(t1,file="Tables/HCA_clustering_labels_using_allfeatures.txt",sep="\t")
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
#dev.off()
}
else
{
#goodfeats_allfields<-as.data.frame(goodfeats)
goodfeats<-goodfeats[,-c(1:time_ind)]
}
}
if(length(goodip)>0){
try(dev.off(),silent=TRUE)
}
}
else{
try(dev.off(),silent=TRUE)
break;
}
if(analysismode=="classification" & WGCNAmodules==TRUE){
classlabels_temp<-classlabels_orig_wgcna #cbind(classlabels_sub[,1],classlabels)
#print(classlabels_temp)
data_temp<-data_matrix_beforescaling[,-c(1:2)]
cl<-makeCluster(num_nodes)
#clusterExport(cl,"do_rsd")
#feat_rsds<-parApply(cl,data_temp,1,do_rsd)
#rm(data_temp)
#feat_rsds<-abs(feat_rsds) #round(max_rsd,2)
#print(summary(feat_rsds))
#if(length(which(feat_rsds>0))>0)
{
X<-data_m_fc_withfeats #data_matrix[which(feat_rsds>=wgcnarsdthresh),]
# print(head(X))
# print(dim(X))
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/WGCNA_preservation_plot.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
# #save(X,classlabels_temp,data_m_fc_withfeats,goodip,file="wgcna.Rda")
print("Performing WGCNA: generating preservation plot")
#preservationres<-try(do_wgcna(X=X,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)]),silent=TRUE)
#pres<-try(do_wgcna(X=X,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)]),silent=TRUE)
pres<-try(do_wgcna(X=X,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)]),silent=TRUE)
#pres<-do_wgcna(X=X,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)]) #,silent=TRUE)
if(is(pres,"try-error")){
print("WGCNA could not be performed. Error: ")
print(pres)
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
#print(lf)
#print("next iteration")
#dev.off()
}
setwd(parentoutput_dir)
summary_res<-cbind(log2.fold.change.thresh_list,feat_eval,feat_sigfdrthresh,feat_sigfdrthresh_cv,feat_sigfdrthresh_permut,res_score_vec)
if(fdrmethod=="none"){
exp_fp<-round(fdrthresh*feat_eval)
}else{
exp_fp<-round(fdrthresh*feat_sigfdrthresh)
}
rank_num<-order(summary_res[,5],decreasing=TRUE)
##save(allmetabs_res,file="allmetabs_res.Rda")
if(featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="lmreg" | featselmethod=="logitreg"
| featselmethod=="lm2wayanova" | featselmethod=="lm1wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="limma1wayrepeat" | featselmethod=="lmregrepeat")
{
summary_res<-cbind(summary_res,exp_fp)
#print("HERE13134")
type.statistic="pvalue"
if(length(allmetabs_res)>0){
#stat_val<-(-1)*log10(allmetabs_res[,4])
stat_val<-allmetabs_res[,4]
}
colnames(summary_res)<-c("RSD.thresh","Number of features left after RSD filtering","Number of features selected",paste(pred.eval.method,"-accuracy",sep=""),paste(pred.eval.method," permuted accuracy",sep=""),"Score","Expected_False_Positives")
}else{
#exp_fp<-round(fdrthresh*feat_sigfdrthresh)
#if(featselmethod=="MARS" | featselmethod=="RF" | featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls"){
exp_fp<-rep(NA,dim(summary_res)[1])
#}
# print("HERE13135")
if(length(allmetabs_res)>0){
stat_val<-(allmetabs_res[,4])
}
type.statistic="other"
summary_res<-cbind(summary_res,exp_fp)
colnames(summary_res)<-c("RSD.thresh","Number of features left after RSD filtering","Number of features selected",paste(pred.eval.method,"-accuracy",sep=""),paste(pred.eval.method," permuted accuracy",sep=""),"Score","Expected_False_Positives")
}
featselmethod<-parentfeatselmethod
file_name<-paste(parentoutput_dir,"/Results_summary_",featselmethod,".txt",sep="")
write.table(summary_res,file=file_name,sep="\t",row.names=FALSE)
if(output.device.type=="pdf"){
try(dev.off(),silent=TRUE)
}
#print("##############Level 1: processing complete###########")
if(length(best_feats)>1)
{
mz_index<-best_feats
#par(mfrow=c(1,1),family="sans",cex=cex.plots)
# get_boxplots(X=goodfeats_raw,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,alphacol=0.3,newdevice=FALSE,cex=cex.plots,ylabel="raw Intensity",name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,boxplot.type=boxplot.type)
setwd(output_dir)
###save(goodfeats,goodfeats_temp,classlabels_orig,classlabels_response_mat,output_dir,xlab_text,ylab_text,goodfeats_name,file="debugscatter.Rda")
if(analysismode=="regression"){
pdf("Figures/Scatterplots.pdf")
if(is.na(xlab_text)==TRUE){
xlab_text=""
}
# save(goodfeats_temp,classlabels_orig,output_dir,ylab_text,xlab_text,goodfeats_name,cex.plots,scatterplot.col.opt,file="scdebug.Rda")
get_scatter_plots(X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,newdevice=FALSE,ylabel=ylab_text,xlabel=xlab_text,
name=goodfeats_name,cex.plots=cex.plots,scatterplot.col.opt=scatterplot.col.opt)
dev.off()
}
setwd(parentoutput_dir)
if(analysismode=="classification"){
log2.fold.change.thresh=log2.fold.change.thresh_list[best_logfc_ind]
#print(paste("Best results found at RSD threshold ", log2.fold.change.thresh,sep=""))
#print(best_acc)
#print(paste(kfold,"-fold CV accuracy ", best_acc,sep=""))
if(FALSE){
if(pred.eval.method=="CV"){
print(paste(kfold,"-fold CV accuracy: ", best_acc,sep=""))
}else{
if(pred.eval.method=="AUC"){
print(paste("Area under the curve (AUC) is : ", best_acc,sep=""))
}
}
}
# data_m<-parent_data_m
# data_m_fc<-data_m #[which(abs(mean_groups)>log2.fold.change.thresh),]
data_m_fc_withfeats<-data_matrix[,c(1:2)]
data_m_fc_withfeats<-cbind(data_m_fc_withfeats,data_m_fc)
#when using a feature table generated by apLCMS
rnames<-paste("mzid_",seq(1,dim(data_m_fc)[1]),sep="")
#print(best_limma_res[1:3,])
goodfeats<-best_limma_res[order(best_limma_res$mz),-c(1:2)]
#goodfeats<-best_limma_res[,-c(1:2)]
goodfeats_all<-goodfeats
goodfeats<-goodfeats_all
rm(goodfeats_all)
}
try(unlink("Rplots.pdf"),silent=TRUE)
if(globalcor==TRUE){
if(length(best_feats)>2){
if(is.na(abs.cor.thresh)==FALSE){
#setwd(parentoutput_dir)
# print("##############Level 2: Metabolome wide correlation network analysis of differentially expressed metabolites###########")
#print(paste("Generating metabolome-wide ",cor.method," correlation network using RSD threshold ", log2.fold.change.thresh," results",sep=""))
#print(parentoutput_dir)
#print(output_dir)
setwd(output_dir)
data_m_fc_withfeats<-as.data.frame(data_m_fc_withfeats)
goodfeats<-as.data.frame(goodfeats)
#print(goodfeats[1:4,])
sigfeats_index<-which(data_m_fc_withfeats$mz%in%goodfeats$mz)
sigfeats<-sigfeats_index
if(globalcor==TRUE){
#outloc<-paste(parentoutput_dir,"/Allcornetworksigfeats","log2fcthresh",log2.fold.change.thresh,"/",sep="")
#outloc<-paste(parentoutput_dir,"/Stage2","/",sep="")
#dir.create(outloc)
#setwd(outloc)
#dir.create("CorrelationAnalysis")
#setwd("CorrelationAnalysis")
if(networktype=="complete"){
if(output.device.type=="pdf"){
mwan_newdevice=FALSE
}else{
mwan_newdevice=TRUE
}
#gohere
# save(data_matrix,sigfeats_index,output_dir,max.cor.num,net_node_colors,net_legend,cor.method,abs.cor.thresh,cor.fdrthresh,file="r1.Rda")
mwan_fdr<-try(do_cor(data_matrix,subindex=sigfeats_index,targetindex=NA,outloc=output_dir,networkscope="global",cor.method,abs.cor.thresh,cor.fdrthresh,
max.cor.num,net_node_colors,net_legend,newdevice=TRUE),silent=TRUE)
}else{
if(networktype=="GGM"){
mwan_fdr<-try(get_partial_cornet(data_matrix, sigfeats.index=sigfeats_index,targeted.index=NA,networkscope="global",
cor.method,abs.cor.thresh,cor.fdrthresh,outloc=output_dir,net_node_colors,net_legend,newdevice=TRUE),silent=TRUE)
}else{
print("Invalid option. Please use complete or GGM.")
}
}
#print("##############Level 2: processing complete###########")
}else{
#print("##############Skipping Level 2: global correlation analysis###########")
}
#temp_data_m<-cbind(allmetabs_res[,c("mz","time")],stat_val)
if(analysismode=="classification"){
# classlabels_temp<-cbind(classlabels_sub[,1],classlabels)
#do_wgcna(X=data_matrix,Y=classlabels,sigfeats.index=sigfeats_index)
}
#print("##############Level 3: processing complete###########")
#print("#########################")
}
}
else{
cat(paste("Can not perform network analysis. Too few metabolites.",sep=""),sep="\n")
}
}
}
if(FALSE){
if(length(featselmethod)>1){
abs.cor.thresh=NA
globalcor=FALSE
}
}
###save(stat_val,allmetabs_res,check_names,metab_annot,kegg_species_code,database,reference_set,type.statistic,file="fcsdebug.Rda")
setwd(output_dir)
unlink("fdrtoolB.pdf",force=TRUE)
if(is.na(target.data.annot)==FALSE){
#dir.create("NetworkAnalysis")
#setwd("NetworkAnalysis")
colnames(target.data.annot)<-c("mz","time","KEGGID")
if(length(check_names)<1){
allmetabs_res<-cbind(stat_val,allmetabs_res)
metab_data<-merge(allmetabs_res,target.data.annot,by=c("mz","time"))
dup.feature.check=TRUE
}else{
allmetabs_res_withnames<-cbind(stat_val,allmetabs_res_withnames)
metab_data<-merge(allmetabs_res_withnames,target.data.annot,by=c("Name"))
dup.feature.check=FALSE
}
###save(stat_val,allmetabs_res,check_names,metab_annot,kegg_species_code,database,metab_data,reference_set,type.statistic,file="fcsdebug.Rda")
if(length(check_names)<1){
metab_data<-metab_data[,c("KEGGID","stat_val","mz","time")]
colnames(metab_data)<-c("KEGGID","Statistic","mz","time")
}else{
metab_data<-metab_data[,c("KEGGID","stat_val")]
colnames(metab_data)<-c("KEGGID","Statistic")
}
# ##save(metab_annot,kegg_species_code,database,metab_data,reference_set,type.statistic,file="fcsdebug.Rda")
#metab_data: KEGGID, Statistic
fcs_res<-get_fcs(kegg_species_code=kegg_species_code,database=database,target.data=metab_data,target.data.annot=target.data.annot,reference_set=reference_set,type.statistic=type.statistic,fcs.min.hits=fcs.min.hits)
###save(fcs_res,file="fcs_res.Rda")
write.table(fcs_res,file="Tables/functional_class_scoring.txt",sep="\t",row.names=TRUE)
if(length(fcs_res)>0){
if(length(which(fcs_res$pvalue<pvalue.thresh))>10){
fcs_res_filt<-fcs_res[which(fcs_res$pvalue<pvalue.thresh)[1:10],]
}else{
fcs_res_filt<-fcs_res[which(fcs_res$pvalue<pvalue.thresh),]
}
fcs_res_filt<-fcs_res_filt[order(fcs_res_filt$pvalue,decreasing=FALSE),]
fcs_res_filt$Name<-gsub(as.character(fcs_res_filt$Name),pattern=" - Homo sapiens \\(human\\)",replacement="")
fcs_res_filt$pvalue=(-1)*log10(fcs_res_filt$pvalue)
fcs_res_filt<-fcs_res_filt[order(fcs_res_filt$pvalue,decreasing=FALSE),]
print(Sys.time())
p=ggbarplot(fcs_res_filt,x="Name",y="pvalue",orientation="horiz",ylab="(-)log10pvalue",xlab="",color="orange",fill="orange",title=paste("Functional classes significant at p<",pvalue.thresh," threhsold",sep=""))
p=p+font("title",size=10)
p=p+font("x.text",size=10)
p=p+font("y.text",size=10)
p=p + geom_hline(yintercept = (-1)*log10(pvalue.thresh), linetype="dotted",size=0.7)
print(Sys.time())
pdf("Figures/Functional_Class_Scoring.pdf")
print(p)
dev.off()
}
print(paste(featselmethod, " processing done.",sep=""))
}
setwd(parentoutput_dir)
#print("Note A: Please note that log2 fold-change based filtering is only applicable to two-class comparison.
#log2fcthresh of 0 will remove only those features that have exactly sample mean intensities between the two groups.
#More features will be filtered prior to FDR as log2fcthresh increases.")
#print("Note C: Please make sure all the packages are installed. You can use the command install.packages(packagename) to install packages.")
#print("Eg: install.packages(\"mixOmics\"),install.packages(\"snow\"), install.packages(\"e1071\"), biocLite(\"limma\"), install.packages(\"gplots\").")
#print("Eg: install.packages("mixOmics""),install.packages("snow"), install.packages("e1071"), biocLite("limma"), install.packages("gplots").")
##############################
##############################
###############################
if(length(best_feats)>0){
goodfeats<-as.data.frame(goodfeats)
#goodfeats<-data_matrix_beforescaling[which(data_matrix_beforescaling$mz%in%goodfeats$mz),]
}else{
goodfeats-{}
}
cur_date<-Sys.time()
cur_date<-gsub(x=cur_date,pattern="-",replacement="")
cur_date<-gsub(x=cur_date,pattern=":",replacement="")
cur_date<-gsub(x=cur_date,pattern=" ",replacement="")
if(saveRda==TRUE){
fname<-paste("Analysis_",featselmethod,"_",cur_date,".Rda",sep="")
###savelist=ls(),file=fname)
}
################################
fname_del<-paste(output_dir,"/Rplots.pdf",sep="")
try(unlink(fname_del),silent=TRUE)
if(removeRda==TRUE)
{
unlink("*.Rda",force=TRUE,recursive=TRUE)
#unlink("pairwise_results/*.Rda",force=TRUE,recursive=TRUE)
}
cat("",sep="\n")
return(list("diffexp_metabs"=goodfeats_allfields, "mw.an.fdr"=mwan_fdr,"targeted.an.fdr"=targetedan_fdr,
"classlabels"=classlabels_orig,"all_metabs"=allmetabs_res_withnames,"roc_res"=roc_res))
}
| /R/diffexp.child.R | no_license | kuppal2/xmsPANDA | R | false | false | 413,435 | r | diffexp.child <-
function(Xmat,Ymat,feature_table_file,parentoutput_dir,class_labels_file,num_replicates,feat.filt.thresh,summarize.replicates,summary.method,
summary.na.replacement,missing.val,rep.max.missing.thresh,
all.missing.thresh,group.missing.thresh,input.intensity.scale,
log2transform,medcenter,znormtransform,quantile_norm,lowess_norm,madscaling,TIC_norm,rangescaling,mstus,paretoscaling,sva_norm,eigenms_norm,vsn_norm,
normalization.method,rsd.filt.list,
pairedanalysis,featselmethod,fdrthresh,fdrmethod,cor.method,networktype,network.label.cex,abs.cor.thresh,cor.fdrthresh,kfold,pred.eval.method,feat_weight,globalcor,
target.metab.file,target.mzmatch.diff,target.rtmatch.diff,max.cor.num, samplermindex,pcacenter,pcascale,
numtrees,analysismode,net_node_colors,net_legend,svm_kernel,heatmap.col.opt,manhattanplot.col.opt,boxplot.col.opt,barplot.col.opt,sample.col.opt,lineplot.col.opt,scatterplot.col.opt,hca_type,alphacol,pls_vip_thresh,num_nodes,max_varsel,
pls_ncomp,pca.stage2.eval,scoreplot_legend,pca.global.eval,rocfeatlist,rocfeatincrement,
rocclassifier,foldchangethresh,wgcnarsdthresh,WGCNAmodules,optselect,max_comp_sel,saveRda,legendlocation,degree_rank_method,
pca.cex.val,pca.ellipse,ellipse.conf.level,pls.permut.count,svm.acc.tolerance,limmadecideTests,pls.vip.selection,globalclustering,plots.res,plots.width,plots.height,plots.type,output.device.type,pvalue.thresh,individualsampleplot.col.opt,
pamr.threshold.select.max,mars.gcv.thresh,error.bar,cex.plots,modeltype,barplot.xaxis,lineplot.lty.option,match_class_dist,timeseries.lineplots,alphabetical.order,kegg_species_code,database,reference_set,target.data.annot,
add.pvalues=TRUE,add.jitter=TRUE,fcs.permutation.type,fcs.method,
fcs.min.hits,names_with_mz_time,ylab_text,xlab_text,boxplot.type,
degree.centrality.method,log2.transform.constant,balance.classes,
balance.classes.sizefactor,balance.classes.method,balance.classes.seed,
cv.perm.count=100,multiple.figures.perpanel=TRUE,labRow.value = TRUE, labCol.value = TRUE,
alpha.col=1,similarity.matrix,outlier.method,removeRda=TRUE,color.palette=c("journal"),
plot_DiNa_graph=FALSE,limma.contrasts.type=c("contr.sum","contr.treatment"),hca.cex.legend=0.7,differential.network.analysis.method,
plot.boxplots.raw=FALSE,vcovHC.type,ggplot.type1,facet.nrow,facet.ncol,pairwise.correlation.analysis=FALSE,
generate.boxplots=FALSE,pvalue.dist.plot=TRUE,...)
{
#############
options(warn=-1)
roc_res<-NA
lme.modeltype=modeltype
remove_firstrun=FALSE #TRUE or FALSE
run_number=1
minmaxtransform=FALSE
pca.CV=TRUE
max_rf_var=5000
alphacol=alpha.col
hca.labRow.value=labRow.value
hca.labCol.value=labCol.value
logistic_reg=FALSE
poisson_reg=FALSE
goodfeats_allfields={}
mwan_fdr={}
targetedan_fdr={}
data_m_fc_withfeats={}
classlabels_orig={}
robust.estimate=FALSE
#alphabetical.order=FALSE
analysistype="oneway"
plot.ylab_text=ylab_text
limmarobust=FALSE
featselmethod<-unique(featselmethod)
if(featselmethod=="rf"){
featselmethod="RF"
}
parentfeatselmethod=featselmethod
factor1_msg=NA
factor2_msg=NA
cat(paste("Running feature selection method: ",featselmethod,sep=""),sep="\n")
#}
if(featselmethod=="limmarobust"){
featselmethod="limma"
limmarobust=TRUE
}else{
if(featselmethod=="limma1wayrepeatrobust"){
featselmethod="limma1wayrepeat"
limmarobust=TRUE
}else{
if(featselmethod=="limma2wayrepeatrobust"){
featselmethod="limma2wayrepeat"
limmarobust=TRUE
}else{
if(featselmethod=="limma2wayrobust"){
featselmethod="limma2way"
limmarobust=TRUE
}else{
if(featselmethod=="limma1wayrobust"){
featselmethod="limma1way"
limmarobust=TRUE
}
}
}
}
}
#if(FALSE)
{
if(normalization.method=="log2quantilenorm" || normalization.method=="log2quantnorm"){
cat("Performing log2 transformation and quantile normalization",sep="\n")
log2transform=TRUE
quantile_norm=TRUE
}else{
if(normalization.method=="log2transform"){
cat("Performing log2 transformation",sep="\n")
log2transform=TRUE
}else{
if(normalization.method=="znormtransform"){
cat("Performing autoscaling",sep="\n")
znormtransform=TRUE
}else{
if(normalization.method=="quantile_norm"){
suppressMessages(library(limma))
cat("Performing quantile normalization",sep="\n")
quantile_norm=TRUE
}else{
if(normalization.method=="lowess_norm"){
suppressMessages(library(limma))
cat("Performing Cyclic Lowess normalization",sep="\n")
lowess_norm=TRUE
}else{
if(normalization.method=="rangescaling"){
cat("Performing Range scaling",sep="\n")
rangescaling=TRUE
}else{
if(normalization.method=="paretoscaling"){
cat("Performing Pareto scaling",sep="\n")
paretoscaling=TRUE
}else{
if(normalization.method=="mstus"){
cat("Performing MS Total Useful Signal (MSTUS) normalization",sep="\n")
mstus=TRUE
}else{
if(normalization.method=="sva_norm"){
suppressMessages(library(sva))
cat("Performing Surrogate Variable Analysis (SVA) normalization",sep="\n")
sva_norm=TRUE
log2transform=TRUE
}else{
if(normalization.method=="eigenms_norm"){
cat("Performing EigenMS normalization",sep="\n")
eigenms_norm=TRUE
if(input.intensity.scale=="raw"){
log2transform=TRUE
}
}else{
if(normalization.method=="vsn_norm"){
suppressMessages(library(limma))
cat("Performing variance stabilizing normalization",sep="\n")
vsn_norm=TRUE
}
}
}
}
}
}
}
}
}
}
}
}
if(input.intensity.scale=="log2"){
log2transform=FALSE
}
rfconditional=FALSE
# print("############################")
#print("############################")
if(featselmethod=="rf" | featselmethod=="RF"){
suppressMessages(library(randomForest))
suppressMessages(library(Boruta))
featselmethod="RF"
rfconditional=FALSE
}else{
if(featselmethod=="rfconditional" | featselmethod=="RFconditional" | featselmethod=="RFcond" | featselmethod=="rfcond"){
suppressMessages(library(party))
featselmethod="RF"
rfconditional=TRUE
}
}
if(featselmethod=="rf"){
featselmethod="RF"
}else{
if(featselmethod=="mars"){
suppressMessages(library(earth))
featselmethod="MARS"
}
}
if(featselmethod=="lmregrobust"){
suppressMessages(library(sandwich))
robust.estimate=TRUE
featselmethod="lmreg"
}else{
if(featselmethod=="logitregrobust"){
robust.estimate=TRUE
suppressMessages(library(sandwich))
featselmethod="logitreg"
}else{
if(featselmethod=="poissonregrobust"){
robust.estimate=TRUE
suppressMessages(library(sandwich))
featselmethod="poissonreg"
}
}
}
if(featselmethod=="plsrepeat"){
featselmethod="pls"
pairedanalysis=TRUE
}else{
if(featselmethod=="splsrepeat"){
featselmethod="spls"
pairedanalysis=TRUE
}else{
if(featselmethod=="o1plsrepeat"){
featselmethod="o1pls"
pairedanalysis=TRUE
}else{
if(featselmethod=="o1splsrepeat"){
featselmethod="o1spls"
pairedanalysis=TRUE
}
}
}
}
log2.fold.change.thresh_list<-rsd.filt.list
if(featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="limma1wayrepeat"){
if(analysismode=="regression"){
stop("Invalid analysis mode. Please set analysismode=\"classification\".")
}else{
suppressMessages(library(limma))
# print("##############Level 1: Using LIMMA function to find differentially expressed metabolites###########")
}
}else{
if(featselmethod=="RF"){
#print("##############Level 1: Using random forest function to find discriminatory metabolites###########")
}else{
if(featselmethod=="RFcond"){
suppressMessages(library(party))
# print("##############Level 1: Using conditional random forest function to find discriminatory metabolites###########")
#stop("Please use \"limma\", \"RF\", or \"MARS\".")
}else{
if(featselmethod=="MARS"){
suppressMessages(library(earth))
# print("##############Level 1: Using MARS to find discriminatory metabolites###########")
#log2.fold.change.thresh_list<-c(0)
}else{
if(featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="poissonreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="rfesvm" |
featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="pamr" | featselmethod=="ttestrepeat" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat"){
# print("##########Level 1: Finding discriminatory metabolites ###########")
if(featselmethod=="logitreg"){
featselmethod="lmreg"
logistic_reg=TRUE
poisson_reg=FALSE
}else{
if(featselmethod=="poissonreg"){
poisson_reg=TRUE
featselmethod="lmreg"
logistic_reg=FALSE
}else{
logistic_reg=FALSE
poisson_reg=FALSE
if(featselmethod=="rfesvm"){
suppressMessages(library(e1071))
}else{
if(featselmethod=="pamr"){
suppressMessages(library(pamr))
}else{
if(featselmethod=="lm2wayanovarepeat" | featselmethod=="lm1wayanovarepeat"){
suppressMessages(library(nlme))
suppressMessages(library(lsmeans))
}
}
}
}
}
}else{
if(featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls" | featselmethod=="spls" | featselmethod=="spls1wayrepeat" | featselmethod=="spls2wayrepeat" | featselmethod=="pls2way" | featselmethod=="spls2way" | featselmethod=="o1spls" | featselmethod=="o2spls"){
suppressMessages(library(mixOmics))
# suppressMessages(library(pls))
suppressMessages(library(plsgenomics))
# print("##########Level 1: Finding discriminatory metabolites ###########")
}else{
stop("Invalid featselmethod specified.")
}
}
#stop("Invalid featselmethod specified. Please use \"limma\", \"RF\", or \"MARS\".")
}
}
}
}
####################################################################################
dir.create(parentoutput_dir,showWarnings=FALSE)
parentoutput_dir1<-paste(parentoutput_dir,"/Stage1/",sep="")
dir.create(parentoutput_dir1,showWarnings=FALSE)
setwd(parentoutput_dir1)
if(is.na(Xmat[1])==TRUE){
X<-read.table(feature_table_file,sep="\t",header=TRUE,stringsAsFactors=FALSE,check.names=FALSE)
cnames<-colnames(X)
cnames<- gsub(cnames,pattern="[\\s]*",replacement="",perl=TRUE)
cnames<- gsub(cnames,pattern="[(|)|\\[|\\]]",replacement="",perl=TRUE)
cnames<-gsub(cnames,pattern="\\||-|;|,|\\.",replacement="_",perl=TRUE)
colnames(X)<-cnames
cnames<-tolower(cnames)
check_names<-grep(cnames,pattern="^name$")
#if the Name column exists
if(length(check_names)>0){
if(check_names==1){
check_names1<-grep(cnames,pattern="^mz$")
check_names2<-grep(cnames,pattern="^time$")
if(length(check_names1)<1 & length(check_names2)<1){
mz<-seq(1.00001,nrow(X)+1,1)
time<-seq(1.01,nrow(X)+1,1.00)
check_ind<-gregexpr(cnames,pattern="^name$")
check_ind<-which(check_ind>0)
X<-as.data.frame(X)
Name<-as.character(X[,check_ind])
if(length(which(duplicated(Name)==TRUE))>0){
stop("Duplicate variable names are not allowed.")
}
X<-cbind(mz,time,X[,-check_ind])
names_with_mz_time=cbind(Name,mz,time)
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}else{
if(length(check_names1)>0 & length(check_names2)>0){
check_ind<-gregexpr(cnames,pattern="^name$")
check_ind<-which(check_ind>0)
Name<-as.character(X[,check_ind])
X<-X[,-check_ind]
names_with_mz_time=cbind(Name,X$mz,X$time)
colnames(names_with_mz_time)<-c("Name","mz","time")
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}
}
}
}else{
#mz time format
check_names1<-grep(cnames[1],pattern="^mz$")
check_names2<-grep(cnames[2],pattern="^time$")
if(length(check_names1)<1 || length(check_names2)<1){
stop("Invalid feature table format. The format should be either Name in column A or mz and time in columns A and B. Please check example files.")
}
X[,1]<-round(X[,1],5)
X[,2]<-round(X[,2],2)
mz_time<-paste(round(X[,1],5),"_",round(X[,2],2),sep="")
if(length(which(duplicated(mz_time)==TRUE))>0){
stop("Duplicate variable names are not allowed.")
}
Name<-mz_time
names_with_mz_time=cbind(Name,X$mz,X$time)
colnames(names_with_mz_time)<-c("Name","mz","time")
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}
X[,1]<-round(X[,1],5)
X[,2]<-round(X[,2],2)
Xmat<-t(X[,-c(1:2)])
rownames(Xmat)<-colnames(X[,-c(1:2)])
Xmat<-as.data.frame(Xmat)
colnames(Xmat)<-names_with_mz_time$Name
}else{
X<-Xmat
cnames<-colnames(X)
cnames<- gsub(cnames,pattern="[\\s]*",replacement="",perl=TRUE)
cnames<- gsub(cnames,pattern="[(|)|\\[|\\]]",replacement="",perl=TRUE)
cnames<-gsub(cnames,pattern="\\||-|;|,|\\.",replacement="_",perl=TRUE)
colnames(X)<-cnames
cnames<-tolower(cnames)
check_names<-grep(cnames,pattern="^name$")
if(length(check_names)>0){
if(check_names==1){
check_names1<-grep(cnames,pattern="^mz$")
check_names2<-grep(cnames,pattern="^time$")
if(length(check_names1)<1 & length(check_names2)<1){
mz<-seq(1.00001,nrow(X)+1,1)
time<-seq(1.01,nrow(X)+1,1.00)
check_ind<-gregexpr(cnames,pattern="^name$")
check_ind<-which(check_ind>0)
X<-as.data.frame(X)
Name<-as.character(X[,check_ind])
X<-cbind(mz,time,X[,-check_ind])
names_with_mz_time=cbind(Name,mz,time)
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
# print(getwd())
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}else{
if(length(check_names1)>0 & length(check_names2)>0){
check_ind<-gregexpr(cnames,pattern="^name$")
check_ind<-which(check_ind>0)
Name<-as.character(X[,check_ind])
X<-X[,-check_ind]
names_with_mz_time=cbind(Name,X$mz,X$time)
colnames(names_with_mz_time)<-c("Name","mz","time")
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}
}
}
}else{
check_names1<-grep(cnames[1],pattern="^mz$")
check_names2<-grep(cnames[2],pattern="^time$")
if(length(check_names1)<1 || length(check_names2)<1){
stop("Invalid feature table format. The format should be either Name in column A or mz and time in columns A and B. Please check example files.")
}
X[,1]<-round(X[,1],5)
X[,2]<-round(X[,2],3)
mz_time<-paste(round(X[,1],5),"_",round(X[,2],3),sep="")
if(length(which(duplicated(mz_time)==TRUE))>0){
stop("Duplicate variable names are not allowed.")
}
Name<-mz_time
names_with_mz_time=cbind(Name,X$mz,X$time)
colnames(names_with_mz_time)<-c("Name","mz","time")
names_with_mz_time<-as.data.frame(names_with_mz_time)
X<-as.data.frame(X)
write.table(names_with_mz_time,file="Name_mz_time_mapping.txt",sep="\t",row.names=FALSE)
}
Xmat<-t(X[,-c(1:2)])
rownames(Xmat)<-colnames(X[,-c(1:2)])
Xmat<-as.data.frame(Xmat)
colnames(Xmat)<-names_with_mz_time$Name
}
####saveXmat,file="Xmat.Rda")
if(analysismode=="regression")
{
#log2.fold.change.thresh_list<-c(0)
#print("Performing regression analysis")
if(is.na(Ymat[1])==TRUE){
classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
Ymat<-classlabels
}else{
classlabels<-Ymat
}
classlabels[,1]<- gsub(classlabels[,1],pattern="[\\s]*",replacement="",perl=TRUE)
classlabels[,1]<- gsub(classlabels[,1],pattern="[(|)|\\[|\\]]",replacement="",perl=TRUE)
classlabels[,1]<-gsub(classlabels[,1],pattern="\\||-|;|,|\\.",replacement="_",perl=TRUE)
#classlabels[,1]<-gsub(classlabels[,1],pattern=" |-",replacement=".")
# Ymat[,1]<-gsub(Ymat[,1],pattern=" |-",replacement=".")
Ymat<-classlabels
classlabels_orig<-classlabels
classlabels_sub<-classlabels
class_labels_levels<-c("A")
if(featselmethod=="lmregrepeat" || featselmethod=="splsrepeat" || featselmethod=="plsrepeat" || featselmethod=="spls" || featselmethod=="pls" || featselmethod=="o1pls" || featselmethod=="o1splsrepeat"){
if(pairedanalysis==TRUE){
colnames(classlabels)<-c("SampleID","SubjectNum",paste("Response",sep=""))
#Xmat<-chocolate[,1]
Xmat_temp<-Xmat #t(Xmat)
Xmat_temp<-cbind(classlabels,Xmat_temp)
#Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,2]),]
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Response", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
subject_inf<-classlabels[,2]
classlabels<-classlabels[,-c(2)]
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
}
}
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-classlabels[,-c(1)]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
Ymat<-classlabels
Ymat<-as.data.frame(Ymat)
rnames_xmat<-as.character(rownames(Xmat))
rnames_ymat<-as.character(Ymat[,1])
if(length(which(duplicated(rnames_ymat)==TRUE))>0){
stop("Duplicate sample IDs are not allowed. Please represent replicates by _1,_2,_3.")
}
check_ylabel<-regexpr(rnames_ymat[1],pattern="^[0-9]*",perl=TRUE)
check_xlabel<-regexpr(rnames_xmat[1],pattern="^X[0-9]*",perl=TRUE)
if(length(check_ylabel)>0 && length(check_xlabel)>0){
if(attr(check_ylabel,"match.length")>0 && attr(check_xlabel,"match.length")>0){
rnames_ymat<-paste("X",rnames_ymat,sep="")
}
}
match_names<-match(rnames_xmat,rnames_ymat)
bad_colnames<-length(which(is.na(match_names)==TRUE))
# save(rnames_xmat,rnames_ymat,Xmat,Ymat,file="debugnames.Rda")
# print("Check here2")
#if(is.na()==TRUE){
bool_names_match_check<-all(rnames_xmat==rnames_ymat)
if(bad_colnames>0 | bool_names_match_check==FALSE){
print("Sample names do not match between feature table and class labels files.\n Please try replacing any \"-\" with \".\" in sample names.")
print("Sample names in feature table")
print(head(rnames_xmat))
print("Sample names in classlabels file")
print(head(rnames_ymat))
stop("Sample names do not match between feature table and class labels files.\n Please try replacing any \"-\" with \".\" in sample names. Please try again.")
}
Xmat<-t(Xmat)
Xmat<-cbind(X[,c(1:2)],Xmat)
Xmat<-as.data.frame(Xmat)
rownames(Xmat)<-names_with_mz_time$Name
num_features_total=nrow(Xmat)
if(is.na(all(diff(match(rnames_xmat,rnames_ymat))))==FALSE){
if(all(diff(match(rnames_xmat,rnames_ymat)) > 0)==TRUE){
setwd("../")
#data preprocess regression
data_matrix<-data_preprocess(Xmat=Xmat,Ymat=Ymat,feature_table_file=feature_table_file,parentoutput_dir=parentoutput_dir,class_labels_file=NA,num_replicates=num_replicates,feat.filt.thresh=NA,summarize.replicates=summarize.replicates,summary.method=summary.method,
all.missing.thresh=all.missing.thresh,group.missing.thresh=NA,
log2transform=log2transform,medcenter=medcenter,znormtransform=znormtransform,,quantile_norm=quantile_norm,lowess_norm=lowess_norm,
rangescaling=rangescaling,paretoscaling=paretoscaling,mstus=mstus,sva_norm=sva_norm,eigenms_norm=eigenms_norm,
vsn_norm=vsn_norm,madscaling=madscaling,missing.val=0,samplermindex=NA, rep.max.missing.thresh=rep.max.missing.thresh,
summary.na.replacement=summary.na.replacement,featselmethod=featselmethod,TIC_norm=TIC_norm,normalization.method=normalization.method,
input.intensity.scale=input.intensity.scale,log2.transform.constant=log2.transform.constant,alphabetical.order=alphabetical.order)
}
}else{
#print(diff(match(rnames_xmat,rnames_ymat)))
stop("Orders of feature table and classlabels do not match")
}
}else{
if(analysismode=="classification")
{
analysistype="oneway"
classlabels_sub<-NA
if(featselmethod=="limma2way" | featselmethod=="lm2wayanova" | featselmethod=="spls2way"){
analysistype="twoway"
}else{
if(featselmethod=="limma2wayrepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="spls2wayrepeat"){
analysistype="twowayrepeat"
pairedanalysis=TRUE
}else{
if(featselmethod=="limma1wayrepeat" | featselmethod=="lm1wayanovarepeat" | featselmethod=="spls1wayrepeat" | featselmethod=="lmregrepeat"){
analysistype="onewayrepeat"
pairedanalysis=TRUE
}
}
}
if(is.na(Ymat)==TRUE){
classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
Ymat<-classlabels
}else{
classlabels<-Ymat
}
classlabels[,1]<- gsub(classlabels[,1],pattern="[\\s]*",replacement="",perl=TRUE)
classlabels[,1]<- gsub(classlabels[,1],pattern="[(|)|\\[|\\]]",replacement="",perl=TRUE)
classlabels[,1]<-gsub(classlabels[,1],pattern="\\||-|;|,|\\.",replacement="_",perl=TRUE)
#classlabels[,1]<-gsub(classlabels[,1],pattern=" |-",replacement=".")
# Ymat[,1]<-gsub(Ymat[,1],pattern=" |-",replacement=".")
Ymat<-classlabels
# classlabels[,1]<-gsub(classlabels[,1],pattern=" |-",replacement=".")
Ymat[,1]<-gsub(Ymat[,1],pattern=" |-",replacement=".")
# print(paste("Number of samples in class labels file:",dim(Ymat)[1],sep=""))
#print(paste("Number of samples in feature table:",dim(Xmat)[1],sep=""))
if(dim(Ymat)[1]!=(dim(Xmat)[1]))
{
stop("Number of samples are different in feature table and class labels file.")
}
if(fdrmethod=="none"){
fdrthresh=pvalue.thresh
}
if(featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="limma1way" | featselmethod=="limma1wayrepeat" |
featselmethod=="MARS" | featselmethod=="RF" | featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls" | featselmethod=="lmreg" | featselmethod=="logitreg" |
featselmethod=="spls" | featselmethod=="pls1wayrepeat" | featselmethod=="spls1wayrepeat" | featselmethod=="pls2wayrepeat" |
featselmethod=="spls2wayrepeat" | featselmethod=="pls2way" | featselmethod=="spls2way" | featselmethod=="o1spls" |
featselmethod=="o2spls" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" |
featselmethod=="lm2wayanovarepeat" | featselmethod=="rfesvm" | featselmethod=="wilcox" | featselmethod=="ttest" |
featselmethod=="pamr" | featselmethod=="ttestrepeat" | featselmethod=="poissonreg" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat")
{
#analysismode="classification"
#save(classlabels,file="thisclasslabels.Rda")
#if(is.na(Ymat)==TRUE)
{
#classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
if(analysismode=="classification"){
if(featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="poissonreg")
{
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
}
levels_classA<-levels(factor(classlabels[,2]))
for(l1 in levels_classA){
g1<-grep(x=l1,pattern="[0-9]")
if(length(g1)>0){
#stop("Class labels or factor levels should not have any numbers.")
}
}
}else{
if(featselmethod=="lmregrepeat"){
if(alphabetical.order==FALSE){
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,3]))
for(l1 in levels_classA){
g1<-grep(x=l1,pattern="[0-9]")
if(length(g1)>0){
#stop("Class labels or factor levels should not have any numbers.")
}
}
}else{
for(c1 in 2:dim(classlabels)[2]){
if(alphabetical.order==FALSE){
classlabels[,c1] <- factor(classlabels[,c1], levels=unique(classlabels[,c1]))
}
levels_classA<-levels(factor(classlabels[,c1]))
for(l1 in levels_classA){
g1<-grep(x=l1,pattern="[0-9]")
if(length(g1)>0){
#stop("Class labels or factor levels should not have any numbers.")
}
}
}
}
}
}
classlabels_orig<-classlabels
if(featselmethod=="limma1way"){
featselmethod="limma"
}
# | featselmethod=="limma1wayrepeat"
if(featselmethod=="limma" | featselmethod=="limma1way" | featselmethod=="MARS" | featselmethod=="RF" | featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls" | featselmethod=="lmreg" |
featselmethod=="logitreg" | featselmethod=="spls" | featselmethod=="o1spls" | featselmethod=="o2spls" | featselmethod=="rfesvm" | featselmethod=="pamr" |
featselmethod=="poissonreg" | featselmethod=="ttest" | featselmethod=="wilcox" | featselmethod=="lm1wayanova")
{
if(featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="poissonreg")
{
factor_inf<-classlabels[,-c(1)]
factor_inf<-as.data.frame(factor_inf)
#print(factor_inf)
classlabels_orig<-colnames(classlabels[,-c(1)])
colnames(classlabels)<-c("SampleID",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
Xmat_temp<-Xmat #t(Xmat)
#print(Xmat_temp[1:2,1:3])
Xmat_temp<-cbind(classlabels,Xmat_temp)
#print("here")
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,2]),]
}else{
if(analysismode=="classification"){
Xmat_temp[,2] <- factor(Xmat_temp[,2], levels=unique(Xmat_temp[,2]))
}
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
levels_classA<-levels(factor(classlabels[,2]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2])
classtable1<-table(classlabels[,2])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
classlabels_xyplots<-classlabels
rownames(Xmat_temp)<-as.character(Xmat_temp[,1])
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
#keeps the class order as in the input file
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
}
classlabels_response_mat<-classlabels[,-c(1)]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
#colnames(classlabels_response_mat)<-as.character(classlabels_orig)
Ymat<-classlabels
classlabels_orig<-classlabels
}else
{
if(dim(classlabels)[2]>2){
if(pairedanalysis==FALSE){
#print("Invalid classlabels file format. Correct format: \nColumnA: SampleID\nColumnB: Class")
print("Using the first column as sample ID and second column as Class. Ignoring additional columns.")
classlabels<-classlabels[,c(1:2)]
}
}
if(analysismode=="classification")
{
factor_inf<-classlabels[,-c(1)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
Xmat_temp<-Xmat #t(Xmat)
Xmat_temp<-cbind(classlabels,Xmat_temp)
# ##save(Xmat_temp,file="Xmat_temp.Rda")
rownames(Xmat_temp)<-as.character(Xmat_temp[,1])
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,2]),]
}else{
Xmat_temp[,2] <- factor(Xmat_temp[,2], levels=unique(Xmat_temp[,2]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
levels_classA<-levels(factor(classlabels[,2]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2])
classtable1<-table(classlabels[,2])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
#rownames(Xmat)<-rownames(Xmat_temp)
classlabels_xyplots<-classlabels
classlabels_sub<-classlabels[,-c(1)]
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
if(dim(classlabels)[2]>2){
#classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
stop("Invalid classlabels format.")
}
}
}
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-classlabels[,-c(1)]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
#classlabels[,1]<-as.factor(classlabels[,1])
Ymat<-classlabels
classlabels_orig<-classlabels
}
#print("here 2")
}
if(featselmethod=="limma1wayrepeat"){
factor_inf<-classlabels[,-c(1:2)]
factor_inf<-as.data.frame(factor_inf)
# print("here")
colnames(classlabels)<-c("SampleID","SubjectNum",paste("Factor",seq(1,length(factor_inf)),sep=""))
#Xmat<-chocolate[,1]
Xmat_temp<-Xmat #t(Xmat)
Xmat_temp<-cbind(classlabels,Xmat_temp)
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,2]),]
}else{
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
if(alphabetical.order==FALSE){
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
subject_inf<-classlabels[,2]
classlabels_sub<-classlabels[,-c(1)]
subject_inf<-subject_inf[seq(1,dim(classlabels)[1],num_replicates)]
classlabels<-classlabels[,-c(2)]
levels_classA<-levels(factor(classlabels[,2]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2])
classtable1<-table(classlabels[,2])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
classlabels_xyplots<-classlabels
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-classlabels[,-c(1)]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
Ymat<-classlabels
if(featselmethod=="limma1wayrepeat"){
featselmethod="limma"
pairedanalysis = TRUE
}else{
if(featselmethod=="spls1wayrepeat"){
featselmethod="spls"
pairedanalysis = TRUE
}else{
if(featselmethod=="pls1wayrepeat"){
featselmethod="pls"
pairedanalysis = TRUE
}
}
}
pairedanalysis = TRUE
}
if(featselmethod=="limma2way"){
factor_inf<-classlabels[,-c(1)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
Xmat_temp<-Xmat #t(Xmat)
####saveXmat,file="Xmat.Rda")
####saveclasslabels,file="Xmat_classlabels.Rda")
if(dim(classlabels)[2]>2){
# save(Xmat_temp,classlabels,file="Xmat_temp_limma.Rda")
Xmat_temp<-cbind(classlabels,Xmat_temp)
# print(Xmat_temp[1:10,1:10])
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,2],Xmat_temp[,3]),]
}else{
Xmat_temp[,2] <- factor(Xmat_temp[,2], levels=unique(Xmat_temp[,2]))
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
}
# print(Xmat_temp[1:10,1:10])
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_sub<-classlabels[,-c(1)]
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,2]))
levels_classB<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
factor2_msg=(paste("Factor 2 levels: ",paste(levels_classB,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2]):as.factor(classlabels[,3])
classtable1<-table(classlabels[,2],classlabels[,3])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
#classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
#classlabels_response_mat<-classlabels[,-c(1)]
#classlabels_response_mat<-as.data.frame(classlabels_response_mat)
Ymat<-classlabels
#classlabels_orig<-classlabels
}
else{
stop("Only one factor specificied in the class labels file.")
}
}
if(featselmethod=="limma2wayrepeat"){
factor_inf<-classlabels[,-c(1:2)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID","SubjectNum",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
Xmat_temp<-Xmat
if(dim(classlabels)[2]>2)
{
levels_classA<-levels(factor(classlabels[,3]))
if(length(levels_classA)>2){
#stop("Factor 1 can only have two levels/categories. Factor 2 can have upto 6 levels. \nPlease rearrange the factors in your classlabels file.")
# classtemp<-classlabels[,3]
# classlabels[,3]<-classlabels[,4]
# classlabels[,4]<-classtemp
}
levels_classA<-levels(factor(classlabels[,3]))
if(length(levels_classA)>2){
#stop("Only one of the factors can have more than 2 levels/categories. \nPlease rearrange the factors in your classlabels file or use lm2wayanovarepeat.")
#stop("Please select lm2wayanova or lm2wayanovarepeat option for greater than 2x2 designs.")
stop("Factor 1 can only have two levels/categories. Factor 2 can have upto 6 levels. \nPlease rearrange the factors in your classlabels file. Or use lm2wayanova option.")
}
levels_classB<-levels(factor(classlabels[,4]))
if(length(levels_classB)>7){
#stop("Only one of the factors can have more than 2 levels/categories. \nPlease rearrange the factors in your classlabels file or use lm2wayanova.")
stop("Please select lm2wayanovarepeat option for greater than 2x7 designs.")
}
Xmat_temp<-cbind(classlabels,Xmat_temp)
if(alphabetical.order==TRUE){
#Xmat_temp<-Xmat_temp[order(Xmat_temp[,2],Xmat_temp[,3]),]
Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,4],Xmat_temp[,2]),]
}else{
Xmat_temp[,4] <- factor(Xmat_temp[,4], levels=unique(Xmat_temp[,4]))
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_sub<-classlabels[,-c(1)]
subject_inf<-classlabels[,2]
classlabels<-classlabels[,-c(2)]
classlabels_response_mat<-classlabels[,-c(1)]
classlabels<-as.data.frame(classlabels)
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
classlabels_xyplots<-classlabels
subject_inf<-subject_inf[seq(1,dim(classlabels)[1],num_replicates)]
#write.table(classlabels,file="organized_classlabelsA1.txt",sep="\t",row.names=FALSE)
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
#write.table(Xmat_temp,file="organized_featuretableA1.txt",sep="\t",row.names=TRUE)
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,2]))
levels_classB<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
factor2_msg=(paste("Factor 2 levels: ",paste(levels_classB,collapse=","),sep=""))
classlabels_class<-as.factor(classlabels[,2]):as.factor(classlabels[,3])
classtable1<-table(classlabels[,2],classlabels[,3])
#classlabels_orig<-classlabels
#classlabels<-cbind(as.character(classlabels[,1]),as.character(classlabels_class))
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
# print("Class labels file limma2wayrep:")
# print(head(classlabels))
#rownames(Xmat)<-as.character(classlabels[,1])
#write.table(classlabels,file="organized_classlabels.txt",sep="\t",row.names=FALSE)
Xmat1<-cbind(classlabels,Xmat)
#write.table(Xmat1,file="organized_featuretable.txt",sep="\t",row.names=TRUE)
featselmethod="limma2way"
pairedanalysis = TRUE
}
else{
stop("Only one factor specificied in the class labels file.")
}
}
}
classlabels<-as.data.frame(classlabels)
if(featselmethod=="lm2wayanova" | featselmethod=="pls2way" | featselmethod=="spls2way"){
analysismode="classification"
#classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
if(is.na(Ymat)==TRUE){
classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
Ymat<-classlabels
}else{
classlabels<-Ymat
}
#cnames[2]<-"Factor1"
cnames<-colnames(classlabels)
factor_inf<-classlabels[,-c(1)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
analysismode="classification"
Xmat_temp<-Xmat #t(Xmat)
# save(Xmat_temp,classlabels,file="Xmat_temp_lm2way.Rda")
Xmat_temp<-cbind(classlabels,Xmat_temp)
rnames_xmat<-rownames(Xmat)
rnames_ymat<-as.character(Ymat[,1])
# ###saveXmat_temp,file="Xmat_temp.Rda")
if(featselmethod=="lm2wayanova" | featselmethod=="pls2way" | featselmethod=="spls2way"){
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,2],Xmat_temp[,3]),]
}
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
# save(Xmat_temp,classlabels,factor_lastcol,file="debudsort.Rda")
if(alphabetical.order==FALSE){
Xmat_temp[,2] <- factor(Xmat_temp[,2], levels=unique(Xmat_temp[,2]))
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}else{
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
}
levels_classA<-levels(factor(classlabels[,2]))
levels_classB<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
factor2_msg=(paste("Factor 2 levels: ",paste(levels_classB,collapse=","),sep=""))
classlabels_sub<-classlabels[,-c(1)]
classlabels_response_mat<-classlabels[,-c(1)]
Ymat<-classlabels
classlabels_orig<-classlabels
#Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
###save(Xmat,file="Xmat2.Rda")
if(featselmethod=="lm2wayanova" | featselmethod=="pls2way" | featselmethod=="spls2way"){
classlabels_class<-as.factor(classlabels[,2]):as.factor(classlabels[,3])
classtable1<-table(classlabels[,2],classlabels[,3])
classlabels_xyplots<-classlabels
#classlabels_orig<-classlabels
# classlabels_orig<-classlabels_orig[seq(1,dim(classlabels)[1],num_replicates),]
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
if(featselmethod=="pls2way"){
featselmethod="pls"
}else{
if(featselmethod=="spls2way"){
featselmethod="spls"
}
}
}
# write.table(classlabels,file="organized_classlabelsB.txt",sep="\t",row.names=FALSE)
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
#write.table(Xmat_temp,file="organized_featuretableA.txt",sep="\t",row.names=TRUE)
#write.table(classlabels,file="organized_classlabelsA.txt",sep="\t",row.names=FALSE)
}
if(featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="pls1wayrepeat" | featselmethod=="spls1wayrepeat" | featselmethod=="pls2wayrepeat" |
featselmethod=="spls2wayrepeat" | featselmethod=="ttestrepeat" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat"){
#analysismode="classification"
pairedanalysis=TRUE
# classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
if(is.na(Ymat)==TRUE){
classlabels<-read.table(class_labels_file,sep="\t",header=TRUE)
Ymat<-classlabels
}else{
classlabels<-Ymat
}
cnames<-colnames(classlabels)
factor_inf<-classlabels[,-c(1:2)]
factor_inf<-as.data.frame(factor_inf)
colnames(classlabels)<-c("SampleID","SubjectNum",paste("Factor",seq(1,dim(factor_inf)[2]),sep=""))
classlabels_orig<-classlabels
#Xmat<-chocolate[,1]
Xmat_temp<-Xmat #t(Xmat)
Xmat_temp<-cbind(classlabels,Xmat_temp)
pairedanalysis=TRUE
if(featselmethod=="lm1wayanovarepeat" | featselmethod=="pls1wayrepeat" | featselmethod=="spls1wayrepeat" | featselmethod=="ttestrepeat" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat"){
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,2]),]
}else{
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
subject_inf<-classlabels[,2]
subject_inf<-subject_inf[seq(1,dim(classlabels)[1],num_replicates)]
classlabels_response_mat<-classlabels[,-c(1:2)]
# classlabels_orig<-classlabels
classlabels_sub<-classlabels[,-c(1)]
if(alphabetical.order==FALSE){
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
classlabels<-classlabels[,-c(2)]
if(alphabetical.order==FALSE){
classlabels[,2] <- factor(classlabels[,2], levels=unique(classlabels[,2]))
}
classlabels_class<-classlabels[,2]
classtable1<-table(classlabels[,2])
#classlabels<-cbind(as.character(classlabels[,1]),as.character(classlabels_class))
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
classlabels_xyplots<-classlabels
# classlabels<-classlabels[seq(1,dim(classlabels)[1],num_replicates),]
Ymat<-classlabels
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
# write.table(Xmat_temp,file="organized_featuretableA.txt",sep="\t",row.names=FALSE)
####saveYmat,file="Ymat.Rda")
# ###saveXmat,file="Xmat.Rda")
if(featselmethod=="spls1wayrepeat"){
featselmethod="spls"
}else{
if(featselmethod=="pls1wayrepeat"){
featselmethod="pls"
}
}
if(featselmethod=="wilcoxrepeat"){
featselmethod=="wilcox"
pairedanalysis=TRUE
}
if(featselmethod=="ttestrepeat"){
featselmethod=="ttest"
pairedanalysis=TRUE
}
}
if(featselmethod=="lm2wayanovarepeat" | featselmethod=="pls2wayrepeat" | featselmethod=="spls2wayrepeat"){
if(alphabetical.order==TRUE){
Xmat_temp<-Xmat_temp[order(Xmat_temp[,3],Xmat_temp[,4],Xmat_temp[,2]),]
}else{
Xmat_temp[,3] <- factor(Xmat_temp[,3], levels=unique(Xmat_temp[,3]))
Xmat_temp[,4] <- factor(Xmat_temp[,4], levels=unique(Xmat_temp[,4]))
}
cnames<-colnames(Xmat_temp)
factor_lastcol<-grep("^Factor", cnames)
classlabels<-Xmat_temp[,c(1:factor_lastcol[length(factor_lastcol)])]
classlabels_sub<-classlabels[,-c(1)]
subject_inf<-classlabels[,2]
subject_inf<-subject_inf[seq(1,dim(classlabels)[1],num_replicates)]
classlabels_response_mat<-classlabels[,-c(1:2)]
Ymat<-classlabels
classlabels_xyplots<-classlabels[,-c(2)]
if(alphabetical.order==FALSE){
classlabels[,4] <- factor(classlabels[,4], levels=unique(classlabels[,4]))
classlabels[,3] <- factor(classlabels[,3], levels=unique(classlabels[,3]))
}
levels_classA<-levels(factor(classlabels[,3]))
factor1_msg=(paste("Factor 1 levels: ",paste(levels_classA,collapse=","),sep=""))
levels_classB<-levels(factor(classlabels[,4]))
factor2_msg=(paste("Factor 2 levels: ",paste(levels_classB,collapse=","),sep=""))
Ymat<-classlabels
#print(head(classlabels))
classlabels<-classlabels[,-c(2)]
classlabels_class<-paste(classlabels[,2],":",classlabels[,3],sep="")
classtable1<-table(classlabels[,2],classlabels[,3])
#classlabels<-cbind(as.character(classlabels[,1]),as.character(classlabels_class))
classlabels<-cbind(as.data.frame(classlabels[,1]),as.data.frame(classlabels_class))
Ymat<-classlabels
# write.table(classlabels,file="organized_classlabelsA1.txt",sep="\t",row.names=FALSE)
Xmat<-Xmat_temp[,-c(1:factor_lastcol[length(factor_lastcol)])]
#write.table(Xmat_temp,file="organized_featuretableA.txt",sep="\t",row.names=FALSE)
#write.table(Xmat,file="organized_featuretableB1.txt",sep="\t",row.names=FALSE)
pairedanalysis=TRUE
if(featselmethod=="spls2wayrepeat"){
featselmethod="spls"
}
}
}
}
rownames(Xmat)<-as.character(Xmat_temp[,1])
# save(Xmat,Xmat_temp,file="Xmat1.Rda")
#save(Ymat,file="Ymat1.Rda")
rnames_xmat<-rownames(Xmat)
rnames_ymat<-as.character(Ymat[,1])
if(length(which(duplicated(rnames_ymat)==TRUE))>0){
stop("Duplicate sample IDs are not allowed. Please represent replicates by _1,_2,_3.")
}
check_ylabel<-regexpr(rnames_ymat[1],pattern="^[0-9]*",perl=TRUE)
check_xlabel<-regexpr(rnames_xmat[1],pattern="^X[0-9]*",perl=TRUE)
if(length(check_ylabel)>0 && length(check_xlabel)>0){
if(attr(check_ylabel,"match.length")>0 && attr(check_xlabel,"match.length")>0){
rnames_ymat<-paste("X",rnames_ymat,sep="") #gsub(rnames_ymat,pattern="\\.[0-9]*",replacement="")
}
}
Xmat<-t(Xmat)
colnames(Xmat)<-as.character(Ymat[,1])
Xmat<-cbind(X[,c(1:2)],Xmat)
Xmat<-as.data.frame(Xmat)
Ymat<-as.data.frame(Ymat)
match_names<-match(rnames_xmat,rnames_ymat)
bad_colnames<-length(which(is.na(match_names)==TRUE))
#print(match_names)
#if(is.na()==TRUE){
#save(rnames_xmat,rnames_ymat,Xmat,Ymat,file="debugnames.Rda")
bool_names_match_check<-all(rnames_xmat==rnames_ymat)
if(bad_colnames>0 | bool_names_match_check==FALSE){
# if(bad_colnames>0){
print("Sample names do not match between feature table and class labels files.\n Please try replacing any \"-\" with \".\" in sample names.")
print("Sample names in feature table")
print(head(rnames_xmat))
print("Sample names in classlabels file")
print(head(rnames_ymat))
stop("Sample names do not match between feature table and class labels files.\n Please try replacing any \"-\" with \".\" in sample names. Please try again.")
}
if(is.na(all(diff(match(rnames_xmat,rnames_ymat))))==FALSE){
if(all(diff(match(rnames_xmat,rnames_ymat)) > 0)==TRUE){
setwd("../")
#save(Xmat,Ymat,names_with_mz_time,feature_table_file,parentoutput_dir,class_labels_file,num_replicates,feat.filt.thresh,summarize.replicates,
# summary.method,all.missing.thresh,group.missing.thresh,missing.val,samplermindex,rep.max.missing.thresh,summary.na.replacement,featselmethod,pairedanalysis,input.intensity.scale,file="data_preprocess_in.Rda")
######
rownames(Xmat)<-names_with_mz_time$Name
num_features_total=nrow(Xmat)
#data preprocess classification
data_matrix<-data_preprocess(Xmat=Xmat,Ymat=Ymat,feature_table_file=feature_table_file,parentoutput_dir=parentoutput_dir,class_labels_file=NA,num_replicates=num_replicates,feat.filt.thresh=NA,summarize.replicates=summarize.replicates,summary.method=summary.method,
all.missing.thresh=all.missing.thresh,group.missing.thresh=group.missing.thresh,
log2transform=log2transform,medcenter=medcenter,znormtransform=znormtransform,,quantile_norm=quantile_norm,lowess_norm=lowess_norm,rangescaling=rangescaling,paretoscaling=paretoscaling,
mstus=mstus,sva_norm=sva_norm,eigenms_norm=eigenms_norm,vsn_norm=vsn_norm,madscaling=madscaling,missing.val=missing.val, rep.max.missing.thresh=rep.max.missing.thresh,
summary.na.replacement=summary.na.replacement,featselmethod=featselmethod,TIC_norm=TIC_norm,normalization.method=normalization.method,
input.intensity.scale=input.intensity.scale,log2.transform.constant=log2.transform.constant,alphabetical.order=alphabetical.order)
# save(data_matrix,names_with_mz_time,file="data_preprocess_out.Rda")
}else{
stop("Orders of feature table and classlabels do not match")
}
}else{
#print(diff(match(rnames_xmat,rnames_ymat)))
stop("Orders of feature table and classlabels do not match")
}
if(FALSE){
data_matrix<-data_preprocess(Xmat,Ymat,
feature_table_file,
parentoutput_dir="C:/Users/kuppal2/Documents/Projects/EGCG_pos//xmsPANDA_preprocess3/",
class_labels_file=NA,num_replicates=1,feat.filt.thresh=NA,summarize.replicates=TRUE,
summary.method="mean", all.missing.thresh=0.5,group.missing.thresh=0.5,
log2transform =FALSE, medcenter=FALSE, znormtransform = FALSE,
quantile_norm = FALSE, lowess_norm = FALSE, madscaling = FALSE,
missing.val=0, samplermindex=NA,rep.max.missing.thresh=0.5,summary.na.replacement="zeros")
}
}else{
stop("Invalid value for analysismode parameter. Please use regression or classification.")
}
}
if(is.na(names_with_mz_time)==TRUE){
names_with_mz_time=data_matrix$names_with_mz_time
}
# #save(data_matrix,file="data_matrix.Rda")
data_matrix_beforescaling<-data_matrix$data_matrix_prescaling
data_matrix_beforescaling<-as.data.frame( data_matrix_beforescaling)
data_matrix<-data_matrix$data_matrix_afternorm_scaling
#classlabels<-as.data.frame(classlabels)
if(dim(classlabels)[2]<2){
stop("The class labels/response matrix should have two columns: SampleID, Class/Response. Please see the example.")
}
data_m<-data_matrix[,-c(1:2)]
classlabels<-classlabels[seq(1,dim(classlabels)[1],num_replicates),]
# #save(classlabels,data_matrix,classlabels_orig,Ymat,file="Stage1/datarose.Rda")
classlabels_raw_boxplots<-classlabels
if(dim(classlabels)[2]==2){
if(length(levels(as.factor(classlabels[,2])))==2){
if(balance.classes==TRUE){
table_classes<-table(classlabels[,2])
suppressWarnings(library(ROSE))
Ytrain<-classlabels[,2]
data1=cbind(Ytrain,t(data_matrix[,-c(1:2)]))
##save(data1,classlabels,data_matrix,file="Stage1/data1.Rda")
# data_matrix_presim<-data_matrix
data1<-as.data.frame(data1)
colnames(data1)<-c("Ytrain",paste("var",seq(1,ncol(data1)-1),sep=""))
data1$Ytrain<-classlabels[,2]
if(table_classes[1]==table_classes[2])
{
set.seed(balance.classes.seed)
data1[,-c(1)]<-apply(data1[,-c(1)],2,as.numeric)
new_sample<-aggregate(x=data1[,-c(1)],by=list(as.factor(data1$Ytrain)),mean)
colnames(new_sample)<-colnames(data1)
data1<-rbind(data1,new_sample[1,])
set.seed(balance.classes.seed)
# #save(data1,classlabels,file="Stage1/dataB.Rda")
newData <- ROSE((Ytrain) ~ ., data1, seed = balance.classes.seed,N=nrow(data1)*balance.classes.sizefactor)$data
# newData <- SMOTE(Ytrain ~ ., data=data1, perc.over = 100)
#*balance.classes.sizefactor,perc.under=200*(balance.classes.sizefactor/(balance.classes.sizefactor/0.5)))
}else{
if(balance.classes.method=="ROSE"){
set.seed(balance.classes.seed)
data1[,-c(1)]<-apply(data1[,-c(1)],2,as.numeric)
newData <- ROSE((Ytrain) ~ ., data1, seed = balance.classes.seed,N=nrow(data1)*balance.classes.sizefactor)$data
}else{
set.seed(balance.classes.seed)
newData <- SMOTE(Ytrain ~ ., data=data1, perc.over = 100)
#*balance.classes.sizefactor,perc.under=200*(balance.classes.sizefactor/(balance.classes.sizefactor/0.5)))
}
}
newData<-na.omit(newData)
Xtrain<-newData[,-c(1)]
Xtrain<-as.matrix(Xtrain)
Ytrain<-newData[,c(1)]
Ytrain_mat<-cbind((rownames(Xtrain)),(Ytrain))
Ytrain_mat<-as.data.frame(Ytrain_mat)
print("new data")
print(dim(Xtrain))
print(dim(Ytrain_mat))
print(table(newData$Ytrain))
data_m<-t(Xtrain)
data_matrix<-cbind(data_matrix[,c(1:2)],data_m)
classlabels<-cbind(paste("S",seq(1,nrow(newData)),sep=""),Ytrain)
classlabels<-as.data.frame(classlabels)
print(dim(classlabels))
classlabels_orig<-classlabels
classlabels_sub<-classlabels[,-c(1)]
Ymat<-classlabels
##save(newData,file="Stage1/newData.Rda")
}
}
}
classlabelsA<-classlabels
Xmat<-data_matrix
#if(dim(classlabels_orig)==TRUE){
classlabels_orig<-classlabels_orig[seq(1,dim(classlabels_orig)[1],num_replicates),]
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
classlabels_response_mat<-classlabels_response_mat[seq(1,dim(classlabels_response_mat)[1],num_replicates),]
class_labels_levels_main<-c("S")
Ymat<-classlabels
rnames1<-as.character(Ymat[,1])
rnames2<-as.character(classlabels_orig[,1])
sorted_index<-{}
for(i in 1:length(rnames1)){
sorted_index<-c(sorted_index,grep(x=rnames2,pattern=paste("^",rnames1[i],"$",sep="")))
}
classlabels_orig<-classlabels_orig[sorted_index,]
#write.table(classlabels_response_mat,file="original_classlabelsB.txt",sep="\t",row.names=TRUE)
classlabelsA<-classlabels
if(length(which(duplicated(classlabels)==TRUE))>0){
rownames(classlabels)<-paste("S",seq(1,dim(classlabels)[1]),sep="")
}else{
rownames(classlabels)<-as.character(classlabels[,1])
}#as.character(classlabels[,1])
#print(classlabels)
#print(classlabels[1:10,])
# ###saveclasslabels,file="classlabels.Rda")
# ###saveclasslabels_orig,file="classlabels_orig.Rda")
# ###saveclasslabels_response_mat,file="classlabels_response_mat.Rda")
if(pairedanalysis==TRUE){
###savesubject_inf,file="subjectinf.Rda")
}
if(analysismode=="classification")
{
class_labels_levels<-levels(as.factor(classlabels[,2]))
# print("Using the following class labels")
#print(class_labels_levels)
class_labels_levels_main<-class_labels_levels
class_labels_levels<-unique(class_labels_levels)
bad_rows<-which(class_labels_levels=="")
if(length(bad_rows)>0){
class_labels_levels<-class_labels_levels[-bad_rows]
}
ordered_labels={}
num_samps_group<-new("list")
num_samps_group[[1]]<-0
groupwiseindex<-new("list")
groupwiseindex[[1]]<-0
for(c in 1:length(class_labels_levels))
{
classlabels_index<-which(classlabels[,2]==class_labels_levels[c])
ordered_labels<-c(ordered_labels,as.character(classlabels[classlabels_index,2]))
num_samps_group[[c]]<-length(classlabels_index)
groupwiseindex[[c]]<-classlabels_index
}
Ymatorig<-classlabels
#debugclasslabels
#save(classlabels,class_labels_levels,num_samps_group,Ymatorig,data_matrix,data_m_fc_withfeats,data_m,file="classlabels_1.Rda")
####saveclass_labels_levels,file="class_labels_levels.Rda")
# print("HERE1")
classlabels_dataframe<-classlabels
class_label_alphabets<-class_labels_levels
classlabels<-{}
if(length(class_labels_levels)==2){
#num_samps_group[[1]]=length(which(ordered_labels==class_labels_levels[1]))
#num_samps_group[[2]]=length(which(ordered_labels==class_labels_levels[2]))
class_label_A<-class_labels_levels[[1]]
class_label_B<-class_labels_levels[[2]]
#classlabels<-c(rep("ClassA",num_samps_group[[1]]),rep("ClassB",num_samps_group[[2]]))
classlabels<-c(rep(class_label_A,num_samps_group[[1]]),rep(class_label_B,num_samps_group[[2]]))
}else{
if(length(class_labels_levels)==3){
class_label_A<-class_labels_levels[[1]]
class_label_B<-class_labels_levels[[2]]
class_label_C<-class_labels_levels[[3]]
classlabels<-c(rep(class_label_A,num_samps_group[[1]]),rep(class_label_B,num_samps_group[[2]]),rep(class_label_C,num_samps_group[[3]]))
}else{
for(c in 1:length(class_labels_levels)){
num_samps_group_cur=length(which(Ymatorig[,2]==class_labels_levels[c]))
classlabels<-c(classlabels,rep(paste(class_labels_levels[c],sep=""),num_samps_group_cur))
#,rep("ClassB",num_samps_group[[2]]),rep("ClassC",num_samps_group[[3]]))
}
}
}
# print("Class mapping:")
# print(cbind(class_labels_levels,classlabels))
classlabels<-classlabels_dataframe[,2]
classlabels_2=classlabels
#save(classlabels_2,class_labels_levels,Ymatorig,data_matrix,data_m_fc_withfeats,data_m,file="classlabels_2.Rda")
####################################################################################
#print(head(data_m))
snames<-colnames(data_m)
Ymat<-as.data.frame(classlabels)
m1<-match(snames,Ymat[,1])
#Ymat<-Ymat[m1,]
data_temp<-data_matrix_beforescaling[,-c(1:2)]
rnames<-paste("mzid_",seq(1,nrow(data_matrix)),sep="")
rownames(data_m)=rnames
mzid_mzrt<-data_matrix[,c(1:2)]
colnames(mzid_mzrt)<-c("mz","time")
rownames(mzid_mzrt)=rnames
write.table(mzid_mzrt, file="Stage1/mzid_mzrt.txt",sep="\t",row.names=TRUE)
cl<-makeCluster(num_nodes)
mean_overall<-apply(data_temp,1,do_mean)
#clusterExport(cl,"do_mean")
#mean_overall<-parApply(cl,data_temp,1,do_mean)
#stopCluster(cl)
#mean_overall<-unlist(mean_overall)
# print("mean overall")
#print(summary(mean_overall))
bad_feat<-which(mean_overall==0)
if(length(bad_feat)>0){
data_matrix_beforescaling<-data_matrix_beforescaling[-bad_feat,]
data_m<-data_m[-bad_feat,]
data_matrix<-data_matrix[-bad_feat,]
}
#Step 5) RSD/CV calculation
}else{
classlabels<-(classlabels[,-c(1)])
}
# print("######classlabels#########")
#print(classlabels)
class_labels_levels_new<-levels(classlabels)
if(analysismode=="classification"){
test_classlabels<-cbind(class_labels_levels_main,class_labels_levels_new)
}
if(featselmethod=="ttest" | featselmethod=="wilcox"){
if(length(class_labels_levels)>2){
print("#######################")
print(paste("Warning: More than two classes detected. Invalid feature selection option. Skipping the feature selection for option ",featselmethod,sep=""))
print("#######################")
return("More than two classes detected. Invalid feature selection option.")
}
}
#print("here 2")
######################################################################################
#Step 6) Log2 mean fold change criteria from 0 to 1 with step of 0.1
feat_eval<-{}
feat_sigfdrthresh<-{}
feat_sigfdrthresh_cv<-{}
feat_sigfdrthresh_permut<-{}
permut_acc<-{}
feat_sigfdrthresh<-rep(0,length(log2.fold.change.thresh_list))
feat_sigfdrthresh_cv<-rep(NA,length(log2.fold.change.thresh_list))
feat_sigfdrthresh_permut<-rep(NA,length(log2.fold.change.thresh_list))
res_score_vec<-rep(0,length(log2.fold.change.thresh_list))
#feat_eval<-seq(0,1,0.1)
if(analysismode=="classification"){
best_cv_res<-(-1)*10^30
}else{
best_cv_res<-(1)*10^30
}
best_feats<-{}
goodfeats<-{}
mwan_fdr<-{}
targetedan_fdr<-{}
best_limma_res<-{}
best_acc<-{}
termA<-{}
fheader="transformed_log2fc_threshold_"
X<-t(data_m)
X<-replace(as.matrix(X),which(is.na(X)==TRUE),0)
# rm(pcaMethods)
#try(detach("package:pcaMethods",unload=TRUE),silent=TRUE)
#library(mixOmics)
if(featselmethod=="lmreg" || featselmethod=="lmregrobust" || featselmethod=="logitreg" || featselmethod=="logitregrobust"){
if(length(class_labels_levels)>2){
stop(paste(featselmethod, " feature selection option is only available for 2 class comparisons."),sep="")
}
}
if(sample.col.opt=="default"){
col_vec<-c("#CC0000","#AAC000","blue","mediumpurple4","mediumpurple1","blueviolet","cornflowerblue","cyan4","skyblue",
"darkgreen", "seagreen1", "green","yellow","orange","pink", "coral1", "palevioletred2",
"red","saddlebrown","brown","brown3","white","darkgray","aliceblue",
"aquamarine","aquamarine3","bisque","burlywood1","lavender","khaki3","black")
}else{
if(sample.col.opt=="topo"){
#col_vec<-topo.colors(256) #length(class_labels_levels))
#col_vec<-col_vec[seq(1,length(col_vec),)]
col_vec <- topo.colors(length(class_labels_levels), alpha=alphacol)
}else{
if(sample.col.opt=="heat"){
#col_vec<-heat.colors(256) #length(class_labels_levels))
col_vec <- heat.colors(length(class_labels_levels), alpha=alphacol)
}else{
if(sample.col.opt=="rainbow"){
#col_vec<-heat.colors(256) #length(class_labels_levels))
col_vec<-rainbow(length(class_labels_levels), start = 0, end = alphacol)
#col_vec <- heat.colors(length(class_labels_levels), alpha=alphacol)
}else{
if(sample.col.opt=="terrain"){
#col_vec<-heat.colors(256) #length(class_labels_levels))
#col_vec<-rainbow(length(class_labels_levels), start = 0, end = alphacol)
col_vec <- cm.colors(length(class_labels_levels), alpha=alphacol)
}else{
if(sample.col.opt=="colorblind"){
#col_vec <-c("#386cb0","#fdb462","#7fc97f","#ef3b2c","#662506","#a6cee3","#fb9a99","#984ea3","#ffff33")
# col_vec <- c("#0072B2", "#E69F00", "#009E73", "gold1", "#56B4E9", "#D55E00", "#CC79A7","black")
if(length(class_labels_levels)<9){
col_vec <- c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7", "#E64B35FF", "grey57")
}else{
#col_vec<-colorRampPalette(brewer.pal(10, "RdBu"))(length(class_labels_levels))
col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","#E64B35B2", "#4DBBD5B2","#00A087B2","#3C5488B2","#F39B7FB2","#8491B4B2","#91D1C2B2","#DC0000B2","#7E6148B2",
"#374E55B2","#DF8F44B2","#00A1D5B2","#B24745B2","#79AF97B2","#6A6599B2","#80796BB2","#0073C2B2","#EFC000B2", "#868686B2","#CD534CB2","#7AA6DCB2","#003C67B2","grey57")
}
}else{
check_brewer<-grep(pattern="brewer",x=sample.col.opt)
if(length(check_brewer)>0){
sample.col.opt_temp=gsub(x=sample.col.opt,pattern="brewer.",replacement="")
col_vec <- colorRampPalette(brewer.pal(10, sample.col.opt_temp))(length(class_labels_levels))
}else{
if(sample.col.opt=="journal"){
col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","#E64B35FF","#3C5488FF","#F39B7FFF",
"#8491B4FF","#91D1C2FF","#DC0000FF","#B09C85FF","#5F559BFF",
"#808180FF","#20854EFF","#FFDC91FF","#B24745FF",
"#374E55FF","#8F7700FF","#5050FFFF","#6BD76BFF",
"#E64B3519","#4DBBD519","#631879E5","grey75")
if(length(class_labels_levels)<8){
col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","grey75")
#col_vec2<-brewer.pal(n = 8, name = "Dark2")
}else{
if(length(class_labels_levels)<=28){
# col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7", "grey75","#D95F02", "#7570B3", "#E7298A", "#66A61E", "#E6AB02", "#A6761D", "#666666","#1B9E77", "#7570B3", "#E7298A", "#A6761D", "#666666", "#1B9E77", "#D95F02", "#7570B3", "#E7298A", "#66A61E", "#E6AB02", "#A6761D", "#666666")
col_vec<-c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","#E64B35FF","#3C5488FF","#F39B7FFF",
"#8491B4FF","#91D1C2FF","#DC0000FF","#B09C85FF","#5F559BFF",
"#808180FF","#20854EFF","#FFDC91FF","#B24745FF",
"#374E55FF","#8F7700FF","#5050FFFF","#6BD76BFF", "#8BD76BFF",
"#E64B3519","#9DBBD0FF","#631879E5","#666666","grey75")
}else{
colfunc <-colorRampPalette(c("#0072B2", "#E69F00", "#009E73", "#56B4E9", "#D55E00", "#CC79A7","grey75"));col_vec<-colfunc(length(class_labels_levels))
col_vec<-col_vec[sample(col_vec)]
}
}
}else{
#colfunc <-colorRampPalette(sample.col.opt);col_vec<-colfunc(length(class_labels_levels))
# if(length(sample.col.opt)==1){
# col_vec <-rep(sample.col.opt,length(class_labels_levels))
# }else{
# colfunc <-colorRampPalette(sample.col.opt);col_vec<-colfunc(length(class_labels_levels))
# col_vec<-col_vec[sample(col_vec)]
#}
if(length(sample.col.opt)==1){
col_vec <-rep(sample.col.opt,length(class_labels_levels))
}else{
if(length(sample.col.opt)>=length(class_labels_levels)){
col_vec <-sample.col.opt
col_vec <- rep(col_vec,length(class_labels_levels))
}else{
colfunc <-colorRampPalette(sample.col.opt);col_vec<-colfunc(length(class_labels_levels))
}
}
}
}
}
}
}
}
}
}
#pca_col_vec<-col_vec
pca_col_vec<-c("mediumpurple4","mediumpurple1","blueviolet","darkblue","blue","cornflowerblue","cyan4","skyblue",
"darkgreen", "seagreen1", "green","yellow","orange","pink", "coral1", "palevioletred2",
"red","saddlebrown","brown","brown3","white","darkgray","aliceblue",
"aquamarine","aquamarine3","bisque","burlywood1","lavender","khaki3","black")
if(is.na(individualsampleplot.col.opt)==TRUE){
individualsampleplot.col.opt=col_vec
}
#cl<-makeCluster(num_nodes)
#feat_sds<-parApply(cl,data_m,1,sd)
feat_sds<-apply(data_m,1,function(x){sd(x,na.rm=TRUE)})
#stopCluster(cl)
bad_sd_ind<-c(which(feat_sds==0),which(is.na(feat_sds)==TRUE))
bad_sd_ind<-unique(bad_sd_ind)
if(length(bad_sd_ind)>0){
data_matrix<-data_matrix[-c(bad_sd_ind),]
data_m<-data_m[-c(bad_sd_ind),]
data_matrix_beforescaling<-data_matrix_beforescaling[-c(bad_sd_ind),]
}
data_temp<-data_matrix_beforescaling[,-c(1:2)]
#cl<-makeCluster(num_nodes)
#clusterExport(cl,"do_rsd")
#feat_rsds<-parApply(cl,data_temp,1,do_rsd)
feat_rsds<-apply(data_temp,1,do_rsd)
#stopCluster(cl)
# #save(feat_rsds,data_temp,data_matrix_beforescaling,data_m,file="rsds.Rda")
sum_rsd<-summary(feat_rsds,na.rm=TRUE)
max_rsd<-max(feat_rsds,na.rm=TRUE)
max_rsd<-round(max_rsd,2)
# print("Summary of RSD across all features:")
#print(sum_rsd)
if(log2.fold.change.thresh_list[length(log2.fold.change.thresh_list)]>max_rsd){
stop(paste("The maximum relative standard deviation threshold in rsd.filt.list should be below ",max_rsd,sep=""))
}
classlabels_parent<-classlabels
classlabels_sub_parent<-classlabels_sub
classlabels_orig_parent<-classlabels_orig
#write.table(classlabels_orig,file="classlabels.txt",sep="\t",row.names=FALSE)
classlabels_response_mat_parent<-classlabels_response_mat
parent_data_m<-round(data_m,5)
res_score<-0
#best_cv_res<-0
best_feats<-{}
best_acc<-0
best_limma_res<-{}
best_logfc_ind<-1
output_dir1<-paste(parentoutput_dir,"/Stage2/",sep="")
dir.create(output_dir1,showWarnings=FALSE)
setwd(output_dir1)
classlabels_sub_parent<-classlabels_sub
classlabels_orig_parent<-classlabels_orig
#write.table(classlabels_orig,file="classlabels.txt",sep="\t",row.names=FALSE)
classlabels_response_mat_parent<-classlabels_response_mat
# rocfeatlist<-rocfeatlist+1
if(pairedanalysis==TRUE){
#print(subject_inf)
write.table(subject_inf,file="subject_inf.txt",sep="\t")
paireddesign=subject_inf
}else{
paireddesign=NA
}
#write.table(classlabels_orig,file="classlabels_orig.txt",sep="\t")
#write.table(classlabels,file="classlabels.txt",sep="\t")
#write.table(classlabels_response_mat,file="classlabels_response_mat.txt",sep="\t")
if(is.na(max_varsel)==TRUE){
max_varsel=dim(data_m)[1]
}
for(lf in 1:length(log2.fold.change.thresh_list))
{
allmetabs_res<-{}
classlabels_response_mat<-classlabels_response_mat_parent
classlabels_sub<-classlabels_sub_parent
classlabels_orig<-classlabels_orig_parent
setwd(parentoutput_dir)
log2.fold.change.thresh=log2.fold.change.thresh_list[lf]
output_dir1<-paste(parentoutput_dir,"/Stage2/",sep="")
dir.create(output_dir1,showWarnings=FALSE)
setwd(output_dir1)
if(logistic_reg==TRUE){
if(robust.estimate==FALSE){ output_dir<-paste(output_dir1,"logitreg","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}else{
if(robust.estimate==TRUE){ output_dir<-paste(output_dir1,"logitregrobust","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}
}
}else{
if(poisson_reg==TRUE){
if(robust.estimate==FALSE){ output_dir<-paste(output_dir1,"poissonreg","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}else{
if(robust.estimate==TRUE){
output_dir<-paste(output_dir1,"poissonregrobust","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}
}
}else{
if(featselmethod=="lmreg"){
if(robust.estimate==TRUE){
output_dir<-paste(output_dir1,"lmregrobust","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}else{
output_dir<-paste(output_dir1,"lmreg","signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}
}else{
output_dir<-paste(output_dir1,parentfeatselmethod,"signalthresh",group.missing.thresh,"RSD",log2.fold.change.thresh,"/",sep="")
}
}
}
dir.create(output_dir,showWarnings=FALSE)
setwd(output_dir)
dir.create("Figures",showWarnings = FALSE)
dir.create("Tables",showWarnings = FALSE)
data_m<-parent_data_m
#print("dim of data_m")
#print(dim(data_m))
pdf_fname<-paste("Figures/Results_RSD",log2.fold.change.thresh,".pdf",sep="")
#zip_fname<-paste("Results_RSD",log2.fold.change.thresh,".zip",sep="")
if(output.device.type=="pdf"){
pdf(pdf_fname,width=10,height=10)
}
if(analysismode=="classification" | analysismode=="regression"){
rsd_filt_msg=(paste("Performing RSD filtering using ",log2.fold.change.thresh, " as threshold",sep=""))
if(log2.fold.change.thresh>=0){
if(log2.fold.change.thresh==0){
log2.fold.change.thresh=0.001
}
#good_metabs<-which(abs(mean_groups)>log2.fold.change.thresh)
abs_feat_rsds<-abs(feat_rsds)
good_metabs<-which(abs_feat_rsds>log2.fold.change.thresh)
#print("length of good_metabs")
#print(good_metabs)
}else{
good_metabs<-seq(1,dim(data_m)[1])
}
if(length(good_metabs)>0){
data_m_fc<-data_m[good_metabs,]
data_m_fc_withfeats<-data_matrix[good_metabs,c(1:2)]
data_matrix_beforescaling_rsd<-data_matrix_beforescaling[good_metabs,]
data_matrix<-data_matrix[good_metabs,]
}else{
#data_m_fc<-data_m
#data_m_fc_withfeats<-data_matrix[,c(1:2)]
stop(paste("Please decrease the maximum relative standard deviation (rsd.filt.thresh) threshold to ",max_rsd,sep=""))
}
}else{
data_m_fc<-data_m
data_m_fc_withfeats<-data_matrix[,c(1:2)]
}
# save(data_m_fc_withfeats,data_m_fc,data_m,data_matrix,file="datadebug.Rda")
ylab_text_raw<-ylab_text
if(log2transform==TRUE || input.intensity.scale=="log2"){
if(znormtransform==TRUE){
ylab_text_2="scale normalized"
}else{
if(quantile_norm==TRUE){
ylab_text_2="quantile normalized"
}else{
ylab_text_2=""
}
}
ylab_text=paste("log2 ",ylab_text," ",ylab_text_2,sep="")
}else{
if(znormtransform==TRUE){
ylab_text_2="scale normalized"
}else{
if(quantile_norm==TRUE){
ylab_text_2="quantile normalized"
}else{
ylab_text_2=""
}
}
ylab_text=paste("Raw ",ylab_text," ",ylab_text_2,sep="") #paste("Raw intensity ",ylab_text_2,sep="")
}
#ylab_text=paste("Abundance",sep="")
if(is.na(names_with_mz_time)==FALSE){
data_m_fc_with_names<-merge(names_with_mz_time,data_m_fc_withfeats,by=c("mz","time"))
data_m_fc_with_names<-data_m_fc_with_names[match(data_m_fc_withfeats$mz,data_m_fc_with_names$mz),]
#save(names_with_mz_time,goodfeats,goodfeats_with_names,file="goodfeats_with_names.Rda")
# goodfeats_name<-goodfeats_with_names$Name
#}
}
# save(data_m_fc_withfeats,data_matrix,data_m,data_m_fc,data_m_fc_with_names,names_with_mz_time,file="debugnames.Rda")
if(dim(data_m_fc)[2]>50){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/SampleIntensityDistribution.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
size_num<-min(100,dim(data_m_fc)[2])
par(mfrow=c(1,1),family="sans",cex=cex.plots)
samp_index<-sample(x=1:dim(data_m_fc)[2],size=size_num)
# try(boxplot(data_m_fc[,samp_index],main="Intensity distribution across samples after preprocessing",xlab="Samples",ylab=ylab_text,col=boxplot.col.opt),silent=TRUE)
#samp_dist_col<-get_boxplot_colors(boxplot.col.opt,class_labels_levels=c(1))
boxplot(data_m_fc[,samp_index],main="Intensity distribution across samples after preprocessing",xlab="Samples",ylab=ylab_text,col="white")
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/SampleIntensityDistribution.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
par(mfrow=c(1,1),family="sans",cex=cex.plots)
try(boxplot(data_m_fc,main="Intensity distribution across samples after preprocessing",xlab="Samples",ylab=ylab_text,col="white"),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
if(is.na(outlier.method)==FALSE){
if(output.device.type!="pdf"){
temp_filename_1<-paste("Figures/OutlierDetection",outlier.method,".png",sep="")
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
par(mfrow=c(1,1),family="sans",cex=cex.plots)
##save(data_matrix,file="dm1.Rda")
outlier_detect(data_matrix=data_matrix,ncomp=2,column.rm.index=c(1,2),outlier.method=outlier.method[1])
# print("done outlier")
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
data_m_fc_withfeats<-cbind(data_m_fc_withfeats,data_m_fc)
allmetabs_res_withnames<-{}
feat_eval[lf]<-0
res_score_vec[lf]<-0
#feat_sigfdrthresh_cv[lf]<-0
filename<-paste(fheader,log2.fold.change.thresh,".txt",sep="")
#write.table(data_m_fc_withfeats, file=filename,sep="\t",row.names=FALSE)
if(length(data_m_fc)>=dim(parent_data_m)[2])
{
if(dim(data_m_fc)[1]>0){
if(ncol(data_m_fc)<30){
kfold=ncol(data_m_fc)
}
feat_eval[lf]<-dim(data_m_fc)[1]
# col_vec<-c("#CC0000","#AAC000","blue","mediumpurple4","mediumpurple1","blueviolet","darkblue","blue","cornflowerblue","cyan4","skyblue",
#"darkgreen", "seagreen1", "green","yellow","orange","pink", "coral1", "palevioletred2",
#"red","saddlebrown","brown","brown3","white","darkgray","aliceblue",
#"aquamarine","aquamarine3","bisque","burlywood1","lavender","khaki3","black")
if(analysismode=="classification")
{
sampleclass<-{}
patientcolors<-{}
#
classlabels<-as.data.frame(classlabels)
#print(classlabels)
f<-factor(classlabels[,1])
for(c in 1:length(class_labels_levels)){
num_samps_group_cur=length(which(ordered_labels==class_labels_levels[c]))
#classlabels<-c(classlabels,rep(paste("Class",class_label_alphabets,sep=""),num_samps_group_cur))
#,rep("ClassB",num_samps_group[[2]]),rep("ClassC",num_samps_group[[3]]))
sampleclass<-c(sampleclass,rep(paste("Class",class_label_alphabets[c],sep=""),num_samps_group_cur))
#sampleclass<-classlabels[,1] #c(sampleclass,rep(paste("Class",class_labels_levels[c],sep=""),num_samps_group_cur))
patientcolors <-c(patientcolors,rep(col_vec[c],num_samps_group_cur))
}
# library(pcaMethods)
#p1<-pcaMethods::pca(data_m_fc,method="rnipals",center=TRUE,scale="uv",cv="q2",nPcs=3)
tempX<-t(data_m_fc)
# p1<-pcaMethods::pca(tempX,method="rnipals",center=TRUE,scale="uv",cv="q2",nPcs=10)
if(output.device.type!="pdf"){
temp_filename_2<-"Figures/PCAdiagnostics_allfeats.png"
# png(temp_filename_2,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
if(output.device.type!="pdf"){
# dev.off()
}
# try(detach("package:pcaMethods",unload=TRUE),silent=TRUE)
if(dim(classlabels)[2]>2){
classgroup<-paste(classlabels[,1],":",classlabels[,2],sep="") #classlabels[,1]:classlabels[,2]
}else{
classgroup<-classlabels
}
classlabels_orig<-classlabels_orig_parent
if(pairedanalysis==TRUE){
#classlabels_orig<-classlabels_orig[,-c(2)]
}else{
if(featselmethod=="lmreg" || featselmethod=="logitreg" || featselmethod=="poissonreg"){
classlabels_orig<-classlabels_orig[,c(1:2)]
classlabels_orig<-as.data.frame(classlabels_orig)
}
}
if(analysismode=="classification"){
if(dim(classlabels_orig)[2]==2){
if(alphabetical.order==FALSE){
classlabels_orig[,2] <- factor(classlabels_orig[,2], levels=unique(classlabels_orig[,2]))
}
}
if(dim(classlabels_orig)[2]==3){
if(pairedanalysis==TRUE){
if(alphabetical.order==FALSE){
classlabels_orig[,3] <- factor(classlabels_orig[,3], levels=unique(classlabels_orig[,3]))
}
}else{
if(alphabetical.order==FALSE){
classlabels_orig[,2] <- factor(classlabels_orig[,2], levels=unique(classlabels_orig[,2]))
classlabels_orig[,3] <- factor(classlabels_orig[,3], levels=unique(classlabels_orig[,3]))
}
}
}else{
if(dim(classlabels_orig)[2]==4){
if(pairedanalysis==TRUE){
if(alphabetical.order==FALSE){
classlabels_orig[,3] <- factor(classlabels_orig[,3], levels=unique(classlabels_orig[,3]))
classlabels_orig[,4] <- factor(classlabels_orig[,4], levels=unique(classlabels_orig[,4]))
}
}
}
}
}
if(length(which(duplicated(data_m_fc_with_names$Name)==TRUE))>0){
print("Duplicate features detected")
print("Removing duplicate entries for the following features:")
# print(data_m_fc_with_names$Name[which(duplicated(data_m_fc_with_names$Name)==TRUE)])
data_m_fc_withfeats<-data_m_fc_withfeats[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m_fc<-data_m_fc[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_matrix<-data_matrix[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m<-data_m[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m_fc_with_names<-data_m_fc_with_names[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
#parent_data_m<-parent_data_m[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
}
##Perform global PCA
if(pca.global.eval==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAplots_allfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "PCA using all features left after pre-processing")
text(5, 7, "The figures include: ")
text(5, 6, "a. pairwise PC score plots ")
text(5, 5, "b. scores for individual samples on each PC")
text(5, 4, "c. Lineplots using PC scores for data with repeated measurements")
###savelist=ls(),file="pcaplotsall.Rda")
# save(data_m_fc_withfeats,classlabels_orig,sample.col.opt,col_vec,pairedanalysis,pca.cex.val,legendlocation,pca.ellipse,ellipse.conf.level,paireddesign,
# lineplot.col.opt,lineplot.lty.option,timeseries.lineplots,pcacenter,pcascale,alphabetical.order,
# analysistype,lme.modeltype,file="pcaplotsall.Rda")
rownames(data_m_fc_withfeats)<-data_m_fc_with_names$Name
# save(data_m_fc_withfeats,data_m_fc_with_names,file="data_m_fc_withfeats.Rda")
classlabels_orig_pca<-classlabels_orig
c1=try(get_pcascoredistplots(X=data_m_fc_withfeats,Y=classlabels_orig,feature_table_file=NA,parentoutput_dir=getwd(),class_labels_file=NA,sample.col.opt=sample.col.opt,
plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,col_vec=col_vec,pairedanalysis=pairedanalysis,pca.cex.val=pca.cex.val,legendlocation=legendlocation,
pca.ellipse=pca.ellipse,ellipse.conf.level=ellipse.conf.level,
filename="all",paireddesign=paireddesign,lineplot.col.opt=lineplot.col.opt,
lineplot.lty.option=lineplot.lty.option,timeseries.lineplots=timeseries.lineplots,
pcacenter=pcacenter,pcascale=pcascale,alphabetical.order=alphabetical.order,
study.design=analysistype,lme.modeltype=lme.modeltype),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
classlabels_orig<-classlabels_orig_parent
}else{
#regression
tempgroup<-rep("A",dim(data_m_fc)[2]) #cbind(classlabels_orig[,1],
col_vec1<-rep("black",dim(data_m_fc)[2])
class_labels_levels_main1<-c("A")
analysistype="regression"
if(length(which(duplicated(data_m_fc_with_names$Name)==TRUE))>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m_fc<-data_m_fc[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_matrix<-data_matrix[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m<-data_m[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
data_m_fc_with_names<-data_m_fc_with_names[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
# parent_data_m<-parent_data_m[-which(duplicated(data_m_fc_with_names$Name)==TRUE),]
print("Duplicate features detected")
print("Removing duplicate entries for the following features:")
print(data_m_fc_with_names$Name[which(duplicated(data_m_fc_with_names$Name)==TRUE)])
}
rownames(data_m_fc_withfeats)<-data_m_fc_with_names$Name
# get_pca(X=data_m_fc,samplelabels=tempgroup,legendlocation=legendlocation,filename="all",
# ncomp=3,pcacenter=pcacenter,pcascale=pcascale,legendcex=0.5,outloc=getwd(),col_vec=col_vec1,
# sample.col.opt=sample.col.opt,alphacol=0.3,class_levels=NA,pca.cex.val=pca.cex.val,pca.ellipse=FALSE,
# paireddesign=paireddesign,alphabetical.order=alphabetical.order,pairedanalysis=pairedanalysis,classlabels_orig=classlabels_orig,analysistype=analysistype) #,silent=TRUE)
if(pca.global.eval==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAplots_allfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "PCA using all features left after pre-processing")
text(5, 7, "The figures include: ")
text(5, 6, "a. pairwise PC score plots ")
text(5, 5, "b. scores for individual samples on each PC")
text(5, 4, "c. Lineplots using PC scores for data with repeated measurements")
###savelist=ls(),file="pcaplotsall.Rda")
###save(data_m_fc_withfeats,classlabels_orig,sample.col.opt,col_vec,pairedanalysis,pca.cex.val,legendlocation,pca.ellipse,ellipse.conf.level,paireddesign,lineplot.col.opt,lineplot.lty.option,timeseries.lineplots,pcacenter,pcascale,file="pcaplotsall.Rda")
c1=try(get_pcascoredistplots(X=data_m_fc_withfeats,Y=classlabels_orig,feature_table_file=NA,parentoutput_dir=getwd(),class_labels_file=NA,
sample.col.opt=sample.col.opt,
plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,col_vec=col_vec,pairedanalysis=pairedanalysis,pca.cex.val=pca.cex.val,legendlocation=legendlocation,
pca.ellipse=pca.ellipse,ellipse.conf.level=ellipse.conf.level,filename="all",
paireddesign=paireddesign,lineplot.col.opt=lineplot.col.opt,lineplot.lty.option=lineplot.lty.option,
timeseries.lineplots=timeseries.lineplots,pcacenter=pcacenter,pcascale=pcascale,alphabetical.order=alphabetical.order,
study.design=analysistype,lme.modeltype=lme.modeltype),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
if(featselmethod=="pamr"){
#print("HERE")
#savedata_m_fc,classlabels,file="pamdebug.Rda")
if(is.na(fdrthresh)==FALSE){
if(fdrthresh>0.5){
pamrthresh=pvalue.thresh
}else{
pamrthresh=fdrthresh
}
}else{
pamrthresh=pvalue.thresh
}
pamr.res<-do_pamr(X=data_m_fc,Y=classlabels,fdrthresh=pamrthresh,nperms=1000,pamr.threshold.select.max=pamr.threshold.select.max,kfold=kfold)
###save(pamr.res,file="pamr.res.Rda")
goodip<-pamr.res$feature.list
if(length(goodip)<1){
goodip=NA
}
pamr.threshold_value<-pamr.res$threshold_value
feature_rowindex<-seq(1,nrow(data_m_fc))
discore<-rep(0,nrow(data_m_fc))
discore_all<-pamr.res$max.discore.allfeats
if(is.na(goodip)==FALSE){
discore[goodip]<-pamr.res$max.discore.sigfeats
sel.diffdrthresh<-feature_rowindex%in%goodip
max_absolute_standardized_centroids_thresh0<-pamr.res$max.discore.allfeats[goodip]
selected_id_withmztime<-cbind(data_m_fc_withfeats[goodip,c(1:2)],pamr.res$pam_toplist,max_absolute_standardized_centroids_thresh0)
###savepamr.res,file="pamr.res.Rda")
write.csv(selected_id_withmztime,file="dscores.selectedfeats.csv",row.names=FALSE)
rank_vec<-rank(-discore_all)
max_absolute_standardized_centroids_thresh0<-pamr.res$max.discore.allfeats
data_limma_fdrall_withfeats<-cbind(max_absolute_standardized_centroids_thresh0,data_m_fc_withfeats)
write.table(data_limma_fdrall_withfeats,file="Tables/pamr_ranked_feature_table.txt",sep="\t",row.names=FALSE)
}else{
goodip<-{}
sel.diffdrthresh<-rep(FALSE,length(feature_rowindex))
}
rank_vec<-rank(-discore_all)
pamr_ythresh<-pamr.res$max.discore.all.thresh-0.00000001
}
if(featselmethod=="rfesvm"){
svm_classlabels<-classlabels[,1]
if(analysismode=="classification"){
svm_classlabels<-as.data.frame(svm_classlabels)
}
# ##save(data_m_fc,svm_classlabels,svm_kernel,file="svmdebug.Rda")
if(length(class_labels_levels)<3){
rfesvmres = diffexpsvmrfe(x=t(data_m_fc),y=svm_classlabels,svmkernel=svm_kernel)
featureRankedList=rfesvmres$featureRankedList
featureWeights=rfesvmres$featureWeights
#best_subset<-featureRankedList$best_subset
}else{
rfesvmres = diffexpsvmrfemulticlass(x=t(data_m_fc),y=svm_classlabels,svmkernel=svm_kernel)
featureRankedList=rfesvmres$featureRankedList
featureWeights=rfesvmres$featureWeights
}
# ##save(rfesvmres,file="rfesvmres.Rda")
rank_vec<-seq(1,dim(data_m_fc_withfeats)[1])
goodip<-featureRankedList[1:max_varsel]
#dtemp1<-data_m_fc_withfeats[goodip,]
sel.diffdrthresh<-rank_vec%in%goodip
rank_vec<-sort(featureRankedList,index.return=TRUE)$ix
weight_vec<-featureWeights #[rank_vec]
data_limma_fdrall_withfeats<-cbind(featureWeights,data_m_fc_withfeats)
}
f1={}
corfit={}
if(featselmethod=="limma" | featselmethod=="limma1way")
{
# cat("Performing limma analysis",sep="\n")
# save(classlabels,classlabels_orig,classlabels_dataframe,classlabels_response_mat,file="cldebug.Rda")
classlabels_temp1<-classlabels
classlabels<-classlabels_dataframe #classlabels_orig
colnames(classlabels)<-c("SampleID","Factor1")
if(alphabetical.order==FALSE){
classlabels$Factor1<-factor(classlabels$Factor1,levels=unique(classlabels$Factor1))
Factor1<-factor(classlabels$Factor1,levels=unique(classlabels$Factor1))
}else{
Factor1<-factor(classlabels$Factor1)
}
if(limma.contrasts.type=="contr.sum"){
contrasts_factor1<-contr.sum(length(levels(factor(Factor1))))
rownames(contrasts_factor1)<-levels(factor(Factor1))
cnames_contr_factor1<-apply(contrasts_factor1,2,function(x){paste(names(x[which(abs(x)==1)]),collapse = "-")})
}else{
contrasts_factor1<-contr.treatment(length(levels(factor(Factor1))))
rownames(contrasts_factor1)<-levels(factor(Factor1))
cnames_contr_factor1<-apply(contrasts_factor1,2,function(x){paste(names(x[1]),names(x[which(abs(x)==1)]),sep = "-")})
}
colnames(contrasts_factor1)<-cnames_contr_factor1
contrasts(Factor1) <- contrasts_factor1
design <- model.matrix(~Factor1)
classlabels<-classlabels_temp1
# design <- model.matrix(~ -1+f)
#colnames(design) <- levels(f)
options(digit=3)
#parameterNames<-colnames(design)
design_mat_names=colnames(design)
design_mat_names<-design_mat_names[-c(1)]
# limma paired analysis
if(pairedanalysis==TRUE){
f1<-{}
for(c in 1:length(class_labels_levels)){
f1<-c(f1,seq(1,num_samps_group[[c]]))
}
#print("Paired samples order")
f1<-subject_inf
# print(subject_inf)
# print("Design matrix")
# print(design)
####savelist=ls(),file="limma.Rda")
##save(subject_inf,file="subject_inf.Rda")
corfit<-duplicateCorrelation(data_m_fc,design=design,block=subject_inf,ndups=1)
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus,method="robust")
}else{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus)
}
}else{
#not paired analysis
if(limmarobust==TRUE)
{
fit <- lmFit(data_m_fc,design,method="robust")
}else{
fit <- lmFit(data_m_fc,design)
}
#fit<-treat(fit,lfc=lf)
}
cont.matrix=attributes(design)$contrasts
#print(data_m_fc[1:3,])
#fit2 <- contrasts.fit(fit, cont.matrix)
#remove the intercept coefficient
fit<-fit[,-1]
fit2 <- eBayes(fit)
# save(fit2,fit,data_m_fc,design,f1,corfit,classlabels,Factor1,cnames_contr_factor1,file="limma.eBayes.fit.Rda")
# Various ways of summarising or plotting the results
#topTable(fit,coef=2)
#write.table(t1,file="topTable_limma.txt",sep="\t")
if(dim(design)[2]>2){
pvalues<-fit2$F.p.value
p.value<-fit2$F.p.value
}else{
pvalues<-fit2$p.value
p.value<-fit2$p.value
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue<-pvalues
#fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(dim(design)[2]<3){
if(fdrmethod=="none"){
filename<-paste("Tables/",parentfeatselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste("Tables/",parentfeatselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
data_limma_fdrall_withfeats<-cbind(p.value,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
pvalues<-p.value
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats,file=filename,sep="\t",row.names=FALSE)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
d4<-as.data.frame(data_limma_fdrall_withfeats)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
#tiff("pval_dist.tiff",compression="lzw")
#hist(d4[,1],xlab="p",main="Distribution of p-values")
#dev.off()
}else{
adjusted.P.value<-fdr_adjust_pvalue
if(limmadecideTests==TRUE){
results2<-decideTests(fit2,method="nestedF",adjust.method="BH",p.value=fdrthresh)
#tiff("comparison_contrast_overlap.tiff",width=plots.width,height=plots.height,res=plots.res, compression="lzw")
#if(length(class_labels_levels)<4){
if(ncol(results2)<5){
if(output.device.type!="pdf"){
temp_filename_5<-"Figures/LIMMA_venn_diagram.png"
png(temp_filename_5,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
vennDiagram(results2,cex=0.8)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}else{
#dev.off()
results2<-fit2$p.value[,-c(1)]
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab2<-colnames(results2)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab2,cnames_tab)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.P.value,results2,data_m_fc_withfeats)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
if(limmarobust==FALSE){
filename<-"Tables/limma_posthoc1wayanova_results.txt"
}else{
filename<-"Tables/limmarobust_posthoc1wayanova_results.txt"
}
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.P.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file=filename,sep="\t",row.names=FALSE)
}else{
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.P.value,data_m_fc_withfeats)
if(fdrmethod=="none"){
filename<-paste("limma_posthoc1wayanova_pval",fdrthresh,"_results.txt",sep="")
}else{
filename<-paste("limma_posthoc1wayanova_fdr",fdrthresh,"_results.txt",sep="")
}
if(length(which(data_limma_fdrall_withfeats$adjusted.P.value<fdrthresh & data_limma_fdrall_withfeats$p.value<pvalue.thresh))>0){
data_limma_sig_withfeats<-data_limma_fdrall_withfeats[data_limma_fdrall_withfeats$adjusted.P.value<fdrthresh & data_limma_fdrall_withfeats$p.value<pvalue.thresh,]
#write.table(data_limma_sig_withfeats, file=filename,sep="\t",row.names=FALSE)
}
# data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,data_m_fc_withfeats)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
final.pvalues<-pvalues
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
}
#pvalues<-data_limma_fdrall_withfeats$p.value
#final.pvalues<-pvalues
# print("checking here")
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
d4<-as.data.frame(data_limma_fdrall_withfeats)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
#tiff("pval_dist.tiff",compression="lzw")
#hist(d4[,1],xlab="p",main="Distribution of p-values")
#dev.off()
if(length(goodip)<1){
print("No features selected.")
}
}
if(featselmethod=="limma2way")
{
# cat("Performing limma2way analysis",sep="\n")
#design <- cbind(Grp1vs2=c(rep(1,num_samps_group[[1]]),rep(0,num_samps_group[[2]])),Grp2vs1=c(rep(0,num_samps_group[[1]]),rep(1,num_samps_group[[2]])))
# print("here")
# save(f,sampleclass,data_m_fc,classlabels,classlabels_orig,file="limma2way.Rda")
classlabels_temp<-classlabels
colnames(classlabels_orig)<-c("SampleID","Factor1","Factor2")
classlabels<- classlabels_orig #classlabels_dataframe #
colnames(classlabels)<-c("SampleID","Factor1","Factor2")
#design <- model.matrix(~ -1+f)
#classlabels<-read.table("/Users/karanuppal/Documents/Emory/JonesLab/Projects/DifferentialExpression/xmsPaNDA/examples_and_manual/Example_feature_table_and_classlabels/classlabels_two_way_anova.txt",sep="\t",header=TRUE)
#classlabels<-classlabels[order(classlabels$Factor2,decreasing = T),]
if(alphabetical.order==FALSE){
classlabels$Factor1<-factor(classlabels$Factor1,levels=unique(classlabels$Factor1))
classlabels$Factor2<-factor(classlabels$Factor2,levels=unique(classlabels$Factor2))
Factor1<-factor(classlabels$Factor1,levels=unique(classlabels$Factor1))
Factor2<-factor(classlabels$Factor2,levels=unique(classlabels$Factor2))
}else{
Factor1<-factor(classlabels$Factor1)
Factor2<-factor(classlabels$Factor2)
}
#this will create sum to zero parametrization. Coefficient Comparison Interpretation
#contrasts(Strain) <- contr.sum(2)
#contrasts(Treatment) <- contr.sum(2)
#design <- model.matrix(~Strain*Treatment)
#Intercept (WT.U+WT.S+Mu.U+Mu.S)/4; Grand mean
#Strain1 (WT.U+WT.S-Mu.U-Mu.S)/4; strain main effect
#Treatment1 (WT.U-WT.S+Mu.U-Mu.S)/4; treatment main effect
#Strain1:Treatment1 (WT.U-WT.S-Mu.U+Mu.S)/4; Interaction
if(limma.contrasts.type=="contr.sum"){
contrasts_factor1<-contr.sum(length(levels(factor(Factor1))))
contrasts_factor2<-contr.sum(length(levels(factor(Factor2))))
rownames(contrasts_factor1)<-levels(factor(Factor1))
rownames(contrasts_factor2)<-levels(factor(Factor2))
cnames_contr_factor1<-apply(contrasts_factor1,2,function(x){paste(names(x[which(abs(x)==1)]),collapse = "-")})
cnames_contr_factor2<-apply(contrasts_factor2,2,function(x){paste(names(x[which(abs(x)==1)]),collapse = "-")})
}else{
contrasts_factor1<-contr.treatment(length(levels(factor(Factor1))))
contrasts_factor2<-contr.treatment(length(levels(factor(Factor2))))
rownames(contrasts_factor1)<-levels(factor(Factor1))
rownames(contrasts_factor2)<-levels(factor(Factor2))
cnames_contr_factor1<-apply(contrasts_factor1,2,function(x){paste(names(x[1]),names(x[which(abs(x)==1)]),sep = "-")})
cnames_contr_factor2<-apply(contrasts_factor2,2,function(x){paste(names(x[1]),names(x[which(abs(x)==1)]),sep= "-")})
}
colnames(contrasts_factor1)<-cnames_contr_factor1
colnames(contrasts_factor2)<-cnames_contr_factor2
contrasts(Factor1) <- contrasts_factor1
contrasts(Factor2) <- contrasts_factor2
design <- model.matrix(~Factor1*Factor2)
# fit<-lmFit(data_m_fc,design=design)
#2. this will create contrasts with respect to the reference group (first level in each factor)
if(FALSE){
contrasts(Factor1) <- contr.treatment(length(levels(factor(Factor1))))
contrasts(Factor2) <- contr.treatment(length(levels(factor(Factor2))))
design.trt <- model.matrix(~Factor1*Factor2)
fit.trt<-lmFit(data_m_fc,design=design.trt)
s1=apply(fit.trt$coefficients,2,function(x){
length(which(is.na(x))==TRUE)/length(x)
})
}
#3. this will create design matrix with all factors
call<-lapply(classlabels[,c(2:3)],contrasts,contrasts=FALSE)
design.all<-model.matrix(~Factor1*Factor2,data=classlabels,contrasts.arg=call)
#grand mean: mean of means (mean of each level)
#mean_per_level<-lapply(2:ncol(design.all),function(x){mean(data_m_fc[1,which(design.all[,x]==1)])})
#mean_per_level<-unlist(mean_per_level)
#names(mean_per_level)<-colnames(design.all[,-1])
#grand_mean<-mean(mean_per_level,na.rm=TRUE)
#grand_mean<-with(d,tapply(data_m_fc[1,],list(Factor1,Factor2),mean))
# colnames(design)<-gsub(colnames(design),pattern="Factor1",replacement="")
#colnames(design)<-gsub(colnames(design),pattern="Factor2",replacement="")
# save(design,f,sampleclass,data_m_fc,classlabels,classlabels_orig,file="limma2way.Rda")
classlabels<-classlabels_temp
# print(data_m_fc[1:4,])
#colnames(design) <- levels(f)
#colnames(design)<-levels(factor(sampleclass))
options(digit=3)
parameterNames<-colnames(design)
# print("Design matrix")
# print(design)
if(pairedanalysis==TRUE)
{
f1<-subject_inf
#print(data_m_fc[1:10,1:10])
#save(design,subject_inf,file="limmadesign.Rda")
}
if(dim(design)[2]>=1){
#cont.matrix <- makeContrasts(Grp1vs2="ClassA-ClassB",Grp1vs3="ClassC-ClassD",Grp2vs3=("ClassA-ClassB")-("ClassC-ClassD"),levels=design)
#cont.matrix <- makeContrasts(Grp1vs2=ClassA-ClassB,Grp1vs3=ClassC-ClassD,Grp2vs3=(ClassA-ClassB)-(ClassC-ClassD),Grp3vs4=ClassA-ClassC,Group2vs4=ClassB-ClassD,levels=design)
#cont.matrix <- makeContrasts(Factor1=(ClassA+ClassB)-(ClassC+ClassD),Factor2=(ClassA+ClassC)-(ClassB+ClassD),Factor1x2=(ClassA-ClassB)-(ClassC-ClassD),levels=design)
design.pairs <-
function(levels) {
n <- length(levels)
design <- matrix(0,n,choose(n,2))
rownames(design) <- levels
colnames(design) <- 1:choose(n,2)
k <- 0
for (i in 1:(n-1))
for (j in (i+1):n) {
k <- k+1
design[i,k] <- 1
design[j,k] <- -1
colnames(design)[k] <- paste(levels[i],"-",levels[j],sep="")
}
design
}
#levels_1<-levels(factor(classlabels[,2]))
#levels_2<-levels(factor(classlabels[,3]))
#design2<-design.pairs(c(as.character(levels_1),as.character(levels_2)))
#cont.matrix<-makeContrasts(contrasts=colnames(design2),levels=c(as.character(levels_1),as.character(levels_2)))
if(pairedanalysis==TRUE){
#class_table_facts<-table(classlabels)
#f1<-c(seq(1,num_samps_group[[1]]),seq(1,num_samps_group[[2]]),seq(1,num_samps_group[[1]]),seq(1,num_samps_group[[2]]))
corfit<-duplicateCorrelation(data_m_fc,design=design,block=subject_inf,ndups=1)
#print(f1)
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus,method="robust")
}else
{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus)
}
s1=apply(fit$coefficients,2,function(x){
length(which(is.na(x))==TRUE)/length(x)
})
if(length(which(s1==1))>0){
design<-design[,-which(s1==1)]
#fit <- lmFit(data_m_fc,design)
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus,method="robust")
}else{
fit<-lmFit(data_m_fc,design,block=f1,cor=corfit$consensus)
}
}
}
else{
# fit <- lmFit(data_m_fc,design)
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,method="robust")
}else{
fit <- lmFit(data_m_fc,design)
}
s1=apply(fit$coefficients,2,function(x){
return(length(which(is.na(x))==TRUE)/length(x))
})
if(length(which(s1==1))>0){
design<-design[,-which(s1==1)]
if(limmarobust==TRUE)
{
fit<-lmFit(data_m_fc,design,method="robust")
}else{
fit<-lmFit(data_m_fc,design)
}
}
}
}
fit<-fit[,-1]
fit2=eBayes(fit)
results <- topTableF(fit2, n=Inf)
# decideresults<-decideTests(fit2)
# Ordinary fit
# save(fit2,fit,results,file="limma.eBayes.fit.Rda")
#fit2 <- contrasts.fit(fit, cont.matrix)
#fit2 <- eBayes(fit2)
#as.data.frame(fit2[1:10,])
# Various ways of summarising or plotting the results
#topTable(fit2,coef=2)
# ##save(fit2,file="fit2.Rda")
if(dim(design)[2]>2){
pvalues<-fit2$F.p.value
p.value<-fit2$F.p.value
}else{
pvalues<-fit2$p.value
p.value<-fit2$p.value
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
# fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
#print("Doing this:")
adjusted.p.value<-fdr_adjust_pvalue
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,data_m_fc_withfeats)
if(limmadecideTests==TRUE){
results2<-decideTests(fit2,adjust.method="BH",method="nestedF",p.value=fdrthresh) #
#tiff("comparison_contrast_overlap.tiff",width=plots.width,height=plots.height,res=plots.res, compression="lzw")
# save(results2,file="results2.Rda")
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab2<-colnames(results2)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab2,cnames_tab)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_withfeats)
if(limmarobust==FALSE){
filename<-"Tables/limma_2wayposthoc_decideresults.txt"
}else{
filename<-"Tables/limmarobust_2wayposthoc_decideresults.txt"
}
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
#if(length(class_labels_levels)<5){
if(ncol(results2)<6){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/LIMMA_venn_diagram.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
vennDiagram(results2,cex=0.8)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
else{
#dev.off()
results2<-fit2$p.value[,-c(1)]
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab2<-colnames(results2)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab2,cnames_tab)
#save(data_m_fc_withfeats,names)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_withfeats)
if(limmarobust==FALSE){
filename<-"Tables/limma_2wayposthoc_pvalues.txt"
}else{
filename<-"Tables/limmarobust_2wayposthoc_pvalues.txt"
}
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file=filename,sep="\t",row.names=FALSE)
}else{
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
#tiff("comparison_contrast_overlap.tiff",width=plots.width,height=plots.height,res=plots.res, compression="lzw")
#dev.off()
#results2<-fit2$p.value
classlabels_orig<-as.data.frame(classlabels_orig)
data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,data_m_fc_withfeats)
# data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#write.table(data_limma_fdrall_withfeats,file="Limma_posthoc2wayanova_results.txt",sep="\t",row.names=FALSE)
#print("checking here")
pvalues<-p.value
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
d4<-as.data.frame(data_limma_fdrall_withfeats)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
#results2<-decideTests(fit2,method="nestedF",adjust.method=fdrmethod,p.value=fdrthresh)
if(length(goodip)<1){
print("No features selected.")
}
}
if(featselmethod=="RF")
{
# cat("Performing RF analysis",sep="\n")
maxint<-apply(data_m_fc,1,max)
data_m_fc_withfeats<-as.data.frame(data_m_fc_withfeats)
data_m_fc<-as.data.frame(data_m_fc)
#write.table(classlabels,file="classlabels_rf.txt",sep="\t",row.names=FALSE)
#save(data_m_fc,classlabels,numtrees,analysismode,file="rfdebug.Rda")
if(rfconditional==TRUE){
cat("Performing random forest analysis using the cforest",sep="\n")
#rfcondres1<-do_rf_conditional(X=data_m_fc,rf_classlabels,ntrees=numtrees,analysismode) #,silent=TRUE)
filename<-"RFconditional_VIM_allfeats.txt"
}else{
#varimp_res2<-do_rf(X=data_m_fc,classlabels=rf_classlabels,ntrees=numtrees,analysismode)
if(analysismode=="classification"){
rf_classlabels<-classlabels[,1]
#print("Performing random forest analysis using the randomForest and Boruta functions")
varimp_res2<-do_rf_boruta(X=data_m_fc,classlabels=rf_classlabels) #,ntrees=numtrees,analysismode)
filename<-"RF_VIM_Boruta_allfeats.txt"
varimp_rf_thresh=0
}else{
rf_classlabels<-classlabels
#print("Performing random forest analysis using the randomForest function")
varimp_res2<-do_rf(X=data_m_fc,classlabels=rf_classlabels,ntrees=numtrees,analysismode)
# save(varimp_res2,data_m_fc,rf_classlabels,numtrees,analysismode,file="varimp_res2.Rda")
filename<-"RF_VIM_regression_allfeats.txt"
varimp_res2<-varimp_res2$rf_varimp #rf_varimp_scaled
#find the lowest value within the top max_varsel features to use as threshold
varimp_rf_thresh<-min(varimp_res2[order(varimp_res2,decreasing=TRUE)[1:(max_varsel+1)]],na.rm=TRUE)
}
}
names(varimp_res2)<-rownames(data_m_fc)
varimp_res3<-cbind(data_m_fc_withfeats[,c(1:2)],varimp_res2)
rownames(varimp_res3)<-rownames(data_m_fc)
filename<-paste("Tables/",filename,sep="")
write.table(varimp_res3, file=filename,sep="\t",row.names=TRUE)
goodip<-which(varimp_res2>varimp_rf_thresh)
if(length(goodip)<1){
print("No features were selected using the selection criteria.")
}
var_names<-rownames(data_m_fc) #paste(sprintf("%.3f",data_m_fc_withfeats[,1]),sprintf("%.1f",data_m_fc_withfeats[,2]),sep="_")
names(varimp_res2)<-as.character(var_names)
sel.diffdrthresh<-varimp_res2>varimp_rf_thresh
if(length(which(sel.diffdrthresh==TRUE))<1){
print("No features were selected using the selection criteria")
}
num_var_rf<-length(which(sel.diffdrthresh==TRUE))
if(num_var_rf>10){
num_var_rf=10
}
sorted_varimp_res<-varimp_res2[order(varimp_res2,decreasing=TRUE)[1:(num_var_rf)]]
sorted_varimp_res<-rev(sort(sorted_varimp_res))
barplot_text=paste("Variable Importance Measure (VIM) \n(top ",length(sorted_varimp_res)," shown)\n",sep="")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/RF_selectfeats_VIMbarplot.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
par(mar=c(10,7,4,2))
# ##save(varimp_res2,data_m_fc,rf_classlabels,sorted_varimp_res,file="test_rf.Rda")
#xaxt="n",
x=barplot(sorted_varimp_res, xlab="", main=barplot_text,cex.axis=0.9,
cex.names=0.9, ylab="",las=2,ylim=range(pretty(c(0,sorted_varimp_res))))
title(ylab = "VIM", cex.lab = 1.5,
line = 4.5)
#x <- barplot(table(mtcars$cyl), xaxt="n")
# labs <- names(sorted_varimp_res)
# text(cex=0.7, labs, xpd=FALSE, srt=45) #,x=x-.25, y=-1.25)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
par(mfrow = c(1,1))
rank_num<-rank(-varimp_res2)
data_limma_fdrall_withfeats<-cbind(varimp_res2,rank_num,data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("VIM","Rank",cnames_tab)
goodip<-which(sel.diffdrthresh==TRUE)
feat_sigfdrthresh[lf]<-length(which(sel.diffdrthresh==TRUE))
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
#write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
if(featselmethod=="MARS"){
# cat("Performing MARS analysis",sep="\n")
#print(head(classlabels))
mars_classlabels<-classlabels #[,1]
marsres1<-do_mars(X=data_m_fc,mars_classlabels, analysismode,kfold)
#save(data_m_fc,mars_classlabels, analysismode,kfold,marsres1,file="mars.Rda")
varimp_marsres1<-marsres1$mars_varimp
rownames(varimp_marsres1)<-rownames(data_m_fc)
mars_mznames<-rownames(varimp_marsres1)
#all_names<-paste("mz",seq(1,dim(data_m_fc)[1]),sep="")
#com1<-match(all_names,mars_mznames)
filename<-"MARS_variable_importance.txt"
if(is.na(max_varsel)==FALSE){
if(max_varsel>dim(data_m_fc)[1]){
max_varsel=dim(data_m_fc)[1]
}
varimp_res2<-varimp_marsres1[,4]
#sort by VIM; and keep the top max_varsel scores
sorted_varimp_res<-varimp_res2[order(varimp_res2,decreasing=TRUE)[1:(max_varsel)]]
#get the minimum VIM from the top max_varsel scores
min_thresh<-min(sorted_varimp_res[which(sorted_varimp_res>=mars.gcv.thresh)],na.rm=TRUE)
row_num_vec<-seq(1,length(varimp_res2))
#only use the top max_varsel scores
#goodip<-order(varimp_res2,decreasing=TRUE)[1:(max_varsel)]
#sel.diffdrthresh<-row_num_vec%in%goodip
#use a threshold of mars.gcv.thresh
sel.diffdrthresh<-varimp_marsres1[,4]>=min_thresh
goodip<-which(sel.diffdrthresh==TRUE)
}else{
#use a threshold of mars.gcv.thresh
sel.diffdrthresh<-varimp_marsres1[,4]>=mars.gcv.thresh
goodip<-which(sel.diffdrthresh==TRUE)
}
num_var_rf<-length(which(sel.diffdrthresh==TRUE))
if(num_var_rf>10){
num_var_rf=10
}
sorted_varimp_res<-varimp_res2[order(varimp_res2,decreasing=TRUE)[1:(num_var_rf)]]
sorted_varimp_res<-sort(sorted_varimp_res)
barplot_text=paste("Generalized cross validation (top ",length(sorted_varimp_res)," shown)\n",sep="")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/MARS_selectfeats_GCVbarplot.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
# barplot(sorted_varimp_res, xlab="Selected features", main=barplot_text,cex.axis=0.5,cex.names=0.4, ylab="GCV",range(pretty(c(0,sorted_varimp_res))),space=0.1)
par(mar=c(10,7,4,2))
# ##save(varimp_res2,data_m_fc,rf_classlabels,sorted_varimp_res,file="test_rf.Rda")
#xaxt="n",
x=barplot(sorted_varimp_res, xlab="", main=barplot_text,cex.axis=0.9,
cex.names=0.9, ylab="",las=2,ylim=range(pretty(c(0,sorted_varimp_res))))
title(ylab = "GCV", cex.lab = 1.5,
line = 4.5)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
data_limma_fdrall_withfeats<-cbind(varimp_marsres1[,c(4,6)],data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("GCV importance","RSS importance",cnames_tab)
feat_sigfdrthresh[lf]<-length(which(sel.diffdrthresh==TRUE))
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
goodip<-which(sel.diffdrthresh==TRUE)
}
if(featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls" | featselmethod=="spls" | featselmethod=="o1spls" | featselmethod=="o2spls")
{
cat(paste("Performing ",featselmethod," analysis",sep=""),sep="\n")
classlabels<-as.data.frame(classlabels)
if(is.na(max_comp_sel)==TRUE){
max_comp_sel=pls_ncomp
}
rand_pls_sel<-{} #new("list")
if(featselmethod=="spls" | featselmethod=="o1spls" | featselmethod=="o2spls"){
if(featselmethod=="o1spls"){
featselmethod="o1pls"
}else{
if(featselmethod=="o2spls"){
featselmethod="o2pls"
}
}
if(pairedanalysis==TRUE){
classlabels_temp<-cbind(classlabels_sub[,2],classlabels)
set.seed(999)
plsres1<-do_plsda(X=data_m_fc,Y=classlabels_sub,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,keepX=max_varsel,sparseselect=TRUE,
analysismode,sample.col.opt=sample.col.opt,sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,
optselect=optselect,class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,output.device.type=output.device.type,
plots.res=plots.res,plots.width=plots.width,plots.height=plots.height,plots.type=plots.type,pls.ellipse=pca.ellipse,alphabetical.order=alphabetical.order)
if (is(plsres1, "try-error")){
print(paste("sPLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
#break;
}
opt_comp<-plsres1$opt_comp
#for(randindex in 1:100)
#save(plsres1,file="plsres1.Rda")
if(is.na(pls.permut.count)==FALSE){
set.seed(999)
seedvec<-runif(pls.permut.count,10,10*pls.permut.count)
if(pls.permut.count>0){
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterEvalQ(cl,library(plsgenomics))
clusterEvalQ(cl,library(dplyr))
clusterEvalQ(cl,library(plyr))
clusterExport(cl,"pls.lda.cv",envir = .GlobalEnv)
clusterExport(cl,"plsda_cv",envir = .GlobalEnv)
#clusterExport(cl,"%>%",envir = .GlobalEnv) #%>%
clusterExport(cl,"do_plsda_rand",envir = .GlobalEnv)
clusterEvalQ(cl,library(mixOmics))
clusterEvalQ(cl,library(pls))
rand_pls_sel<-parLapply(cl,1:pls.permut.count,function(x)
{
set.seed(seedvec[x])
plsresrand<-do_plsda_rand(X=data_m_fc,Y=classlabels_sub[sample(x=seq(1,dim(classlabels_sub)[1]),
size=dim(classlabels_sub)[1]),],oscmode=featselmethod,
numcomp=opt_comp,kfold=kfold,evalmethod=pred.eval.method,keepX=max_varsel,sparseselect=TRUE,
analysismode,sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,
optselect=FALSE,class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,plotindiv=FALSE,alphabetical.order=alphabetical.order) #,silent=TRUE)
#rand_pls_sel<-cbind(rand_pls_sel,plsresrand$vip_res[,1])
if (is(plsresrand, "try-error")){
return(rep(0,dim(data_m_fc)[1]))
}else{
return(plsresrand$vip_res[,1])
}
})
stopCluster(cl)
}
}
}else{
#plsres1<-try(do_plsda(X=data_m_fc,Y=classlabels,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,keepX=max_varsel,sparseselect=TRUE,analysismode,sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=optselect,class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,pls.vip.selection=pls.vip.selection),silent=TRUE)
# ##save(data_m_fc,classlabels,pls_ncomp,kfold,file="pls1.Rda")
set.seed(999)
plsres1<-do_plsda(X=data_m_fc,Y=classlabels,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,
keepX=max_varsel,sparseselect=TRUE,analysismode,sample.col.opt=sample.col.opt,sample.col.vec=col_vec,
scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=optselect,
class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,
pls.vip.selection=pls.vip.selection,output.device.type=output.device.type,
plots.res=plots.res,plots.width=plots.width,plots.height=plots.height,plots.type=plots.type,pls.ellipse=pca.ellipse,alphabetical.order=alphabetical.order)
opt_comp<-plsres1$opt_comp
if (is(plsres1, "try-error")){
print(paste("sPLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
break;
}
#for(randindex in 1:100)
if(is.na(pls.permut.count)==FALSE){
set.seed(999)
seedvec<-runif(pls.permut.count,10,10*pls.permut.count)
if(pls.permut.count>0){
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterEvalQ(cl,library(plsgenomics))
clusterEvalQ(cl,library(dplyr))
clusterEvalQ(cl,library(plyr))
clusterExport(cl,"pls.lda.cv",envir = .GlobalEnv)
clusterExport(cl,"plsda_cv",envir = .GlobalEnv)
#clusterExport(cl,"%>%",envir = .GlobalEnv) #%>%
clusterExport(cl,"do_plsda_rand",envir = .GlobalEnv)
clusterEvalQ(cl,library(mixOmics))
clusterEvalQ(cl,library(pls))
rand_pls_sel<-parLapply(cl,1:pls.permut.count,function(x)
{
set.seed(seedvec[x])
plsresrand<-do_plsda_rand(X=data_m_fc,Y=classlabels[sample(x=seq(1,dim(classlabels)[1]),size=dim(classlabels)[1]),],oscmode=featselmethod,numcomp=opt_comp,kfold=kfold,
evalmethod=pred.eval.method,keepX=max_varsel,sparseselect=TRUE,analysismode,sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,
pairedanalysis=pairedanalysis,optselect=FALSE,class_labels_levels_main=class_labels_levels_main,
legendlocation=legendlocation,plotindiv=FALSE,alphabetical.order=alphabetical.order)
#rand_pls_sel<-cbind(rand_pls_sel,plsresrand$vip_res[,1])
#return(plsresrand$vip_res[,1])
if (is(plsresrand, "try-error")){
return(rep(0,dim(data_m_fc)[1]))
}else{
return(plsresrand$vip_res[,1])
}
})
stopCluster(cl)
}
}
}
pls_vip_thresh<-0
if (is(plsres1, "try-error")){
print(paste("sPLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
break;
}else{
opt_comp<-plsres1$opt_comp
}
}else{
#PLS
if(pairedanalysis==TRUE){
classlabels_temp<-cbind(classlabels_sub[,2],classlabels)
plsres1<-do_plsda(X=data_m_fc,Y=classlabels_temp,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,
keepX=max_varsel,sparseselect=FALSE,analysismode=analysismode,vip.thresh=pls_vip_thresh,sample.col.opt=sample.col.opt,
sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=optselect,
class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,pls.vip.selection=pls.vip.selection,
output.device.type=output.device.type,plots.res=plots.res,plots.width=plots.width,
plots.height=plots.height,plots.type=plots.type,pls.ellipse=pca.ellipse,alphabetical.order=alphabetical.order)
if (is(plsres1, "try-error")){
print(paste("PLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
break;
}else{
opt_comp<-plsres1$opt_comp
}
}else{
plsres1<-do_plsda(X=data_m_fc,Y=classlabels,oscmode=featselmethod,numcomp=pls_ncomp,kfold=kfold,evalmethod=pred.eval.method,keepX=max_varsel,
sparseselect=FALSE,analysismode=analysismode,vip.thresh=pls_vip_thresh,sample.col.opt=sample.col.opt,
sample.col.vec=col_vec,scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=optselect,
class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,pls.vip.selection=pls.vip.selection,
output.device.type=output.device.type,plots.res=plots.res,plots.width=plots.width,plots.height=plots.height,
plots.type=plots.type,pls.ellipse=pca.ellipse,alphabetical.order=alphabetical.order)
if (is(plsres1, "try-error")){
print(paste("PLS could not be performed at RSD threshold: ",log2.fold.change.thresh,sep=""))
break;
}else{
opt_comp<-plsres1$opt_comp
}
#for(randindex in 1:100){
if(is.na(pls.permut.count)==FALSE){
set.seed(999)
seedvec<-runif(pls.permut.count,10,10*pls.permut.count)
if(pls.permut.count>0){
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterEvalQ(cl,library(plsgenomics))
clusterEvalQ(cl,library(dplyr))
clusterEvalQ(cl,library(plyr))
clusterExport(cl,"pls.lda.cv",envir = .GlobalEnv)
clusterExport(cl,"plsda_cv",envir = .GlobalEnv)
#clusterExport(cl,"%>%",envir = .GlobalEnv) #%>%
clusterExport(cl,"do_plsda_rand",envir = .GlobalEnv)
clusterEvalQ(cl,library(mixOmics))
clusterEvalQ(cl,library(pls))
#here
rand_pls_sel<-parLapply(cl,1:pls.permut.count,function(x)
{
set.seed(seedvec[x])
#t1fname<-paste("ranpls",x,".Rda",sep="")
####savelist=ls(),file=t1fname)
print(paste("PLSDA permutation number: ",x,sep=""))
plsresrand<-do_plsda_rand(X=data_m_fc,Y=classlabels[sample(x=seq(1,dim(classlabels)[1]),size=dim(classlabels)[1]),],
oscmode=featselmethod,numcomp=opt_comp,kfold=kfold,evalmethod=pred.eval.method,
keepX=max_varsel,sparseselect=FALSE,analysismode,sample.col.vec=col_vec,
scoreplot_legend=scoreplot_legend,pairedanalysis=pairedanalysis,optselect=FALSE,
class_labels_levels_main=class_labels_levels_main,legendlocation=legendlocation,plotindiv=FALSE,alphabetical.order=alphabetical.order) #,silent=TRUE)
if (is(plsresrand, "try-error")){
return(1)
}else{
return(plsresrand$vip_res[,1])
}
})
stopCluster(cl)
}
####saverand_pls_sel,file="rand_pls_sel1.Rda")
}
}
opt_comp<-plsres1$opt_comp
}
if(length(plsres1$bad_variables)>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-c(plsres1$bad_variables),]
data_m_fc<-data_m_fc[-c(plsres1$bad_variables),]
}
if(is.na(pls.permut.count)==FALSE){
if(pls.permut.count>0){
###saverand_pls_sel,file="rand_pls_sel.Rda")
#rand_pls_sel<-ldply(rand_pls_sel,rbind) #unlist(rand_pls_sel)
rand_pls_sel<-as.data.frame(rand_pls_sel)
rand_pls_sel<-t(rand_pls_sel)
rand_pls_sel<-as.data.frame(rand_pls_sel)
if(featselmethod=="spls"){
rand_pls_sel[rand_pls_sel!=0]<-1
}else{
rand_pls_sel[rand_pls_sel<pls_vip_thresh]<-0
rand_pls_sel[rand_pls_sel>=pls_vip_thresh]<-1
}
####saverand_pls_sel,file="rand_pls_sel2.Rda")
rand_pls_sel_prob<-apply(rand_pls_sel,2,sum)/pls.permut.count
#rand_pls_sel_fdr<-p.adjust(rand_pls_sel_prob,method=fdrmethod)
pvalues<-rand_pls_sel_prob
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue<-pvalues
#fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
rand_pls_sel_fdr<-fdr_adjust_pvalue
vip_res<-cbind(data_m_fc_withfeats[,c(1:2)],plsres1$vip_res,rand_pls_sel_prob,rand_pls_sel_fdr)
}else{
vip_res<-cbind(data_m_fc_withfeats[,c(1:2)],plsres1$vip_res)
rand_pls_sel_fdr<-rep(0,dim(data_m_fc_withfeats[,c(1:2)])[1])
rand_pls_sel_prob<-rep(0,dim(data_m_fc_withfeats[,c(1:2)])[1])
}
}else{
vip_res<-cbind(data_m_fc_withfeats[,c(1:2)],plsres1$vip_res)
rand_pls_sel_fdr<-rep(0,dim(data_m_fc_withfeats[,c(1:2)])[1])
rand_pls_sel_prob<-rep(0,dim(data_m_fc_withfeats[,c(1:2)])[1])
}
write.table(vip_res,file="Tables/vip_res.txt",sep="\t",row.names=FALSE)
# write.table(r2_q2_valid_res,file="pls_r2_q2_res.txt",sep="\t",row.names=TRUE)
varimp_plsres1<-plsres1$selected_variables
opt_comp<-plsres1$opt_comp
if(max_comp_sel>opt_comp){
max_comp_sel<-opt_comp
}
# print("opt comp is")
#print(opt_comp)
if(featselmethod=="spls"){
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("Loading (absolute)","Rank",cnames_tab)
#
if(opt_comp>1){
#abs
vip_res1<-abs(plsres1$vip_res)
if(max_comp_sel>1){
vip_res1<-apply(vip_res1[,c(1:max_comp_sel)],1,max)
}else{
vip_res1<-vip_res1[,c(1)]
}
}else{
vip_res1<-abs(plsres1$vip_res)
}
pls_vip<-vip_res1 #(plsres1$vip_res)
if(is.na(pls.permut.count)==FALSE){
#based on loadings for sPLS
sel.diffdrthresh<-pls_vip!=0 & rand_pls_sel_fdr<fdrthresh & rand_pls_sel_prob<pvalue.thresh
}else{
# print("DOING SPLS #here999")
sel.diffdrthresh<-pls_vip!=0
}
goodip<-which(sel.diffdrthresh==TRUE)
# save(goodip,pls_vip,rand_pls_sel_fdr,rand_pls_sel_prob,sel.diffdrthresh,file="splsdebug1.Rda")
}else{
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("VIP","Rank",cnames_tab)
if(max_comp_sel>opt_comp){
max_comp_sel<-opt_comp
}
#pls_vip<-plsres1$vip_res[,c(1:max_comp_sel)]
if(opt_comp>1){
vip_res1<-(plsres1$vip_res)
if(max_comp_sel>1){
if(pls.vip.selection=="mean"){
vip_res1<-apply(vip_res1[,c(1:max_comp_sel)],1,mean)
}else{
vip_res1<-apply(vip_res1[,c(1:max_comp_sel)],1,max)
}
}else{
vip_res1<-vip_res1[,c(1)]
}
}else{
vip_res1<-plsres1$vip_res
}
#vip_res1<-plsres1$vip_res
pls_vip<-vip_res1
#pls
sel.diffdrthresh<-pls_vip>=pls_vip_thresh & rand_pls_sel_fdr<fdrthresh & rand_pls_sel_prob<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
}
rank_vec<-order(pls_vip,decreasing=TRUE)
rank_vec2<-seq(1,length(rank_vec))
ranked_vec<-pls_vip[rank_vec]
rank_num<-match(pls_vip,ranked_vec)
data_limma_fdrall_withfeats<-cbind(pls_vip,rank_num,data_m_fc_withfeats)
feat_sigfdrthresh[lf]<-length(which(sel.diffdrthresh==TRUE)) #length(plsres1$selected_variables) #length(which(sel.diffdrthresh==TRUE))
filename<-paste("Tables/",parentfeatselmethod,"_variable_importance.txt",sep="")
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
#stop("Please choose limma, RF, RFcond, or MARS for featselmethod.")
if(featselmethod=="lmreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat"| featselmethod=="logitreg" | featselmethod=="wilcox" | featselmethod=="ttest" |
featselmethod=="ttestrepeat" | featselmethod=="poissonreg" | featselmethod=="wilcoxrepeat" | featselmethod=="lmregrepeat")
{
pvalues<-{}
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
if(featselmethod=="ttestrepeat"){
featselmethod="ttest"
pairedanalysis=TRUE
}
if(featselmethod=="wilcoxrepeat"){
featselmethod="wilcox"
pairedanalysis=TRUE
}
if(featselmethod=="lm1wayanova")
{
# cat("Performing one-way ANOVA analysis",sep="\n")
#print(dim(data_m_fc))
#print(dim(classlabels_response_mat))
#print(dim(classlabels))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
numcores<-round(detectCores()*0.6)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexponewayanova",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
clusterExport(cl,"TukeyHSD",envir = .GlobalEnv)
clusterExport(cl,"aov",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
data_mat_anova<-as.data.frame(data_mat_anova)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-c("Response","Factor1")
data_mat_anova$Factor1<-as.factor(data_mat_anova$Factor1)
anova_res<-diffexponewayanova(dataA=data_mat_anova)
return(anova_res)
},classlabels_response_mat)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
#print(head(res1))
for(i in 1:length(res1)){
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues<-c(pvalues,res1[[i]]$mainpvalues[1])
posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthocfactor1)
}
pvalues<-unlist(pvalues)
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-"lm1wayanova_pvalall_posthoc.txt"
}else{
filename<-"lm1wayanova_fdrall_posthoc.txt"
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
if(length(posthoc_names)<1){
posthoc_names<-c("Factor1vs2")
}
cnames_tab<-c("P.value","adjusted.P.value",posthoc_names,cnames_tab)
#cnames_tab<-c("P.value","adjusted.P.value","posthoc.pvalue",cnames_tab)
pvalues<-as.data.frame(pvalues)
#pvalues<-t(pvalues)
pvalues<-as.data.frame(pvalues)
final.pvalues<-pvalues
#final.pvalues<-pvalues
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,posthoc_pval_mat,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#gohere
if(length(check_names)>0){
# data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
#colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
filename<-paste("Tables/",filename,sep="")
if(length(rem_col_ind)>0){
#write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file=filename,sep="\t",row.names=FALSE)
}else{
#write.table(data_limma_fdrall_withfeats,file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
if(featselmethod=="ttest" && pairedanalysis==TRUE)
{
# cat("Performing paired t-test analysis",sep="\n")
#print(dim(data_m_fc))
#print(dim(classlabels_response_mat))
#print(dim(classlabels))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
numcores<-round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"t.test",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
data_mat_anova<-as.data.frame(data_mat_anova)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-c("Response","Factor1")
#print(data_mat_anova)
data_mat_anova$Factor1<-as.factor(data_mat_anova$Factor1)
#anova_res<-diffexponewayanova(dataA=data_mat_anova)
x1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[1])]
y1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[2])]
w1<-t.test(x=x1,y=y1,alternative="two.sided",paired=TRUE)
return(w1$p.value)
},classlabels_response_mat)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
pvalues<-unlist(res1)
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-"pairedttest_pvalall_withfeats.txt"
}else{
filename<-"pairedttest_fdrall_withfeats.txt"
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
if(length(posthoc_names)<1){
posthoc_names<-c("Factor1vs2")
}
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
#cnames_tab<-c("P.value","adjusted.P.value","posthoc.pvalue",cnames_tab)
pvalues<-as.data.frame(pvalues)
#pvalues<-t(pvalues)
# print(dim(pvalues))
#print(dim(data_m_fc_withfeats))
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
if(featselmethod=="ttest" && pairedanalysis==FALSE)
{
#cat("Performing t-test analysis",sep="\n")
#print(dim(data_m_fc))
#print(dim(classlabels_response_mat))
#print(dim(classlabels))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
numcores<-round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"t.test",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
data_mat_anova<-as.data.frame(data_mat_anova)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-c("Response","Factor1")
#print(data_mat_anova)
data_mat_anova$Factor1<-as.factor(data_mat_anova$Factor1)
#anova_res<-diffexponewayanova(dataA=data_mat_anova)
x1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[1])]
y1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[2])]
w1<-t.test(x=x1,y=y1,alternative="two.sided")
return(w1$p.value)
},classlabels_response_mat)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
pvalues<-unlist(res1)
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-"ttest_pvalall_withfeats.txt"
}else{
filename<-"ttest_fdrall_withfeats.txt"
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
if(length(posthoc_names)<1){
posthoc_names<-c("Factor1vs2")
}
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
#cnames_tab<-c("P.value","adjusted.P.value","posthoc.pvalue",cnames_tab)
pvalues<-as.data.frame(pvalues)
#pvalues<-t(pvalues)
# print(dim(pvalues))
#print(dim(data_m_fc_withfeats))
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
if(featselmethod=="wilcox")
{
# cat("Performing Wilcox rank-sum analysis",sep="\n")
#print(dim(data_m_fc))
#print(dim(classlabels_response_mat))
#print(dim(classlabels))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
numcores<-round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"wilcox.test",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
data_mat_anova<-as.data.frame(data_mat_anova)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-c("Response","Factor1")
#print(data_mat_anova)
data_mat_anova$Factor1<-as.factor(data_mat_anova$Factor1)
#anova_res<-diffexponewayanova(dataA=data_mat_anova)
x1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[1])]
y1<-data_mat_anova$Response[which(data_mat_anova$Factor1==class_labels_levels[2])]
w1<-wilcox.test(x=x1,y=y1,alternative="two.sided")
return(w1$p.value)
},classlabels_response_mat)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
pvalues<-unlist(res1)
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-"wilcox_pvalall_withfeats.txt"
}else{
filename<-"wilcox_fdrall_withfeats.txt"
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
if(length(posthoc_names)<1){
posthoc_names<-c("Factor1vs2")
}
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
#cnames_tab<-c("P.value","adjusted.P.value","posthoc.pvalue",cnames_tab)
pvalues<-as.data.frame(pvalues)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
#lmreg:feature selections
if(featselmethod=="lmreg")
{
if(logistic_reg==TRUE){
if(length(levels(classlabels_response_mat[,1]))>2){
print("More than 2 classes found. Skipping logistic regression analysis.")
next;
}
# cat("Performing logistic regression analysis",sep="\n")
classlabels_response_mat[,1]<-as.numeric((classlabels_response_mat[,1]))-1
fileheader="logitreg"
}else{
if(poisson_reg==TRUE){
# cat("Performing poisson regression analysis",sep="\n")
fileheader="poissonreg"
classlabels_response_mat[,1]<-as.numeric((classlabels_response_mat[,1]))
}else{
# cat("Performing linear regression analysis",sep="\n")
fileheader="lmreg"
}
}
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmreg",envir = .GlobalEnv)
clusterExport(cl,"lm",envir = .GlobalEnv)
clusterExport(cl,"glm",envir = .GlobalEnv)
clusterExport(cl,"summary",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
clusterEvalQ(cl,library(sandwich))
#data_mat_anova<-cbind(t(data_m_fc),classlabels_response_mat)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat,logistic_reg,poisson_reg,robust.estimate,vcovHC.type){
xvec<-x
data_mat_anova<-cbind(xvec,classlabels_response_mat)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
#lmreg feature selection
anova_res<-diffexplmreg(dataA=data_mat_anova,logistic_reg,poisson_reg,robust.estimate,vcovHC.type)
return(anova_res)
},classlabels_response_mat,logistic_reg,poisson_reg,robust.estimate,vcovHC.type)
stopCluster(cl)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
#save(res1,file="res1.Rda")
all_inf_mat<-{}
for(i in 1:length(res1)){
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues<-c(pvalues,res1[[i]]$mainpvalues[1])
#posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthocfactor1)
cur_pvals<-t(res1[[i]]$mainpvalues)
cur_est<-t(res1[[i]]$estimates)
cur_stderr<-t(res1[[i]]$stderr)
cur_tstat<-t(res1[[i]]$statistic)
#cur_pvals<-as.data.frame(cur_pvals)
cur_res<-cbind(cur_pvals,cur_est,cur_stderr,cur_tstat)
all_inf_mat<-rbind(all_inf_mat,cur_res)
}
cnames_1<-c(paste("P.value_",colnames(cur_pvals),sep=""),paste("Estimate_",colnames(cur_pvals),sep=""),paste("StdError_var_",colnames(cur_pvals),sep=""),paste("t-statistic_",colnames(cur_pvals),sep=""))
# print("here after lm reg")
#print(summary(pvalues))
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
#fdr_adjust_pvalue<-qvalue(pvalues)
#fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
#fdr_adjust_pvalue<-pvalues
fdr_adjust_pvalue<-p.adjust(pvalues,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste(fileheader,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste(fileheader,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",cnames_tab)
pvalues<-as.data.frame(pvalues)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
#write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
if(analysismode=="regression"){
filename<-paste(fileheader,"_results_allfeatures.txt",sep="")
filename<-paste("Tables/",filename,sep="")
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
}
filename<-paste(fileheader,"_pval_coef_stderr.txt",sep="")
data_allinf_withfeats<-cbind(all_inf_mat,data_m_fc_withfeats)
filename<-paste("Tables/",filename,sep="")
# write.table(data_allinf_withfeats, file=filename,sep="\t",row.names=FALSE)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c(cnames_1,cnames_tab)
class_column_names<-colnames(classlabels_response_mat)
colnames(data_allinf_withfeats)<-as.character(cnames_tab)
###save(data_allinf_withfeats,cnames_tab,cnames_1,file="data_allinf_withfeats.Rda")
pval_columns<-grep(colnames(data_allinf_withfeats),pattern="P.value")
fdr_adjusted_pvalue<-get_fdr_adjusted_pvalue(data_matrix=data_allinf_withfeats,fdrmethod=fdrmethod)
# data_allinf_withfeats1<-cbind(data_allinf_withfeats[,pval_columns],fdr_adjusted_pvalue,data_allinf_withfeats[,-c(pval_columns)])
cnames_tab1<-c(cnames_tab[pval_columns],colnames(fdr_adjusted_pvalue),cnames_tab[-pval_columns])
pval_columns<-grep(colnames(data_allinf_withfeats),pattern="P.value")
fdr_adjusted_pvalue<-get_fdr_adjusted_pvalue(data_matrix=data_allinf_withfeats,fdrmethod=fdrmethod)
data_allinf_withfeats<-cbind(data_allinf_withfeats[,pval_columns],fdr_adjusted_pvalue,data_allinf_withfeats[,-c(pval_columns)])
cnames_tab1<-c(cnames_tab[pval_columns],colnames(fdr_adjusted_pvalue),cnames_tab[-pval_columns])
filename<-paste(fileheader,"_pval_coef_stderr.txt",sep="")
filename<-paste("Tables/",filename,sep="")
colnames(data_allinf_withfeats)<-cnames_tab1
###save(data_allinf_withfeats,file="d2.Rda")
write.table(data_allinf_withfeats, file=filename,sep="\t",row.names=FALSE)
}
if(featselmethod=="lm2wayanova")
{
cat("Performing two-way ANOVA analysis with Tukey post hoc comparisons",sep="\n")
#print(dim(data_m_fc))
# print(dim(classlabels_response_mat))
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmtwowayanova",envir = .GlobalEnv)
clusterExport(cl,"TukeyHSD",envir = .GlobalEnv)
clusterExport(cl,"plotTukeyHSD1",envir = .GlobalEnv)
clusterExport(cl,"aov",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
clusterEvalQ(cl,library(ggpubr))
clusterEvalQ(cl,library(ggplot2))
# clusterEvalQ(cl,library(cowplot))
#res1<-apply(data_m_fc,1,function(x){
res1<-parRapply(cl,data_m_fc,function(x,classlabels_response_mat){
xvec<-x
colnames(classlabels_response_mat)<-paste("Factor",seq(1,dim(classlabels_response_mat)[2]),sep="")
data_mat_anova<-cbind(xvec,classlabels_response_mat)
#print("2way anova")
# print(data_mat_anova[1:2,])
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
####savedata_mat_anova,file="data_mat_anova.Rda")
#diffexplmtwowayanova
anova_res<-diffexplmtwowayanova(dataA=data_mat_anova)
return(anova_res)
},classlabels_response_mat)
stopCluster(cl)
# print("done")
####saveres1,file="res1.Rda")
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues1<-{}
pvalues2<-{}
pvalues3<-{}
save(res1,file="tukeyhsd_plots.Rda")
for(i in 1:length(res1)){
#print(i)
#print(res1[[i]]$mainpvalues)
#print(res1[[i]]$posthoc)
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues1<-c(pvalues1,as.vector(res1[[i]]$mainpvalues[1]))
pvalues2<-c(pvalues2,as.vector(res1[[i]]$mainpvalues[2]))
pvalues3<-c(pvalues3,as.vector(res1[[i]]$mainpvalues[3]))
posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthoc)
}
twoanova_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
#write.table(twoanova_res,file="Tables/twoanova_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
pvalues1<-main_pval_mat[,1]
pvalues2<-main_pval_mat[,2]
pvalues3<-main_pval_mat[,3]
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="none")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="none")
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BH")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="BH")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="BH")
}else{
if(fdrmethod=="ST"){
fdr_adjust_pvalue1<-try(qvalue(pvalues1),silent=TRUE)
if(is(fdr_adjust_pvalue1,"try-error")){
fdr_adjust_pvalue1<-qvalue(pvalues1,lambda=max(pvalues1,na.rm=TRUE))
}
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qvalues
fdr_adjust_pvalue2<-try(qvalue(pvalues2),silent=TRUE)
if(is(fdr_adjust_pvalue2,"try-error")){
fdr_adjust_pvalue2<-qvalue(pvalues2,lambda=max(pvalues2,na.rm=TRUE))
}
fdr_adjust_pvalue2<-fdr_adjust_pvalue2$qvalues
fdr_adjust_pvalue3<-try(qvalue(pvalues3),silent=TRUE)
if(is(fdr_adjust_pvalue3,"try-error")){
fdr_adjust_pvalue3<-qvalue(pvalues3,lambda=max(pvalues3,na.rm=TRUE))
}
fdr_adjust_pvalue3<-fdr_adjust_pvalue3$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue1<-fdrtool(as.vector(pvalues1),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qval
fdr_adjust_pvalue2<-fdrtool(as.vector(pvalues2),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue2<-fdr_adjust_pvalue2$qval
fdr_adjust_pvalue3<-fdrtool(as.vector(pvalues3),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue3<-fdr_adjust_pvalue3$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="none")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BY")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="BY")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="BY")
}else{
if(fdrmethod=="bonferroni"){
# fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="bonferroni")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="bonferroni")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste(featselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste(featselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
cnames_tab<-c("Factor1.P.value","Factor1.adjusted.P.value","Factor2.P.value","Factor2.adjusted.P.value","Interact.P.value","Interact.adjusted.P.value",posthoc_names,cnames_tab)
if(FALSE)
{
pvalues1<-as.data.frame(pvalues1)
pvalues1<-t(pvalues1)
fdr_adjust_pvalue1<-as.data.frame(fdr_adjust_pvalue1)
pvalues2<-as.data.frame(pvalues2)
pvalues2<-t(pvalues2)
fdr_adjust_pvalue2<-as.data.frame(fdr_adjust_pvalue2)
pvalues3<-as.data.frame(pvalues3)
pvalues3<-t(pvalues3)
fdr_adjust_pvalue3<-as.data.frame(fdr_adjust_pvalue3)
posthoc_pval_mat<-as.data.frame(posthoc_pval_mat)
}
# ###savedata_m_fc_withfeats,file="data_m_fc_withfeats.Rda")
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_withfeats)
fdr_adjust_pvalue<-cbind(fdr_adjust_pvalue1,fdr_adjust_pvalue2,fdr_adjust_pvalue3)
fdr_adjust_pvalue<-apply(fdr_adjust_pvalue,1,function(x){min(x,na.rm=TRUE)})
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}else{
write.table(data_limma_fdrall_withfeats,file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}
filename<-paste("Tables/",filename,sep="")
#write.table(data_limma_fdrall_withfeats,file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
#write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
fdr_matrix<-cbind(fdr_adjust_pvalue1,fdr_adjust_pvalue2,fdr_adjust_pvalue3)
fdr_matrix<-as.data.frame(fdr_matrix)
fdr_adjust_pvalue_all<-apply(fdr_matrix,1,function(x){return(min(x,na.rm=TRUE)[1])})
pvalues<-cbind(pvalues1,pvalues2,pvalues3)
pvalues<-apply(pvalues,1,function(x){min(x,na.rm=TRUE)[1]})
#pvalues1<-t(pvalues1)
#print("here")
pvalues1<-as.data.frame(pvalues1)
pvalues1<-t(pvalues1)
#print(dim(pvalues1))
#pvalues2<-t(pvalues2)
pvalues2<-as.data.frame(pvalues2)
pvalues2<-t(pvalues2)
#pvalues3<-t(pvalues3)
pvalues3<-as.data.frame(pvalues3)
pvalues3<-t(pvalues3)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue_all<fdrthresh & final.pvalues<pvalue.thresh
if(length(which(fdr_adjust_pvalue1<fdrthresh))>0){
X1=data_m_fc_withfeats[which(fdr_adjust_pvalue1<fdrthresh),]
Y1=cbind(classlabels_orig[,1],as.character(classlabels_response_mat[,1]))
Y1<-as.data.frame(Y1)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor1selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
hca_f1<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X1,Y=Y1,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,
analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300,
alphacol=0.3, hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor1",
alphabetical.order=alphabetical.order,study.design=analysistype,labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 1.")
}
if(length(which(fdr_adjust_pvalue2<fdrthresh))>0){
X2=data_m_fc_withfeats[which(fdr_adjust_pvalue2<fdrthresh),]
Y2=cbind(classlabels_orig[,1],as.character(classlabels_response_mat[,2]))
Y2<-as.data.frame(Y2)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor2selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
hca_f2<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X2,Y=Y2,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,
hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor2",
alphabetical.order=alphabetical.order,study.design=analysistype,labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 2.")
}
class_interact<-paste(classlabels_response_mat[,1],":",classlabels_response_mat[,2],sep="")
#classlabels_response_mat[,1]:classlabels_response_mat[,2]
if(length(which(fdr_adjust_pvalue3<fdrthresh))>0){
X3=data_m_fc_withfeats[which(fdr_adjust_pvalue3<fdrthresh),]
Y3=cbind(classlabels_orig[,1],class_interact)
Y3<-as.data.frame(Y3)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor1xFactor2selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
hca_f3<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X3,Y=Y3,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,
hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor1 x Factor2",
alphabetical.order=alphabetical.order,study.design=analysistype,labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for the interaction.")
}
data_limma_fdrall_withfeats<-cbind(final.pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value.Min(Factor1,Factor2,Interaction)","adjusted.P.value.Min(Factor1,Factor2,Interaction)",cnames_tab)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#filename2<-"test2.txt"
#data_limma_fdrsig_withfeats<-data_limma_fdrall_withfeats[sel.diffdrthresh==TRUE,]
#write.table(data_limma_fdrsig_withfeats, file=filename2,sep="\t",row.names=FALSE)
fdr_adjust_pvalue<-fdr_adjust_pvalue_all
}
if(featselmethod=="lm1wayanovarepeat"| featselmethod=="lmregrepeat"){
# save(data_m_fc,classlabels_response_mat,subject_inf,modeltype,file="1waydebug.Rda")
#clusterExport(cl,"classlabels_response_mat",envir = .GlobalEnv)
#clusterExport(cl,"subject_inf",envir = .GlobalEnv)
#res1<-apply(data_m_fc,1,function(x){
if(featselmethod=="lm1wayanovarepeat"){
cat("Performing one-way ANOVA with repeated measurements analysis using nlme::lme()",,sep="\n")
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmonewayanovarepeat",envir = .GlobalEnv)
clusterEvalQ(cl,library(nlme))
clusterEvalQ(cl,library(multcomp))
clusterEvalQ(cl,library(lsmeans))
clusterExport(cl,"lme",envir = .GlobalEnv)
clusterExport(cl,"interaction",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat,subject_inf,modeltype){
#res1<-apply(data_m_fc,1,function(x){
xvec<-x
colnames(classlabels_response_mat)<-paste("Factor",seq(1,dim(classlabels_response_mat)[2]),sep="")
data_mat_anova<-cbind(xvec,classlabels_response_mat)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
anova_res<-diffexplmonewayanovarepeat(dataA=data_mat_anova,subject_inf=subject_inf,modeltype=modeltype)
return(anova_res)
},classlabels_response_mat,subject_inf,modeltype)
main_pval_mat<-{}
posthoc_pval_mat<-{}
pvalues<-{}
bad_lm1feats<-{}
###saveres1,file="res1.Rda")
for(i in 1:length(res1)){
if(is.na(res1[[i]]$mainpvalues)==FALSE){
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues<-c(pvalues,res1[[i]]$mainpvalues[1])
posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthoc)
}else{
bad_lm1feats<-c(bad_lm1feats,i)
}
}
if(length(bad_lm1feats)>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-c(bad_lm1feats),]
data_m_fc<-data_m_fc[-c(bad_lm1feats),]
}
#twoanovarepeat_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
#write.table(twoanovarepeat_res,file="Tables/lm2wayanovarepeat_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
pvalues1<-main_pval_mat[,1]
onewayanova_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
# write.table(twoanova_res,file="twoanova_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BH")
}else{
if(fdrmethod=="ST"){
#print(head(pvalues1))
#print(head(pvalues2))
#print(head(pvalues3))
#print(summary(pvalues1))
#print(summary(pvalues2))
#print(summary(pvalues3))
fdr_adjust_pvalue1<-try(qvalue(pvalues1),silent=TRUE)
if(is(fdr_adjust_pvalue1,"try-error")){
fdr_adjust_pvalue1<-qvalue(pvalues1,lambda=max(pvalues1,na.rm=TRUE))
}
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue1<-fdrtool(as.vector(pvalues1),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BY")
}else{
if(fdrmethod=="bonferroni"){
# fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste("Tables/",featselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste("Tables/",featselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
#
cnames_tab<-c("Factor1.P.value","Factor1.adjusted.P.value",posthoc_names,cnames_tab)
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,posthoc_pval_mat,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#gohere
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)],file="Tables/onewayanovarepeat_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}else{
write.table(data_limma_fdrall_withfeats,file="Tables/onewayanovarepeat_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
filename<-paste("Tables/",filename,sep="")
fdr_adjust_pvalue<-fdr_adjust_pvalue1
final.pvalues<-pvalues1
sel.diffdrthresh<-fdr_adjust_pvalue1<fdrthresh & final.pvalues<pvalue.thresh
}else{
cat("Performing linear regression with repeated measurements analysis using nlme::lme()",sep="\n")
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmregrepeat",envir = .GlobalEnv)
clusterEvalQ(cl,library(nlme))
clusterEvalQ(cl,library(multcomp))
clusterEvalQ(cl,library(lsmeans))
clusterExport(cl,"lme",envir = .GlobalEnv)
clusterExport(cl,"interaction",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat,subject_inf,modeltype){
#res1<-apply(data_m_fc,1,function(x){
xvec<-x
colnames(classlabels_response_mat)<-paste("Factor",seq(1,dim(classlabels_response_mat)[2]),sep="")
data_mat_anova<-cbind(xvec,classlabels_response_mat)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
# save(data_mat_anova,subject_inf,modeltype,file="lmregdebug.Rda")
if(ncol(data_mat_anova)>2){
covar.matrix=classlabels_response_mat[,-c(1)]
}else{
covar.matrix=NA
}
anova_res<-diffexplmregrepeat(dataA=data_mat_anova,subject_inf=subject_inf,modeltype=modeltype,covar.matrix = covar.matrix)
return(anova_res)
},classlabels_response_mat,subject_inf,modeltype)
stopCluster(cl)
main_pval_mat<-{}
pvalues<-{}
# save(res1,file="lmres1.Rda")
posthoc_pval_mat<-{}
bad_lm1feats<-{}
res2<-t(res1)
res2<-as.data.frame(res2)
colnames(res2)<-c("pvalue","coefficient","std.error","t.value")
pvalues<-res2$pvalue
pvalues<-unlist(pvalues)
if(fdrmethod=="BH"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BH")
}else{
if(fdrmethod=="ST"){
fdr_adjust_pvalue<-try(qvalue(pvalues),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
fdr_adjust_pvalue<-qvalue(pvalues,lambda=max(pvalues,na.rm=TRUE))
}
fdr_adjust_pvalue<-fdr_adjust_pvalue$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue<-suppressWarnings(fdrtool(as.vector(pvalues),statistic="pvalue",verbose=FALSE))
fdr_adjust_pvalue<-fdr_adjust_pvalue$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue<-pvalues
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="BY")
}else{
if(fdrmethod=="bonferroni"){
fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste(featselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste(featselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value","adjusted.P.value",c("coefficient","std.error","t.value"),cnames_tab)
pvalues<-as.data.frame(pvalues)
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
#pvalues<-t(pvalues)
#print(dim(pvalues))
#print(dim(data_m_fc_withfeats))
if(length(bad_lm1feats)>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-c(bad_lm1feats),]
data_m_fc<-data_m_fc[-c(bad_lm1feats),]
}
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,res2[,-c(1)],data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
filename<-paste("Tables/",filename,sep="")
# write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
}
}
if(featselmethod=="lm2wayanovarepeat"){
cat("Performing two-way ANOVA with repeated measurements analysis using nlme::lme()",sep="\n")
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterExport(cl,"diffexplmtwowayanovarepeat",envir = .GlobalEnv)
clusterEvalQ(cl,library(nlme))
clusterEvalQ(cl,library(multcomp))
clusterEvalQ(cl,library(lsmeans))
clusterExport(cl,"lme",envir = .GlobalEnv)
clusterExport(cl,"interaction",envir = .GlobalEnv)
clusterExport(cl,"anova",envir = .GlobalEnv)
#clusterExport(cl,"classlabels_response_mat",envir = .GlobalEnv)
#clusterExport(cl,"subject_inf",envir = .GlobalEnv)
#res1<-apply(data_m_fc,1,function(x){
# print(dim(data_m_fc))
# print(dim(classlabels_response_mat))
res1<-parApply(cl,data_m_fc,1,function(x,classlabels_response_mat,subject_inf,modeltype){
# res1<-apply(data_m_fc,1,function(x){
# ###saveclasslabels_response_mat,file="classlabels_response_mat.Rda")
# ###savesubject_inf,file="subject_inf.Rda")
xvec<-x
####savexvec,file="xvec.Rda")
colnames(classlabels_response_mat)<-paste("Factor",seq(1,dim(classlabels_response_mat)[2]),sep="")
data_mat_anova<-cbind(xvec,classlabels_response_mat)
cnames<-colnames(data_mat_anova)
cnames[1]<-"Response"
colnames(data_mat_anova)<-cnames
#print(subject_inf)
#print(dim(data_mat_anova))
subject_inf<-as.data.frame(subject_inf)
#print(dim(subject_inf))
anova_res<-diffexplmtwowayanovarepeat(dataA=data_mat_anova,subject_inf=subject_inf[,1],modeltype=modeltype)
return(anova_res)
},classlabels_response_mat,subject_inf,modeltype)
main_pval_mat<-{}
stopCluster(cl)
posthoc_pval_mat<-{}
#print(head(res1))
# print("here")
pvalues<-{}
bad_lm1feats<-{}
###saveres1,file="res1.Rda")
for(i in 1:length(res1)){
if(is.na(res1[[i]]$mainpvalues)==FALSE){
main_pval_mat<-rbind(main_pval_mat,res1[[i]]$mainpvalues)
pvalues<-c(pvalues,res1[[i]]$mainpvalues[1])
posthoc_pval_mat<-rbind(posthoc_pval_mat,res1[[i]]$posthoc)
}else{
bad_lm1feats<-c(bad_lm1feats,i)
}
}
if(length(bad_lm1feats)>0){
data_m_fc_withfeats<-data_m_fc_withfeats[-c(bad_lm1feats),]
data_m_fc<-data_m_fc[-c(bad_lm1feats),]
}
twoanovarepeat_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
#write.table(twoanovarepeat_res,file="Tables/lm2wayanovarepeat_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
pvalues1<-main_pval_mat[,1]
pvalues2<-main_pval_mat[,2]
pvalues3<-main_pval_mat[,3]
twoanova_res<-cbind(data_m_fc_withfeats[,c(1:2)],main_pval_mat,posthoc_pval_mat)
# write.table(twoanova_res,file="twoanova_with_posthoc_pvalues.txt",sep="\t",row.names=FALSE)
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="none")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="none")
}
if(fdrmethod=="BH"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BH")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="BH")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="BH")
}else{
if(fdrmethod=="ST"){
#print(head(pvalues1))
#print(head(pvalues2))
#print(head(pvalues3))
#print(summary(pvalues1))
#print(summary(pvalues2))
#print(summary(pvalues3))
fdr_adjust_pvalue1<-try(qvalue(pvalues1),silent=TRUE)
fdr_adjust_pvalue2<-try(qvalue(pvalues2),silent=TRUE)
fdr_adjust_pvalue3<-try(qvalue(pvalues3),silent=TRUE)
if(is(fdr_adjust_pvalue1,"try-error")){
fdr_adjust_pvalue1<-qvalue(pvalues1,lambda=max(pvalues1,na.rm=TRUE))
}
if(is(fdr_adjust_pvalue2,"try-error")){
fdr_adjust_pvalue2<-qvalue(pvalues2,lambda=max(pvalues2,na.rm=TRUE))
}
if(is(fdr_adjust_pvalue3,"try-error")){
fdr_adjust_pvalue3<-qvalue(pvalues3,lambda=max(pvalues3,na.rm=TRUE))
}
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qvalues
fdr_adjust_pvalue2<-fdr_adjust_pvalue2$qvalues
fdr_adjust_pvalue3<-fdr_adjust_pvalue3$qvalues
}else{
if(fdrmethod=="Strimmer"){
pdf("fdrtool.pdf")
#par_rows=1
#par(mfrow=c(par_rows,1))
fdr_adjust_pvalue1<-fdrtool(as.vector(pvalues1),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue1<-fdr_adjust_pvalue1$qval
fdr_adjust_pvalue2<-fdrtool(as.vector(pvalues2),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue2<-fdr_adjust_pvalue2$qval
fdr_adjust_pvalue3<-fdrtool(as.vector(pvalues3),statistic="pvalue",verbose=FALSE)
fdr_adjust_pvalue3<-fdr_adjust_pvalue3$qval
try(dev.off(),silent=TRUE)
}else{
if(fdrmethod=="none"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="none")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="none")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="none")
}else{
if(fdrmethod=="BY"){
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="BY")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="BY")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="BY")
}else{
if(fdrmethod=="bonferroni"){
# fdr_adjust_pvalue<-p.adjust(pvalues,method="bonferroni")
fdr_adjust_pvalue1<-p.adjust(pvalues1,method="bonferroni")
fdr_adjust_pvalue2<-p.adjust(pvalues2,method="bonferroni")
fdr_adjust_pvalue3<-p.adjust(pvalues3,method="bonferroni")
}
}
}
}
}
}
if(fdrmethod=="none"){
filename<-paste("Tables/",featselmethod,"_pvalall_withfeats.txt",sep="")
}else{
filename<-paste("Tables/",featselmethod,"_fdrall_withfeats.txt",sep="")
}
cnames_tab<-colnames(data_m_fc_withfeats)
posthoc_names<-colnames(posthoc_pval_mat)
#
cnames_tab<-c("Factor1.P.value","Factor1.adjusted.P.value","Factor2.P.value","Factor2.adjusted.P.value","Interact.P.value","Interact.adjusted.P.value",posthoc_names,cnames_tab)
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_withfeats)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
if(length(check_names)>0){
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
data_limma_fdrall_withfeats<-as.data.frame(data_limma_fdrall_withfeats)
#data_limma_fdrall_withfeats<-cbind(p.value,adjusted.p.value,results2,data_m_fc_with_names,data_m_fc_withfeats[,-c(1:2)])
rem_col_ind1<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("mz"))
rem_col_ind2<-grep(colnames(data_limma_fdrall_withfeats),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(data_limma_fdrall_withfeats[,-c(rem_col_ind)], file="Tables/twowayanovarepeat_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}else{
#write.table(data_limma_fdrall_withfeats,file="Tables/twowayanova_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
write.table(data_limma_fdrall_withfeats,file="Tables/twowayanovarepeat_with_posthoc_comparisons.txt",sep="\t",row.names=FALSE)
}
#filename<-paste("Tables/",filename,sep="")
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
filename<-paste("Tables/",filename,sep="")
fdr_matrix<-cbind(fdr_adjust_pvalue1,fdr_adjust_pvalue2,fdr_adjust_pvalue3)
fdr_matrix<-as.data.frame(fdr_matrix)
fdr_adjust_pvalue_all<-apply(fdr_matrix,1,function(x){return(min(x,na.rm=TRUE))})
pvalues_all<-cbind(pvalues1,pvalues2,pvalues3)
pvalue_matrix<-as.data.frame(pvalues_all)
pvalue_all<-apply(pvalue_matrix,1,function(x){return(min(x,na.rm=TRUE)[1])})
#pvalues1<-t(pvalues1)
#print("here")
#pvalues1<-as.data.frame(pvalues1)
#pvalues1<-t(pvalues1)
#print(dim(pvalues1))
#pvalues2<-t(pvalues2)
#pvalues2<-as.data.frame(pvalues2)
#pvalues2<-t(pvalues2)
#pvalues3<-t(pvalues3)
#pvalues3<-as.data.frame(pvalues3)
#pvalues3<-t(pvalues3)
#pvalues<-t(pvalues)
#print(dim(pvalues1))
#print(dim(pvalues2))
#print(dim(pvalues3))
#print(dim(data_m_fc_withfeats))
pvalues<-pvalue_all
final.pvalues<-pvalues
sel.diffdrthresh<-fdr_adjust_pvalue_all<fdrthresh & final.pvalues<pvalue.thresh
if(length(which(fdr_adjust_pvalue1<fdrthresh))>0){
X1=data_m_fc_withfeats[which(fdr_adjust_pvalue1<fdrthresh),]
Y1=cbind(classlabels_orig[,1],as.character(classlabels_response_mat[,1]))
Y1<-as.data.frame(Y1)
###saveclasslabels_orig,file="classlabels_orig.Rda")
###saveclasslabels_response_mat,file="classlabels_response_mat.Rda")
#print("Performing HCA using features selected for Factor1")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor1selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
hca_f1<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X1,Y=Y1,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300,
alphacol=0.3, hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor 1",
alphabetical.order=alphabetical.order,study.design="oneway",labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 1.")
}
if(length(which(fdr_adjust_pvalue2<fdrthresh))>0){
X2=data_m_fc_withfeats[which(fdr_adjust_pvalue2<fdrthresh),]
Y2=cbind(classlabels_orig[,1],as.character(classlabels_response_mat[,2]))
Y2<-as.data.frame(Y2)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor2selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
# print("Performing HCA using features selected for Factor2")
hca_f2<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X2,Y=Y2,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300,
alphacol=alphacol, hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor 2",
alphabetical.order=alphabetical.order,study.design="oneway",labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 2.")
}
class_interact<-paste(classlabels_response_mat[,1],":",classlabels_response_mat[,2],sep="") #classlabels_response_mat[,1]:classlabels_response_mat[,2]
if(length(which(fdr_adjust_pvalue3<fdrthresh))>0){
X3=data_m_fc_withfeats[which(fdr_adjust_pvalue3<fdrthresh),]
Y3=cbind(classlabels_orig[,1],class_interact)
Y3<-as.data.frame(Y3)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_Factor1xFactor2selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
#print("Performing HCA using features selected for Factor1x2")
hca_f3<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=X3,Y=Y3,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300,
alphacol=0.3, hca_type=hca_type,newdevice=FALSE,input.type="intensity",mainlab="Factor 1 x Factor 2",
alphabetical.order=alphabetical.order,study.design="oneway",labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,
cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}else{
print("No significant features for Factor 1x2 interaction.")
}
#data_limma_fdrall_withfeats<-cbind(pvalues,fdr_adjust_pvalue,posthoc_pval_mat,data_m_fc_withfeats)
#
data_limma_fdrall_withfeats<-cbind(pvalues1,fdr_adjust_pvalue1,pvalues2,fdr_adjust_pvalue2,pvalues3,fdr_adjust_pvalue3,posthoc_pval_mat,data_m_fc_withfeats)
fdr_adjust_pvalue<-cbind(fdr_adjust_pvalue1,fdr_adjust_pvalue2,fdr_adjust_pvalue3)
fdr_adjust_pvalue<-apply(fdr_adjust_pvalue,1,function(x){min(x,na.rm=TRUE)})
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats[order(fdr_adjust_fpvalue),]
#write.table(data_limma_fdrall_withfeats, file=filename,sep="\t",row.names=FALSE)
data_limma_fdrall_withfeats<-cbind(final.pvalues,fdr_adjust_pvalue,data_m_fc_withfeats)
cnames_tab<-colnames(data_m_fc_withfeats)
cnames_tab<-c("P.value.Min(Factor1,Factor2,Interaction)","adjusted.P.value.Min(Factor1,Factor2,Interaction)",cnames_tab)
colnames(data_limma_fdrall_withfeats)<-as.character(cnames_tab)
#filename2<-"test2.txt"
#data_limma_fdrsig_withfeats<-data_limma_fdrall_withfeats[sel.diffdrthresh==TRUE,]
#write.table(data_limma_fdrsig_withfeats, file=filename2,sep="\t",row.names=FALSE)
fdr_adjust_pvalue<-fdr_adjust_pvalue_all
}
}
#end of feature selection methods
if(featselmethod=="lmreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat"
| featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="logitreg" | featselmethod=="limma2wayrepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
classlabels<-as.data.frame(classlabels)
# if(featselmethod=="limma2way"){
# vennDiagram(results2,cex=0.8)
# }
#print(summary(fdr_adjust_pvalue))
#pheadrint(summary(final.pvalues))
}
pred_acc<-0 #("NA")
#print("here")
feat_sigfdrthresh[lf]<-length(goodip) #which(sel.diffdrthresh==TRUE))
if(kfold>dim(data_m_fc)[2]){
kfold=dim(data_m_fc)[2]
}
if(analysismode=="classification"){
#print("classification")
if(length(goodip)>0 & dim(data_m_fc)[2]>=kfold){
#save(classlabels,classlabels_orig, data_m_fc,file="debug2.rda")
if(alphabetical.order==FALSE){
Targetvar <- factor(classlabels[,1], levels=unique(classlabels[,1]))
}else{
Targetvar<-factor(classlabels[,1])
}
dataA<-cbind(Targetvar,t(data_m_fc))
dataA<-as.data.frame(dataA)
dataA$Targetvar<-factor(Targetvar)
#df.summary <- dataA %>% group_by(Targetvar) %>% summarize_all(funs(mean))
# df.summary <- dataA %>% group_by(Targetvar) %>% summarize_all(funs(mean))
dataA[,-c(1)]<-apply(dataA[,-c(1)],2,function(x){as.numeric(as.character(x))})
if(alphabetical.order==FALSE){
dataA$Targetvar <- factor(dataA$Targetvar, levels=unique(dataA$Targetvar))
}
df.summary <-aggregate(x=dataA,by=list(as.factor(dataA$Targetvar)),function(x){mean(x,na.rm=TRUE)})
#save(dataA,file="errordataA.Rda")
df.summary.sd <-aggregate(x=dataA[,-c(1)],by=list(as.factor(dataA$Targetvar)),function(x){sd(x,na.rm=TRUE)})
df2<-as.data.frame(df.summary[,-c(1:2)])
group_means<-t(df.summary)
# save(classlabels,classlabels_orig, classlabels_class,Targetvar,dataA,data_m_fc,df.summary,df2,group_means,file="debugfoldchange.Rda")
colnames(group_means)<-paste("mean",levels(as.factor(dataA$Targetvar)),sep="") #paste("Group",seq(1,length(unique(dataA$Targetvar))),sep="")
group_means<-cbind(data_m_fc_withfeats[,c(1:2)],group_means[-c(1:2),])
group_sd<-t(df.summary.sd)
colnames(group_sd)<-paste("std.dev",levels(as.factor(dataA$Targetvar)),sep="") #paste("Group",seq(1,length(unique(dataA$Targetvar))),sep="")
group_sd<-cbind(data_m_fc_withfeats[,c(1:2)],group_sd[-c(1),])
# write.table(group_means,file="group_means.txt",sep="\t",row.names=FALSE)
# ###savedf2,file="df2.Rda")
# ###savedataA,file="dataA.Rda")
# ###saveTargetvar,file="Targetvar.Rda")
if(log2transform==TRUE || input.intensity.scale=="log2"){
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
foldchangeres<-parApply(cl,df2,2,function(x){
res<-lapply(1:length(x),function(i){
return((x[i]-x[-i]))
})
res<-unlist(res)
tempres<-abs(res)
res_ind<-which(tempres==max(tempres,na.rm=TRUE))
return(res[res_ind[1]])
})
stopCluster(cl)
# print("Using log2 fold change threshold of")
# print(foldchangethresh)
}else{
#raw intensities
if(znormtransform==FALSE)
{
# foldchangeres<-apply(log2(df2+1),2,function(x){res<-{};for(i in 1:length(x)){res<-c(res,(x[i]-x[-i]));};tempres<-abs(res);res_ind<-which(tempres==max(tempres,na.rm=TRUE));return(res[res_ind[1]]);})
if(FALSE){
foldchangeres<-apply(log2(df2+log2.transform.constant),2,dist)
if(length(nrow(foldchangeres))>0){
foldchangeres<-apply(foldchangeres,2,function(x)
{
max_ind<-which(x==max(abs(x)))[1];
return(x[max_ind])
}
)
}
}
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
foldchangeres<-parApply(cl,log2(df2+0.0000001),2,function(x){
res<-lapply(1:length(x),function(i){
return((x[i]-x[-i]))
})
res<-unlist(res)
tempres<-abs(res)
res_ind<-which(tempres==max(tempres,na.rm=TRUE))
return(res[res_ind[1]])
})
stopCluster(cl)
foldchangethresh=foldchangethresh
# print("Using raw fold change threshold of")
# print(foldchangethresh)
}else{
# foldchangeres<-apply(df2,2,function(x){res<-{};for(i in 1:length(x)){res<-c(res,(x[i]-(x[-i])));};tempres<-abs(res);res_ind<-which(tempres==max(tempres,na.rm=TRUE));return(res[res_ind[1]]);})
if(FALSE){
foldchangeres<-apply(df2,2,dist)
if(length(nrow(foldchangeres))>0){
foldchangeres<-apply(foldchangeres,2,function(x)
{
max_ind<-which(x==max(abs(x)))[1];
return(x[max_ind])
}
)
}
}
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
foldchangeres<-parApply(cl,df2,2,function(x){
res<-lapply(1:length(x),function(i){
return((x[i]-x[-i]))
})
res<-unlist(res)
tempres<-abs(res)
res_ind<-which(tempres==max(tempres,na.rm=TRUE))
return(res[res_ind[1]])
})
stopCluster(cl)
#print(summary(foldchangeres))
#foldchangethresh=2^foldchangethresh
print("Using Z-score change threshold of")
print(foldchangethresh)
}
}
if(length(class_labels_levels)==2){
zvec=foldchangeres
}else{
zvec=NA
if(featselmethod=="lmreg" && analysismode=="regression"){
cnames_matrix<-colnames(data_limma_fdrall_withfeats)
cnames_colindex<-grep("Estimate_",cnames_matrix)
zvec<-data_limma_fdrall_withfeats[,c(cnames_colindex[1])]
}
}
maxfoldchange<-foldchangeres
goodipfoldchange<-which(abs(maxfoldchange)>foldchangethresh)
#if(FALSE)
{
if(input.intensity.scale=="raw" && log2transform==FALSE && znormtransform==FALSE){
foldchangeres<-2^((foldchangeres))
}
}
maxfoldchange1<-foldchangeres
roundUpNice <- function(x, nice=c(1,2,4,5,6,8,10))
{
if(length(x) != 1) stop("'x' must be of length 1")
10^floor(log10(x)) * nice[[which(x <= 10^floor(log10(x)) * nice)[[1]]]]
}
d4<-as.data.frame(data_limma_fdrall_withfeats)
max_mz_val<-roundUpNice(max(d4$mz)[1])
max_time_val<-roundUpNice(max(d4$time)[1])
x1increment=round_any(max_mz_val/10,10,f=floor)
x2increment=round_any(max_time_val/10,10,f=floor)
if(x2increment<1){
x2increment=0.5
}
if(x1increment<1){
x1increment=0.5
}
if(featselmethod=="lmreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat"
| featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="logitreg" | featselmethod=="limma2wayrepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
# print("Plotting manhattan plots")
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
classlabels<-as.data.frame(classlabels)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
if(fdrmethod=="none"){
ythresh<-(-1)*log10(pvalue.thresh)
}else{
ythresh<-min(logp[goodip],na.rm=TRUE)
}
maintext1="Type 1 manhattan plot (-logp vs mz) \n m/z features above the dashed horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (-logp vs time) \n m/z features above the dashed horizontal line meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
yvec_val=logp
ylabel="(-)log10p"
yincrement=1
y2thresh=(-1)*log10(pvalue.thresh)
# save(list=c("d4","logp","yvec_val","ythresh","zvec","x1increment","yincrement","maintext1","x2increment","maintext2","ylabel","y2thresh"),file="manhattanplot_objects.Rda")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
# get_manhattanplots(xvec=d4$mz,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab=ylabel,xincrement=x1increment,yincrement=yincrement,maintext=maintext1,col_seq=c("black"),y2thresh=y2thresh,colorvec=manhattanplot.col.opt)
####savelist=ls(),file="m1.Rda")
try(get_manhattanplots(xvec=d4$mz,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab=ylabel,
xincrement=x1increment,yincrement=yincrement,maintext=maintext1,col_seq=c("black"),y2thresh=y2thresh,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="Retention time",ylab="-log10p",xincrement=x2increment,yincrement=1,maintext=maintext2,col_seq=c("black"),y2thresh=y2thresh,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(length(class_labels_levels)==2){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/VolcanoPlot.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
maintext1="Volcano plot (-logp vs log2(fold change)) \n colored m/z features meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
##save(maxfoldchange,logp,zvec,ythresh,y2thresh,foldchangethresh,manhattanplot.col.opt,d4,file="debugvolcano.Rda")
try(get_volcanoplots(xvec=maxfoldchange,yvec=logp,up_or_down=zvec,ythresh=ythresh,y2thresh=y2thresh,xthresh=foldchangethresh,maintext=maintext1,ylab="-log10(p-value)",xlab="log2(fold change)",colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}else{
if(featselmethod=="pls" | featselmethod=="o1pls"){
# print("Time 2")
#print(Sys.time())
maintext1="Type 1 manhattan plot (VIP vs mz) \n m/z features above the dashed horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (VIP vs time) \n m/z features above the dashed horizontal line meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
yvec_val<-data_limma_fdrall_withfeats[,1]
ythresh=pls_vip_thresh
vip_res<-as.data.frame(vip_res)
bad.feature.index={}
if(is.na(pls.permut.count)==FALSE){
#yvec_val[which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)]<-0 #(ythresh)*0.5
bad.feature.index=which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)
}
ylabel="VIP"
yincrement=0.5
y2thresh=NA
# ###savelist=ls(),file="manhattandebug.Rda")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=yvec_val,ythresh=pls_vip_thresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="VIP",xincrement=x1increment,yincrement=0.5,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=bad.feature.index),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=yvec_val,ythresh=pls_vip_thresh,up_or_down=zvec,xlab="Retention time",ylab="VIP",xincrement=x2increment,yincrement=0.5,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=bad.feature.index),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(length(class_labels_levels)==2){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/VolcanoPlot_VIP_vs_foldchange.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
maintext1="Volcano plot (VIP vs log2(fold change)) \n colored m/z features meet the selection criteria"
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
# ###savelist=ls(),file="volcanodebug.Rda")
try(get_volcanoplots(xvec=maxfoldchange,yvec=yvec_val,up_or_down=maxfoldchange,ythresh=ythresh,xthresh=foldchangethresh,maintext=maintext1,ylab="VIP",xlab="log2(fold change)",bad.feature.index=bad.feature.index,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}else{
if(featselmethod=="spls" | featselmethod=="o1spls"){
maintext1="Type 1 manhattan plot (|loading| vs mz) \n m/z features with non-zero loadings meet the selection criteria"
maintext2="Type 2 manhattan plot (|loading| vs time) \n m/z features with non-zero loadings meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
yvec_val<-data_limma_fdrall_withfeats[,1]
vip_res<-as.data.frame(vip_res)
bad.feature.index={}
if(is.na(pls.permut.count)==FALSE){
# yvec_val[which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)]<-0
bad.feature.index=which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)
}
ythresh=0
ylabel="Loading (absolute)"
yincrement=0.1
y2thresh=NA
####savelist=c("d4","yvec_val","ythresh","zvec","x1increment","yincrement","maintext1","x2increment","maintext2","ylabel","y2thresh"),file="manhattanplot_objects.Rda")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=yvec_val,ythresh=0,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="Loading (absolute)",xincrement=x1increment,yincrement=0.1,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=bad.feature.index),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=yvec_val,ythresh=0,up_or_down=zvec,xlab="Retention time",ylab="Loading (absolute)",xincrement=x2increment,yincrement=0.1,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=bad.feature.index),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
#volcanoplot
if(length(class_labels_levels)==2){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/VolcanoPlot_Loading_vs_foldchange.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
maintext1="Volcano plot (absolute) Loading vs log2(fold change)) \n colored m/z features meet the selection criteria"
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
try(get_volcanoplots(xvec=maxfoldchange,yvec=yvec_val,up_or_down=maxfoldchange,ythresh=ythresh,xthresh=foldchangethresh,maintext=maintext1,ylab="(absolute) Loading",xlab="log2(fold change)",yincrement=0.1,bad.feature.index=bad.feature.index,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}else{
if(featselmethod=="pamr"){
maintext1="Type 1 manhattan plot (max |standardized centroids (d-statistic)| vs mz) \n m/z features with above the horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (max |standardized centroids (d-statistic)| vs time) \n m/z features with above the horizontal line meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
}
yvec_val<-data_limma_fdrall_withfeats[,1]
##error point
#vip_res<-as.data.frame(vip_res)
discore<-as.data.frame(discore)
bad.feature.index={}
if(is.na(pls.permut.count)==FALSE){
# yvec_val[which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)]<-0
# bad.feature.index=which(vip_res$rand_pls_sel_prob>=pvalue.thresh | vip_res$rand_pls_sel_fdr>=fdrthresh)
}
ythresh=pamr_ythresh
ylabel="d-statistic (absolute)"
yincrement=0.1
y2thresh=NA
####savelist=c("d4","yvec_val","ythresh","zvec","x1increment","yincrement","maintext1","x2increment","maintext2","ylabel","y2thresh"),file="manhattanplot_objects.Rda")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=yvec_val,ythresh=pamr_ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="d-statistic (absolute) at threshold=0",xincrement=x1increment,yincrement=0.1,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=NA),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=yvec_val,ythresh=pamr_ythresh,up_or_down=zvec,xlab="Retention time",ylab="d-statistic (absolute) at threshold=0",xincrement=x2increment,yincrement=0.1,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt,bad.feature.index=NA),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
#volcanoplot
if(length(class_labels_levels)==2){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/VolcanoPlot_Dstatistic_vs_foldchange.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
maintext1="Volcano plot (absolute) max standardized centroid (d-statistic) vs log2(fold change)) \n colored m/z features meet the selection criteria"
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": lower in class ",class_labels_levels_main[1]," & ",manhattanplot.col.opt[1],": higher in class ",class_labels_levels_main[1],sep="")
try(get_volcanoplots(xvec=maxfoldchange,yvec=yvec_val,up_or_down=maxfoldchange,ythresh=pamr_ythresh,xthresh=foldchangethresh,maintext=maintext1,ylab="(absolute) d-statistic at threshold=0",xlab="log2(fold change)",yincrement=0.1,bad.feature.index=NA,colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
}
}
}
goodip<-intersect(goodip,goodipfoldchange)
dataA<-cbind(maxfoldchange,data_m_fc_withfeats)
#write.table(dataA,file="foldchange.txt",sep="\t",row.names=FALSE)
goodfeats_allfields<-{}
if(length(goodip)>0){
feat_sigfdrthresh[lf]<-length(goodip)
subdata<-t(data_m_fc[goodip,])
#save(parent_data_m,file="parent_data_m.Rda")
data_minval<-min(parent_data_m[,-c(1:2)],na.rm=TRUE)*0.5
#svm_model<-svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95)
exp_fp<-1
best_feats<-goodip
}else{
print("No features meet the fold change criteria.")
}
}else{
if(dim(data_m_fc)[2]<kfold){
print("Number of samples is too small to calculate cross-validation accuracy.")
}
}
#feat_sigfdrthresh_cv<-c(feat_sigfdrthresh_cv,pred_acc)
if(length(goodip)<1){
# print("########################################")
# print(paste("Relative standard deviation (RSD) threshold: ", log2.fold.change.thresh," %",sep=""))
#print(paste("FDR threshold: ", fdrthresh,sep=""))
print(paste("Number of features left after RSD filtering: ", dim(data_m_fc)[1],sep=""))
print(paste("Number of selected features: ", length(goodip),sep=""))
try(dev.off(),silent=TRUE)
next
}
# save(data_m_fc_withfeats,data_matrix,data_m,goodip,names_with_mz_time,file="gdebug.Rda")
#print("######################################")
suppressMessages(library(cluster))
t1<-table(classlabels)
if(is.na(names_with_mz_time)==FALSE){
data_m_fc_withfeats_A1<-merge(names_with_mz_time,data_m_fc_withfeats,by=c("mz","time"))
rownames(data_m_fc_withfeats)<-as.character(data_m_fc_withfeats_A1$Name)
}else{
rownames(data_m_fc_withfeats)<-as.character(paste(data_m_fc_withfeats[,1],data_m_fc_withfeats[,2],sep="_"))
}
#patientcolors <- unlist(lapply(sampleclass, color.map))
if(length(goodip)>2){
goodfeats<-as.data.frame(data_m_fc_withfeats[goodip,]) #[sel.diffdrthresh==TRUE,])
goodfeats<-unique(goodfeats)
rnames_goodfeats<-rownames(goodfeats) #as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
if(length(which(duplicated(rnames_goodfeats)==TRUE))>0){
print("WARNING: Duplicated features found. Removing duplicate entries.")
goodfeats<-goodfeats[-which(duplicated(rnames_goodfeats)==TRUE),]
rnames_goodfeats<-rnames_goodfeats[-which(duplicated(rnames_goodfeats)==TRUE)]
}
#rownames(goodfeats)<-as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
data_m<-as.matrix(goodfeats[,-c(1:2)])
rownames(data_m)<-rownames(goodfeats) #as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
data_m<-unique(data_m)
X<-t(data_m)
{
heatmap_file<-paste("heatmap_",featselmethod,".tiff",sep="")
heatmap_mainlabel="" #2-way HCA using all significant features"
if(FALSE)
{
# print("this step")
# save(hc,file="hc.Rda")
# save(hr,file="hr.Rda")
#save(distc,file="distc.Rda")
#save(distr,file="distr.Rda")
# save(data_m,heatmap.col.opt,hca_type,classlabels,classlabels_orig,outloc,goodfeats,data_m_fc_withfeats,goodip,names_with_mz_time,plots.height,plots.width,plots.res,file="hcadata_m.Rda")
#save(classlabels,file="classlabels.Rda")
}
# pdf("Testhca.pdf")
#try(
#
#dev.off()
if(is.na(names_with_mz_time)==FALSE){
goodfeats_with_names<-merge(names_with_mz_time,goodfeats,by=c("mz","time"))
goodfeats_with_names<-goodfeats_with_names[match(paste(goodfeats$mz,"_",goodfeats$time,sep=""),paste(goodfeats_with_names$mz,"_",goodfeats_with_names$time,sep="")),]
# save(names_with_mz_time,goodfeats,goodfeats_with_names,file="goodfeats_with_names.Rda")
goodfeats_name<-goodfeats_with_names$Name
rownames(goodfeats)<-goodfeats_name
}else{
#print(head(names_with_mz_time))
# print(head(goodfeats))
#goodfeats_name<-NA
}
if(output.device.type!="pdf"){
# print(getwd())
# save(data_m,heatmap.col.opt,hca_type,classlabels,classlabels_orig,output_dir,goodfeats,names_with_mz_time,data_m_fc_withfeats,goodip,goodfeats_name,names_with_mz_time,
# plots.height,plots.width,plots.res,alphabetical.order,analysistype,labRow.value, labCol.value,hca.cex.legend,file="hcadata_mD.Rda")
temp_filename_1<-"Figures/HCA_All_selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type="cairo",units="in")
#Generate HCA for selected features
hca_res<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=goodfeats,Y=classlabels_orig,heatmap.col.opt=heatmap.col.opt,
cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3, hca_type=hca_type,newdevice=FALSE,
input.type="intensity",mainlab="",alphabetical.order=alphabetical.order,study.design=analysistype,
labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
dev.off()
}else{
#Generate HCA for selected features
hca_res<-get_hca(feature_table_file=NA,parentoutput_dir=output_dir,class_labels_file=NA,X=goodfeats,Y=classlabels_orig,heatmap.col.opt=heatmap.col.opt,cor.method=cor.method,is.data.znorm=FALSE,analysismode="classification",
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3, hca_type=hca_type,newdevice=FALSE,
input.type="intensity",mainlab="",alphabetical.order=alphabetical.order,study.design=analysistype,
labRow.value = labRow.value, labCol.value = labCol.value,similarity.matrix=similarity.matrix,cexLegend=hca.cex.legend,cexRow=cex.plots,cexCol=cex.plots)
# get_hca(parentoutput_dir=getwd(),X=goodfeats,Y=classlabels_orig,heatmap.col.opt=heatmap.col.opt,cor.method="spearman",is.data.znorm=FALSE,analysismode="classification",
# sample.col.opt="rainbow",plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3, hca_type=hca_type,newdevice=FALSE) #,silent=TRUE)
}
}
# print("Done with HCA.")
}
}
else
{
#print("regression")
# print("########################################")
# print(paste("RSD threshold: ", log2.fold.change.thresh,sep=""))
#print(paste("FDR threshold: ", fdrthresh,sep=""))
#print(paste("Number of metabolites left after RSD filtering: ", dim(data_m_fc)[1],sep=""))
#print(paste("Number of sig metabolites: ", length(goodip),sep=""))
#print for regression
#print(paste("Summary for method: ",featselmethod,sep=""))
#print(paste("Relative standard deviation (RSD) threshold: ", log2.fold.change.thresh," %",sep=""))
cat("Analysis summary:",sep="\n")
cat(paste("Number of samples: ", dim(data_m_fc)[2],sep=""),sep="\n")
cat(paste("Number of features in the original dataset: ", num_features_total,sep=""),sep="\n")
# cat(rsd_filt_msg,sep="\n")
cat(paste("Number of features left after preprocessing: ", dim(data_m_fc)[1],sep=""),sep="\n")
cat(paste("Number of selected features: ", length(goodip),sep=""),sep="\n")
#cat("", sep="\n")
if(featselmethod=="lmreg"){
#d4<-read.table(paste(parentoutput_dir,"/Stage2/lmreg_pval_coef_stderr.txt",sep=""),sep="\t",header=TRUE,quote = "")
d4<-read.table("Tables/lmreg_pval_coef_stderr.txt",sep="\t",header=TRUE)
}
if(length(goodip)>=1){
subdata<-t(data_m_fc[goodip,])
if(length(class_labels_levels)==2){
#zvec=foldchangeres
}else{
zvec=NA
if(featselmethod=="lmreg" && analysismode=="regression"){
cnames_matrix<-colnames(d4)
cnames_colindex<-grep("Estimate_",cnames_matrix)
zvec<-d4[,c(cnames_colindex[1])]
#zvec<-d4$Estimate_var1
#if(length(zvec)<1){
# zvec<-d4$X.Estimate_var1.
#}
}
}
roundUpNice <- function(x, nice=c(1,2,4,5,6,8,10)) {
if(length(x) != 1) stop("'x' must be of length 1")
10^floor(log10(x)) * nice[[which(x <= 10^floor(log10(x)) * nice)[[1]]]]
}
d4<-as.data.frame(data_limma_fdrall_withfeats)
# d4<-as.data.frame(d1)
# save(d4,file="mtype1.rda")
x1increment=round_any(max(d4$mz)/10,10,f=floor)
x2increment=round_any(max(d4$time)/10,10,f=floor)
#manplots
if(featselmethod=="lmreg" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat"
| featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="logitreg" | featselmethod=="limma2wayrepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
#print("Plotting manhattan plots")
sel.diffdrthresh<-fdr_adjust_pvalue<fdrthresh & final.pvalues<pvalue.thresh
goodip<-which(sel.diffdrthresh==TRUE)
classlabels<-as.data.frame(classlabels)
logp<-(-1)*log((d4[,1]+(10^-20)),10)
ythresh<-min(logp[goodip],na.rm=TRUE)
maintext1="Type 1 manhattan plot (-logp vs mz) \n m/z features above the dashed horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (-logp vs time) \n m/z features above the dashed horizontal line meet the selection criteria"
# print("here1 A")
#print(zvec)
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="-logP",xincrement=x1increment,yincrement=1,
maintext=maintext1,col_seq=c("black"),y2thresh=(-1)*log10(pvalue.thresh),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="Retention time",ylab="-logP",xincrement=x2increment,yincrement=1,
maintext=maintext2,col_seq=c("black"),y2thresh=(-1)*log10(pvalue.thresh),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
#print("Plotting manhattan plots")
#get_manhattanplots(xvec=d4$mz,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="-logP",xincrement=x1increment,yincrement=1,maintext=maintext1)
#get_manhattanplots(xvec=d4$time,yvec=logp,ythresh=ythresh,up_or_down=zvec,xlab="Retention time",ylab="-logP",xincrement=x2increment,yincrement=1,maintext=maintext2)
}else{
if(featselmethod=="pls" | featselmethod=="o1pls"){
maintext1="Type 1 manhattan plot (VIP vs mz) \n m/z features above the dashed horizontal line meet the selection criteria"
maintext2="Type 2 manhattan plot (VIP vs time) \n m/z features above the dashed horizontal line meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=data_limma_fdrall_withfeats[,1],ythresh=pls_vip_thresh,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="VIP",xincrement=x1increment,yincrement=0.5,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=data_limma_fdrall_withfeats[,1],ythresh=pls_vip_thresh,up_or_down=zvec,xlab="Retention time",ylab="VIP",xincrement=x2increment,yincrement=0.5,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
else{
if(featselmethod=="spls" | featselmethod=="o1spls"){
maintext1="Type 1 manhattan plot (|loading| vs mz) \n m/z features with non-zero loadings meet the selection criteria"
maintext2="Type 2 manhattan plot (|loading| vs time) \n m/z features with non-zero loadings meet the selection criteria"
if(is.na(zvec[1])==FALSE){
maintext1=paste(maintext1,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
maintext2=paste(maintext2,"\n",manhattanplot.col.opt[2],": negative association "," & ",manhattanplot.col.opt[1],": positive association ",sep="")
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type1.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$mz,yvec=data_limma_fdrall_withfeats[,1],ythresh=0,up_or_down=zvec,xlab="mass-to-charge (m/z)",ylab="Loading",xincrement=x1increment,yincrement=0.1,maintext=maintext1,col_seq=c("black"),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/ManhattanPlot_Type2.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
try(get_manhattanplots(xvec=d4$time,yvec=data_limma_fdrall_withfeats[,1],ythresh=0,up_or_down=zvec,xlab="Retention time",ylab="Loading",xincrement=x2increment,yincrement=0.1,maintext=maintext2,col_seq=c("black"),colorvec=manhattanplot.col.opt),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
}
data_minval<-min(parent_data_m[,-c(1:2)],na.rm=TRUE)*0.5
#subdata<-apply(subdata,2,function(x){naind<-which(is.na(x)==TRUE);if(length(naind)>0){ x[naind]<-median(x,na.rm=TRUE)};return(x)})
subdata<-apply(subdata,2,function(x){naind<-which(is.na(x)==TRUE);if(length(naind)>0){ x[naind]<-data_minval};return(x)})
#print(head(subdata))
#print(dim(subdata))
#print(dim(classlabels))
#print(dim(classlabels))
classlabels_response_mat<-as.data.frame(classlabels_response_mat)
if(length(classlabels)>dim(parent_data_m)[2]){
#classlabels<-as.data.frame(classlabels[,1])
classlabels_response_mat<-as.data.frame(classlabels_response_mat[,1])
}
if(FALSE){
svm_model_reg<-try(svm(x=subdata,y=(classlabels_response_mat[,1]),type="eps",cross=kfold),silent=TRUE)
if(is(svm_model_reg,"try-error")){
print("SVM could not be performed. Skipping to the next step.")
termA<-(-1)
pred_acc<-termA
}else{
termA<-svm_model_reg$tot.MSE
pred_acc<-termA
print(paste(kfold,"-fold mean squared error: ", pred_acc,sep=""))
}
}
termA<-(-1)
pred_acc<-termA
# print("######################################")
}else{
#print("Number of selected variables is too small to perform CV.")
}
#print("termA is ")
#print(termA)
# print("dim of goodfeats")
goodfeats<-as.data.frame(data_m_fc_withfeats[sel.diffdrthresh==TRUE,])
goodip<-which(sel.diffdrthresh==TRUE)
#print(length(goodip))
res_score<-termA
#if(res_score<best_cv_res){
if(length(which(sel.diffdrthresh==TRUE))>0){
if(res_score<best_cv_res){
best_logfc_ind<-lf
best_feats<-goodip
best_cv_res<-res_score
best_acc<-pred_acc
best_limma_res<-data_limma_fdrall_withfeats[sel.diffdrthresh==TRUE,]
}
}else{
res_score<-(9999999)
}
res_score_vec[lf]<-res_score
goodfeats<-unique(goodfeats)
# save(names_with_mz_time,goodfeats,file="goodfeats_1.Rda")
if(length(which(is.na(goodfeats$mz)==TRUE))>0){
goodfeats<-goodfeats[-which(is.na(goodfeats$mz)==TRUE),]
}
if(is.na(names_with_mz_time)==FALSE){
goodfeats_with_names<-merge(names_with_mz_time,goodfeats,by=c("mz","time"))
goodfeats_with_names<-goodfeats_with_names[match(goodfeats$mz,goodfeats_with_names$mz),]
#
goodfeats_name<-goodfeats_with_names$Name
#}
}else{
goodfeats_name<-as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
}
if(length(which(sel.diffdrthresh==TRUE))>2){
##save(goodfeats,file="goodfeats.Rda")
#rownames(goodfeats)<-as.character(goodfeats[,1])
rownames(goodfeats)<-goodfeats_name #as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
data_m<-as.matrix(goodfeats[,-c(1:2)])
rownames(data_m)<-rownames(goodfeats) #as.character(paste(goodfeats[,1],goodfeats[,2],sep="_"))
X<-t(data_m)
pca_comp<-min(dim(X)[1],dim(X)[2])
t1<-seq(1,dim(data_m)[2])
col <-col_vec[1:length(t1)]
hr <- try(hclust(as.dist(1-cor(t(data_m),method=cor.method,use="pairwise.complete.obs"))),silent=TRUE) #metabolites
hc <- try(hclust(as.dist(1-cor(data_m,method=cor.method,use="pairwise.complete.obs"))),silent=TRUE) #samples
if(heatmap.col.opt=="RdBu"){
heatmap.col.opt="redblue"
}
heatmap_cols <- colorRampPalette(brewer.pal(10, "RdBu"))(256)
heatmap_cols<-rev(heatmap_cols)
if(heatmap.col.opt=="topo"){
heatmap_cols<-topo.colors(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="heat"){
heatmap_cols<-heat.colors(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="yellowblue"){
heatmap_cols<-colorRampPalette(c("yellow","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
#heatmap_cols<-blue2yellow(256) #colorRampPalette(c("yellow","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="redblue"){
heatmap_cols <- colorRampPalette(brewer.pal(10, "RdBu"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
#my_palette <- colorRampPalette(c("red", "yellow", "green"))(n = 299)
if(heatmap.col.opt=="redyellowgreen"){
heatmap_cols <- colorRampPalette(c("red", "yellow", "green"))(n = 299)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="yellowwhiteblue"){
heatmap_cols<-colorRampPalette(c("yellow2","white","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="redwhiteblue"){
heatmap_cols<-colorRampPalette(c("red","white","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
heatmap_cols <- colorRampPalette(brewer.pal(10, heatmap.col.opt))(256)
heatmap_cols<-rev(heatmap_cols)
}
}
}
}
}
}
}
if(is(hr,"try-error") || is(hc,"try-error")){
print("Hierarchical clustering can not be performed. ")
}else{
mycl_samples <- cutree(hc, h=max(hc$height)/2)
t1<-table(mycl_samples)
col_clust<-topo.colors(length(t1))
patientcolors=rep(col_clust,t1) #mycl_samples[col_clust]
heatmap_file<-paste("heatmap_",featselmethod,"_imp_features.tiff",sep="")
#tiff(heatmap_file,width=plots.width,height=plots.height,res=plots.res, compression="lzw")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/HCA_all_selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
if(znormtransform==FALSE){
h73<-heatmap.2(data_m, Rowv=as.dendrogram(hr), Colv=as.dendrogram(hc), col=heatmap_cols, scale="row",key=TRUE, symkey=FALSE,
density.info="none", trace="none", cexRow=1, cexCol=1,xlab="",ylab="", main="Using all selected features",labRow = hca.labRow.value, labCol = hca.labCol.value)
}else{
h73<-heatmap.2(data_m, Rowv=as.dendrogram(hr), Colv=as.dendrogram(hc), col=heatmap_cols, scale="none",key=TRUE,
symkey=FALSE, density.info="none", trace="none", cexRow=1, cexCol=1,xlab="",ylab="", main="Using all selected features",labRow = hca.labRow.value, labCol = hca.labCol.value)
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
mycl_samples <- cutree(hc, h=max(hc$height)/2)
mycl_metabs <- cutree(hr, h=max(hr$height)/2)
ord_data<-cbind(mycl_metabs[rev(h73$rowInd)],goodfeats[rev(h73$rowInd),c(1:2)],data_m[rev(h73$rowInd),h73$colInd])
cnames1<-colnames(ord_data)
cnames1[1]<-"mz_cluster_label"
colnames(ord_data)<-cnames1
fname1<-paste("Tables/Clustering_based_sorted_intensity_data.txt",sep="")
write.table(ord_data,file=fname1,sep="\t",row.names=FALSE)
fname2<-paste("Tables/Sample_clusterlabels.txt",sep="")
sample_clust_num<-mycl_samples[h73$colInd]
classlabels<-as.data.frame(classlabels)
temp1<-classlabels[h73$colInd,]
temp3<-cbind(temp1,sample_clust_num)
rnames1<-rownames(temp3)
temp4<-cbind(rnames1,temp3)
temp4<-as.data.frame(temp4)
if(analysismode=="regression"){
#names(temp3[,1)<-as.character(temp4[,1])
temp3<-temp4[,-c(1)]
temp3<-as.data.frame(temp3)
temp3<-apply(temp3,2,as.numeric)
temp_vec<-as.vector(temp3[,1])
names(temp_vec)<-as.character(temp4[,1])
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Barplot_dependent_variable_ordered_by_HCA.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
#tiff("Barplot_sample_cluster_ymat.tiff", width=plots.width,height=plots.height,res=plots.res, compression="lzw")
barplot(temp_vec,col="brown",ylab="Y",cex.axis=0.5,cex.names=0.5,main="Dependent variable levels in samples; \n ordered based on hierarchical clustering")
#dev.off()
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
# print(head(temp_vec))
#temp4<-temp4[,-c(2)]
write.table(temp4,file=fname2,sep="\t",row.names=FALSE)
fname3<-paste("Metabolite_clusterlabels.txt",sep="")
mycl_metabs_ord<-mycl_metabs[rev(h73$rowInd)]
}
}
}
classlabels_orig<-classlabels_orig_parent
if(pairedanalysis==TRUE){
classlabels_orig<-classlabels_orig[,-c(2)]
}else{
if(featselmethod=="lmreg" || featselmethod=="logitreg" || featselmethod=="poissonreg"){
classlabels_orig<-classlabels_orig[,c(1:2)]
classlabels_orig<-as.data.frame(classlabels_orig)
}
}
node_names=rownames(data_m_fc_withfeats)
#save(data_limma_fdrall_withfeats,goodip,data_m_fc_withfeats,data_matrix,names_with_mz_time,file="data_limma_fdrall_withfeats1.Rda")
classlabels_orig_wgcna<-classlabels_orig
if(analysismode=="classification"){
classlabels_temp<-classlabels_orig_wgcna #cbind(classlabels_sub[,1],classlabels)
sigfeats=data_m_fc_withfeats[goodip,c(1:2)]
# save(data_m_fc_withfeats,classlabels_temp,sigfeats,goodip,num_nodes,abs.cor.thresh,cor.fdrthresh,alphabetical.order,
# plot_DiNa_graph,degree.centrality.method,node_names,networktype,file="debugdiffrank_eval.Rda")
if(degree_rank_method=="diffrank"){
# degree_eval_res<-try(diffrank_eval(X=data_m_fc_withfeats,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)],sigfeatsind=goodip,
# num_nodes=num_nodes,abs.cor.thresh=abs.cor.thresh,cor.fdrthresh=cor.fdrthresh,alphabetical.order=alphabetical.order),silent=TRUE)
degree_eval_res<-diffrank_eval(X=data_m_fc_withfeats,Y=classlabels_temp,sigfeats=sigfeats,sigfeatsind=goodip,
num_nodes=num_nodes,abs.cor.thresh=abs.cor.thresh,cor.fdrthresh=cor.fdrthresh,alphabetical.order=alphabetical.order,
node_names=node_names,plot_graph_bool=plot_DiNa_graph,
degree.centrality.method = degree.centrality.method,networktype=networktype) #,silent=TRUE)
}else{
degree_eval_res<-{}
}
}
sample_names_vec<-colnames(data_m_fc_withfeats[,-c(1:2)])
# save(degree_eval_res,file="DiNa_results.Rda")
# save(data_limma_fdrall_withfeats,goodip,sample_names_vec,data_m_fc_withfeats,data_matrix,names_with_mz_time,file="data_limma_fdrall_withfeats.Rda")
if(analysismode=="classification")
{
degree_rank<-rep(1,dim(data_m_fc_withfeats)[1])
if(is(degree_eval_res,"try-error")){
degree_rank<-rep(1,dim(data_m_fc_withfeats)[1])
}else{
if(degree_rank_method=="diffrank"){
diff_degree_measure<-degree_eval_res$all
degree_rank<-diff_degree_measure$DiffRank #rank((1)*diff_degree_measure)
}
}
# save(degree_rank,file="degree_rank.Rda")
if(featselmethod=="lmreg" | featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma1way" | featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="limma1wayrepeat" | featselmethod=="limma2wayrepeat" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
diffexp_rank<-rank(data_limma_fdrall_withfeats[,2]) #order(data_limma_fdrall_withfeats[,2],decreasing=FALSE)
type.statistic="pvalue"
if(pvalue.dist.plot==TRUE){
x1=Sys.time()
stat_val<-(-1)*log10(data_limma_fdrall_withfeats[,2])
if(output.device.type!="pdf"){
pdf("Figures/pvalue.distribution.pdf",width=10,height=8)
}
par(mfrow=c(1,2))
kstest_res<-ks.test(data_limma_fdrall_withfeats[,2],"punif",0,1)
kstest_res<-round(kstest_res$p.value,3)
hist(as.numeric(data_limma_fdrall_withfeats[,2]),main=paste("Distribution of pvalues\n","Kolmogorov-Smirnov test for uniform distribution, p=",kstest_res,sep=""),cex.main=0.75,xlab="pvalues")
simpleQQPlot = function (observedPValues,mainlab) {
plot(-log10(1:length(observedPValues)/length(observedPValues)),
-log10(sort(observedPValues)),main=mainlab,xlab=paste("Expected -log10pvalue",sep=""),ylab=paste("Observed -logpvalue",sep=""),cex.main=0.75)
abline(0, 1, col = "brown")
}
inflation <- function(pvalue) {
chisq <- qchisq(1 - pvalue, 1)
lambda <- median(chisq) / qchisq(0.5, 1)
lambda
}
inflation_res<-round(inflation(data_limma_fdrall_withfeats[,2]),2)
simpleQQPlot(data_limma_fdrall_withfeats[,2],mainlab=paste("QQplot pvalues","\np-value inflation factor: ",inflation_res," (no inflation: close to 1; bias: greater than 1)",sep=""))
x2=Sys.time()
#print(x2-x1)
if(output.device.type!="pdf"){
dev.off()
}
}
par(mfrow=c(1,1))
}else{
if(featselmethod=="rfesvm"){
diffexp_rank<-rank((1)*abs(data_limma_fdrall_withfeats[,2]))
#diffexp_rank<-rank_vec
#data_limma_fdrall_withfeats<-cbind(rank_vec,data_limma_fdrall_withfeats)
}else{
if(featselmethod=="pamr"){
diffexp_rank<-rank_vec
#data_limma_fdrall_withfeats<-cbind(rank_vec,data_limma_fdrall_withfeats[,-c(1)])
}else{
if(featselmethod=="MARS"){
diffexp_rank<-rank((-1)*data_limma_fdrall_withfeats[,2])
}else{
diffexp_rank<-rank((1)*data_limma_fdrall_withfeats[,2])
}
}
}
}
if(input.intensity.scale=="raw" && log2transform==FALSE){
fold.change.log2<-maxfoldchange
data_limma_fdrall_withfeats_2<-cbind(fold.change.log2,degree_rank,diffexp_rank,data_limma_fdrall_withfeats)
}else{
if(input.intensity.scale=="log2" || log2transform==TRUE){
fold.change.log2<-maxfoldchange
data_limma_fdrall_withfeats_2<-cbind(fold.change.log2,degree_rank,diffexp_rank,data_limma_fdrall_withfeats)
}
}
# save(data_limma_fdrall_withfeats_2,file="data_limma_fdrall_withfeats_2.Rda")
allmetabs_res<-data_limma_fdrall_withfeats_2
if(analysismode=="classification"){
if(logistic_reg==TRUE){
fname4<-paste("logitreg","results_allfeatures.txt",sep="")
}else{
if(poisson_reg==TRUE){
fname4<-paste("poissonreg","results_allfeatures.txt",sep="")
}else{
fname4<-paste(parentfeatselmethod,"results_allfeatures.txt",sep="")
}
}
fname4<-paste("Tables/",fname4,sep="")
if(is.na(names_with_mz_time)==FALSE){
group_means1<-merge(group_means,group_sd,by=c("mz","time"))
allmetabs_res_temp<-merge(group_means1,allmetabs_res,by=c("mz","time"))
allmetabs_res_withnames<-merge(names_with_mz_time,allmetabs_res_temp,by=c("mz","time"))
# allmetabs_res_withnames<-merge(diff_degree_measure[,c("mz","time","DiffRank")],allmetabs_res_withnames,by=c("mz","time"))
#allmetabs_res_withnames<-cbind(degree_rank,diffexp_rank,allmetabs_res_withnames)
# allmetabs_res_withnames<-allmetabs_res_withnames[,c("DiffRank")]
# save(allmetabs_res_withnames,file="allmetabs_res_withnames.Rda")
# allmetabs_res_withnames<-allmetabs_res_withnames[order(allmetabs_res_withnames$mz,allmetabs_res_withnames$time),]
allmetabs_res_withnames<-allmetabs_res_withnames[order(as.numeric(as.character(allmetabs_res_withnames$mz)),as.numeric(as.character(allmetabs_res_withnames$time))),]
if(length(check_names)>0){
rem_col_ind1<-grep(colnames(allmetabs_res_withnames),pattern=c("mz"))
rem_col_ind2<-grep(colnames(allmetabs_res_withnames),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(allmetabs_res_withnames[,-c(rem_col_ind)], file=fname4,sep="\t",row.names=FALSE)
}else{
write.table(allmetabs_res_withnames, file=fname4,sep="\t",row.names=FALSE)
}
#rm(data_allinf_withfeats_withnames)
#}
}else{
group_means1<-merge(group_means,group_sd,by=c("mz","time"))
allmetabs_res_temp<-merge(group_means1,allmetabs_res,by=c("mz","time"))
#allmetabs_res_temp<-merge(group_means,allmetabs_res,by=c("mz","time"))
# allmetabs_res_temp<-cbind(degree_rank,diffexp_rank,allmetabs_res_temp)
Name<-paste(allmetabs_res_temp$mz,allmetabs_res_temp$time,sep="_")
allmetabs_res_withnames<-cbind(Name,allmetabs_res_temp)
allmetabs_res_withnames<-as.data.frame(allmetabs_res_withnames)
# allmetabs_res_withnames<-allmetabs_res_withnames[order(allmetabs_res_withnames$mz,allmetabs_res_withnames$time),]
allmetabs_res_withnames<-allmetabs_res_withnames[order(as.numeric(as.character(allmetabs_res_withnames$mz)),as.numeric(as.character(allmetabs_res_withnames$time))),]
write.table(allmetabs_res_withnames,file=fname4,sep="\t",row.names=FALSE)
}
rm(allmetabs_res_temp)
}else{
}
#rm(allmetabs_res)
if(length(goodip)>=1){
# data_limma_fdrall_withfeats_2<-data_limma_fdrall_withfeats_2[goodip,]
#data_limma_fdrall_withfeats_2<-as.data.frame(data_limma_fdrall_withfeats_2)
# save(allmetabs_res_withnames,goodip,file="allmetabs_res_withnames.Rda")
allmetabs_res_withnames<-allmetabs_res_withnames[order(as.numeric(as.character(allmetabs_res_withnames$mz)),as.numeric(as.character(allmetabs_res_withnames$time))),]
goodfeats<-as.data.frame(allmetabs_res_withnames[goodip,]) #data_limma_fdrall_withfeats_2)
goodfeats_allfields<-goodfeats
# write.table(allmetabs_res_withnames,file=fname4,sep="\t",row.names=FALSE)
if(logistic_reg==TRUE){
fname4<-paste("logitreg","results_selectedfeatures.txt",sep="")
}else{
if(poisson_reg==TRUE){
fname4<-paste("poissonreg","results_selectedfeatures.txt",sep="")
}else{
fname4<-paste(featselmethod,"results_selectedfeatures.txt",sep="")
}
}
#fname4<-paste("Tables/",fname4,sep="")
write.table(goodfeats,file=fname4,sep="\t",row.names=FALSE)
if(length(rocfeatlist)>length(goodip)){
rocfeatlist<-rocfeatlist[-which(rocfeatlist>length(goodip))] #seq(1,(length(goodip)))
numselect<-length(goodip)
#rocfeatlist<-rocfeatlist+1
}else{
numselect<-length(rocfeatlist)
}
}
}else{
#analysismode=="regression"
if(featselmethod=="lmreg" | featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma1way" | featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="limma1wayrepeat" | featselmethod=="limma2wayrepeat" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
diffexp_rank<-rank(data_limma_fdrall_withfeats[,1]) #order(data_limma_fdrall_withfeats[,2],decreasing=FALSE)
}else{
if(featselmethod=="rfesvm"){
diffexp_rank<-rank_vec
data_limma_fdrall_withfeats<-data_limma_fdrall_withfeats
}else{
if(featselmethod=="pamr"){
diffexp_rank<-rank_vec
# data_limma_fdrall_withfeats<-cbind(rank_vec,data_limma_fdrall_withfeats)
}else{
if(featselmethod=="MARS"){
diffexp_rank<-rank((-1)*data_limma_fdrall_withfeats[,1])
}else{
diffexp_rank<-rank((1)*data_limma_fdrall_withfeats[,2])
}
}
}
}
#save(goodfeats,diffexp_rank,data_limma_fdrall_withfeats,file="t3.Rda")
data_limma_fdrall_withfeats_2<-cbind(diffexp_rank,data_limma_fdrall_withfeats)
# fname4<-paste(featselmethod,"_sigfeats.txt",sep="")
if(logistic_reg==TRUE){
fname4<-paste("logitreg","results_allfeatures.txt",sep="")
}else{
if(poisson_reg==TRUE){
fname4<-paste("poissonreg","results_allfeatures.txt",sep="")
}else{
fname4<-paste(parentfeatselmethod,"results_allfeatures.txt",sep="")
}
}
fname4<-paste("Tables/",fname4,sep="")
allmetabs_res<-data_limma_fdrall_withfeats_2
if(is.na(names_with_mz_time)==FALSE){
allmetabs_res_withnames<-merge(names_with_mz_time,data_limma_fdrall_withfeats_2,by=c("mz","time"))
# allmetabs_res_withnames<-cbind(degree_rank,diffexp_rank,allmetabs_res_withnames)
allmetabs_res_withnames<-allmetabs_res_withnames[order(as.numeric(as.character(allmetabs_res_withnames$mz)),as.numeric(as.character(allmetabs_res_withnames$time))),]
# allmetabs_res_withnames<-allmetabs_res_withnames[order(allmetabs_res_withnames$mz,allmetabs_res_withnames$time),]
#write.table(allmetabs_res_withnames[,-c("mz","time")], file=fname4,sep="\t",row.names=FALSE)
# save(allmetabs_res_withnames,file="allmetabs_res_withnames.Rda")
#rem_col_ind<-grep(colnames(allmetabs_res_withnames),pattern=c("mz","time"))
if(length(check_names)>0){
rem_col_ind1<-grep(colnames(allmetabs_res_withnames),pattern=c("mz"))
rem_col_ind2<-grep(colnames(allmetabs_res_withnames),pattern=c("time"))
rem_col_ind<-c(rem_col_ind1,rem_col_ind2)
}else{
rem_col_ind<-{}
}
if(length(rem_col_ind)>0){
write.table(allmetabs_res_withnames[,-c(rem_col_ind)], file=fname4,sep="\t",row.names=FALSE)
}else{
write.table(allmetabs_res_withnames, file=fname4,sep="\t",row.names=FALSE)
}
# rm(data_allinf_withfeats_withnames)
}else{
# allmetabs_res_temp<-cbind(degree_rank,diffexp_rank,allmetabs_res)
allmetabs_res_withnames<-allmetabs_res
write.table(allmetabs_res,file=fname4,sep="\t",row.names=FALSE)
}
goodfeats<-allmetabs_res_withnames[goodip,] #data_limma_fdrall_withfeats_2[goodip,] #[sel.diffdrthresh==TRUE,]
# save(allmetabs_res_withnames,goodip,file="allmetabs_res_withnames.Rda")
goodfeats<-as.data.frame(allmetabs_res_withnames[goodip,]) #data_limma_fdrall_withfeats_2)
goodfeats_allfields<-goodfeats
if(logistic_reg==TRUE){
fname4<-paste("logitreg","results_selectedfeatures.txt",sep="")
}else{
if(poisson_reg==TRUE){
fname4<-paste("poissonreg","results_selectedfeatures.txt",sep="")
}else{
fname4<-paste(featselmethod,"results_selectedfeatures.txt",sep="")
}
}
# fname4<-paste("Tables/",fname4,sep="")
write.table(goodfeats,file=fname4,sep="\t",row.names=FALSE)
fname4<-paste("Tables/",parentfeatselmethod,"results_allfeatures.txt",sep="")
#allmetabs_res<-goodfeats #data_limma_fdrall_withfeats_2
}
}
# save(goodfeats,file="goodfeats455.Rda")
if(length(goodip)>1){
goodfeats_by_DICErank<-{}
if(analysismode=="classification"){
if(featselmethod=="lmreg" | featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma1way" | featselmethod=="lmreg" | featselmethod=="logitreg" | featselmethod=="limma1wayrepeat" | featselmethod=="limma2wayrepeat" | featselmethod=="lm1wayanova" | featselmethod=="lm2wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg")
{
goodfeats<-goodfeats[order(goodfeats$diffexp_rank,decreasing=FALSE),]
if(length(goodip)>1){
# goodfeats_by_DICErank<-data_limma_fdrall_withfeats_2[r1$top.list,]
}
}else{
goodfeats<-goodfeats[order(goodfeats$diffexp_rank,decreasing=FALSE),]
if(length(goodip)>1){
#goodfeats_by_DICErank<-data_limma_fdrall_withfeats_2[r1$top.list,]
}
}
cnamesd1<-colnames(goodfeats)
time_ind<-which(cnamesd1=="time")
mz_ind<-which(cnamesd1=="mz")
goodfeats_name<-goodfeats$Name
goodfeats_temp<-cbind(goodfeats[,mz_ind],goodfeats[,time_ind],goodfeats[,which(colnames(goodfeats)%in%sample_names_vec)]) #goodfeats[,-c(1:time_ind)])
# save(goodfeats_temp,file="goodfeats_temp.Rda")
cnames_temp<-colnames(goodfeats_temp)
cnames_temp<-c("mz","time",cnames_temp[-c(1:2)])
colnames(goodfeats_temp)<-cnames_temp
goodfeats<-goodfeats_temp
}else{
if(analysismode=="regression"){
# save(goodfeats,file="goodfeats455.Rda")
try(dev.off(),silent=TRUE)
if(featselmethod=="lmreg" | featselmethod=="pls" | featselmethod=="spls" | featselmethod=="o1pls" | featselmethod=="RF" | featselmethod=="MARS"){
####savegoodfeats,file="goodfeats.Rda")
goodfeats<-goodfeats[order(goodfeats$diffexp_rank,decreasing=FALSE),]
}else{
#goodfeats<-goodfeats[order(goodfeats[,1],decreasing=TRUE),]
}
goodfeats<-as.data.frame(goodfeats)
cnamesd1<-colnames(goodfeats)
time_ind<-which(cnamesd1=="time")
mz_ind<-which(cnamesd1=="mz")
goodfeats_name<-goodfeats$Name
goodfeats_temp<-cbind(goodfeats[,mz_ind],goodfeats[,time_ind],goodfeats[,which(colnames(goodfeats)%in%sample_names_vec)]) #goodfeats[,-c(1:time_ind)])
#save(goodfeats_temp,goodfeats,goodfeats_name,file="goodfeats_temp.Rda")
cnames_temp<-colnames(goodfeats_temp)
cnames_temp<-c("mz","time",cnames_temp[-c(1:2)])
colnames(goodfeats_temp)<-cnames_temp
goodfeats<-goodfeats_temp
rm(goodfeats_temp)
# #save(goodfeats,goodfeats_temp,mz_ind,time_ind,classlabels_orig,analysistype,alphabetical.order,col_vec,file="pca1.Rda")
num_sig_feats<-nrow(goodfeats)
if(num_sig_feats>=3 & pca.stage2.eval==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAplots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "PCA using selected features after feature selection")
text(5, 7, "The figures include: ")
text(5, 6, "a. pairwise PC score plots ")
text(5, 5, "b. scores for individual samples on each PC")
text(5, 4, "c. Lineplots using PC scores for data with repeated measurements")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
rownames(goodfeats)<-goodfeats$Name
get_pcascoredistplots(X=goodfeats,Y=classlabels_orig,feature_table_file=NA,parentoutput_dir=getwd(),
class_labels_file=NA,sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,
plots.res=300, alphacol=0.3,col_vec=col_vec,pairedanalysis=pairedanalysis,
pca.cex.val=pca.cex.val,legendlocation=legendlocation,pca.ellipse=pca.ellipse,
ellipse.conf.level=ellipse.conf.level,filename="selected",paireddesign=paireddesign,
lineplot.col.opt=lineplot.col.opt,lineplot.lty.option=lineplot.lty.option,
timeseries.lineplots=timeseries.lineplots,pcacenter=pcacenter,pcascale=pcascale,
alphabetical.order=alphabetical.order,study.design=analysistype,lme.modeltype=modeltype) #,silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
}
}
class_label_A<-class_labels_levels[1]
class_label_B<-class_labels_levels[2]
#goodfeats_allfields<-{}
if(length(which(sel.diffdrthresh==TRUE))>1){
goodfeats<-as.data.frame(goodfeats)
mzvec<-goodfeats$mz
timevec<-goodfeats$time
if(length(mzvec)>4){
max_per_row<-3
par_rows<-ceiling(9/max_per_row)
}else{
max_per_row<-length(mzvec)
par_rows<-1
}
goodfeats<-as.data.frame(goodfeats)
cnamesd1<-colnames(goodfeats)
time_ind<-which(cnamesd1=="time")
# goodfeats_allfields<-as.data.frame(goodfeats)
file_ind<-1
mz_ind<-which(cnamesd1=="mz")
goodfeats_temp<-cbind(goodfeats[,mz_ind],goodfeats[,time_ind],goodfeats[,-c(1:time_ind)])
cnames_temp<-colnames(goodfeats_temp)
cnames_temp[1]<-"mz"
cnames_temp[2]<-"time"
colnames(goodfeats_temp)<-cnames_temp
#if(length(class_labels_levels)<10)
if(analysismode=="classification" && nrow(goodfeats)>=1 && length(goodip)>=1)
{
if(is.na(rocclassifier)==FALSE){
if(length(class_labels_levels)==2){
#print("Generating ROC curve using top features on training set")
# save(kfold,goodfeats_temp,classlabels,svm_kernel,pred.eval.method,match_class_dist,rocfeatlist,rocfeatincrement,file="rocdebug.Rda")
# roc_res<-try(get_roc(dataA=goodfeats_temp,classlabels=classlabels,classifier=rocclassifier,kname="radial",
# rocfeatlist=rocfeatlist,rocfeatincrement=rocfeatincrement,mainlabel="Training set ROC curve using top features"),silent=TRUE)
if(output.device.type=="pdf"){
roc_newdevice=FALSE
}else{
roc_newdevice=TRUE
}
roc_res<-try(get_roc(dataA=goodfeats_temp,classlabels=classlabels,classifier=rocclassifier,kname="radial",
rocfeatlist=rocfeatlist,rocfeatincrement=rocfeatincrement,
mainlabel="Training set ROC curve using top features",newdevice=roc_newdevice),silent=TRUE)
# print(roc_res)
}
subdata=t(goodfeats[,-c(1:time_ind)])
# save(kfold,subdata,goodfeats,classlabels,svm_kernel,pred.eval.method,match_class_dist,file="svmdebug.Rda")
svm_model<-try(svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
#svm_model<-try(svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
#svm_model<-try(svm(x=subdata,y=(classlabels),type="nu-classification",cross=kfold,kernel=svm_kernel),silent=TRUE)
#svm_model<-try(svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
classlabels<-as.data.frame(classlabels)
if(is(svm_model,"try-error")){
print("SVM could not be performed. Please try lowering the kfold or set kfold=total number of samples for Leave-one-out CV. Skipping to the next step.")
print(svm_model)
termA<-(-1)
pred_acc<-termA
permut_acc<-(-1)
}else{
pred_acc<-svm_model$avg_acc
#print("Accuracy is:")
#print(pred_acc)
if(is.na(cv.perm.count)==FALSE){
print("Calculating permuted CV accuracy")
permut_acc<-{}
#permut_acc<-lapply(1:100,function(j){
numcores<-num_nodes #round(detectCores()*0.5)
cl <- parallel::makeCluster(getOption("cl.cores", num_nodes))
clusterEvalQ(cl,library(e1071))
clusterEvalQ(cl,library(pROC))
clusterEvalQ(cl,library(ROCR))
clusterEvalQ(cl,library(CMA))
clusterExport(cl,"svm_cv",envir = .GlobalEnv)
permut_acc<-parLapply(cl,1:cv.perm.count,function(p1){
rand_order<-sample(1:dim(classlabels)[1],size=dim(classlabels)[1])
classlabels_permut<-classlabels[rand_order,]
classlabels_permut<-as.data.frame(classlabels_permut)
svm_permut_res<-try(svm_cv(v=kfold,x=subdata,y=classlabels_permut,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
#svm_permut_res<-try(svm(x=subdata,y=(classlabels_permut),type="nu-classification",cross=kfold,kernel=svm_kernel),silent=TRUE)
#svm_permut_res<-svm_cv(v=kfold,x=subdata,y=classlabels_permut,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist)
if(is(svm_permut_res,"try-error")){
cur_perm_acc<-NA
}else{
cur_perm_acc<-svm_permut_res$avg_acc #tot.accuracy #
}
return(cur_perm_acc)
})
stopCluster(cl)
permut_acc<-unlist(permut_acc)
permut_acc<-mean(permut_acc,na.rm=TRUE)
permut_acc<-round(permut_acc,2)
print("mean Permuted accuracy is:")
print(permut_acc)
}else{
permut_acc<-(-1)
}
}
}else{
termA<-(-1)
pred_acc<-termA
permut_acc<-(-1)
}
termA<-100*pred_acc
if(featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="lmreg" | featselmethod=="logitreg"
| featselmethod=="lm2wayanova" | featselmethod=="lm1wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="lmregrepeat")
{
if(fdrmethod=="none"){
exp_fp<-(dim(data_m_fc)[1]*fdrthresh)+1
}else{
exp_fp<-(feat_sigfdrthresh[lf]*fdrthresh)+1
}
}
termB<-(dim(parent_data_m)[1]*dim(parent_data_m)[1])/(dim(data_m_fc)[1]*dim(data_m_fc)[1]*100)
res_score<-(100*(termA-permut_acc))-(feat_weight*termB*exp_fp)
res_score<-round(res_score,2)
if(lf==0)
{
best_logfc_ind<-lf
best_feats<-goodip
best_cv_res<-res_score
best_acc<-pred_acc
best_limma_res<-data_limma_fdrall_withfeats[goodip,] #[sel.diffdrthresh==TRUE,]
}else{
if(res_score>best_cv_res){
best_logfc_ind<-lf
best_feats<-goodip
best_cv_res<-res_score
best_acc<-pred_acc
best_limma_res<-data_limma_fdrall_withfeats[goodip,] #[sel.diffdrthresh==TRUE,]
}
}
pred_acc=round(pred_acc,2)
res_score_vec[lf]<-res_score
if(pred.eval.method=="CV"){
feat_sigfdrthresh_cv[lf]<-pred_acc
feat_sigfdrthresh_permut[lf]<-permut_acc
acc_message=(paste(kfold,"-fold CV accuracy: ", pred_acc,sep=""))
if(is.na(cv.perm.count)==FALSE){
perm_acc_message=(paste("Permuted ",kfold,"-fold CV accuracy: ", permut_acc,sep=""))
}
}else{
if(pred.eval.method=="AUC"){
feat_sigfdrthresh_cv[lf]<-pred_acc
feat_sigfdrthresh_permut[lf]<-permut_acc
acc_message=(paste("ROC area under the curve (AUC) is : ", pred_acc,sep=""))
if(is.na(cv.perm.count)==FALSE){
perm_acc_message=(paste("Permuted ROC area under the curve (AUC) is : ", permut_acc,sep=""))
}
}else{
if(pred.eval.method=="BER"){
feat_sigfdrthresh_cv[lf]<-pred_acc
feat_sigfdrthresh_permut[lf]<-permut_acc
acc_message=(paste(kfold, "-fold CV balanced accuracy rate is: ", pred_acc,sep=""))
if(is.na(cv.perm.count)==FALSE){
perm_acc_message=(paste("Permuted balanced accuracy rate is : ", permut_acc,sep=""))
}
}
}
}
# print("########################################")
# cat("", sep="\n\n")
#print(paste("Summary for method: ",featselmethod,sep=""))
#print(paste("Relative standard deviation (RSD) threshold: ", log2.fold.change.thresh," %",sep=""))
cat("Analysis summary:",sep="\n")
if(is.na(factor1_msg)==FALSE){
cat(factor1_msg,sep="\n")
}
if(is.na(factor2_msg)==FALSE){
cat(factor2_msg,sep="\n")
}
cat(paste("Number of samples: ", dim(data_m_fc)[2],sep=""),sep="\n")
cat(paste("Number of features in the original dataset: ", num_features_total,sep=""),sep="\n")
# cat(rsd_filt_msg,sep="\n")
cat(paste("Number of features left after preprocessing: ", dim(data_m_fc)[1],sep=""),sep="\n")
cat(paste("Number of selected features: ", length(goodip),sep=""),sep="\n")
if(is.na(rocclassifier)==FALSE){
cat(acc_message,sep="\n")
if(is.na(cv.perm.count)==FALSE){
cat(perm_acc_message,sep="\n")
}
}
# cat("", sep="\n\n")
#print("ROC done")
best_subset<-{}
best_acc<-0
xvec<-{}
yvec<-{}
#for(i in 2:max_varsel)
if(is.na(rocclassifier)==FALSE){
if(nrow(goodfeats_temp)<length(rocfeatlist)){
max_cv_varsel<-1:nrow(goodfeats_temp)
}else{
max_cv_varsel<-rocfeatlist #nrow(goodfeats_temp)
}
cv_yvec<-lapply(max_cv_varsel,function(i)
{
subdata<-t(goodfeats_temp[1:i,-c(1:2)])
svm_model<-try(svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist),silent=TRUE)
#svm_model<-svm_cv(v=kfold,x=subdata,y=classlabels,kname=svm_kernel,errortype=pred.eval.method,conflevel=95,match_class_dist=match_class_dist)
if(is(svm_model,"try-error")){
res1<-NA
}else{
res1<-svm_model$avg_acc
}
return(res1)
})
xvec<-max_cv_varsel
yvec<-unlist(cv_yvec)
if(pred.eval.method=="CV"){
ylab_text=paste(pred.eval.method," accuracy (%)",sep="")
}else{
if(pred.eval.method=="BER"){
ylab_text=paste("Balanced accuracy"," (%)",sep="")
}else{
ylab_text=paste("AUC"," (%)",sep="")
}
}
if(length(yvec)>0){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/kfoldCV_forward_selection.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}else{
# temp_filename_1<-"Figures/kfoldCV_forward_selection.pdf"
#pdf(temp_filename_1)
}
try(plot(x=xvec,y=yvec,main="k-fold CV classification accuracy based on forward selection of top features",xlab="Feature index",ylab=ylab_text,type="b",col="#0072B2",cex.main=0.7),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}else{
# try(dev.off(),silent=TRUE)
}
cv_mat<-cbind(xvec,yvec)
colnames(cv_mat)<-c("Feature Index",ylab_text)
write.table(cv_mat,file="Tables/kfold_cv_mat.txt",sep="\t")
}
}
if(pairedanalysis==TRUE)
{
if(featselmethod=="pls" | featselmethod=="spls"){
classlabels_sub<-classlabels_sub[,-c(1)]
classlabels_temp<-cbind(classlabels_sub)
}else{
classlabels_sub<-classlabels_sub[,-c(1)]
classlabels_temp<-cbind(classlabels_sub)
}
}else{
classlabels_temp<-cbind(classlabels_sub,classlabels)
}
num_sig_feats<-nrow(goodfeats)
if(num_sig_feats<3){
pca.stage2.eval=FALSE
}
if(pca.stage2.eval==TRUE)
{
pca_comp<-min(10,dim(X)[2])
#dev.off()
# print("plotting")
#pdf("sig_features_evaluation.pdf", height=2000,width=2000)
library(pcaMethods)
p1<-pcaMethods::pca(X,method="rnipals",center=TRUE,scale="uv",cv="q2",nPcs=pca_comp)
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAdiagnostics_selectedfeats.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
p2<-plot(p1,col=c("darkgrey","grey"),main="PCA diagnostics after variable selection")
print(p2)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
#dev.off()
}
classlabels_orig<-classlabels_orig_parent
if(pairedanalysis==TRUE){
classlabels_orig<-classlabels_orig[,-c(2)]
}else{
if(featselmethod=="lmreg" || featselmethod=="logitreg" || featselmethod=="poissonreg"){
classlabels_orig<-classlabels_orig[,c(1:2)]
classlabels_orig<-as.data.frame(classlabels_orig)
}
}
classlabels_orig_wgcna<-classlabels_orig
goodfeats_temp<-cbind(goodfeats[,mz_ind],goodfeats[,time_ind],goodfeats[,-c(1:time_ind)])
cnames_temp<-colnames(goodfeats_temp)
cnames_temp<-c("mz","time",cnames_temp[-c(1:2)])
colnames(goodfeats_temp)<-cnames_temp
goodfeats_temp_with_names<-merge(names_with_mz_time,goodfeats_temp,by=c("mz","time"))
goodfeats_temp_with_names<-goodfeats_temp_with_names[match(paste(goodfeats_temp$mz,"_",goodfeats_temp$time,sep=""),paste(goodfeats_temp_with_names$mz,"_",goodfeats_temp_with_names$time,sep="")),]
# save(goodfeats,goodfeats_temp,names_with_mz_time,goodfeats_temp_with_names,file="goodfeats_pca.Rda")
rownames(goodfeats_temp)<-goodfeats_temp_with_names$Name
if(num_sig_feats>=3 & pca.stage2.eval==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/PCAplots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "PCA using selected features after feature selection")
text(5, 7, "The figures include: ")
text(5, 6, "a. pairwise PC score plots ")
text(5, 5, "b. scores for individual samples on each PC")
text(5, 4, "c. Lineplots using PC scores for data with repeated measurements")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
get_pcascoredistplots(X=goodfeats_temp,Y=classlabels_orig_pca,
feature_table_file=NA,parentoutput_dir=getwd(),class_labels_file=NA,
sample.col.opt=sample.col.opt,plots.width=2000,plots.height=2000,plots.res=300, alphacol=0.3,col_vec=col_vec,pairedanalysis=pairedanalysis,pca.cex.val=pca.cex.val,legendlocation=legendlocation,pca.ellipse=pca.ellipse,ellipse.conf.level=ellipse.conf.level,filename="selected",paireddesign=paireddesign,
lineplot.col.opt=lineplot.col.opt,lineplot.lty.option=lineplot.lty.option,timeseries.lineplots=timeseries.lineplots,pcacenter=pcacenter,pcascale=pcascale,alphabetical.order=alphabetical.order,study.design=analysistype,lme.modeltype=modeltype) #,silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
####savelist=ls(),file="timeseries.Rda")
#if(FALSE)
{
#if(FALSE)
{
if(log2transform==TRUE || input.intensity.scale=="log2"){
if(znormtransform==TRUE){
ylab_text_2="scale normalized"
}else{
if(quantile_norm==TRUE){
ylab_text_2="quantile normalized"
}else{
if(eigenms_norm==TRUE){
ylab_text_2="EigenMS normalized"
}else{
if(sva_norm==TRUE){
ylab_text_2="SVA normalized"
}else{
ylab_text_2=""
}
}
}
}
ylab_text=paste("log2 intensity ",ylab_text_2,sep="")
}else{
if(znormtransform==TRUE){
ylab_text_2="scale normalized"
}else{
if(quantile_norm==TRUE){
ylab_text_2="quantile normalized"
}else{
#ylab_text_2=""
if(medcenter==TRUE){
ylab_text_2="median centered"
}else{
if(lowess_norm==TRUE){
ylab_text_2="LOWESS normalized"
}else{
if(rangescaling==TRUE){
ylab_text_2="range scaling normalized"
}else{
if(paretoscaling==TRUE){
ylab_text_2="pareto scaling normalized"
}else{
if(mstus==TRUE){
ylab_text_2="MSTUS normalized"
}else{
if(vsn_norm==TRUE){
ylab_text_2="VSN normalized"
}else{
ylab_text_2=""
}
}
}
}
}
}
}
}
ylab_text=paste("Intensity ",ylab_text_2,sep="")
}
}
#ylab_text_2=""
#ylab_text=paste("Abundance",ylab_text_2,sep="")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
if(pairedanalysis==TRUE || timeseries.lineplots==TRUE)
{
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Lineplots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
# par(mfrow=c(1,1))
par(mfrow=c(1,1),family="sans",cex=cex.plots)
}
#plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
#text(5, 8, "Lineplots using selected features")
# text(5, 7, "The error bars represent the 95% \nconfidence interval in each group (or timepoint)")
# save(goodfeats_temp,classlabels_orig,lineplot.col.opt,col_vec,pairedanalysis,
# pca.cex.val,pca.ellipse,ellipse.conf.level,legendlocation,ylab_text,error.bar,
# cex.plots,lineplot.lty.option,timeseries.lineplots,analysistype,goodfeats_name,alphabetical.order,
# multiple.figures.perpanel,plot.ylab_text,plots.height,plots.width,file="debuga_lineplots.Rda")
#try(
var_sum_list<-get_lineplots(X=goodfeats_temp,Y=classlabels_orig,feature_table_file=NA,
parentoutput_dir=getwd(),class_labels_file=NA,
lineplot.col.opt=lineplot.col.opt,alphacol=alphacol,col_vec=col_vec,
pairedanalysis=pairedanalysis,point.cex.val=pca.cex.val,
legendlocation=legendlocation,pca.ellipse=pca.ellipse,
ellipse.conf.level=ellipse.conf.level,filename="selected",
ylabel=plot.ylab_text,error.bar=error.bar,cex.plots=cex.plots,
lineplot.lty.option=lineplot.lty.option,timeseries.lineplots=timeseries.lineplots,
name=goodfeats_name,study.design=analysistype,
alphabetical.order=alphabetical.order,multiple.figures.perpanel=multiple.figures.perpanel,
plot.height = plots.height,plot.width=plots.width)
#,silent=TRUE) #,silent=TRUE)
#save(var_sum_list,file="var_sum_list.Rda")
var_sum_mat<-{}
# for(i in 1:length(var_sum_list))
#{
# var_sum_mat<-rbind(var_sum_mat,var_sum_list[[i]]$df_write_temp)
#}
# var_sum_mat<-ldply(var_sum_list,rbind)
# write.table(var_sum_mat,file="Tables/data_summary.txt",sep="\t",row.names=FALSE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
# save(goodfeats_temp,classlabels_orig,lineplot.col.opt,alphacol,col_vec,pairedanalysis,pca.cex.val,legendlocation,pca.ellipse,ellipse.conf.level,plot.ylab_text,error.bar,cex.plots,
# lineplot.lty.option,timeseries.lineplots,goodfeats_name,analysistype,alphabetical.order,multiple.figures.perpanel,plots.height,plots.width,file="var_sum.Rda")
var_sum_list<-get_data_summary(X=goodfeats_temp,Y=classlabels_orig,feature_table_file=NA,
parentoutput_dir=getwd(),class_labels_file=NA,
lineplot.col.opt=lineplot.col.opt,alphacol=alphacol,col_vec=col_vec,
pairedanalysis=pairedanalysis,point.cex.val=pca.cex.val,
legendlocation=legendlocation,pca.ellipse=pca.ellipse,
ellipse.conf.level=ellipse.conf.level,filename="selected",
ylabel=plot.ylab_text,error.bar=error.bar,cex.plots=cex.plots,
lineplot.lty.option=lineplot.lty.option,timeseries.lineplots=timeseries.lineplots,
name=goodfeats_name,study.design=analysistype,
alphabetical.order=alphabetical.order,multiple.figures.perpanel=multiple.figures.perpanel,plot.height = plots.height,plot.width=plots.width)
if(nrow(goodfeats)<1){
print(paste("No features selected for ",featselmethod,sep=""))
}
#else
{
#write.table(goodfeats_temp,file="Tables/boxplots_file.normalized.txt",sep="\t",row.names=FALSE)
goodfeats<-goodfeats[,-c(1:time_ind)]
goodfeats_raw<-data_matrix_beforescaling_rsd[goodip,]
#write.table(goodfeats_raw,file="Tables/boxplots_file.raw.txt",sep="\t",row.names=FALSE)
goodfeats_raw<-goodfeats_raw[match(paste(goodfeats_temp$mz,"_",goodfeats_temp$time,sep=""),paste(goodfeats_raw$mz,"_",goodfeats_raw$time,sep="")),]
goodfeats_name<-as.character(goodfeats_name)
# save(goodfeats_name,goodfeats_temp,classlabels_orig,output_dir,boxplot.col.opt,cex.plots,ylab_text,file="boxplotdebug.Rda")
if(pairwise.correlation.analysis==TRUE)
{
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
temp_filename_1<-"Figures/Pairwise.correlation.plots.pdf"
# pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
par(mfrow=c(1,1),family="sans",cex=cex.plots,cex.main=0.7)
# cor1<-WGCNA::cor(t(goodfeats_temp[,-c(1:2)]))
rownames(goodfeats_temp)<-goodfeats_name
#Pairwise correlations between selected features
cor1<-WGCNA::cor(t(goodfeats_temp[,-c(1:2)]),nThreads=num_nodes,method=cor.method,use = 'p')
corpval1=apply(cor1,2,function(x){corPvalueStudent(x,n=ncol(goodfeats_temp[,-c(1:2)]))})
fdr_adjust_pvalue<-try(suppressWarnings(fdrtool(as.vector(cor1[upper.tri(cor1)]),statistic="correlation",verbose=FALSE,plot=FALSE)),silent=TRUE)
if(is(fdr_adjust_pvalue,"try-error")){
print(fdr_adjust_pvalue)
}
cor1[(abs(cor1)<abs.cor.thresh)]<-0
newnet <- cor1
newnet[upper.tri(newnet)][fdr_adjust_pvalue$qval > cor.fdrthresh] <- 0
newnet[lower.tri(newnet)] <- t(newnet)[lower.tri(newnet)]
newnet <- as.matrix(newnet)
corqval1=newnet
diag(corqval1)<-0
upperTriangle<-upper.tri(cor1, diag=F)
lowerTriangle<-lower.tri(cor1, diag=F)
corqval1[upperTriangle]<-fdr_adjust_pvalue$qval
corqval1[lowerTriangle]<-corqval1[upperTriangle]
cor1=newnet
rm(newnet)
# rownames(cor1)<-paste(goodfeats_temp[,c(1)],goodfeats_temp[,c(2)],sep="_")
# colnames(cor1)<-rownames(cor1)
#dendrogram="none",
h1<-heatmap.2(cor1,col=rev(brewer.pal(11,"RdBu")),Rowv=TRUE,Colv=TRUE,scale="none",key=TRUE, symkey=FALSE, density.info="none", trace="none",main="Pairwise correlations between selected features",cexRow = 0.5,cexCol = 0.5,cex.main=0.7)
upperTriangle<-upper.tri(cor1, diag=F) #turn into a upper triangle
cor1.upperTriangle<-cor1 #take a copy of the original cor-mat
cor1.upperTriangle[!upperTriangle]<-NA#set everything not in upper triangle o NA
correlations_melted<-na.omit(melt(cor1.upperTriangle, value.name ="correlationCoef")) #use melt to reshape the matrix into triplets, na.omit to get rid of the NA rows
colnames(correlations_melted)<-c("from", "to", "weight")
# save(correlations_melted,cor1,file="correlations_melted.Rda")
correlations_melted<-as.data.frame(correlations_melted)
correlations_melted$from<-paste("X",correlations_melted$from,sep="")
correlations_melted$to<-paste("Y",correlations_melted$to,sep="")
write.table(correlations_melted,file="Tables/pairwise.correlations.selectedfeatures.linkmatrix.txt",sep="\t",row.names=FALSE)
if(ncol(cor1)>1000){
netres<-plot_graph(correlations_melted,filename="sigfeats_top1000pairwisecor",interactive=FALSE,maxnodesperclass=1000,label.cex=network.label.cex,mtext.val="Top 1000 pairwise correlations between selected features")
}
netres<-try(plot_graph(correlations_melted,filename="sigfeats_pairwisecorrelations",interactive=FALSE,maxnodesperclass=NA,label.cex=network.label.cex,mtext.val="Pairwise correlations between selected features"),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
temp_filename_1<-"Figures/Boxplots.selectedfeats.normalized.pdf"
if(boxplot.type=="simple"){
pdf(temp_filename_1,height=plots.height,width=plots.width)
}
}
goodfeats_name<-as.character(goodfeats_name)
# save(goodfeats_name,goodfeats_temp,classlabels_orig,output_dir,boxplot.col.opt,cex.plots,ylab_text,plot.ylab_text,
# analysistype,boxplot.type,alphabetical.order,goodfeats_name,add.pvalues,add.jitter,file="boxplotdebug.Rda")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
# plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
# text(5, 8, "Boxplots of selected features using the\n normalized intensities/abundance levels",cex=1.5,font=2)
#plot.ylab_text1=paste("(Normalized) ",ylab_text,sep="")
#classlabels_paired<-cbind(as.character(classlabels[,1]),as.character(subject_inf),as.character(classlabels[,2]))
#classlabels_paired<-as.data.frame(classlabels_paired)
if(generate.boxplots==TRUE){
# print("Generating boxplots")
if(normalization.method!="none"){
plot.ylab_text1=paste("(Normalized) ",ylab_text,sep="")
if(pairedanalysis==TRUE){
#classlabels_paired<-cbind(classlabels[,1],subject_inf,classlabels[,2])
res<-get_boxplots(X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,
newdevice=FALSE,cex.plots=cex.plots,ylabel=plot.ylab_text1,name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,
alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,study.design=gsub(analysistype,pattern="repeat",replacement=""),
multiple.figures.perpanel=multiple.figures.perpanel,numnodes=num_nodes,
plot.height = plots.height,plot.width=plots.width,
filename="Figures/Boxplots.selectedfeats.normalized",alphacol = alpha.col,ggplot.type1=ggplot.type1,facet.nrow=facet.nrow)
}else{
res<-get_boxplots(X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,
newdevice=FALSE,cex.plots=cex.plots,ylabel=plot.ylab_text1,name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,
alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,study.design=analysistype,
multiple.figures.perpanel=multiple.figures.perpanel,numnodes=num_nodes,
plot.height = plots.height,plot.width=plots.width,
filename="Figures/Boxplots.selectedfeats.normalized",alphacol = alpha.col,ggplot.type1=ggplot.type1,facet.nrow=facet.nrow)
}
}else{
plot.boxplots.raw=TRUE
goodfeats_raw=goodfeats_temp
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(plot.boxplots.raw==TRUE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Boxplots.selectedfeats.raw.pdf"
if(boxplot.type=="simple"){
pdf(temp_filename_1,height=plots.height,width=plots.width)
}
}
# save(goodfeats_raw,goodfeats_temp,classlabels_raw_boxplots,classlabels_orig,
# output_dir,boxplot.col.opt,cex.plots,ylab_text,boxplot.type,ylab_text_raw,
# analysistype,multiple.figures.perpanel,alphabetical.order,goodfeats_name,plots.height,plots.width,file="boxplotrawdebug.Rda")
par(mfrow=c(1,1),family="sans",cex=cex.plots)
par(mfrow=c(1,1),family="sans",cex=cex.plots)
#get_boxplots(X=goodfeats_raw,Y=classlabels_raw_boxplots,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,alphacol=0.3,newdevice=FALSE,cex.plots=cex.plots,ylabel=" Intensity",name=goodfeats_name,add.pvalues=add.pvalues,
# add.jitter=add.jitter,alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,study.design=analysistype)
plot.ylab_text1=paste("",ylab_text,sep="")
if(pairedanalysis==TRUE){
#classlabels_paired<-cbind(classlabels[,1],subject_inf,classlabels[,2])
get_boxplots(X=goodfeats_raw,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,
newdevice=FALSE,cex.plots=cex.plots,ylabel=ylab_text_raw,name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,
alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,
study.design=gsub(analysistype,pattern="repeat",replacement=""),multiple.figures.perpanel=multiple.figures.perpanel,numnodes=num_nodes,
plot.height = plots.height,plot.width=plots.width,
filename="Figures/Boxplots.selectedfeats.raw",alphacol = alpha.col,ggplot.type1=ggplot.type1,facet.nrow=facet.nrow)
}else{
get_boxplots(X=goodfeats_raw,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,
newdevice=FALSE,cex.plots=cex.plots,ylabel=ylab_text_raw,name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,
alphabetical.order=alphabetical.order,boxplot.type=boxplot.type,
study.design=analysistype,multiple.figures.perpanel=multiple.figures.perpanel,numnodes=num_nodes,plot.height = plots.height,plot.width=plots.width,
filename="Figures/Boxplots.selectedfeats.raw",alphacol = alpha.col,ggplot.type1=ggplot.type1,facet.nrow=facet.nrow)
}
#try(dev.off(),silent=TRUE)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
if(FALSE)
{
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Barplots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1,bg="transparent") #, height = 5.5, width = 3)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
plot(0:10, type = "n", xaxt="n", yaxt="n", bty="n", xlab = "", ylab = "")
text(5, 8, "Barplots of selected features using the\n normalized intensities/adundance levels")
par(mfrow=c(1,1),family="sans",cex=cex.plots,pty="s")
try(get_barplots(feature_table_file,class_labels_file,X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir
,newdevice=FALSE,ylabel=ylab_text,cex.val=cex.plots,barplot.col.opt=barplot.col.opt,error.bar=error.bar),silent=TRUE)
###savelist=ls(),file="getbarplots.Rda")
if(featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="pls2wayrepeat" | featselmethod=="spls2wayrepeat" | featselmethod=="pls2way" | featselmethod=="spls2way" | featselmethod=="lm2wayanova" | featselmethod=="lm2wayanovarepeat")
{
#if(ggplot.type1==TRUE){
barplot.xaxis="Factor2"
# }else{
# }
}
get_barplots(feature_table_file,class_labels_file,X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,
newdevice=FALSE,ylabel=plot.ylab_text,cex.plots=cex.plots,barplot.col.opt=barplot.col.opt,error.bar=error.bar,
barplot.xaxis=barplot.xaxis,alphabetical.order=alphabetical.order,name=goodfeats_name,study.design=analysistype)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
if(FALSE){
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/Individual_sample_plots_selectedfeats.pdf"
#png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
#pdf(temp_filename_1)
pdf(temp_filename_1,width=plots.width,height=plots.height)
}
# par(mfrow=c(2,2))
par(mfrow=c(1,1),family="sans",cex=cex.plots)
#try(get_individualsampleplots(feature_table_file,class_labels_file,X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,newdevice=FALSE,ylabel=ylab_text,cex.val=cex.plots,sample.col.opt=sample.col.opt),silent=TRUE)
get_individualsampleplots(feature_table_file,class_labels_file,X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,newdevice=FALSE,ylabel=ylab_text,cex.plots=cex.plots,sample.col.opt=individualsampleplot.col.opt,alphabetical.order=alphabetical.order,name=goodfeats_name)
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
if(globalclustering==TRUE){
print("Performing global clustering using EM")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/GlobalclusteringEM.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
m1<-Mclust(t(data_m_fc_withfeats[,-c(1:2)]))
s1<-m1$classification #summary(m1)
EMcluster<-m1$classification
col_vec <- colorRampPalette(brewer.pal(10, "RdBu"))(length(levels(as.factor(classlabels_orig[,2]))))
#col_vec<-topo.colors(length(levels(as.factor(classlabels_orig[,2])))) #patientcolors #heatmap_cols[1:length(levels(classlabels_orig[,2]))]
t1<-table(EMcluster,classlabels_orig[,2])
par(mfrow=c(1,1))
plot(t1,col=col_vec,main="EM cluster labels\n using all features",cex.axis=1,ylab="Class",xlab="Cluster number")
par(xpd=TRUE)
try(legend("bottomright",legend=levels(classlabels_orig[,2]),text.col=col_vec,pch=13,cex=0.4),silent=TRUE)
par(xpd=FALSE)
# save(m1,EMcluster,classlabels_orig,file="EMres.Rda")
t1<-cbind(EMcluster,classlabels_orig[,2])
write.table(t1,file="Tables/EM_clustering_labels_using_allfeatures.txt",sep="\t")
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
print("Performing global clustering using HCA")
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/GlobalclusteringHCA.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
#if(FALSE)
{
#p1<-heatmap.2(as.matrix(data_m_fc_withfeats[,-c(1:2)]),scale="row",symkey=FALSE,col=topo.colors(n=256))
if(heatmap.col.opt=="RdBu"){
heatmap.col.opt="redblue"
}
heatmap_cols <- colorRampPalette(brewer.pal(10, "RdBu"))(256)
heatmap_cols<-rev(heatmap_cols)
if(heatmap.col.opt=="topo"){
heatmap_cols<-topo.colors(256)
heatmap_cols<-rev(heatmap_cols)
}else
{
if(heatmap.col.opt=="heat"){
heatmap_cols<-heat.colors(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="yellowblue"){
heatmap_cols<-colorRampPalette(c("yellow","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
#heatmap_cols<-blue2yellow(256) #colorRampPalette(c("yellow","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="redblue"){
heatmap_cols <- colorRampPalette(brewer.pal(10, "RdBu"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
#my_palette <- colorRampPalette(c("red", "yellow", "green"))(n = 299)
if(heatmap.col.opt=="redyellowgreen"){
heatmap_cols <- colorRampPalette(c("red", "yellow", "green"))(n = 299)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="yellowwhiteblue"){
heatmap_cols<-colorRampPalette(c("yellow2","white","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
if(heatmap.col.opt=="redwhiteblue"){
heatmap_cols<-colorRampPalette(c("red","white","blue"))(256) #colorRampPalette(c("yellow","white","blue"))(256)
heatmap_cols<-rev(heatmap_cols)
}else{
heatmap_cols <- colorRampPalette(brewer.pal(10, heatmap.col.opt))(256)
heatmap_cols<-rev(heatmap_cols)
}
}
}
}
}
}
}
#col_vec<-heatmap_cols[1:length(levels(classlabels_orig[,2]))]
c1<-WGCNA::cor(as.matrix(data_m_fc_withfeats[,-c(1:2)]),method=cor.method,use="pairwise.complete.obs") #cor(d1[,-c(1:2)])
d2<-as.dist(1-c1)
clust1<-hclust(d2)
hr <- try(hclust(as.dist(1-WGCNA::cor(t(data_m_fc_withfeats),method=cor.method,use="pairwise.complete.obs"))),silent=TRUE) #metabolites
#hc <- try(hclust(as.dist(1-WGCNA::cor(data_m,method=cor.method,use="pairwise.complete.obs"))),silent=TRUE) #samples
h73<-heatmap.2(as.matrix(data_m_fc_withfeats[,-c(1:2)]), Rowv=as.dendrogram(hr), Colv=as.dendrogram(clust1),
col=heatmap_cols, scale="row",key=TRUE, symkey=FALSE, density.info="none", trace="none",
cexRow=1, cexCol=1,xlab="",ylab="", main="Global clustering\n using all features",
ColSideColors=patientcolors,labRow = FALSE, labCol = FALSE)
# par(xpd=TRUE)
#legend("bottomleft",legend=levels(classlabels_orig[,2]),text.col=unique(patientcolors),pch=13,cex=0.4)
#par(xpd=FALSE)
clust_res<-cutreeDynamic(distM=as.matrix(d2),dendro=clust1,cutHeight = 0.95,minClusterSize = 2,deepSplit = 4,verbose = FALSE)
#mycl_samples <- cutree(clust1, h=max(clust1$height)/2)
HCAcluster<-clust_res
c2<-cbind(clust1$labels,HCAcluster)
rownames(c2)<-c2[,1]
c2<-as.data.frame(c2)
t1<-table(HCAcluster,classlabels_orig[,2])
plot(t1,col=col_vec,main="HCA (Cutree Dynamic) cluster labels\n using all features",cex.axis=1,ylab="Class",xlab="Cluster number")
par(xpd=TRUE)
try(legend("bottomright",legend=levels(classlabels_orig[,2]),text.col=col_vec,pch=13,cex=0.4),silent=TRUE)
par(xpd=FALSE)
t1<-cbind(HCAcluster,classlabels_orig[,2])
write.table(t1,file="Tables/HCA_clustering_labels_using_allfeatures.txt",sep="\t")
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
#dev.off()
}
else
{
#goodfeats_allfields<-as.data.frame(goodfeats)
goodfeats<-goodfeats[,-c(1:time_ind)]
}
}
if(length(goodip)>0){
try(dev.off(),silent=TRUE)
}
}
else{
try(dev.off(),silent=TRUE)
break;
}
if(analysismode=="classification" & WGCNAmodules==TRUE){
classlabels_temp<-classlabels_orig_wgcna #cbind(classlabels_sub[,1],classlabels)
#print(classlabels_temp)
data_temp<-data_matrix_beforescaling[,-c(1:2)]
cl<-makeCluster(num_nodes)
#clusterExport(cl,"do_rsd")
#feat_rsds<-parApply(cl,data_temp,1,do_rsd)
#rm(data_temp)
#feat_rsds<-abs(feat_rsds) #round(max_rsd,2)
#print(summary(feat_rsds))
#if(length(which(feat_rsds>0))>0)
{
X<-data_m_fc_withfeats #data_matrix[which(feat_rsds>=wgcnarsdthresh),]
# print(head(X))
# print(dim(X))
if(output.device.type!="pdf"){
temp_filename_1<-"Figures/WGCNA_preservation_plot.png"
png(temp_filename_1,width=plots.width,height=plots.height,res=plots.res,type=plots.type,units="in")
}
# #save(X,classlabels_temp,data_m_fc_withfeats,goodip,file="wgcna.Rda")
print("Performing WGCNA: generating preservation plot")
#preservationres<-try(do_wgcna(X=X,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)]),silent=TRUE)
#pres<-try(do_wgcna(X=X,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)]),silent=TRUE)
pres<-try(do_wgcna(X=X,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)]),silent=TRUE)
#pres<-do_wgcna(X=X,Y=classlabels_temp,sigfeats=data_m_fc_withfeats[goodip,c(1:2)]) #,silent=TRUE)
if(is(pres,"try-error")){
print("WGCNA could not be performed. Error: ")
print(pres)
}
if(output.device.type!="pdf"){
try(dev.off(),silent=TRUE)
}
}
}
#print(lf)
#print("next iteration")
#dev.off()
}
setwd(parentoutput_dir)
summary_res<-cbind(log2.fold.change.thresh_list,feat_eval,feat_sigfdrthresh,feat_sigfdrthresh_cv,feat_sigfdrthresh_permut,res_score_vec)
if(fdrmethod=="none"){
exp_fp<-round(fdrthresh*feat_eval)
}else{
exp_fp<-round(fdrthresh*feat_sigfdrthresh)
}
rank_num<-order(summary_res[,5],decreasing=TRUE)
##save(allmetabs_res,file="allmetabs_res.Rda")
if(featselmethod=="limma" | featselmethod=="limma2way" | featselmethod=="limma2wayrepeat" | featselmethod=="lmreg" | featselmethod=="logitreg"
| featselmethod=="lm2wayanova" | featselmethod=="lm1wayanova" | featselmethod=="lm1wayanovarepeat" | featselmethod=="lm2wayanovarepeat" | featselmethod=="wilcox" | featselmethod=="ttest" | featselmethod=="poissonreg" | featselmethod=="limma1wayrepeat" | featselmethod=="lmregrepeat")
{
summary_res<-cbind(summary_res,exp_fp)
#print("HERE13134")
type.statistic="pvalue"
if(length(allmetabs_res)>0){
#stat_val<-(-1)*log10(allmetabs_res[,4])
stat_val<-allmetabs_res[,4]
}
colnames(summary_res)<-c("RSD.thresh","Number of features left after RSD filtering","Number of features selected",paste(pred.eval.method,"-accuracy",sep=""),paste(pred.eval.method," permuted accuracy",sep=""),"Score","Expected_False_Positives")
}else{
#exp_fp<-round(fdrthresh*feat_sigfdrthresh)
#if(featselmethod=="MARS" | featselmethod=="RF" | featselmethod=="pls" | featselmethod=="o1pls" | featselmethod=="o2pls"){
exp_fp<-rep(NA,dim(summary_res)[1])
#}
# print("HERE13135")
if(length(allmetabs_res)>0){
stat_val<-(allmetabs_res[,4])
}
type.statistic="other"
summary_res<-cbind(summary_res,exp_fp)
colnames(summary_res)<-c("RSD.thresh","Number of features left after RSD filtering","Number of features selected",paste(pred.eval.method,"-accuracy",sep=""),paste(pred.eval.method," permuted accuracy",sep=""),"Score","Expected_False_Positives")
}
featselmethod<-parentfeatselmethod
file_name<-paste(parentoutput_dir,"/Results_summary_",featselmethod,".txt",sep="")
write.table(summary_res,file=file_name,sep="\t",row.names=FALSE)
if(output.device.type=="pdf"){
try(dev.off(),silent=TRUE)
}
#print("##############Level 1: processing complete###########")
if(length(best_feats)>1)
{
mz_index<-best_feats
#par(mfrow=c(1,1),family="sans",cex=cex.plots)
# get_boxplots(X=goodfeats_raw,Y=classlabels_orig,parentoutput_dir=output_dir,boxplot.col.opt=boxplot.col.opt,alphacol=0.3,newdevice=FALSE,cex=cex.plots,ylabel="raw Intensity",name=goodfeats_name,add.pvalues=add.pvalues,add.jitter=add.jitter,boxplot.type=boxplot.type)
setwd(output_dir)
###save(goodfeats,goodfeats_temp,classlabels_orig,classlabels_response_mat,output_dir,xlab_text,ylab_text,goodfeats_name,file="debugscatter.Rda")
if(analysismode=="regression"){
pdf("Figures/Scatterplots.pdf")
if(is.na(xlab_text)==TRUE){
xlab_text=""
}
# save(goodfeats_temp,classlabels_orig,output_dir,ylab_text,xlab_text,goodfeats_name,cex.plots,scatterplot.col.opt,file="scdebug.Rda")
get_scatter_plots(X=goodfeats_temp,Y=classlabels_orig,parentoutput_dir=output_dir,newdevice=FALSE,ylabel=ylab_text,xlabel=xlab_text,
name=goodfeats_name,cex.plots=cex.plots,scatterplot.col.opt=scatterplot.col.opt)
dev.off()
}
setwd(parentoutput_dir)
if(analysismode=="classification"){
log2.fold.change.thresh=log2.fold.change.thresh_list[best_logfc_ind]
#print(paste("Best results found at RSD threshold ", log2.fold.change.thresh,sep=""))
#print(best_acc)
#print(paste(kfold,"-fold CV accuracy ", best_acc,sep=""))
if(FALSE){
if(pred.eval.method=="CV"){
print(paste(kfold,"-fold CV accuracy: ", best_acc,sep=""))
}else{
if(pred.eval.method=="AUC"){
print(paste("Area under the curve (AUC) is : ", best_acc,sep=""))
}
}
}
# data_m<-parent_data_m
# data_m_fc<-data_m #[which(abs(mean_groups)>log2.fold.change.thresh),]
data_m_fc_withfeats<-data_matrix[,c(1:2)]
data_m_fc_withfeats<-cbind(data_m_fc_withfeats,data_m_fc)
#when using a feature table generated by apLCMS
rnames<-paste("mzid_",seq(1,dim(data_m_fc)[1]),sep="")
#print(best_limma_res[1:3,])
goodfeats<-best_limma_res[order(best_limma_res$mz),-c(1:2)]
#goodfeats<-best_limma_res[,-c(1:2)]
goodfeats_all<-goodfeats
goodfeats<-goodfeats_all
rm(goodfeats_all)
}
try(unlink("Rplots.pdf"),silent=TRUE)
if(globalcor==TRUE){
if(length(best_feats)>2){
if(is.na(abs.cor.thresh)==FALSE){
#setwd(parentoutput_dir)
# print("##############Level 2: Metabolome wide correlation network analysis of differentially expressed metabolites###########")
#print(paste("Generating metabolome-wide ",cor.method," correlation network using RSD threshold ", log2.fold.change.thresh," results",sep=""))
#print(parentoutput_dir)
#print(output_dir)
setwd(output_dir)
data_m_fc_withfeats<-as.data.frame(data_m_fc_withfeats)
goodfeats<-as.data.frame(goodfeats)
#print(goodfeats[1:4,])
sigfeats_index<-which(data_m_fc_withfeats$mz%in%goodfeats$mz)
sigfeats<-sigfeats_index
if(globalcor==TRUE){
#outloc<-paste(parentoutput_dir,"/Allcornetworksigfeats","log2fcthresh",log2.fold.change.thresh,"/",sep="")
#outloc<-paste(parentoutput_dir,"/Stage2","/",sep="")
#dir.create(outloc)
#setwd(outloc)
#dir.create("CorrelationAnalysis")
#setwd("CorrelationAnalysis")
if(networktype=="complete"){
if(output.device.type=="pdf"){
mwan_newdevice=FALSE
}else{
mwan_newdevice=TRUE
}
#gohere
# save(data_matrix,sigfeats_index,output_dir,max.cor.num,net_node_colors,net_legend,cor.method,abs.cor.thresh,cor.fdrthresh,file="r1.Rda")
mwan_fdr<-try(do_cor(data_matrix,subindex=sigfeats_index,targetindex=NA,outloc=output_dir,networkscope="global",cor.method,abs.cor.thresh,cor.fdrthresh,
max.cor.num,net_node_colors,net_legend,newdevice=TRUE),silent=TRUE)
}else{
if(networktype=="GGM"){
mwan_fdr<-try(get_partial_cornet(data_matrix, sigfeats.index=sigfeats_index,targeted.index=NA,networkscope="global",
cor.method,abs.cor.thresh,cor.fdrthresh,outloc=output_dir,net_node_colors,net_legend,newdevice=TRUE),silent=TRUE)
}else{
print("Invalid option. Please use complete or GGM.")
}
}
#print("##############Level 2: processing complete###########")
}else{
#print("##############Skipping Level 2: global correlation analysis###########")
}
#temp_data_m<-cbind(allmetabs_res[,c("mz","time")],stat_val)
if(analysismode=="classification"){
# classlabels_temp<-cbind(classlabels_sub[,1],classlabels)
#do_wgcna(X=data_matrix,Y=classlabels,sigfeats.index=sigfeats_index)
}
#print("##############Level 3: processing complete###########")
#print("#########################")
}
}
else{
cat(paste("Can not perform network analysis. Too few metabolites.",sep=""),sep="\n")
}
}
}
if(FALSE){
if(length(featselmethod)>1){
abs.cor.thresh=NA
globalcor=FALSE
}
}
###save(stat_val,allmetabs_res,check_names,metab_annot,kegg_species_code,database,reference_set,type.statistic,file="fcsdebug.Rda")
setwd(output_dir)
unlink("fdrtoolB.pdf",force=TRUE)
if(is.na(target.data.annot)==FALSE){
#dir.create("NetworkAnalysis")
#setwd("NetworkAnalysis")
colnames(target.data.annot)<-c("mz","time","KEGGID")
if(length(check_names)<1){
allmetabs_res<-cbind(stat_val,allmetabs_res)
metab_data<-merge(allmetabs_res,target.data.annot,by=c("mz","time"))
dup.feature.check=TRUE
}else{
allmetabs_res_withnames<-cbind(stat_val,allmetabs_res_withnames)
metab_data<-merge(allmetabs_res_withnames,target.data.annot,by=c("Name"))
dup.feature.check=FALSE
}
###save(stat_val,allmetabs_res,check_names,metab_annot,kegg_species_code,database,metab_data,reference_set,type.statistic,file="fcsdebug.Rda")
if(length(check_names)<1){
metab_data<-metab_data[,c("KEGGID","stat_val","mz","time")]
colnames(metab_data)<-c("KEGGID","Statistic","mz","time")
}else{
metab_data<-metab_data[,c("KEGGID","stat_val")]
colnames(metab_data)<-c("KEGGID","Statistic")
}
# ##save(metab_annot,kegg_species_code,database,metab_data,reference_set,type.statistic,file="fcsdebug.Rda")
#metab_data: KEGGID, Statistic
fcs_res<-get_fcs(kegg_species_code=kegg_species_code,database=database,target.data=metab_data,target.data.annot=target.data.annot,reference_set=reference_set,type.statistic=type.statistic,fcs.min.hits=fcs.min.hits)
###save(fcs_res,file="fcs_res.Rda")
write.table(fcs_res,file="Tables/functional_class_scoring.txt",sep="\t",row.names=TRUE)
if(length(fcs_res)>0){
if(length(which(fcs_res$pvalue<pvalue.thresh))>10){
fcs_res_filt<-fcs_res[which(fcs_res$pvalue<pvalue.thresh)[1:10],]
}else{
fcs_res_filt<-fcs_res[which(fcs_res$pvalue<pvalue.thresh),]
}
fcs_res_filt<-fcs_res_filt[order(fcs_res_filt$pvalue,decreasing=FALSE),]
fcs_res_filt$Name<-gsub(as.character(fcs_res_filt$Name),pattern=" - Homo sapiens \\(human\\)",replacement="")
fcs_res_filt$pvalue=(-1)*log10(fcs_res_filt$pvalue)
fcs_res_filt<-fcs_res_filt[order(fcs_res_filt$pvalue,decreasing=FALSE),]
print(Sys.time())
p=ggbarplot(fcs_res_filt,x="Name",y="pvalue",orientation="horiz",ylab="(-)log10pvalue",xlab="",color="orange",fill="orange",title=paste("Functional classes significant at p<",pvalue.thresh," threhsold",sep=""))
p=p+font("title",size=10)
p=p+font("x.text",size=10)
p=p+font("y.text",size=10)
p=p + geom_hline(yintercept = (-1)*log10(pvalue.thresh), linetype="dotted",size=0.7)
print(Sys.time())
pdf("Figures/Functional_Class_Scoring.pdf")
print(p)
dev.off()
}
print(paste(featselmethod, " processing done.",sep=""))
}
setwd(parentoutput_dir)
#print("Note A: Please note that log2 fold-change based filtering is only applicable to two-class comparison.
#log2fcthresh of 0 will remove only those features that have exactly sample mean intensities between the two groups.
#More features will be filtered prior to FDR as log2fcthresh increases.")
#print("Note C: Please make sure all the packages are installed. You can use the command install.packages(packagename) to install packages.")
#print("Eg: install.packages(\"mixOmics\"),install.packages(\"snow\"), install.packages(\"e1071\"), biocLite(\"limma\"), install.packages(\"gplots\").")
#print("Eg: install.packages("mixOmics""),install.packages("snow"), install.packages("e1071"), biocLite("limma"), install.packages("gplots").")
##############################
##############################
###############################
if(length(best_feats)>0){
goodfeats<-as.data.frame(goodfeats)
#goodfeats<-data_matrix_beforescaling[which(data_matrix_beforescaling$mz%in%goodfeats$mz),]
}else{
goodfeats-{}
}
cur_date<-Sys.time()
cur_date<-gsub(x=cur_date,pattern="-",replacement="")
cur_date<-gsub(x=cur_date,pattern=":",replacement="")
cur_date<-gsub(x=cur_date,pattern=" ",replacement="")
if(saveRda==TRUE){
fname<-paste("Analysis_",featselmethod,"_",cur_date,".Rda",sep="")
###savelist=ls(),file=fname)
}
################################
fname_del<-paste(output_dir,"/Rplots.pdf",sep="")
try(unlink(fname_del),silent=TRUE)
if(removeRda==TRUE)
{
unlink("*.Rda",force=TRUE,recursive=TRUE)
#unlink("pairwise_results/*.Rda",force=TRUE,recursive=TRUE)
}
cat("",sep="\n")
return(list("diffexp_metabs"=goodfeats_allfields, "mw.an.fdr"=mwan_fdr,"targeted.an.fdr"=targetedan_fdr,
"classlabels"=classlabels_orig,"all_metabs"=allmetabs_res_withnames,"roc_res"=roc_res))
}
|
context("ORF helpers")
library(ORFik)
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 5)),
ranges = IRanges(start = c(1, 10, 20, 30, 40),
end = c(5, 15, 25, 35, 45)),
strand = Rle(strand(rep("+", 5))))
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(1, 10, 20), end = c(5, 15, 25)),
strand = Rle(strand(rep("+", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(10, 20, 30),
end = c(15, 25, 35)),
strand = Rle(strand(rep("+", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(20, 30, 40),
end = c(25, 35, 45)),
strand = Rle(strand(rep("+", 3))))
# Create data for get_all_ORFs_as_GRangesList test_that#1
seqname <- c("tx1", "tx2", "tx3", "tx4")
seqs <- c("ATGGGTATTTATA", "ATGGGTAATA",
"ATGGG", "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
grIn1 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(21, 10), end = c(23, 19)),
strand = rep("-", 2), names = rep(seqname[1], 2))
grIn2 <- GRanges(seqnames = rep("1", 1),
ranges = IRanges(start = c(1010), end = c(1019)),
strand = rep("-", 1), names = rep(seqname[2], 1))
grIn3 <- GRanges(seqnames = rep("1", 1),
ranges = IRanges(start = c(2000), end = c(2004)),
strand = rep("-", 1), names = rep(seqname[3], 1))
grIn4 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(3030, 3000), end = c(3036, 3029)),
strand = rep("-", 2), names = rep(seqname[4], 2))
grl <- GRangesList(grIn1, grIn2, grIn3, grIn4)
names(grl) <- seqname
test_that("defineTrailer works as intended for plus strand", {
#at the start
trailer <- defineTrailer(ORFranges, transcriptRanges)
expect_is(trailer, "GRanges")
expect_equal(start(trailer), c(30, 40))
expect_equal(end(trailer), c(35, 45))
#middle
trailer2 <- defineTrailer(ORFranges2, transcriptRanges)
expect_equal(start(trailer2), 40)
expect_equal(end(trailer2), 45)
#at the end
trailer3 <- defineTrailer(ORFranges3, transcriptRanges)
expect_is(trailer3, "GRanges")
expect_equal(length(trailer3), 0)
#trailer size 3
trailer4 <- defineTrailer(ORFranges2, transcriptRanges, 3)
expect_equal(start(trailer4), 40)
expect_equal(end(trailer4), 42)
})
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 5)),
ranges = IRanges(start = rev(c(1, 10, 20, 30, 40)),
end = rev(c(5, 15, 25, 35, 45))),
strand = Rle(strand(rep("-", 5))))
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = rev(c(1, 10, 20)),
end = rev(c(5, 15, 25))),
strand = Rle(strand(rep("-", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = rev(c(10, 20, 30)),
end = rev(c(15, 25, 35))),
strand = Rle(strand(rep("-", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = rev(c(20, 30, 40)),
end = rev(c(25, 35, 45))),
strand = Rle(strand(rep("-", 3))))
test_that("defineTrailer works as intended for minus strand", {
#at the end
trailer <- defineTrailer(ORFranges, transcriptRanges)
expect_is(trailer, "GRanges")
expect_is(trailer, "GRanges")
expect_equal(length(trailer), 0)
#middle
trailer2 <- defineTrailer(ORFranges2, transcriptRanges)
expect_equal(start(trailer2), 1)
expect_equal(end(trailer2), 5)
#at the start
trailer3 <- defineTrailer(ORFranges3, transcriptRanges)
expect_equal(start(trailer3), c(1, 10))
expect_equal(end(trailer3), c(5, 15))
#trailer size 3
trailer4 <- defineTrailer(ORFranges2, transcriptRanges, 3)
expect_equal(start(trailer4), 3)
expect_equal(end(trailer4), 5)
})
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = rev(c(10, 20, 30, 40)),
end = rev(c(15, 25, 35, 45))),
strand = Rle(strand(rep("-", 4))))
test_that("findORFsFasta works as intended", {
filePath <- system.file("extdata/Danio_rerio_sample", "genome_dummy.fasta",
package = "ORFik")
test_result <- findORFsFasta(filePath, longestORF = FALSE)
expect_is(test_result, "GRanges")
expect_equal(length(test_result), 3990)
## allow circular
test_result <- findORFsFasta(filePath, longestORF = FALSE,
is.circular = TRUE)
expect_is(test_result, "GRanges")
expect_equal(length(test_result), 3998)
})
test_that("findORFs works as intended for plus strand", {
#longestORF F with different frames
test_ranges <- findORFs("ATGGGTAATA", "ATG|TGG|GGG", "TAA|AAT|ATA",
longestORF = FALSE, minimumLength = 0)
expect_is(test_ranges, "IRangesList")
expect_equal(unlist(start(test_ranges), use.names = FALSE), c(1, 2, 3))
expect_equal(unlist(end(test_ranges), use.names = FALSE), c(9, 10, 8))
#longestORF T
test_ranges <- findORFs("ATGATGTAATAA", "ATG|TGA|GGG", "TAA|AAT|ATA",
longestORF = TRUE, minimumLength = 0)
expect_is(test_ranges, "IRangesList")
expect_equal(unlist(start(test_ranges), use.names = FALSE), c(1, 2))
expect_equal(unlist(end(test_ranges), use.names = FALSE), c(9, 10))
#longestORF F with minimum size 12 -> 6 + 3*2
test_ranges <- findORFs("ATGTGGAATATGATGATGATGTAATAA", "ATG|TGA|GGG",
"TAA|AAT|ATA", longestORF = FALSE, minimumLength = 2)
expect_is(test_ranges, "IRangesList")
expect_equal(unlist(start(test_ranges), use.names = FALSE), c(10, 13, 11, 14))
expect_equal(unlist(end(test_ranges), use.names = FALSE), c(24, 24, 25, 25))
#longestORF T with minimum size 12 -> 6 + 3*2
test_ranges <- findORFs("ATGTGGAATATGATGATGATGTAATAA", "ATG|TGA|GGG",
"TAA|AAT|ATA", longestORF = TRUE, minimumLength = 2)
expect_is(test_ranges, "IRangesList")
expect_equal(unlist(start(test_ranges), use.names = FALSE), c(10, 11))
expect_equal(unlist(end(test_ranges), use.names = FALSE), c(24, 25))
#find nothing
test_ranges <- findORFs("B", "ATG|TGA|GGG", "TAA|AAT|ATA", minimumLength = 2)
expect_is(test_ranges, "IRangesList")
expect_equal(length(test_ranges), 0)
})
test_that("findMapORFs works as intended for minus strand", {
#longestORF F with different frames
test_ranges <-findMapORFs(grl, seqs,
"ATG|TGG|GGG",
"TAA|AAT|ATA",
longestORF = FALSE,
minimumLength = 0)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "-")
expect_equal(as.integer(unlist(start(test_ranges))),
c(21, 10, 1011, 1010, 1012))
expect_equal(as.integer(unlist(end(test_ranges))),
c(22, 19, 1019, 1018, 1017))
expect_equal(as.integer(unlist(width(test_ranges))), c(2, 10, 9, 9, 6))
expect_equal(sum(widthPerGroup(test_ranges) %% 3), 0)
})
# Create data for get_all_ORFs_as_GRangesList test_that#2
namesTx <- c("tx1", "tx2")
seqs <- c("ATGATGTAATAA", "ATGTAA")
grIn1 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(1, 3), end = c(1, 13)),
strand = rep("+", 2), names = rep(namesTx[1], 2))
grIn2<- GRanges(seqnames = rep("1", 6),
ranges = IRanges(start = c(1, 1000, 2000, 3000, 4000, 5000),
end = c(1, 1000, 2000, 3000, 4000, 5000)),
strand = rep("+", 6), names = rep(namesTx[2], 6))
grl <- GRangesList(grIn1, grIn2)
names(grl) <- namesTx
test_that("mapToGRanges works as intended for strange exons positive strand", {
#longestORF F with different frames
test_ranges <- findMapORFs(grl,seqs,
"ATG|TGG|GGG",
"TAA|AAT|ATA",
longestORF = FALSE,
minimumLength = 0)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges,FALSE)[1], "+")
expect_equal(as.integer(unlist(start(test_ranges))), c(1, 3, 5,1, 1000, 2000,
3000, 4000, 5000))
expect_equal(as.integer(unlist(end(test_ranges))), c(1, 10, 10,1, 1000, 2000,
3000, 4000, 5000))
expect_equal(sum(widthPerGroup(test_ranges) %% 3), 0)
expect_equal(unlist(grl)$names,c("tx1", "tx1", "tx2", "tx2", "tx2", "tx2",
"tx2", "tx2"))
expect_equal(unlist(test_ranges)$names,c("tx1_1", "tx1_1", "tx1_2", "tx2_1",
"tx2_1", "tx2_1", "tx2_1", "tx2_1",
"tx2_1"))
})
# Create data for get_all_ORFs_as_GRangesList test_that#3
ranges(grIn1) <- rev(ranges(grIn1))
strand(grIn1) <- rep("-", length(grIn1))
ranges(grIn2) <- rev(ranges(grIn2))
strand(grIn2) <- rep("-", length(grIn2))
grl <- GRangesList(grIn1, grIn2)
names(grl) <- namesTx
test_that("mapToGRanges works as intended for strange exons negative strand", {
#longestORF F with different frames
test_ranges <- findMapORFs(grl,seqs,
"ATG|TGG|GGG",
"TAA|AAT|ATA",
longestORF = FALSE,
minimumLength = 0)
test_ranges <- sortPerGroup(test_ranges)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "-")
expect_equal(as.integer(unlist(start(test_ranges))), c(5, 5, 5000, 4000, 3000,
2000, 1000, 1))
expect_equal(as.integer(unlist(end(test_ranges))), c(13, 10, 5000, 4000, 3000,
2000, 1000, 1))
expect_equal(sum(widthPerGroup(test_ranges) %% 3), 0)
expect_equal(unlist(grl)$names,c("tx1", "tx1", "tx2", "tx2", "tx2",
"tx2", "tx2", "tx2"))
expect_equal(unlist(test_ranges)$names,c("tx1_1","tx1_2", "tx2_1", "tx2_1",
"tx2_1", "tx2_1", "tx2_1", "tx2_1"))
})
namesTx <- c("tx1", "tx2", "tx3", "tx4")
seqs <- c("ATGATGTAATAA", "ATGTAA", "AAAATGAAATAAA", "AAAATGAAATAA")
grIn3 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(2000, 2008), end = c(2004, 2015)),
strand = rep("+", 2), names = rep(namesTx[3], 2))
grIn4 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(3030, 3000), end = c(3036, 3004)),
strand = rep("-", 2), names = rep(namesTx[4], 2))
grl <- GRangesList(grIn1, grIn2, grIn3, grIn4)
names(grl) <- namesTx
test_that("mapToGRanges works as intended for strange exons both strands", {
#longestORF F with different frames
test_ranges <- findMapORFs(grl,seqs,
"ATG|TGG|GGG",
"TAA|AAT|ATA",
longestORF = FALSE,
minimumLength = 0)
test_ranges <- sortPerGroup(test_ranges)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "-")
expect_equal(as.integer(unlist(start(test_ranges))),
c(5, 5, 5000, 4000, 3000, 2000, 1000, 1, 2003,
2008, 3030, 3000))
expect_equal(as.integer(unlist(end(test_ranges))),
c(13, 10, 5000, 4000, 3000, 2000, 1000, 1, 2004,
2014, 3033, 3004))
expect_equal(sum(widthPerGroup(test_ranges) %% 3), 0)
})
test_that("pmapFromTranscriptsF works as intended", {
xStart = c(1, 5, 10, 1000, 5, 6, 1, 1)
xEnd = c(6, 8, 12, 2000, 10, 10, 3, 1)
TS = c(1,5, 1000, 1005, 1008, 2000, 2003, 4000, 5000, 7000, 85, 70,
101, 9)
TE = c(3, 9, 1003, 1006, 1010, 2001, 2020, 4500, 6000, 8000, 89,
82, 105, 9)
indices = c(1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 6, 7)
strand = c(rep("+", 10), rep("-", 3), "+")
seqnames = rep("1", length(TS))
result <- split(IRanges(xStart, xEnd), c(seq.int(1, 5), 5, 6, 7))
transcripts <- split(GRanges(seqnames, IRanges(TS, TE), strand),
indices)
test_ranges <- pmapFromTranscriptF(result, transcripts, TRUE)
expect_is(test_ranges, "GRangesList")
expect_equal(start(unlistGrl(test_ranges)),
c(1, 5, 1005, 1008, 2010, 5498, 7000, 85, 78, 78,
103, 9))
expect_equal(end(unlistGrl(test_ranges)),
c(3, 7, 1006, 1009, 2012, 6000, 7497, 85, 82, 82,
105, 9))
})
test_that("GRangesList sorting works as intended", {
test_ranges <- grl[3:4]
test_ranges <- sortPerGroup(test_ranges)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "+")
expect_equal(as.integer(unlist(start(test_ranges))), c(2000,
2008, 3030, 3000))
expect_equal(as.integer(unlist(end(test_ranges))), c(2004,
2015, 3036, 3004))
test_ranges <- sortPerGroup(test_ranges, ignore.strand = TRUE)
expect_equal(as.integer(unlist(start(test_ranges))), c(2000,
2008, 3000, 3030))
expect_equal(as.integer(unlist(end(test_ranges))), c(2004,
2015, 3004, 3036))
})
test_that("startCodons works as intended", {
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(1, 10, 20),
end = c(5, 15, 25)),
strand = Rle(strand(rep("+", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(20, 30, 40),
end = c(25, 35, 45)),
strand = Rle(strand(rep("+", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(30, 40, 50),
end = c(35, 45, 55)),
strand = Rle(strand(rep("+", 3))))
ORFranges4 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(50, 40, 30),
end = c(55, 45, 35)),
strand = Rle(strand(rep("-", 3))))
ORFranges5 <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = c(1000, 1002, 1004, 1006),
end = c(1000, 1002, 1004, 1006)),
strand = Rle(strand(rep("+", 4))))
ORFranges6 <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = c(1002, 1004, 1005, 1006),
end = c(1002, 1004, 1005, 1006)),
strand = Rle(strand(rep("+", 4))))
ORFranges4 <- sort(ORFranges4, decreasing = TRUE)
names(ORFranges) <- rep("tx1_1" ,3)
names(ORFranges2) <- rep("tx1_2", 3)
names(ORFranges3) <- rep("tx1_3", 3)
names(ORFranges4) <- rep("tx4_1", 3)
names(ORFranges5) <- rep("tx1_4", 4)
names(ORFranges6) <- rep("tx1_5", 4)
grl <- GRangesList(tx1_1 = ORFranges, tx1_2 = ORFranges2,
tx1_3 = ORFranges3, tx4_1 = ORFranges4,
tx1_4 = ORFranges5, tx1_5 = ORFranges6)
test_ranges <- startCodons(grl, TRUE)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "+")
expect_equal(as.integer(unlist(start(test_ranges))),
c(1, 20, 30, 53, 1000, 1002, 1004, 1002, 1004))
expect_equal(as.integer(unlist(end(test_ranges))),
c(3, 22, 32, 55, 1000, 1002, 1004, 1002, 1005))
})
test_that("stopCodons works as intended", {
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(1, 10, 20),
end = c(5, 15, 25)),
strand = Rle(strand(rep("+", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(20, 30, 40),
end = c(25, 35, 45)),
strand = Rle(strand(rep("+", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(30, 40, 50),
end = c(35, 45, 55)),
strand = Rle(strand(rep("+", 3))))
ORFranges4 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(30, 40, 50),
end = c(35, 45, 55)),
strand = Rle(strand(rep("-", 3))))
ORFranges5 <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = c(1000, 1002, 1004, 1006),
end = c(1000, 1002, 1004, 1006)),
strand = Rle(strand(rep("+", 4))))
ORFranges6 <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = c(1002, 1003, 1004, 1006),
end = c(1002, 1003, 1004, 1006)),
strand = Rle(strand(rep("+", 4))))
ORFranges4 <- sort(ORFranges4, decreasing = TRUE)
names(ORFranges) <- rep("tx1_1" ,3)
names(ORFranges2) <- rep("tx1_2", 3)
names(ORFranges3) <- rep("tx1_3", 3)
names(ORFranges4) <- rep("tx4_1", 3)
names(ORFranges5) <- rep("tx1_4", 4)
names(ORFranges6) <- rep("tx1_5", 4)
grl <- GRangesList(tx1_1 = ORFranges, tx1_2 = ORFranges2,
tx1_3 = ORFranges3, tx4_1 = ORFranges4,
tx1_4 = ORFranges5, tx1_5 = ORFranges6)
test_ranges <- stopCodons(grl, TRUE)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "+")
expect_equal(as.integer(unlist(start(test_ranges))), c(23,43, 53, 30, 1002,
1004, 1006, 1003,
1006))
expect_equal(as.integer(unlist(end(test_ranges))), c(25,45, 55, 32, 1002,
1004, 1006, 1004,
1006))
# check with meta columns
ORFranges$names <- rep("tx1_1" ,3)
ORFranges2$names <- rep("tx1_2", 3)
ORFranges3$names <- rep("tx1_3", 3)
ORFranges4$names <- rep("tx4_1", 3)
ORFranges5$names <- rep("tx1_4", 4)
ORFranges6$names <- rep("tx1_5", 4)
grl <- GRangesList(tx1_1 = ORFranges, tx1_2 = ORFranges2,
tx1_3 = ORFranges3, tx4_1 = ORFranges4,
tx1_4 = ORFranges5, tx1_5 = ORFranges6)
test_ranges <- stopCodons(grl, TRUE)
negStopss <- GRangesList(tx1_1 = GRanges("1", c(7, 5, 3, 1), "-"),
tx1_2 = GRanges("1", c(15, 13, 11, 9), "-"))
expect_equal(stopSites(stopCodons(negStopss, FALSE), is.sorted = TRUE),
c(1,9))
negStopss <- GRangesList(tx1_1 = GRanges("1", IRanges(c(9325,8012),
c(9418, 8013)),
"-"))
expect_equal(startSites(stopCodons(negStopss, FALSE), is.sorted = TRUE),
9325)
})
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(1, 10, 20), end = c(5, 15, 25)),
strand = Rle(strand(rep("+", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(10, 20, 30),
end = c(15, 25, 35)),
strand = Rle(strand(rep("+", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(20, 30, 40),
end = c(25, 35, 45)),
strand = Rle(strand(rep("+", 3))))
ORFranges$names <- rep("tx1_1" ,3)
ORFranges2$names <- rep("tx1_2", 3)
ORFranges3$names <- rep("tx1_3", 3)
orfs <- c(ORFranges,ORFranges2,ORFranges3)
grl <- groupGRangesBy(orfs, orfs$names)
test_that("startRegion works as intended", {
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 5)),
ranges = IRanges(start = c(1, 10, 20, 30, 40),
end = c(5, 15, 25, 35, 45)),
strand = Rle(strand(rep("+", 5))))
transcriptRanges <- groupGRangesBy(transcriptRanges,
rep("tx1", length(transcriptRanges)))
test_ranges <- startRegion(grl, transcriptRanges)
expect_equal(as.integer(unlist(start(test_ranges))), c(1, 4, 10, 14, 20))
expect_equal(as.integer(unlist(end(test_ranges))), c(3, 5, 12, 15, 22))
test_ranges <- startRegion(grl)
expect_equal(as.integer(unlist(end(test_ranges))), c(3, 12, 22))
})
test_that("stopRegion works as intended", {
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 5)),
ranges = IRanges(start = c(1, 10, 20, 30, 40),
end = c(5, 15, 25, 35, 45)),
strand = Rle(strand(rep("+", 5))))
transcriptRanges <- groupGRangesBy(transcriptRanges,
rep("tx1", length(transcriptRanges)))
test_ranges <- stopRegion(grl, transcriptRanges)
expect_equal(as.integer(unlist(start(test_ranges))), c(23, 30, 33, 40, 43))
expect_equal(as.integer(unlist(end(test_ranges))), c(25, 31, 35, 41, 45))
test_ranges <- stopRegion(grl)
expect_equal(as.integer(unlist(end(test_ranges))), c(25, 35, 45))
})
test_that("uniqueGroups works as intended", {
grl[3] <- grl[1]
test_ranges <- uniqueGroups(grl)
expect_is(test_ranges, "GRangesList")
expect_equal(strandPerGroup(test_ranges, FALSE), c("+", "+"))
expect_equal(length(test_ranges), 2)
expect_equal(names(test_ranges), c("1", "2"))
})
test_that("uniqueOrder works as intended", {
gr1 <- GRanges("1", IRanges(1,10), "+")
gr2 <- GRanges("1", IRanges(20, 30), "+")
# make a grl with duplicated ORFs (gr1 twice)
grl <- GRangesList(tx1_1 = gr1, tx2_1 = gr2, tx3_1 = gr1)
test_result <- uniqueOrder(grl) # remember ordering
expect_equal(test_result, as.integer(c(1,2,1)))
})
test_that("findUORFs works as intended", {
# Load annotation
txdbFile <- system.file("extdata", "hg19_knownGene_sample.sqlite",
package = "GenomicFeatures")
txdb <- loadTxdb(txdbFile)
fiveUTRs <- loadRegion(txdb, "leaders")
cds <- loadRegion(txdb, "cds")
if (requireNamespace("BSgenome.Hsapiens.UCSC.hg19")) {
# Normally you would not use a BSgenome, but some custome fasta-
# annotation you have for your species
uorfs <- findUORFs(fiveUTRs["uc001bum.2"],
BSgenome.Hsapiens.UCSC.hg19::Hsapiens,
"ATG", cds = cds)
expect_equal(names(uorfs[1]), "uc001bum.2_5")
expect_equal(length(uorfs), 1)
}
})
test_that("artificial.orfs works as intended", {
cds <- GRangesList(tx1 = GRanges("chr1", IRanges(start = c(100), end = 150),"+"),
tx2 = GRanges("chr1", IRanges(200, 205), "+"),
tx3 = GRanges("chr1", IRanges(300, 311), "+"),
tx4 = GRanges("chr1", IRanges(400, 999), "+"),
tx5 = GRanges("chr1", IRanges(500, 511), "-"))
res <- artificial.orfs(cds)
expect_equal(100, startSites(res[1]))
expect_equal(150, stopSites(res[1]))
})
| /tests/testthat/test_ORFs_helpers.R | permissive | Roleren/ORFik | R | false | false | 25,131 | r | context("ORF helpers")
library(ORFik)
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 5)),
ranges = IRanges(start = c(1, 10, 20, 30, 40),
end = c(5, 15, 25, 35, 45)),
strand = Rle(strand(rep("+", 5))))
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(1, 10, 20), end = c(5, 15, 25)),
strand = Rle(strand(rep("+", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(10, 20, 30),
end = c(15, 25, 35)),
strand = Rle(strand(rep("+", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(20, 30, 40),
end = c(25, 35, 45)),
strand = Rle(strand(rep("+", 3))))
# Create data for get_all_ORFs_as_GRangesList test_that#1
seqname <- c("tx1", "tx2", "tx3", "tx4")
seqs <- c("ATGGGTATTTATA", "ATGGGTAATA",
"ATGGG", "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
grIn1 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(21, 10), end = c(23, 19)),
strand = rep("-", 2), names = rep(seqname[1], 2))
grIn2 <- GRanges(seqnames = rep("1", 1),
ranges = IRanges(start = c(1010), end = c(1019)),
strand = rep("-", 1), names = rep(seqname[2], 1))
grIn3 <- GRanges(seqnames = rep("1", 1),
ranges = IRanges(start = c(2000), end = c(2004)),
strand = rep("-", 1), names = rep(seqname[3], 1))
grIn4 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(3030, 3000), end = c(3036, 3029)),
strand = rep("-", 2), names = rep(seqname[4], 2))
grl <- GRangesList(grIn1, grIn2, grIn3, grIn4)
names(grl) <- seqname
test_that("defineTrailer works as intended for plus strand", {
#at the start
trailer <- defineTrailer(ORFranges, transcriptRanges)
expect_is(trailer, "GRanges")
expect_equal(start(trailer), c(30, 40))
expect_equal(end(trailer), c(35, 45))
#middle
trailer2 <- defineTrailer(ORFranges2, transcriptRanges)
expect_equal(start(trailer2), 40)
expect_equal(end(trailer2), 45)
#at the end
trailer3 <- defineTrailer(ORFranges3, transcriptRanges)
expect_is(trailer3, "GRanges")
expect_equal(length(trailer3), 0)
#trailer size 3
trailer4 <- defineTrailer(ORFranges2, transcriptRanges, 3)
expect_equal(start(trailer4), 40)
expect_equal(end(trailer4), 42)
})
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 5)),
ranges = IRanges(start = rev(c(1, 10, 20, 30, 40)),
end = rev(c(5, 15, 25, 35, 45))),
strand = Rle(strand(rep("-", 5))))
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = rev(c(1, 10, 20)),
end = rev(c(5, 15, 25))),
strand = Rle(strand(rep("-", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = rev(c(10, 20, 30)),
end = rev(c(15, 25, 35))),
strand = Rle(strand(rep("-", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = rev(c(20, 30, 40)),
end = rev(c(25, 35, 45))),
strand = Rle(strand(rep("-", 3))))
test_that("defineTrailer works as intended for minus strand", {
#at the end
trailer <- defineTrailer(ORFranges, transcriptRanges)
expect_is(trailer, "GRanges")
expect_is(trailer, "GRanges")
expect_equal(length(trailer), 0)
#middle
trailer2 <- defineTrailer(ORFranges2, transcriptRanges)
expect_equal(start(trailer2), 1)
expect_equal(end(trailer2), 5)
#at the start
trailer3 <- defineTrailer(ORFranges3, transcriptRanges)
expect_equal(start(trailer3), c(1, 10))
expect_equal(end(trailer3), c(5, 15))
#trailer size 3
trailer4 <- defineTrailer(ORFranges2, transcriptRanges, 3)
expect_equal(start(trailer4), 3)
expect_equal(end(trailer4), 5)
})
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = rev(c(10, 20, 30, 40)),
end = rev(c(15, 25, 35, 45))),
strand = Rle(strand(rep("-", 4))))
test_that("findORFsFasta works as intended", {
filePath <- system.file("extdata/Danio_rerio_sample", "genome_dummy.fasta",
package = "ORFik")
test_result <- findORFsFasta(filePath, longestORF = FALSE)
expect_is(test_result, "GRanges")
expect_equal(length(test_result), 3990)
## allow circular
test_result <- findORFsFasta(filePath, longestORF = FALSE,
is.circular = TRUE)
expect_is(test_result, "GRanges")
expect_equal(length(test_result), 3998)
})
test_that("findORFs works as intended for plus strand", {
#longestORF F with different frames
test_ranges <- findORFs("ATGGGTAATA", "ATG|TGG|GGG", "TAA|AAT|ATA",
longestORF = FALSE, minimumLength = 0)
expect_is(test_ranges, "IRangesList")
expect_equal(unlist(start(test_ranges), use.names = FALSE), c(1, 2, 3))
expect_equal(unlist(end(test_ranges), use.names = FALSE), c(9, 10, 8))
#longestORF T
test_ranges <- findORFs("ATGATGTAATAA", "ATG|TGA|GGG", "TAA|AAT|ATA",
longestORF = TRUE, minimumLength = 0)
expect_is(test_ranges, "IRangesList")
expect_equal(unlist(start(test_ranges), use.names = FALSE), c(1, 2))
expect_equal(unlist(end(test_ranges), use.names = FALSE), c(9, 10))
#longestORF F with minimum size 12 -> 6 + 3*2
test_ranges <- findORFs("ATGTGGAATATGATGATGATGTAATAA", "ATG|TGA|GGG",
"TAA|AAT|ATA", longestORF = FALSE, minimumLength = 2)
expect_is(test_ranges, "IRangesList")
expect_equal(unlist(start(test_ranges), use.names = FALSE), c(10, 13, 11, 14))
expect_equal(unlist(end(test_ranges), use.names = FALSE), c(24, 24, 25, 25))
#longestORF T with minimum size 12 -> 6 + 3*2
test_ranges <- findORFs("ATGTGGAATATGATGATGATGTAATAA", "ATG|TGA|GGG",
"TAA|AAT|ATA", longestORF = TRUE, minimumLength = 2)
expect_is(test_ranges, "IRangesList")
expect_equal(unlist(start(test_ranges), use.names = FALSE), c(10, 11))
expect_equal(unlist(end(test_ranges), use.names = FALSE), c(24, 25))
#find nothing
test_ranges <- findORFs("B", "ATG|TGA|GGG", "TAA|AAT|ATA", minimumLength = 2)
expect_is(test_ranges, "IRangesList")
expect_equal(length(test_ranges), 0)
})
test_that("findMapORFs works as intended for minus strand", {
#longestORF F with different frames
test_ranges <-findMapORFs(grl, seqs,
"ATG|TGG|GGG",
"TAA|AAT|ATA",
longestORF = FALSE,
minimumLength = 0)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "-")
expect_equal(as.integer(unlist(start(test_ranges))),
c(21, 10, 1011, 1010, 1012))
expect_equal(as.integer(unlist(end(test_ranges))),
c(22, 19, 1019, 1018, 1017))
expect_equal(as.integer(unlist(width(test_ranges))), c(2, 10, 9, 9, 6))
expect_equal(sum(widthPerGroup(test_ranges) %% 3), 0)
})
# Create data for get_all_ORFs_as_GRangesList test_that#2
namesTx <- c("tx1", "tx2")
seqs <- c("ATGATGTAATAA", "ATGTAA")
grIn1 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(1, 3), end = c(1, 13)),
strand = rep("+", 2), names = rep(namesTx[1], 2))
grIn2<- GRanges(seqnames = rep("1", 6),
ranges = IRanges(start = c(1, 1000, 2000, 3000, 4000, 5000),
end = c(1, 1000, 2000, 3000, 4000, 5000)),
strand = rep("+", 6), names = rep(namesTx[2], 6))
grl <- GRangesList(grIn1, grIn2)
names(grl) <- namesTx
test_that("mapToGRanges works as intended for strange exons positive strand", {
#longestORF F with different frames
test_ranges <- findMapORFs(grl,seqs,
"ATG|TGG|GGG",
"TAA|AAT|ATA",
longestORF = FALSE,
minimumLength = 0)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges,FALSE)[1], "+")
expect_equal(as.integer(unlist(start(test_ranges))), c(1, 3, 5,1, 1000, 2000,
3000, 4000, 5000))
expect_equal(as.integer(unlist(end(test_ranges))), c(1, 10, 10,1, 1000, 2000,
3000, 4000, 5000))
expect_equal(sum(widthPerGroup(test_ranges) %% 3), 0)
expect_equal(unlist(grl)$names,c("tx1", "tx1", "tx2", "tx2", "tx2", "tx2",
"tx2", "tx2"))
expect_equal(unlist(test_ranges)$names,c("tx1_1", "tx1_1", "tx1_2", "tx2_1",
"tx2_1", "tx2_1", "tx2_1", "tx2_1",
"tx2_1"))
})
# Create data for get_all_ORFs_as_GRangesList test_that#3
ranges(grIn1) <- rev(ranges(grIn1))
strand(grIn1) <- rep("-", length(grIn1))
ranges(grIn2) <- rev(ranges(grIn2))
strand(grIn2) <- rep("-", length(grIn2))
grl <- GRangesList(grIn1, grIn2)
names(grl) <- namesTx
test_that("mapToGRanges works as intended for strange exons negative strand", {
#longestORF F with different frames
test_ranges <- findMapORFs(grl,seqs,
"ATG|TGG|GGG",
"TAA|AAT|ATA",
longestORF = FALSE,
minimumLength = 0)
test_ranges <- sortPerGroup(test_ranges)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "-")
expect_equal(as.integer(unlist(start(test_ranges))), c(5, 5, 5000, 4000, 3000,
2000, 1000, 1))
expect_equal(as.integer(unlist(end(test_ranges))), c(13, 10, 5000, 4000, 3000,
2000, 1000, 1))
expect_equal(sum(widthPerGroup(test_ranges) %% 3), 0)
expect_equal(unlist(grl)$names,c("tx1", "tx1", "tx2", "tx2", "tx2",
"tx2", "tx2", "tx2"))
expect_equal(unlist(test_ranges)$names,c("tx1_1","tx1_2", "tx2_1", "tx2_1",
"tx2_1", "tx2_1", "tx2_1", "tx2_1"))
})
namesTx <- c("tx1", "tx2", "tx3", "tx4")
seqs <- c("ATGATGTAATAA", "ATGTAA", "AAAATGAAATAAA", "AAAATGAAATAA")
grIn3 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(2000, 2008), end = c(2004, 2015)),
strand = rep("+", 2), names = rep(namesTx[3], 2))
grIn4 <- GRanges(seqnames = rep("1", 2),
ranges = IRanges(start = c(3030, 3000), end = c(3036, 3004)),
strand = rep("-", 2), names = rep(namesTx[4], 2))
grl <- GRangesList(grIn1, grIn2, grIn3, grIn4)
names(grl) <- namesTx
test_that("mapToGRanges works as intended for strange exons both strands", {
#longestORF F with different frames
test_ranges <- findMapORFs(grl,seqs,
"ATG|TGG|GGG",
"TAA|AAT|ATA",
longestORF = FALSE,
minimumLength = 0)
test_ranges <- sortPerGroup(test_ranges)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "-")
expect_equal(as.integer(unlist(start(test_ranges))),
c(5, 5, 5000, 4000, 3000, 2000, 1000, 1, 2003,
2008, 3030, 3000))
expect_equal(as.integer(unlist(end(test_ranges))),
c(13, 10, 5000, 4000, 3000, 2000, 1000, 1, 2004,
2014, 3033, 3004))
expect_equal(sum(widthPerGroup(test_ranges) %% 3), 0)
})
test_that("pmapFromTranscriptsF works as intended", {
xStart = c(1, 5, 10, 1000, 5, 6, 1, 1)
xEnd = c(6, 8, 12, 2000, 10, 10, 3, 1)
TS = c(1,5, 1000, 1005, 1008, 2000, 2003, 4000, 5000, 7000, 85, 70,
101, 9)
TE = c(3, 9, 1003, 1006, 1010, 2001, 2020, 4500, 6000, 8000, 89,
82, 105, 9)
indices = c(1, 1, 2, 2, 2, 3, 3, 4, 4, 4, 5, 5, 6, 7)
strand = c(rep("+", 10), rep("-", 3), "+")
seqnames = rep("1", length(TS))
result <- split(IRanges(xStart, xEnd), c(seq.int(1, 5), 5, 6, 7))
transcripts <- split(GRanges(seqnames, IRanges(TS, TE), strand),
indices)
test_ranges <- pmapFromTranscriptF(result, transcripts, TRUE)
expect_is(test_ranges, "GRangesList")
expect_equal(start(unlistGrl(test_ranges)),
c(1, 5, 1005, 1008, 2010, 5498, 7000, 85, 78, 78,
103, 9))
expect_equal(end(unlistGrl(test_ranges)),
c(3, 7, 1006, 1009, 2012, 6000, 7497, 85, 82, 82,
105, 9))
})
test_that("GRangesList sorting works as intended", {
test_ranges <- grl[3:4]
test_ranges <- sortPerGroup(test_ranges)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "+")
expect_equal(as.integer(unlist(start(test_ranges))), c(2000,
2008, 3030, 3000))
expect_equal(as.integer(unlist(end(test_ranges))), c(2004,
2015, 3036, 3004))
test_ranges <- sortPerGroup(test_ranges, ignore.strand = TRUE)
expect_equal(as.integer(unlist(start(test_ranges))), c(2000,
2008, 3000, 3030))
expect_equal(as.integer(unlist(end(test_ranges))), c(2004,
2015, 3004, 3036))
})
test_that("startCodons works as intended", {
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(1, 10, 20),
end = c(5, 15, 25)),
strand = Rle(strand(rep("+", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(20, 30, 40),
end = c(25, 35, 45)),
strand = Rle(strand(rep("+", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(30, 40, 50),
end = c(35, 45, 55)),
strand = Rle(strand(rep("+", 3))))
ORFranges4 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(50, 40, 30),
end = c(55, 45, 35)),
strand = Rle(strand(rep("-", 3))))
ORFranges5 <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = c(1000, 1002, 1004, 1006),
end = c(1000, 1002, 1004, 1006)),
strand = Rle(strand(rep("+", 4))))
ORFranges6 <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = c(1002, 1004, 1005, 1006),
end = c(1002, 1004, 1005, 1006)),
strand = Rle(strand(rep("+", 4))))
ORFranges4 <- sort(ORFranges4, decreasing = TRUE)
names(ORFranges) <- rep("tx1_1" ,3)
names(ORFranges2) <- rep("tx1_2", 3)
names(ORFranges3) <- rep("tx1_3", 3)
names(ORFranges4) <- rep("tx4_1", 3)
names(ORFranges5) <- rep("tx1_4", 4)
names(ORFranges6) <- rep("tx1_5", 4)
grl <- GRangesList(tx1_1 = ORFranges, tx1_2 = ORFranges2,
tx1_3 = ORFranges3, tx4_1 = ORFranges4,
tx1_4 = ORFranges5, tx1_5 = ORFranges6)
test_ranges <- startCodons(grl, TRUE)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "+")
expect_equal(as.integer(unlist(start(test_ranges))),
c(1, 20, 30, 53, 1000, 1002, 1004, 1002, 1004))
expect_equal(as.integer(unlist(end(test_ranges))),
c(3, 22, 32, 55, 1000, 1002, 1004, 1002, 1005))
})
test_that("stopCodons works as intended", {
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(1, 10, 20),
end = c(5, 15, 25)),
strand = Rle(strand(rep("+", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(20, 30, 40),
end = c(25, 35, 45)),
strand = Rle(strand(rep("+", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(30, 40, 50),
end = c(35, 45, 55)),
strand = Rle(strand(rep("+", 3))))
ORFranges4 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(30, 40, 50),
end = c(35, 45, 55)),
strand = Rle(strand(rep("-", 3))))
ORFranges5 <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = c(1000, 1002, 1004, 1006),
end = c(1000, 1002, 1004, 1006)),
strand = Rle(strand(rep("+", 4))))
ORFranges6 <- GRanges(seqnames = Rle(rep("1", 4)),
ranges = IRanges(start = c(1002, 1003, 1004, 1006),
end = c(1002, 1003, 1004, 1006)),
strand = Rle(strand(rep("+", 4))))
ORFranges4 <- sort(ORFranges4, decreasing = TRUE)
names(ORFranges) <- rep("tx1_1" ,3)
names(ORFranges2) <- rep("tx1_2", 3)
names(ORFranges3) <- rep("tx1_3", 3)
names(ORFranges4) <- rep("tx4_1", 3)
names(ORFranges5) <- rep("tx1_4", 4)
names(ORFranges6) <- rep("tx1_5", 4)
grl <- GRangesList(tx1_1 = ORFranges, tx1_2 = ORFranges2,
tx1_3 = ORFranges3, tx4_1 = ORFranges4,
tx1_4 = ORFranges5, tx1_5 = ORFranges6)
test_ranges <- stopCodons(grl, TRUE)
expect_is(test_ranges, "GRangesList")
expect_is(strand(test_ranges),"CompressedRleList")
expect_is(seqnames(test_ranges),"CompressedRleList")
expect_equal(strandPerGroup(test_ranges, FALSE)[1], "+")
expect_equal(as.integer(unlist(start(test_ranges))), c(23,43, 53, 30, 1002,
1004, 1006, 1003,
1006))
expect_equal(as.integer(unlist(end(test_ranges))), c(25,45, 55, 32, 1002,
1004, 1006, 1004,
1006))
# check with meta columns
ORFranges$names <- rep("tx1_1" ,3)
ORFranges2$names <- rep("tx1_2", 3)
ORFranges3$names <- rep("tx1_3", 3)
ORFranges4$names <- rep("tx4_1", 3)
ORFranges5$names <- rep("tx1_4", 4)
ORFranges6$names <- rep("tx1_5", 4)
grl <- GRangesList(tx1_1 = ORFranges, tx1_2 = ORFranges2,
tx1_3 = ORFranges3, tx4_1 = ORFranges4,
tx1_4 = ORFranges5, tx1_5 = ORFranges6)
test_ranges <- stopCodons(grl, TRUE)
negStopss <- GRangesList(tx1_1 = GRanges("1", c(7, 5, 3, 1), "-"),
tx1_2 = GRanges("1", c(15, 13, 11, 9), "-"))
expect_equal(stopSites(stopCodons(negStopss, FALSE), is.sorted = TRUE),
c(1,9))
negStopss <- GRangesList(tx1_1 = GRanges("1", IRanges(c(9325,8012),
c(9418, 8013)),
"-"))
expect_equal(startSites(stopCodons(negStopss, FALSE), is.sorted = TRUE),
9325)
})
ORFranges <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(1, 10, 20), end = c(5, 15, 25)),
strand = Rle(strand(rep("+", 3))))
ORFranges2 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(10, 20, 30),
end = c(15, 25, 35)),
strand = Rle(strand(rep("+", 3))))
ORFranges3 <- GRanges(seqnames = Rle(rep("1", 3)),
ranges = IRanges(start = c(20, 30, 40),
end = c(25, 35, 45)),
strand = Rle(strand(rep("+", 3))))
ORFranges$names <- rep("tx1_1" ,3)
ORFranges2$names <- rep("tx1_2", 3)
ORFranges3$names <- rep("tx1_3", 3)
orfs <- c(ORFranges,ORFranges2,ORFranges3)
grl <- groupGRangesBy(orfs, orfs$names)
test_that("startRegion works as intended", {
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 5)),
ranges = IRanges(start = c(1, 10, 20, 30, 40),
end = c(5, 15, 25, 35, 45)),
strand = Rle(strand(rep("+", 5))))
transcriptRanges <- groupGRangesBy(transcriptRanges,
rep("tx1", length(transcriptRanges)))
test_ranges <- startRegion(grl, transcriptRanges)
expect_equal(as.integer(unlist(start(test_ranges))), c(1, 4, 10, 14, 20))
expect_equal(as.integer(unlist(end(test_ranges))), c(3, 5, 12, 15, 22))
test_ranges <- startRegion(grl)
expect_equal(as.integer(unlist(end(test_ranges))), c(3, 12, 22))
})
test_that("stopRegion works as intended", {
transcriptRanges <- GRanges(seqnames = Rle(rep("1", 5)),
ranges = IRanges(start = c(1, 10, 20, 30, 40),
end = c(5, 15, 25, 35, 45)),
strand = Rle(strand(rep("+", 5))))
transcriptRanges <- groupGRangesBy(transcriptRanges,
rep("tx1", length(transcriptRanges)))
test_ranges <- stopRegion(grl, transcriptRanges)
expect_equal(as.integer(unlist(start(test_ranges))), c(23, 30, 33, 40, 43))
expect_equal(as.integer(unlist(end(test_ranges))), c(25, 31, 35, 41, 45))
test_ranges <- stopRegion(grl)
expect_equal(as.integer(unlist(end(test_ranges))), c(25, 35, 45))
})
test_that("uniqueGroups works as intended", {
grl[3] <- grl[1]
test_ranges <- uniqueGroups(grl)
expect_is(test_ranges, "GRangesList")
expect_equal(strandPerGroup(test_ranges, FALSE), c("+", "+"))
expect_equal(length(test_ranges), 2)
expect_equal(names(test_ranges), c("1", "2"))
})
test_that("uniqueOrder works as intended", {
gr1 <- GRanges("1", IRanges(1,10), "+")
gr2 <- GRanges("1", IRanges(20, 30), "+")
# make a grl with duplicated ORFs (gr1 twice)
grl <- GRangesList(tx1_1 = gr1, tx2_1 = gr2, tx3_1 = gr1)
test_result <- uniqueOrder(grl) # remember ordering
expect_equal(test_result, as.integer(c(1,2,1)))
})
test_that("findUORFs works as intended", {
# Load annotation
txdbFile <- system.file("extdata", "hg19_knownGene_sample.sqlite",
package = "GenomicFeatures")
txdb <- loadTxdb(txdbFile)
fiveUTRs <- loadRegion(txdb, "leaders")
cds <- loadRegion(txdb, "cds")
if (requireNamespace("BSgenome.Hsapiens.UCSC.hg19")) {
# Normally you would not use a BSgenome, but some custome fasta-
# annotation you have for your species
uorfs <- findUORFs(fiveUTRs["uc001bum.2"],
BSgenome.Hsapiens.UCSC.hg19::Hsapiens,
"ATG", cds = cds)
expect_equal(names(uorfs[1]), "uc001bum.2_5")
expect_equal(length(uorfs), 1)
}
})
test_that("artificial.orfs works as intended", {
cds <- GRangesList(tx1 = GRanges("chr1", IRanges(start = c(100), end = 150),"+"),
tx2 = GRanges("chr1", IRanges(200, 205), "+"),
tx3 = GRanges("chr1", IRanges(300, 311), "+"),
tx4 = GRanges("chr1", IRanges(400, 999), "+"),
tx5 = GRanges("chr1", IRanges(500, 511), "-"))
res <- artificial.orfs(cds)
expect_equal(100, startSites(res[1]))
expect_equal(150, stopSites(res[1]))
})
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tbl_merge.R
\name{tbl_merge}
\alias{tbl_merge}
\title{Merge two or more gtsummary objects}
\usage{
tbl_merge(tbls, tab_spanner = NULL)
}
\arguments{
\item{tbls}{List of gtsummary objects to merge}
\item{tab_spanner}{Character vector specifying the spanning headers.
Must be the same length as \code{tbls}. The
strings are interpreted with \code{gt::md}.
Must be same length as \code{tbls} argument}
}
\value{
A \code{tbl_merge} object
}
\description{
Merges two or more \code{tbl_regression}, \code{tbl_uvregression}, \code{tbl_stack},
or \code{tbl_summary} objects and adds appropriate spanning headers.
}
\section{Example Output}{
\if{html}{Example 1}
\if{html}{\figure{tbl_merge_ex1.png}{options: width=70\%}}
\if{html}{Example 2}
\if{html}{\figure{tbl_merge_ex2.png}{options: width=65\%}}
}
\examples{
# Example 1 ----------------------------------
# Side-by-side Regression Models
library(survival)
t1 <-
glm(response ~ trt + grade + age, trial, family = binomial) \%>\%
tbl_regression(exponentiate = TRUE)
t2 <-
coxph(Surv(ttdeath, death) ~ trt + grade + age, trial) \%>\%
tbl_regression(exponentiate = TRUE)
tbl_merge_ex1 <-
tbl_merge(
tbls = list(t1, t2),
tab_spanner = c("**Tumor Response**", "**Time to Death**")
)
# Example 2 ----------------------------------
# Descriptive statistics alongside univariate regression, with no spanning header
t3 <-
trial[c("age", "grade", "response")] \%>\%
tbl_summary(missing = "no") \%>\%
add_n()
t4 <-
tbl_uvregression(
trial[c("ttdeath", "death", "age", "grade", "response")],
method = coxph,
y = Surv(ttdeath, death),
exponentiate = TRUE,
hide_n = TRUE
)
tbl_merge_ex2 <-
tbl_merge(tbls = list(t3, t4)) \%>\%
as_gt(include = -tab_spanner) \%>\%
gt::cols_label(stat_0_1 = gt::md("**Summary Statistics**"))
}
\seealso{
\link{tbl_stack}
Other tbl_regression tools:
\code{\link{add_global_p.tbl_regression}()},
\code{\link{add_nevent.tbl_regression}()},
\code{\link{add_q}()},
\code{\link{bold_italicize_labels_levels}},
\code{\link{combine_terms}()},
\code{\link{inline_text.tbl_regression}()},
\code{\link{modify_header}()},
\code{\link{tbl_regression}()},
\code{\link{tbl_stack}()}
Other tbl_uvregression tools:
\code{\link{add_global_p.tbl_uvregression}()},
\code{\link{add_nevent.tbl_uvregression}()},
\code{\link{add_q}()},
\code{\link{bold_italicize_labels_levels}},
\code{\link{inline_text.tbl_uvregression}()},
\code{\link{modify_header}()},
\code{\link{tbl_stack}()},
\code{\link{tbl_uvregression}()}
Other tbl_summary tools:
\code{\link{add_n}()},
\code{\link{add_overall}()},
\code{\link{add_p.tbl_summary}()},
\code{\link{add_q}()},
\code{\link{add_stat_label}()},
\code{\link{bold_italicize_labels_levels}},
\code{\link{inline_text.tbl_summary}()},
\code{\link{inline_text.tbl_survfit}()},
\code{\link{modify_header}()},
\code{\link{tbl_stack}()},
\code{\link{tbl_summary}()}
}
\author{
Daniel D. Sjoberg
}
\concept{tbl_regression tools}
\concept{tbl_summary tools}
\concept{tbl_uvregression tools}
| /man/tbl_merge.Rd | permissive | ClinicoPath/gtsummary | R | false | true | 3,109 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/tbl_merge.R
\name{tbl_merge}
\alias{tbl_merge}
\title{Merge two or more gtsummary objects}
\usage{
tbl_merge(tbls, tab_spanner = NULL)
}
\arguments{
\item{tbls}{List of gtsummary objects to merge}
\item{tab_spanner}{Character vector specifying the spanning headers.
Must be the same length as \code{tbls}. The
strings are interpreted with \code{gt::md}.
Must be same length as \code{tbls} argument}
}
\value{
A \code{tbl_merge} object
}
\description{
Merges two or more \code{tbl_regression}, \code{tbl_uvregression}, \code{tbl_stack},
or \code{tbl_summary} objects and adds appropriate spanning headers.
}
\section{Example Output}{
\if{html}{Example 1}
\if{html}{\figure{tbl_merge_ex1.png}{options: width=70\%}}
\if{html}{Example 2}
\if{html}{\figure{tbl_merge_ex2.png}{options: width=65\%}}
}
\examples{
# Example 1 ----------------------------------
# Side-by-side Regression Models
library(survival)
t1 <-
glm(response ~ trt + grade + age, trial, family = binomial) \%>\%
tbl_regression(exponentiate = TRUE)
t2 <-
coxph(Surv(ttdeath, death) ~ trt + grade + age, trial) \%>\%
tbl_regression(exponentiate = TRUE)
tbl_merge_ex1 <-
tbl_merge(
tbls = list(t1, t2),
tab_spanner = c("**Tumor Response**", "**Time to Death**")
)
# Example 2 ----------------------------------
# Descriptive statistics alongside univariate regression, with no spanning header
t3 <-
trial[c("age", "grade", "response")] \%>\%
tbl_summary(missing = "no") \%>\%
add_n()
t4 <-
tbl_uvregression(
trial[c("ttdeath", "death", "age", "grade", "response")],
method = coxph,
y = Surv(ttdeath, death),
exponentiate = TRUE,
hide_n = TRUE
)
tbl_merge_ex2 <-
tbl_merge(tbls = list(t3, t4)) \%>\%
as_gt(include = -tab_spanner) \%>\%
gt::cols_label(stat_0_1 = gt::md("**Summary Statistics**"))
}
\seealso{
\link{tbl_stack}
Other tbl_regression tools:
\code{\link{add_global_p.tbl_regression}()},
\code{\link{add_nevent.tbl_regression}()},
\code{\link{add_q}()},
\code{\link{bold_italicize_labels_levels}},
\code{\link{combine_terms}()},
\code{\link{inline_text.tbl_regression}()},
\code{\link{modify_header}()},
\code{\link{tbl_regression}()},
\code{\link{tbl_stack}()}
Other tbl_uvregression tools:
\code{\link{add_global_p.tbl_uvregression}()},
\code{\link{add_nevent.tbl_uvregression}()},
\code{\link{add_q}()},
\code{\link{bold_italicize_labels_levels}},
\code{\link{inline_text.tbl_uvregression}()},
\code{\link{modify_header}()},
\code{\link{tbl_stack}()},
\code{\link{tbl_uvregression}()}
Other tbl_summary tools:
\code{\link{add_n}()},
\code{\link{add_overall}()},
\code{\link{add_p.tbl_summary}()},
\code{\link{add_q}()},
\code{\link{add_stat_label}()},
\code{\link{bold_italicize_labels_levels}},
\code{\link{inline_text.tbl_summary}()},
\code{\link{inline_text.tbl_survfit}()},
\code{\link{modify_header}()},
\code{\link{tbl_stack}()},
\code{\link{tbl_summary}()}
}
\author{
Daniel D. Sjoberg
}
\concept{tbl_regression tools}
\concept{tbl_summary tools}
\concept{tbl_uvregression tools}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ACQR_kfilter.R
\name{ACQR_kfilter}
\alias{ACQR_kfilter}
\title{Kalman filter}
\usage{
ACQR_kfilter(y, A, C, Q, R, x10, P10, nx, ny)
}
\arguments{
\item{y:}{data. Matrix ny*nt
A: matrix nx*nx
C: matrix ny*nx}
}
\value{
y by rows, ny = nrow(y), nt = ncol(y)
}
\description{
Kalman filter for state space model
}
| /man/ACQR_kfilter.Rd | no_license | jywang2016/emssm | R | false | true | 388 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/ACQR_kfilter.R
\name{ACQR_kfilter}
\alias{ACQR_kfilter}
\title{Kalman filter}
\usage{
ACQR_kfilter(y, A, C, Q, R, x10, P10, nx, ny)
}
\arguments{
\item{y:}{data. Matrix ny*nt
A: matrix nx*nx
C: matrix ny*nx}
}
\value{
y by rows, ny = nrow(y), nt = ncol(y)
}
\description{
Kalman filter for state space model
}
|
###########################
# Libraries
###########################
# 1: define the libraries to use
libraries <- c("foreign","httr", "data.table", "stringr")
###########################
# Read dataset
###########################
load("nesi_2015/individuals/nesi_individuals_with_grants_2015.RData")
##############
# Save labels
##############
questions_labels <- as.data.frame(attributes(nesi_individuals_with_grants_2015)$variable.labels)
write.csv(questions_labels, file = "describe_dataset/4_csv_files/questions_labels_2015.csv")
| /nesi/describe_dataset/3_description/questions_labels.R | no_license | jnaudon/datachile-etl | R | false | false | 539 | r | ###########################
# Libraries
###########################
# 1: define the libraries to use
libraries <- c("foreign","httr", "data.table", "stringr")
###########################
# Read dataset
###########################
load("nesi_2015/individuals/nesi_individuals_with_grants_2015.RData")
##############
# Save labels
##############
questions_labels <- as.data.frame(attributes(nesi_individuals_with_grants_2015)$variable.labels)
write.csv(questions_labels, file = "describe_dataset/4_csv_files/questions_labels_2015.csv")
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/geom_timeline.R
\docType{data}
\name{geom_timeline_proto_class}
\alias{geom_timeline_proto_class}
\title{Function creates the new geom (geom_timeline).}
\description{
draw_panel_function is outsourced...looks nicer
}
\keyword{datasets}
| /man/geom_timeline_proto_class.Rd | no_license | moralmar/earthquakesWithR | R | false | true | 314 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/geom_timeline.R
\docType{data}
\name{geom_timeline_proto_class}
\alias{geom_timeline_proto_class}
\title{Function creates the new geom (geom_timeline).}
\description{
draw_panel_function is outsourced...looks nicer
}
\keyword{datasets}
|
source("./helper.R")
load("./blacklist.rds")
sample.sizes <-
c(1000000, 5000000, 10000000)
for (sample.size in sample.sizes) {
sample <- generate.sample(sample.size)
cat(paste(
"continuous,",
sample.size,
"samples: "
))
cat(system.time(computed.net1 <- pc.stable(sample,
test = "mi-cg",
blacklist = base::as.data.frame(bl)
)))
cat("\n")
save(computed.net1, file = filename(
prefix = "./nets/",
name = paste(sample.size,
"continuous.rds",
sep = "-"
)
))
} | /continuous-csl.R | no_license | witsyke/ci-taec-discretization-impact | R | false | false | 522 | r | source("./helper.R")
load("./blacklist.rds")
sample.sizes <-
c(1000000, 5000000, 10000000)
for (sample.size in sample.sizes) {
sample <- generate.sample(sample.size)
cat(paste(
"continuous,",
sample.size,
"samples: "
))
cat(system.time(computed.net1 <- pc.stable(sample,
test = "mi-cg",
blacklist = base::as.data.frame(bl)
)))
cat("\n")
save(computed.net1, file = filename(
prefix = "./nets/",
name = paste(sample.size,
"continuous.rds",
sep = "-"
)
))
} |
# Прочетете данните и ги запишете в data frame в R;
data = read.csv('train.csv', header=TRUE)
?data
View(data)
# Генерирайте си подизвадка от 500 наблюдения. За целта нека f_nr е
# вашият факултетен номер. Задайте състояние на генератора на слу-
# чайни числа в R чрез set.seed(f_nr). С помощта на подходяща фун-
# кция генерирайте извадка без връщане на числата от 1 до 891 като
# не забравяте да я запишете във вектор. Използвайте вектора, за да
# зашишете само редовете със съответните индекси в нов дейтафрейм и
# работете с него оттук нататък;
set.seed(61701)
?sample
rn = sample(1:891, size=500, replace=FALSE)
rn
filtered_data = data[rn,]
filtered_data
nrow(filtered_data)
#Изчистете данните: за нашите цели ще ни трябват само наблюдения,
#при които имаме информация за всяка от променливите, но не всеки
#пътник е споделил каква е възрастта му. Проверете в R какво пра-
# ви функцията is.na и я използвайте върху променливата Age, за да
#извикате само редовете, където имаме наблюдения със записана въз-
# раст. Запишете резултата в нов дейтафрейм и работете с него оттук
#нататък;
filtered_data = na.omit(filtered_data)
# Изкарайте на екрана първите няколко (5-6) наблюдения;
head(filtered_data)
?head
head(filtered_data, n=8)
tail(filtered_data)
filtered_data[1:6,]
#Какъв вид данни (качествени/количествени, непрекъснати/дискретни)
# са записани във всяка от променливите?
names(filtered_data)
factor(filtered_data$Name)
factor(filtered_data$Embarked)
#Survided -качествени
#Pclas -качествена
#Sex -качествена
#Age - количествена, непрекъсната
#SibSp -количествена, дискретна
#Parch - количествена, дискретна
#Fare - количествени, непрекъснати
#Cabin - качествени
#Embarked - качествени
# Изведете дескриптивни статистики за всяка една от променливите;
summary(filtered_data[1])
?fivenum
fivenum(filtered_data[1], na.rm=True)
# Изведете редовете на най-младия и най-стария пътник;
?is.na
data[min(filtered_data$Age, na.rm=TRUE),]
data[max(filtered_data$Age, na.rm=TRUE),]
maxAge = max(filtered_data$Age, na.rm=TRUE)
filtered_data[which(filtered_data$Age == maxAge),]
attach(filtered_data)
filtered_data[Age == maxAge,]
min(filtered_data$Age)
max(filtered_data$Age)
pokemon = read.csv('pokemon.csv', header=TRUE)
pokemon
filtered_pokemons_indexes = sample(1:705, 600, replace = FALSE)
filtered_pokemons = pokemon[filtered_pokemons_indexes, ]
filtered_pokemons
nrow(filtered_pokemons)
# Изведете редовете на покемоните с общ брой точки за атака и защита над 220;
attach(filtered_pokemons)
filtered_pokemons[Attack + Defense > 220,]
#Колко на брой покемони имат първичен или вторичен тип "Dragon"или
#"lying"и са високи над един метър?
nrow(filtered_pokemons[(Type1 == 'Dragon' | Type1 == 'Flying' | Type1 == 'Flying' | Type1 == 'Dragon') & Height > 1, ])
# Направете хистограма на теглото само на покемоните с вторичен тип
#и нанесете графика на плътността върху нея. Симетрично ли са раз-
# положени данните?
fpst = filtered_pokemons[Type2 != "",]; fpst
hist(fpst$Height, probability = TRUE)
lines(density(fpst$Height, bw=5))
lines(density(fpst$Height, bw=3))
lines(density(fpst$Height, bw=1))
?density
?lines
# За покемоните с първичен тип "Normal"или "Fighting"изследвайте
# съвместно променливите Type1 и Height с подходящ графичен метод.
# Забелязвате ли outlier-и? Сравнете извадковите средни и медианите в
# двете групи и направете извод;
# boxplot/hist
fp = filtered_pokemons[Type1 == "Normal" | Type1 == "Flying",]
fp
hist(fp$Height ~ fp$Type1)
fp$Type1 = factor(fp$Type1)
boxplot(fp$Height ~ fp$Type1)
# Изследвайте съвместно променливите Height и Weight с подходящ
#графичен метод. Бихте ли казали, че съществува линейна връзка меж-
# ду тях? Намерете корелацията между величините и коментирайте
#стойността `и. Начертайте регресионна права (линейната функция, ко-
#ято най-добре приближава функционалната зависимост). Ако е наб-
# людаван нов вид покемон с височина 2.1 метра, какво се очаква да е
#теглото му на базата на линейния модел?
plot(fp$Weight ~ fp$Height)
abline(lm(fp$Weight ~ fp$Height))
cor(fp$Height, fp$Weight)
?lm
coef = lm(fp$Weight ~ fp$Height)
#interectp + fp$heihjt*2.1
w = coef$coefficients[1] + coef$coefficients[2] * 2.1
w
coef$coefficients[1]
#####
x = c(4, 14, 2, 9, 1, 10, 11, 7, 3, 13, 4, 14, 2, 9, 1, 10, 11, 7, 3, 12)
mean(x)
median(x)
# conf level (1-a)
#грешка от първи род - a
#грешка от втори род - b
#отвърляме h0 ако е в критичната област
#неотхвърляме h0 ако не е в критимната област
#p_value < a => reject H0
# p_value > a => accept H0
## qqnorm/qqline - п,оверкса за нормално разпределение, StatDA
# с,авнение на медиани от две извадки зависими помежду си - wilcox test(x, y, paired = TRUE)
# равни популации - var.equal
# ворамлно разпределени извади, зависими помужд си- сравнение на средно - t-test, paired=TRUE
# t-test - не знаем дисперсията на популацията
# z-test - известна дисперсия на популацията
# доверителен интервал за средно за извазка с рзавцмер над 30 - z-test/t-test
# генериране на 10 случайн числаq - runif/dunif
# генериране на 10 случайни биномни числа - rbinom
# намиране на 1-ти квантил - qbinom
# ве,оятност X < 10 = ? pbinom()
# P(X=x) dbinom()
# P(X < ?) = 10 qbinom | /SEM/HW1/HW1/hm1.r | no_license | valkirilov/FMI-2017-2018 | R | false | false | 7,412 | r |
# Прочетете данните и ги запишете в data frame в R;
data = read.csv('train.csv', header=TRUE)
?data
View(data)
# Генерирайте си подизвадка от 500 наблюдения. За целта нека f_nr е
# вашият факултетен номер. Задайте състояние на генератора на слу-
# чайни числа в R чрез set.seed(f_nr). С помощта на подходяща фун-
# кция генерирайте извадка без връщане на числата от 1 до 891 като
# не забравяте да я запишете във вектор. Използвайте вектора, за да
# зашишете само редовете със съответните индекси в нов дейтафрейм и
# работете с него оттук нататък;
set.seed(61701)
?sample
rn = sample(1:891, size=500, replace=FALSE)
rn
filtered_data = data[rn,]
filtered_data
nrow(filtered_data)
#Изчистете данните: за нашите цели ще ни трябват само наблюдения,
#при които имаме информация за всяка от променливите, но не всеки
#пътник е споделил каква е възрастта му. Проверете в R какво пра-
# ви функцията is.na и я използвайте върху променливата Age, за да
#извикате само редовете, където имаме наблюдения със записана въз-
# раст. Запишете резултата в нов дейтафрейм и работете с него оттук
#нататък;
filtered_data = na.omit(filtered_data)
# Изкарайте на екрана първите няколко (5-6) наблюдения;
head(filtered_data)
?head
head(filtered_data, n=8)
tail(filtered_data)
filtered_data[1:6,]
#Какъв вид данни (качествени/количествени, непрекъснати/дискретни)
# са записани във всяка от променливите?
names(filtered_data)
factor(filtered_data$Name)
factor(filtered_data$Embarked)
#Survided -качествени
#Pclas -качествена
#Sex -качествена
#Age - количествена, непрекъсната
#SibSp -количествена, дискретна
#Parch - количествена, дискретна
#Fare - количествени, непрекъснати
#Cabin - качествени
#Embarked - качествени
# Изведете дескриптивни статистики за всяка една от променливите;
summary(filtered_data[1])
?fivenum
fivenum(filtered_data[1], na.rm=True)
# Изведете редовете на най-младия и най-стария пътник;
?is.na
data[min(filtered_data$Age, na.rm=TRUE),]
data[max(filtered_data$Age, na.rm=TRUE),]
maxAge = max(filtered_data$Age, na.rm=TRUE)
filtered_data[which(filtered_data$Age == maxAge),]
attach(filtered_data)
filtered_data[Age == maxAge,]
min(filtered_data$Age)
max(filtered_data$Age)
pokemon = read.csv('pokemon.csv', header=TRUE)
pokemon
filtered_pokemons_indexes = sample(1:705, 600, replace = FALSE)
filtered_pokemons = pokemon[filtered_pokemons_indexes, ]
filtered_pokemons
nrow(filtered_pokemons)
# Изведете редовете на покемоните с общ брой точки за атака и защита над 220;
attach(filtered_pokemons)
filtered_pokemons[Attack + Defense > 220,]
#Колко на брой покемони имат първичен или вторичен тип "Dragon"или
#"lying"и са високи над един метър?
nrow(filtered_pokemons[(Type1 == 'Dragon' | Type1 == 'Flying' | Type1 == 'Flying' | Type1 == 'Dragon') & Height > 1, ])
# Направете хистограма на теглото само на покемоните с вторичен тип
#и нанесете графика на плътността върху нея. Симетрично ли са раз-
# положени данните?
fpst = filtered_pokemons[Type2 != "",]; fpst
hist(fpst$Height, probability = TRUE)
lines(density(fpst$Height, bw=5))
lines(density(fpst$Height, bw=3))
lines(density(fpst$Height, bw=1))
?density
?lines
# За покемоните с първичен тип "Normal"или "Fighting"изследвайте
# съвместно променливите Type1 и Height с подходящ графичен метод.
# Забелязвате ли outlier-и? Сравнете извадковите средни и медианите в
# двете групи и направете извод;
# boxplot/hist
fp = filtered_pokemons[Type1 == "Normal" | Type1 == "Flying",]
fp
hist(fp$Height ~ fp$Type1)
fp$Type1 = factor(fp$Type1)
boxplot(fp$Height ~ fp$Type1)
# Изследвайте съвместно променливите Height и Weight с подходящ
#графичен метод. Бихте ли казали, че съществува линейна връзка меж-
# ду тях? Намерете корелацията между величините и коментирайте
#стойността `и. Начертайте регресионна права (линейната функция, ко-
#ято най-добре приближава функционалната зависимост). Ако е наб-
# людаван нов вид покемон с височина 2.1 метра, какво се очаква да е
#теглото му на базата на линейния модел?
plot(fp$Weight ~ fp$Height)
abline(lm(fp$Weight ~ fp$Height))
cor(fp$Height, fp$Weight)
?lm
coef = lm(fp$Weight ~ fp$Height)
#interectp + fp$heihjt*2.1
w = coef$coefficients[1] + coef$coefficients[2] * 2.1
w
coef$coefficients[1]
#####
x = c(4, 14, 2, 9, 1, 10, 11, 7, 3, 13, 4, 14, 2, 9, 1, 10, 11, 7, 3, 12)
mean(x)
median(x)
# conf level (1-a)
#грешка от първи род - a
#грешка от втори род - b
#отвърляме h0 ако е в критичната област
#неотхвърляме h0 ако не е в критимната област
#p_value < a => reject H0
# p_value > a => accept H0
## qqnorm/qqline - п,оверкса за нормално разпределение, StatDA
# с,авнение на медиани от две извадки зависими помежду си - wilcox test(x, y, paired = TRUE)
# равни популации - var.equal
# ворамлно разпределени извади, зависими помужд си- сравнение на средно - t-test, paired=TRUE
# t-test - не знаем дисперсията на популацията
# z-test - известна дисперсия на популацията
# доверителен интервал за средно за извазка с рзавцмер над 30 - z-test/t-test
# генериране на 10 случайн числаq - runif/dunif
# генериране на 10 случайни биномни числа - rbinom
# намиране на 1-ти квантил - qbinom
# ве,оятност X < 10 = ? pbinom()
# P(X=x) dbinom()
# P(X < ?) = 10 qbinom |
######################################
## Formatting human development data
## For post score analyses
######################################
# load libraries, set directories
library(ohicore) #devtools::install_github('ohi-science/ohicore@dev')
library(dplyr)
library(stringr)
library(tidyr)
## comment out when knitting
setwd("globalprep/supplementary_information/v2016")
### Load FAO-specific user-defined functions
source('../../../src/R/common.R') # directory locations
### HDI data
hdi <- read.csv(file.path(dir_M, "git-annex/globalprep/_raw_data/UnitedNations_HumanDevelopmentIndex/d2016/int/HDI_2014_data.csv"))
hdi <- hdi %>%
mutate(Country = as.character(Country)) %>%
mutate(Country = ifelse(Country == "C\xf4te d'Ivoire", "Ivory Coast", Country))
### Function to convert to OHI region ID
hdi_rgn <- name_2_rgn(df_in = hdi,
fld_name='Country')
## duplicates of same region
dups <- hdi_rgn$rgn_id[duplicated(hdi_rgn$rgn_id)]
hdi_rgn[hdi_rgn$rgn_id %in% dups, ]
# population weighted average:
# http://www.worldometers.info/world-population/china-hong-kong-sar-population/
#
pops <- data.frame(Country = c("Hong Kong, China (SAR)", "China"),
population = c(7346248, 1382323332))
hdi_rgn <- hdi_rgn %>%
left_join(pops, by="Country") %>%
mutate(population = ifelse(is.na(population), 1, population)) %>%
group_by(rgn_id) %>%
summarize(value = weighted.mean(HDI_2014, population)) %>%
ungroup()
hdi_rgn[hdi_rgn$rgn_id %in% dups, ]
write.csv(hdi_rgn, "HDI_data.csv", row.names=FALSE)
| /globalprep/supplementary_information/v2016/HDI_prepare.R | no_license | OHI-Science/ohiprep_v2018 | R | false | false | 1,563 | r | ######################################
## Formatting human development data
## For post score analyses
######################################
# load libraries, set directories
library(ohicore) #devtools::install_github('ohi-science/ohicore@dev')
library(dplyr)
library(stringr)
library(tidyr)
## comment out when knitting
setwd("globalprep/supplementary_information/v2016")
### Load FAO-specific user-defined functions
source('../../../src/R/common.R') # directory locations
### HDI data
hdi <- read.csv(file.path(dir_M, "git-annex/globalprep/_raw_data/UnitedNations_HumanDevelopmentIndex/d2016/int/HDI_2014_data.csv"))
hdi <- hdi %>%
mutate(Country = as.character(Country)) %>%
mutate(Country = ifelse(Country == "C\xf4te d'Ivoire", "Ivory Coast", Country))
### Function to convert to OHI region ID
hdi_rgn <- name_2_rgn(df_in = hdi,
fld_name='Country')
## duplicates of same region
dups <- hdi_rgn$rgn_id[duplicated(hdi_rgn$rgn_id)]
hdi_rgn[hdi_rgn$rgn_id %in% dups, ]
# population weighted average:
# http://www.worldometers.info/world-population/china-hong-kong-sar-population/
#
pops <- data.frame(Country = c("Hong Kong, China (SAR)", "China"),
population = c(7346248, 1382323332))
hdi_rgn <- hdi_rgn %>%
left_join(pops, by="Country") %>%
mutate(population = ifelse(is.na(population), 1, population)) %>%
group_by(rgn_id) %>%
summarize(value = weighted.mean(HDI_2014, population)) %>%
ungroup()
hdi_rgn[hdi_rgn$rgn_id %in% dups, ]
write.csv(hdi_rgn, "HDI_data.csv", row.names=FALSE)
|
testlist <- list(rates = numeric(0), thresholds = numeric(0), x = c(5.24724722911887e-116, 3.37207710545706e-307, 6.8089704084083e+38, 5.41631134847847e-312, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0))
result <- do.call(grattan::IncomeTax,testlist)
str(result) | /grattan/inst/testfiles/IncomeTax/libFuzzer_IncomeTax/IncomeTax_valgrind_files/1610382326-test.R | no_license | akhikolla/updated-only-Issues | R | false | false | 395 | r | testlist <- list(rates = numeric(0), thresholds = numeric(0), x = c(5.24724722911887e-116, 3.37207710545706e-307, 6.8089704084083e+38, 5.41631134847847e-312, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0))
result <- do.call(grattan::IncomeTax,testlist)
str(result) |
library("ape")
# setwd("/home/kdi/Filezilla_download/GRASSEN/algorithm")
arg_options <- commandArgs(trailingOnly = TRUE)
arg_options
dir.create(file.path(arg_options[2]))
file_name<- basename(arg_options[1])
# bed_name <- strsplit(file_name, ".", fixed = TRUE)
snps_file <- paste("snps_", arg_options[3],"_", file_name, sep = "") # Make output string
matrix_file <- paste("matrix_", arg_options[3],"_", file_name, sep = "")
# bed_file <- paste("bed_", arg_options[3], "_", bed_name[[1]][1],".bed", sep = "")
snps_out <- file.path(arg_options[2], snps_file) # Make output dir and name in string.
matrix_out <- file.path(arg_options[2], matrix_file)
input_file <- arg_options[1]
stop_quietly <- function() {
opt <- options(show.error.messages = FALSE)
on.exit(options(opt))
stop()
}
# snps_out <- "/home/kdi/Filezilla_download/GRASSEN/algorithm/test_snps_3_test.csv" # Make output dir and name in string.
# matrix_out <- "/home/kdi/Filezilla_download/GRASSEN/algorithm/test_matrix_3_test.csv"
# input_file <- "/home/kdi/Pictures/allel_frequencies_grassen.csv"
df <- read.table(input_file, sep = "\t", na.strings = "-", header = TRUE, row.names = 1)
# if a pair file is specified merge the pairs.
if (length(arg_options) > 3 ){
fc <- file(arg_options[4])
mylist <- strsplit(readLines(fc), ",")
for (pair in mylist){
nm <- (unlist(pair))
df <- df[!rownames(df) %in% nm[-1], ]
new_name <- paste(nm, collapse = "")
row_index <- match(nm[1], rownames(df))
row.names(df)[row_index] <- new_name
}
rownames(df)
}
# value = 1 # if one than it makes minimal snp set, if 3 their have to be atleast 3 snp's that seperate the groups.
#
value <- as.numeric(arg_options[3])
output_files <- function(final_markers, gene_dist){
write.table(final_markers, file = snps_out, sep = "\t", na = "-", row.names = TRUE, col.names = NA)
gene_dist <- as.matrix(gene_dist)
gene_dist[upper.tri(gene_dist)] <- NA
write.table(gene_dist, file = matrix_out, sep = "\t", na = "", col.names = NA)
}
snp_selection <- c(1)# snp to start with. the one with the highest entropy
previous_value <- 0
previous_check <- 0
check = 0.2
while(check != value + 1){
null_list <- c()
snp_number <- c()
for (i in 1:length(df)){
if (any(snp_selection==i) == FALSE){
selection <- df[-1 ,c(snp_selection,i)] #new dataframe with extra snp(i)
gene_dist <- dist(selection) #calc distance with new snp
gene_dist[is.na(gene_dist)] <- 0
if (sum(gene_dist < value) == 0){ # if there are no 0's in the pairwise comparson they are all unique, meaning we found a snp set.
cat("Minimal snp_set found! ")
cat("Containing", length(snp_selection) + 1, "snps", "\n")
snp_selection <<- c(snp_selection, i)
final_markers <<- df[,c(snp_selection)]
output_files(final_markers, gene_dist)
# stop_quietly()
quit()
}
null_list <- c(null_list, sum(gene_dist < check)) #save results of the additional snp(i), this is the amount of 0's
snp_number <- c(snp_number, i) # save snp numbers, so we can relate it back to its performance
}
}
# if ( (sum(gene_dist < check) >= previous_value) & (check == previous_check) ){
# print("No optimal soluion found")
# cat("Closest solution contains", length(snp_selection), "snps", "\n")
# cat("Lowest distance: ", min(gene_dist), "\n")
# final_markers <<- df[,c(snp_selection)]
# output_files(final_markers, gene_dist)
# quit()
# # stop_quietly()
# }
previous_value <- sum(gene_dist < check)
previous_check <- check
if (length(unique(null_list)) != 1){ #skip a non informative snp selection.
best_hit <- which.min(null_list) # get the best hit with highest maf score
snp_selection <- c(snp_selection,snp_number[best_hit]) # add the best hit to the snp set and iterate again.
}
print(sum(gene_dist < check))
if (sum(gene_dist < check) == 0){
check <- check + 0.2
}
}
# hc <- hclust(gene_dist) # apply hirarchical clustering
# plot(hc)
#gene_dist
#sum(gene_dist)
| /extra_scripts/configure_frequency_set.R | no_license | vdkoen/SNP-select | R | false | false | 4,137 | r | library("ape")
# setwd("/home/kdi/Filezilla_download/GRASSEN/algorithm")
arg_options <- commandArgs(trailingOnly = TRUE)
arg_options
dir.create(file.path(arg_options[2]))
file_name<- basename(arg_options[1])
# bed_name <- strsplit(file_name, ".", fixed = TRUE)
snps_file <- paste("snps_", arg_options[3],"_", file_name, sep = "") # Make output string
matrix_file <- paste("matrix_", arg_options[3],"_", file_name, sep = "")
# bed_file <- paste("bed_", arg_options[3], "_", bed_name[[1]][1],".bed", sep = "")
snps_out <- file.path(arg_options[2], snps_file) # Make output dir and name in string.
matrix_out <- file.path(arg_options[2], matrix_file)
input_file <- arg_options[1]
stop_quietly <- function() {
opt <- options(show.error.messages = FALSE)
on.exit(options(opt))
stop()
}
# snps_out <- "/home/kdi/Filezilla_download/GRASSEN/algorithm/test_snps_3_test.csv" # Make output dir and name in string.
# matrix_out <- "/home/kdi/Filezilla_download/GRASSEN/algorithm/test_matrix_3_test.csv"
# input_file <- "/home/kdi/Pictures/allel_frequencies_grassen.csv"
df <- read.table(input_file, sep = "\t", na.strings = "-", header = TRUE, row.names = 1)
# if a pair file is specified merge the pairs.
if (length(arg_options) > 3 ){
fc <- file(arg_options[4])
mylist <- strsplit(readLines(fc), ",")
for (pair in mylist){
nm <- (unlist(pair))
df <- df[!rownames(df) %in% nm[-1], ]
new_name <- paste(nm, collapse = "")
row_index <- match(nm[1], rownames(df))
row.names(df)[row_index] <- new_name
}
rownames(df)
}
# value = 1 # if one than it makes minimal snp set, if 3 their have to be atleast 3 snp's that seperate the groups.
#
value <- as.numeric(arg_options[3])
output_files <- function(final_markers, gene_dist){
write.table(final_markers, file = snps_out, sep = "\t", na = "-", row.names = TRUE, col.names = NA)
gene_dist <- as.matrix(gene_dist)
gene_dist[upper.tri(gene_dist)] <- NA
write.table(gene_dist, file = matrix_out, sep = "\t", na = "", col.names = NA)
}
snp_selection <- c(1)# snp to start with. the one with the highest entropy
previous_value <- 0
previous_check <- 0
check = 0.2
while(check != value + 1){
null_list <- c()
snp_number <- c()
for (i in 1:length(df)){
if (any(snp_selection==i) == FALSE){
selection <- df[-1 ,c(snp_selection,i)] #new dataframe with extra snp(i)
gene_dist <- dist(selection) #calc distance with new snp
gene_dist[is.na(gene_dist)] <- 0
if (sum(gene_dist < value) == 0){ # if there are no 0's in the pairwise comparson they are all unique, meaning we found a snp set.
cat("Minimal snp_set found! ")
cat("Containing", length(snp_selection) + 1, "snps", "\n")
snp_selection <<- c(snp_selection, i)
final_markers <<- df[,c(snp_selection)]
output_files(final_markers, gene_dist)
# stop_quietly()
quit()
}
null_list <- c(null_list, sum(gene_dist < check)) #save results of the additional snp(i), this is the amount of 0's
snp_number <- c(snp_number, i) # save snp numbers, so we can relate it back to its performance
}
}
# if ( (sum(gene_dist < check) >= previous_value) & (check == previous_check) ){
# print("No optimal soluion found")
# cat("Closest solution contains", length(snp_selection), "snps", "\n")
# cat("Lowest distance: ", min(gene_dist), "\n")
# final_markers <<- df[,c(snp_selection)]
# output_files(final_markers, gene_dist)
# quit()
# # stop_quietly()
# }
previous_value <- sum(gene_dist < check)
previous_check <- check
if (length(unique(null_list)) != 1){ #skip a non informative snp selection.
best_hit <- which.min(null_list) # get the best hit with highest maf score
snp_selection <- c(snp_selection,snp_number[best_hit]) # add the best hit to the snp set and iterate again.
}
print(sum(gene_dist < check))
if (sum(gene_dist < check) == 0){
check <- check + 0.2
}
}
# hc <- hclust(gene_dist) # apply hirarchical clustering
# plot(hc)
#gene_dist
#sum(gene_dist)
|
report_quality_assurance <- function(report) {
column_names <- c(
"instance_num", "pay", "assert_all_are_greater_than_or_equal_to", "change", "cost_so_far",
"AUC_holdout", "full_AUC", "subset_AUC"
)
assertive::assert_are_intersecting_sets(report %>% colnames(), column_names)
assertive::assert_all_are_not_na(report %>% select(instance_num, batch))
assertive::assert_all_are_greater_than_or_equal_to(report %>% .$batch %>% diff(), 0)
assertive::assert_all_are_greater_than_or_equal_to(report %>% .$instance_num %>% diff(), 0)
}
| /code/R/report_quality_assurance.R | no_license | ruijiang81/crowdsourcing | R | false | false | 568 | r | report_quality_assurance <- function(report) {
column_names <- c(
"instance_num", "pay", "assert_all_are_greater_than_or_equal_to", "change", "cost_so_far",
"AUC_holdout", "full_AUC", "subset_AUC"
)
assertive::assert_are_intersecting_sets(report %>% colnames(), column_names)
assertive::assert_all_are_not_na(report %>% select(instance_num, batch))
assertive::assert_all_are_greater_than_or_equal_to(report %>% .$batch %>% diff(), 0)
assertive::assert_all_are_greater_than_or_equal_to(report %>% .$instance_num %>% diff(), 0)
}
|
\alias{gtk-High-level-Printing-API}
\alias{GtkPrintOperation}
\alias{GtkPrintOperationPreview}
\alias{gtkPrintOperation}
\alias{GtkPageSetupDoneFunc}
\alias{GtkPrintStatus}
\alias{GtkPrintOperationAction}
\alias{GtkPrintOperationResult}
\alias{GtkPrintError}
\name{gtk-High-level-Printing-API}
\title{GtkPrintOperation}
\description{High-level Printing API}
\section{Methods and Functions}{
\code{\link{gtkPrintOperationNew}()}\cr
\code{\link{gtkPrintOperationSetAllowAsync}(object, allow.async)}\cr
\code{\link{gtkPrintOperationGetError}(object, .errwarn = TRUE)}\cr
\code{\link{gtkPrintOperationSetDefaultPageSetup}(object, default.page.setup = NULL)}\cr
\code{\link{gtkPrintOperationGetDefaultPageSetup}(object)}\cr
\code{\link{gtkPrintOperationSetPrintSettings}(object, print.settings = NULL)}\cr
\code{\link{gtkPrintOperationGetPrintSettings}(object)}\cr
\code{\link{gtkPrintOperationSetJobName}(object, job.name)}\cr
\code{\link{gtkPrintOperationSetNPages}(object, n.pages)}\cr
\code{\link{gtkPrintOperationGetNPagesToPrint}(object)}\cr
\code{\link{gtkPrintOperationSetCurrentPage}(object, current.page)}\cr
\code{\link{gtkPrintOperationSetUseFullPage}(object, full.page)}\cr
\code{\link{gtkPrintOperationSetUnit}(object, unit)}\cr
\code{\link{gtkPrintOperationSetExportFilename}(object, filename)}\cr
\code{\link{gtkPrintOperationSetShowProgress}(object, show.progress)}\cr
\code{\link{gtkPrintOperationSetTrackPrintStatus}(object, track.status)}\cr
\code{\link{gtkPrintOperationSetCustomTabLabel}(object, label)}\cr
\code{\link{gtkPrintOperationRun}(object, action, parent = NULL, .errwarn = TRUE)}\cr
\code{\link{gtkPrintOperationCancel}(object)}\cr
\code{\link{gtkPrintOperationDrawPageFinish}(object)}\cr
\code{\link{gtkPrintOperationSetDeferDrawing}(object)}\cr
\code{\link{gtkPrintOperationGetStatus}(object)}\cr
\code{\link{gtkPrintOperationGetStatusString}(object)}\cr
\code{\link{gtkPrintOperationIsFinished}(object)}\cr
\code{\link{gtkPrintOperationSetSupportSelection}(object, support.selection)}\cr
\code{\link{gtkPrintOperationGetSupportSelection}(object)}\cr
\code{\link{gtkPrintOperationSetHasSelection}(object, has.selection)}\cr
\code{\link{gtkPrintOperationGetHasSelection}(object)}\cr
\code{\link{gtkPrintOperationSetEmbedPageSetup}(object, embed)}\cr
\code{\link{gtkPrintOperationGetEmbedPageSetup}(object)}\cr
\code{\link{gtkPrintRunPageSetupDialog}(parent, page.setup = NULL, settings)}\cr
\code{\link{gtkPrintRunPageSetupDialogAsync}(parent, page.setup, settings, done.cb, data)}\cr
\code{\link{gtkPrintOperationPreviewEndPreview}(object)}\cr
\code{\link{gtkPrintOperationPreviewIsSelected}(object, page.nr)}\cr
\code{\link{gtkPrintOperationPreviewRenderPage}(object, page.nr)}\cr
\code{gtkPrintOperation()}
}
\section{Hierarchy}{\preformatted{
GObject
+----GtkPrintOperation
GInterface
+----GtkPrintOperationPreview
}}
\section{Implementations}{GtkPrintOperationPreview is implemented by
\code{\link{GtkPrintOperation}}.}
\section{Interfaces}{GtkPrintOperation implements
\code{\link{GtkPrintOperationPreview}}.}
\section{Detailed Description}{GtkPrintOperation is the high-level, portable printing API. It looks
a bit different than other GTK+ dialogs such as the \code{\link{GtkFileChooser}},
since some platforms don't expose enough infrastructure to implement
a good print dialog. On such platforms, GtkPrintOperation uses the
native print dialog. On platforms which do not provide a native
print dialog, GTK+ uses its own, see \verb{GtkPrintUnixDialog}.
The typical way to use the high-level printing API is to create a
\code{\link{GtkPrintOperation}} object with \code{\link{gtkPrintOperationNew}} when the user
selects to print. Then you set some properties on it, e.g. the page size,
any \code{\link{GtkPrintSettings}} from previous print operations, the number of pages,
the current page, etc.
Then you start the print operation by calling \code{\link{gtkPrintOperationRun}}.
It will then show a dialog, let the user select a printer and options.
When the user finished the dialog various signals will be emitted on the
\code{\link{GtkPrintOperation}}, the main one being ::draw-page, which you are supposed
to catch and render the page on the provided \code{\link{GtkPrintContext}} using Cairo.
\emph{The high-level printing API}
\preformatted{
settings <- NULL
print_something <-
{
op <- gtkPrintOperation()
if (!is.null(settings))
op$setPrintSettings(settings)
gSignalConnect(op, "begin_print", begin_print)
gSignalConnect(op, "draw_page", draw_page)
res <- op$run("print-dialog", main_window)[[1]]
if (res == "apply")
settings <- op$getPrintSettings()
}
}
By default GtkPrintOperation uses an external application to do
print preview. To implement a custom print preview, an application
must connect to the preview signal. The functions
\code{gtkPrintOperationPrintPreviewRenderPage()},
\code{\link{gtkPrintOperationPreviewEndPreview}} and
\code{\link{gtkPrintOperationPreviewIsSelected}} are useful
when implementing a print preview.
Printing support was added in GTK+ 2.10.}
\section{Structures}{\describe{
\item{\verb{GtkPrintOperation}}{
\emph{undocumented
}
}
\item{\verb{GtkPrintOperationPreview}}{
\emph{undocumented
}
}
}}
\section{Convenient Construction}{\code{gtkPrintOperation} is the equivalent of \code{\link{gtkPrintOperationNew}}.}
\section{Enums and Flags}{\describe{
\item{\verb{GtkPrintStatus}}{
The status gives a rough indication of the completion
of a running print operation.
\describe{
\item{\verb{initial}}{The printing has not started yet; this
status is set initially, and while the print dialog is shown.}
\item{\verb{preparing}}{This status is set while the begin-print
signal is emitted and during pagination.}
\item{\verb{generating-data}}{This status is set while the
pages are being rendered.}
\item{\verb{sending-data}}{The print job is being sent off to the
printer.}
\item{\verb{pending}}{The print job has been sent to the printer,
but is not printed for some reason, e.g. the printer may be stopped.}
\item{\verb{pending-issue}}{Some problem has occurred during
printing, e.g. a paper jam.}
\item{\verb{printing}}{The printer is processing the print job.}
\item{\verb{finished}}{The printing has been completed successfully.}
\item{\verb{finished-aborted}}{The printing has been aborted.}
}
}
\item{\verb{GtkPrintOperationAction}}{
The \code{action} parameter to \code{\link{gtkPrintOperationRun}}
determines what action the print operation should perform.
\describe{
\item{\verb{print-dialog}}{Show the print dialog.}
\item{\verb{print}}{Start to print without showing
the print dialog, based on the current print settings.}
\item{\verb{preview}}{Show the print preview.}
\item{\verb{export}}{Export to a file. This requires
the export-filename property to be set.}
}
}
\item{\verb{GtkPrintOperationResult}}{
A value of this type is returned by \code{\link{gtkPrintOperationRun}}.
\describe{
\item{\verb{error}}{An error has occured.}
\item{\verb{apply}}{The print settings should be stored.}
\item{\verb{cancel}}{The print operation has been canceled,
the print settings should not be stored.}
\item{\verb{in-progress}}{The print operation is not complete
yet. This value will only be returned when running asynchronously.}
}
}
\item{\verb{GtkPrintError}}{
Error codes that identify various errors that can occur while
using the GTK+ printing support.
\describe{
\item{\verb{general}}{An unspecified error occurred.}
\item{\verb{internal-error}}{An internal error occurred.}
\item{\verb{nomem}}{A memory allocation failed.}
}
}
}}
\section{User Functions}{\describe{\item{\code{GtkPageSetupDoneFunc(page.setup, data)}}{
The type of function that is passed to \code{\link{gtkPrintRunPageSetupDialogAsync}}.
This function will be called when the page setup dialog is dismissed, and
also serves as destroy notify for \code{data}.
\describe{
\item{\code{page.setup}}{the \code{\link{GtkPageSetup}} that has been}
\item{\code{data}}{user data that has been passed to
\code{\link{gtkPrintRunPageSetupDialogAsync}}.}
}
}}}
\section{Signals}{\describe{
\item{\code{begin-print(operation, context, user.data)}}{
Emitted after the user has finished changing print settings
in the dialog, before the actual rendering starts.
A typical use for ::begin-print is to use the parameters from the
\code{\link{GtkPrintContext}} and paginate the document accordingly, and then
set the number of pages with \code{\link{gtkPrintOperationSetNPages}}.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{create-custom-widget(operation, user.data)}}{
Emitted when displaying the print dialog. If you return a
widget in a handler for this signal it will be added to a custom
tab in the print dialog. You typically return a container widget
with multiple widgets in it.
The print dialog owns the returned widget, and its lifetime is not
controlled by the application. However, the widget is guaranteed
to stay around until the \verb{"custom-widget-apply"}
signal is emitted on the operation. Then you can read out any
information you need from the widgets.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
\emph{Returns:} [\code{\link{GObject}}] A custom widget that gets embedded in the print dialog,
or \code{NULL}
}
\item{\code{custom-widget-apply(operation, widget, user.data)}}{
Emitted right before \verb{"begin-print"} if you added
a custom widget in the \verb{"create-custom-widget"} handler.
When you get this signal you should read the information from the
custom widgets, as the widgets are not guaraneed to be around at a
later time.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{widget}}{the custom widget added in create-custom-widget}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{done(operation, result, user.data)}}{
Emitted when the print operation run has finished doing
everything required for printing.
\code{result} gives you information about what happened during the run.
If \code{result} is \code{GTK_PRINT_OPERATION_RESULT_ERROR} then you can call
\code{\link{gtkPrintOperationGetError}} for more information.
If you enabled print status tracking then
\code{\link{gtkPrintOperationIsFinished}} may still return \code{FALSE}
after \verb{"done"} was emitted.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{result}}{the result of the print operation}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{draw-page(operation, context, page.nr, user.data)}}{
Emitted for every page that is printed. The signal handler
must render the \code{page.nr}'s page onto the cairo context obtained
from \code{context} using \code{\link{gtkPrintContextGetCairoContext}}.
\preformatted{
draw_page <- (operation, context, page_nr, user_data)
{
cr <- context$getCairoContext()
width <- context$getWidth()
cr$rectangle(0, 0, width, HEADER_HEIGHT)
cr$setSourceRgb(0.8, 0.8, 0.8)
cr$fill()
layout <- context$createPangoLayout()
desc <- pangoFontDescriptionFromString("sans 14")
layout$setFontDescription(desc)
layout$setText("some text")
layout$setWidth(width)
layout$setAlignment(layout, "center")
layout_height <- layout$getSize()$height
text_height <- layout_height / PANGO_SCALE
cr$moveTo(width / 2, (HEADER_HEIGHT - text_height) / 2)
pangoCairoShowLayout(cr, layout)
}
}
Use \code{\link{gtkPrintOperationSetUseFullPage}} and
\code{\link{gtkPrintOperationSetUnit}} before starting the print operation
to set up the transformation of the cairo context according to your
needs.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{page.nr}}{the number of the currently printed page (0-based)}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{end-print(operation, context, user.data)}}{
Emitted after all pages have been rendered.
A handler for this signal can clean up any resources that have
been allocated in the \verb{"begin-print"} handler.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{paginate(operation, context, user.data)}}{
Emitted after the \verb{"begin-print"} signal, but before
the actual rendering starts. It keeps getting emitted until a connected
signal handler returns \code{TRUE}.
The ::paginate signal is intended to be used for paginating a document
in small chunks, to avoid blocking the user interface for a long
time. The signal handler should update the number of pages using
\code{\link{gtkPrintOperationSetNPages}}, and return \code{TRUE} if the document
has been completely paginated.
If you don't need to do pagination in chunks, you can simply do
it all in the ::begin-print handler, and set the number of pages
from there.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
\emph{Returns:} [logical] \code{TRUE} if pagination is complete
}
\item{\code{preview(operation, preview, context, parent, user.data)}}{
Gets emitted when a preview is requested from the native dialog.
The default handler for this signal uses an external viewer
application to preview.
To implement a custom print preview, an application must return
\code{TRUE} from its handler for this signal. In order to use the
provided \code{context} for the preview implementation, it must be
given a suitable cairo context with \code{\link{gtkPrintContextSetCairoContext}}.
The custom preview implementation can use
\code{\link{gtkPrintOperationPreviewIsSelected}} and
\code{\link{gtkPrintOperationPreviewRenderPage}} to find pages which
are selected for print and render them. The preview must be
finished by calling \code{\link{gtkPrintOperationPreviewEndPreview}}
(typically in response to the user clicking a close button).
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{preview}}{the \verb{GtkPrintPreviewOperation} for the current operation}
\item{\code{context}}{the \code{\link{GtkPrintContext}} that will be used}
\item{\code{parent}}{the \code{\link{GtkWindow}} to use as window parent, or \code{NULL}. \emph{[ \acronym{allow-none} ]}}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
\emph{Returns:} [logical] \code{TRUE} if the listener wants to take over control of the preview
}
\item{\code{request-page-setup(operation, context, page.nr, setup, user.data)}}{
Emitted once for every page that is printed, to give
the application a chance to modify the page setup. Any changes
done to \code{setup} will be in force only for printing this page.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{page.nr}}{the number of the currently printed page (0-based)}
\item{\code{setup}}{the \code{\link{GtkPageSetup}}}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{status-changed(operation, user.data)}}{
Emitted at between the various phases of the print operation.
See \code{\link{GtkPrintStatus}} for the phases that are being discriminated.
Use \code{\link{gtkPrintOperationGetStatus}} to find out the current
status.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{update-custom-widget(operation, widget, setup, settings, user.data)}}{
Emitted after change of selected printer. The actual page setup and
print settings are passed to the custom widget, which can actualize
itself according to this change.
Since 2.18
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{widget}}{the custom widget added in create-custom-widget}
\item{\code{setup}}{actual page setup}
\item{\code{settings}}{actual print settings}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{got-page-size(preview, context, page.setup, user.data)}}{
The ::got-page-size signal is emitted once for each page
that gets rendered to the preview.
A handler for this signal should update the \code{context}
according to \code{page.setup} and set up a suitable cairo
context, using \code{\link{gtkPrintContextSetCairoContext}}.
\describe{
\item{\code{preview}}{the object on which the signal is emitted}
\item{\code{context}}{the current \code{\link{GtkPrintContext}}}
\item{\code{page.setup}}{the \code{\link{GtkPageSetup}} for the current page}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{ready(preview, context, user.data)}}{
The ::ready signal gets emitted once per preview operation,
before the first page is rendered.
A handler for this signal can be used for setup tasks.
\describe{
\item{\code{preview}}{the object on which the signal is emitted}
\item{\code{context}}{the current \code{\link{GtkPrintContext}}}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
}}
\section{Properties}{\describe{
\item{\verb{allow-async} [logical : Read / Write]}{
Determines whether the print operation may run asynchronously or not.
Some systems don't support asynchronous printing, but those that do
will return \code{GTK_PRINT_OPERATION_RESULT_IN_PROGRESS} as the status, and
emit the \verb{"done"} signal when the operation is actually
done.
The Windows port does not support asynchronous operation at all (this
is unlikely to change). On other platforms, all actions except for
\code{GTK_PRINT_OPERATION_ACTION_EXPORT} support asynchronous operation.
Default value: FALSE Since 2.10
}
\item{\verb{current-page} [integer : Read / Write]}{
The current page in the document.
If this is set before \code{\link{gtkPrintOperationRun}},
the user will be able to select to print only the current page.
Note that this only makes sense for pre-paginated documents.
Allowed values: >= -1 Default value: -1 Since 2.10
}
\item{\verb{custom-tab-label} [character : * : Read / Write]}{
Used as the label of the tab containing custom widgets.
Note that this property may be ignored on some platforms.
If this is \code{NULL}, GTK+ uses a default label.
Default value: NULL Since 2.10
}
\item{\verb{default-page-setup} [\code{\link{GtkPageSetup}} : * : Read / Write]}{
The \code{\link{GtkPageSetup}} used by default.
This page setup will be used by \code{\link{gtkPrintOperationRun}},
but it can be overridden on a per-page basis by connecting
to the \verb{"request-page-setup"} signal.
Since 2.10
}
\item{\verb{embed-page-setup} [logical : Read / Write]}{
If \code{TRUE}, page size combo box and orientation combo box are embedded into page setup page.
Default value: FALSE Since 2.18
}
\item{\verb{export-filename} [character : * : Read / Write]}{
The name of a file to generate instead of showing the print dialog.
Currently, PDF is the only supported format.
The intended use of this property is for implementing
"Export to PDF" actions.
"Print to PDF" support is independent of this and is done
by letting the user pick the "Print to PDF" item from the
list of printers in the print dialog.
Default value: NULL Since 2.10
}
\item{\verb{has-selection} [logical : Read / Write]}{
Determines whether there is a selection in your application.
This can allow your application to print the selection.
This is typically used to make a "Selection" button sensitive.
Default value: FALSE Since 2.18
}
\item{\verb{job-name} [character : * : Read / Write]}{
A string used to identify the job (e.g. in monitoring
applications like eggcups).
If you don't set a job name, GTK+ picks a default one
by numbering successive print jobs.
Default value: "" Since 2.10
}
\item{\verb{n-pages} [integer : Read / Write]}{
The number of pages in the document.
This \emph{must} be set to a positive number
before the rendering starts. It may be set in a
\verb{"begin-print"} signal hander.
Note that the page numbers passed to the
\verb{"request-page-setup"} and
\verb{"draw-page"} signals are 0-based, i.e. if
the user chooses to print all pages, the last ::draw-page signal
will be for page \code{n.pages} - 1.
Allowed values: >= -1 Default value: -1 Since 2.10
}
\item{\verb{n-pages-to-print} [integer : Read]}{
The number of pages that will be printed.
Note that this value is set during print preparation phase
(\code{GTK_PRINT_STATUS_PREPARING}), so this value should never be
get before the data generation phase (\code{GTK_PRINT_STATUS_GENERATING_DATA}).
You can connect to the \verb{"status-changed"} signal
and call \code{\link{gtkPrintOperationGetNPagesToPrint}} when
print status is \code{GTK_PRINT_STATUS_GENERATING_DATA}.
This is typically used to track the progress of print operation.
Allowed values: >= -1 Default value: -1 Since 2.18
}
\item{\verb{print-settings} [\code{\link{GtkPrintSettings}} : * : Read / Write]}{
The \code{\link{GtkPrintSettings}} used for initializing the dialog.
Setting this property is typically used to re-establish
print settings from a previous print operation, see
\code{\link{gtkPrintOperationRun}}.
Since 2.10
}
\item{\verb{show-progress} [logical : Read / Write]}{
Determines whether to show a progress dialog during the
print operation.
Default value: FALSE Since 2.10
}
\item{\verb{status} [\code{\link{GtkPrintStatus}} : Read]}{
The status of the print operation.
Default value: GTK_PRINT_STATUS_INITIAL Since 2.10
}
\item{\verb{status-string} [character : * : Read]}{
A string representation of the status of the print operation.
The string is translated and suitable for displaying the print
status e.g. in a \code{\link{GtkStatusbar}}.
See the \verb{"status"} property for a status value that
is suitable for programmatic use.
Default value: "" Since 2.10
}
\item{\verb{support-selection} [logical : Read / Write]}{
If \code{TRUE}, the print operation will support print of selection.
This allows the print dialog to show a "Selection" button.
Default value: FALSE Since 2.18
}
\item{\verb{track-print-status} [logical : Read / Write]}{
If \code{TRUE}, the print operation will try to continue report on
the status of the print job in the printer queues and printer.
This can allow your application to show things like "out of paper"
issues, and when the print job actually reaches the printer.
However, this is often implemented using polling, and should
not be enabled unless needed.
Default value: FALSE Since 2.10
}
\item{\verb{unit} [\code{\link{GtkUnit}} : Read / Write]}{
The transformation for the cairo context obtained from
\code{\link{GtkPrintContext}} is set up in such a way that distances
are measured in units of \code{unit}.
Default value: GTK_UNIT_PIXEL Since 2.10
}
\item{\verb{use-full-page} [logical : Read / Write]}{
If \code{TRUE}, the transformation for the cairo context obtained
from \code{\link{GtkPrintContext}} puts the origin at the top left corner
of the page (which may not be the top left corner of the sheet,
depending on page orientation and the number of pages per sheet).
Otherwise, the origin is at the top left corner of the imageable
area (i.e. inside the margins).
Default value: FALSE Since 2.10
}
}}
\references{\url{https://developer-old.gnome.org/gtk2/stable/gtk2-High-level-Printing-API.html}}
\author{Derived by RGtkGen from GTK+ documentation}
\seealso{\code{\link{GtkPrintContext}}}
\keyword{internal}
| /RGtk2/man/gtk-High-level-Printing-API.Rd | no_license | lawremi/RGtk2 | R | false | false | 24,827 | rd | \alias{gtk-High-level-Printing-API}
\alias{GtkPrintOperation}
\alias{GtkPrintOperationPreview}
\alias{gtkPrintOperation}
\alias{GtkPageSetupDoneFunc}
\alias{GtkPrintStatus}
\alias{GtkPrintOperationAction}
\alias{GtkPrintOperationResult}
\alias{GtkPrintError}
\name{gtk-High-level-Printing-API}
\title{GtkPrintOperation}
\description{High-level Printing API}
\section{Methods and Functions}{
\code{\link{gtkPrintOperationNew}()}\cr
\code{\link{gtkPrintOperationSetAllowAsync}(object, allow.async)}\cr
\code{\link{gtkPrintOperationGetError}(object, .errwarn = TRUE)}\cr
\code{\link{gtkPrintOperationSetDefaultPageSetup}(object, default.page.setup = NULL)}\cr
\code{\link{gtkPrintOperationGetDefaultPageSetup}(object)}\cr
\code{\link{gtkPrintOperationSetPrintSettings}(object, print.settings = NULL)}\cr
\code{\link{gtkPrintOperationGetPrintSettings}(object)}\cr
\code{\link{gtkPrintOperationSetJobName}(object, job.name)}\cr
\code{\link{gtkPrintOperationSetNPages}(object, n.pages)}\cr
\code{\link{gtkPrintOperationGetNPagesToPrint}(object)}\cr
\code{\link{gtkPrintOperationSetCurrentPage}(object, current.page)}\cr
\code{\link{gtkPrintOperationSetUseFullPage}(object, full.page)}\cr
\code{\link{gtkPrintOperationSetUnit}(object, unit)}\cr
\code{\link{gtkPrintOperationSetExportFilename}(object, filename)}\cr
\code{\link{gtkPrintOperationSetShowProgress}(object, show.progress)}\cr
\code{\link{gtkPrintOperationSetTrackPrintStatus}(object, track.status)}\cr
\code{\link{gtkPrintOperationSetCustomTabLabel}(object, label)}\cr
\code{\link{gtkPrintOperationRun}(object, action, parent = NULL, .errwarn = TRUE)}\cr
\code{\link{gtkPrintOperationCancel}(object)}\cr
\code{\link{gtkPrintOperationDrawPageFinish}(object)}\cr
\code{\link{gtkPrintOperationSetDeferDrawing}(object)}\cr
\code{\link{gtkPrintOperationGetStatus}(object)}\cr
\code{\link{gtkPrintOperationGetStatusString}(object)}\cr
\code{\link{gtkPrintOperationIsFinished}(object)}\cr
\code{\link{gtkPrintOperationSetSupportSelection}(object, support.selection)}\cr
\code{\link{gtkPrintOperationGetSupportSelection}(object)}\cr
\code{\link{gtkPrintOperationSetHasSelection}(object, has.selection)}\cr
\code{\link{gtkPrintOperationGetHasSelection}(object)}\cr
\code{\link{gtkPrintOperationSetEmbedPageSetup}(object, embed)}\cr
\code{\link{gtkPrintOperationGetEmbedPageSetup}(object)}\cr
\code{\link{gtkPrintRunPageSetupDialog}(parent, page.setup = NULL, settings)}\cr
\code{\link{gtkPrintRunPageSetupDialogAsync}(parent, page.setup, settings, done.cb, data)}\cr
\code{\link{gtkPrintOperationPreviewEndPreview}(object)}\cr
\code{\link{gtkPrintOperationPreviewIsSelected}(object, page.nr)}\cr
\code{\link{gtkPrintOperationPreviewRenderPage}(object, page.nr)}\cr
\code{gtkPrintOperation()}
}
\section{Hierarchy}{\preformatted{
GObject
+----GtkPrintOperation
GInterface
+----GtkPrintOperationPreview
}}
\section{Implementations}{GtkPrintOperationPreview is implemented by
\code{\link{GtkPrintOperation}}.}
\section{Interfaces}{GtkPrintOperation implements
\code{\link{GtkPrintOperationPreview}}.}
\section{Detailed Description}{GtkPrintOperation is the high-level, portable printing API. It looks
a bit different than other GTK+ dialogs such as the \code{\link{GtkFileChooser}},
since some platforms don't expose enough infrastructure to implement
a good print dialog. On such platforms, GtkPrintOperation uses the
native print dialog. On platforms which do not provide a native
print dialog, GTK+ uses its own, see \verb{GtkPrintUnixDialog}.
The typical way to use the high-level printing API is to create a
\code{\link{GtkPrintOperation}} object with \code{\link{gtkPrintOperationNew}} when the user
selects to print. Then you set some properties on it, e.g. the page size,
any \code{\link{GtkPrintSettings}} from previous print operations, the number of pages,
the current page, etc.
Then you start the print operation by calling \code{\link{gtkPrintOperationRun}}.
It will then show a dialog, let the user select a printer and options.
When the user finished the dialog various signals will be emitted on the
\code{\link{GtkPrintOperation}}, the main one being ::draw-page, which you are supposed
to catch and render the page on the provided \code{\link{GtkPrintContext}} using Cairo.
\emph{The high-level printing API}
\preformatted{
settings <- NULL
print_something <-
{
op <- gtkPrintOperation()
if (!is.null(settings))
op$setPrintSettings(settings)
gSignalConnect(op, "begin_print", begin_print)
gSignalConnect(op, "draw_page", draw_page)
res <- op$run("print-dialog", main_window)[[1]]
if (res == "apply")
settings <- op$getPrintSettings()
}
}
By default GtkPrintOperation uses an external application to do
print preview. To implement a custom print preview, an application
must connect to the preview signal. The functions
\code{gtkPrintOperationPrintPreviewRenderPage()},
\code{\link{gtkPrintOperationPreviewEndPreview}} and
\code{\link{gtkPrintOperationPreviewIsSelected}} are useful
when implementing a print preview.
Printing support was added in GTK+ 2.10.}
\section{Structures}{\describe{
\item{\verb{GtkPrintOperation}}{
\emph{undocumented
}
}
\item{\verb{GtkPrintOperationPreview}}{
\emph{undocumented
}
}
}}
\section{Convenient Construction}{\code{gtkPrintOperation} is the equivalent of \code{\link{gtkPrintOperationNew}}.}
\section{Enums and Flags}{\describe{
\item{\verb{GtkPrintStatus}}{
The status gives a rough indication of the completion
of a running print operation.
\describe{
\item{\verb{initial}}{The printing has not started yet; this
status is set initially, and while the print dialog is shown.}
\item{\verb{preparing}}{This status is set while the begin-print
signal is emitted and during pagination.}
\item{\verb{generating-data}}{This status is set while the
pages are being rendered.}
\item{\verb{sending-data}}{The print job is being sent off to the
printer.}
\item{\verb{pending}}{The print job has been sent to the printer,
but is not printed for some reason, e.g. the printer may be stopped.}
\item{\verb{pending-issue}}{Some problem has occurred during
printing, e.g. a paper jam.}
\item{\verb{printing}}{The printer is processing the print job.}
\item{\verb{finished}}{The printing has been completed successfully.}
\item{\verb{finished-aborted}}{The printing has been aborted.}
}
}
\item{\verb{GtkPrintOperationAction}}{
The \code{action} parameter to \code{\link{gtkPrintOperationRun}}
determines what action the print operation should perform.
\describe{
\item{\verb{print-dialog}}{Show the print dialog.}
\item{\verb{print}}{Start to print without showing
the print dialog, based on the current print settings.}
\item{\verb{preview}}{Show the print preview.}
\item{\verb{export}}{Export to a file. This requires
the export-filename property to be set.}
}
}
\item{\verb{GtkPrintOperationResult}}{
A value of this type is returned by \code{\link{gtkPrintOperationRun}}.
\describe{
\item{\verb{error}}{An error has occured.}
\item{\verb{apply}}{The print settings should be stored.}
\item{\verb{cancel}}{The print operation has been canceled,
the print settings should not be stored.}
\item{\verb{in-progress}}{The print operation is not complete
yet. This value will only be returned when running asynchronously.}
}
}
\item{\verb{GtkPrintError}}{
Error codes that identify various errors that can occur while
using the GTK+ printing support.
\describe{
\item{\verb{general}}{An unspecified error occurred.}
\item{\verb{internal-error}}{An internal error occurred.}
\item{\verb{nomem}}{A memory allocation failed.}
}
}
}}
\section{User Functions}{\describe{\item{\code{GtkPageSetupDoneFunc(page.setup, data)}}{
The type of function that is passed to \code{\link{gtkPrintRunPageSetupDialogAsync}}.
This function will be called when the page setup dialog is dismissed, and
also serves as destroy notify for \code{data}.
\describe{
\item{\code{page.setup}}{the \code{\link{GtkPageSetup}} that has been}
\item{\code{data}}{user data that has been passed to
\code{\link{gtkPrintRunPageSetupDialogAsync}}.}
}
}}}
\section{Signals}{\describe{
\item{\code{begin-print(operation, context, user.data)}}{
Emitted after the user has finished changing print settings
in the dialog, before the actual rendering starts.
A typical use for ::begin-print is to use the parameters from the
\code{\link{GtkPrintContext}} and paginate the document accordingly, and then
set the number of pages with \code{\link{gtkPrintOperationSetNPages}}.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{create-custom-widget(operation, user.data)}}{
Emitted when displaying the print dialog. If you return a
widget in a handler for this signal it will be added to a custom
tab in the print dialog. You typically return a container widget
with multiple widgets in it.
The print dialog owns the returned widget, and its lifetime is not
controlled by the application. However, the widget is guaranteed
to stay around until the \verb{"custom-widget-apply"}
signal is emitted on the operation. Then you can read out any
information you need from the widgets.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
\emph{Returns:} [\code{\link{GObject}}] A custom widget that gets embedded in the print dialog,
or \code{NULL}
}
\item{\code{custom-widget-apply(operation, widget, user.data)}}{
Emitted right before \verb{"begin-print"} if you added
a custom widget in the \verb{"create-custom-widget"} handler.
When you get this signal you should read the information from the
custom widgets, as the widgets are not guaraneed to be around at a
later time.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{widget}}{the custom widget added in create-custom-widget}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{done(operation, result, user.data)}}{
Emitted when the print operation run has finished doing
everything required for printing.
\code{result} gives you information about what happened during the run.
If \code{result} is \code{GTK_PRINT_OPERATION_RESULT_ERROR} then you can call
\code{\link{gtkPrintOperationGetError}} for more information.
If you enabled print status tracking then
\code{\link{gtkPrintOperationIsFinished}} may still return \code{FALSE}
after \verb{"done"} was emitted.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{result}}{the result of the print operation}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{draw-page(operation, context, page.nr, user.data)}}{
Emitted for every page that is printed. The signal handler
must render the \code{page.nr}'s page onto the cairo context obtained
from \code{context} using \code{\link{gtkPrintContextGetCairoContext}}.
\preformatted{
draw_page <- (operation, context, page_nr, user_data)
{
cr <- context$getCairoContext()
width <- context$getWidth()
cr$rectangle(0, 0, width, HEADER_HEIGHT)
cr$setSourceRgb(0.8, 0.8, 0.8)
cr$fill()
layout <- context$createPangoLayout()
desc <- pangoFontDescriptionFromString("sans 14")
layout$setFontDescription(desc)
layout$setText("some text")
layout$setWidth(width)
layout$setAlignment(layout, "center")
layout_height <- layout$getSize()$height
text_height <- layout_height / PANGO_SCALE
cr$moveTo(width / 2, (HEADER_HEIGHT - text_height) / 2)
pangoCairoShowLayout(cr, layout)
}
}
Use \code{\link{gtkPrintOperationSetUseFullPage}} and
\code{\link{gtkPrintOperationSetUnit}} before starting the print operation
to set up the transformation of the cairo context according to your
needs.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{page.nr}}{the number of the currently printed page (0-based)}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{end-print(operation, context, user.data)}}{
Emitted after all pages have been rendered.
A handler for this signal can clean up any resources that have
been allocated in the \verb{"begin-print"} handler.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{paginate(operation, context, user.data)}}{
Emitted after the \verb{"begin-print"} signal, but before
the actual rendering starts. It keeps getting emitted until a connected
signal handler returns \code{TRUE}.
The ::paginate signal is intended to be used for paginating a document
in small chunks, to avoid blocking the user interface for a long
time. The signal handler should update the number of pages using
\code{\link{gtkPrintOperationSetNPages}}, and return \code{TRUE} if the document
has been completely paginated.
If you don't need to do pagination in chunks, you can simply do
it all in the ::begin-print handler, and set the number of pages
from there.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
\emph{Returns:} [logical] \code{TRUE} if pagination is complete
}
\item{\code{preview(operation, preview, context, parent, user.data)}}{
Gets emitted when a preview is requested from the native dialog.
The default handler for this signal uses an external viewer
application to preview.
To implement a custom print preview, an application must return
\code{TRUE} from its handler for this signal. In order to use the
provided \code{context} for the preview implementation, it must be
given a suitable cairo context with \code{\link{gtkPrintContextSetCairoContext}}.
The custom preview implementation can use
\code{\link{gtkPrintOperationPreviewIsSelected}} and
\code{\link{gtkPrintOperationPreviewRenderPage}} to find pages which
are selected for print and render them. The preview must be
finished by calling \code{\link{gtkPrintOperationPreviewEndPreview}}
(typically in response to the user clicking a close button).
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{preview}}{the \verb{GtkPrintPreviewOperation} for the current operation}
\item{\code{context}}{the \code{\link{GtkPrintContext}} that will be used}
\item{\code{parent}}{the \code{\link{GtkWindow}} to use as window parent, or \code{NULL}. \emph{[ \acronym{allow-none} ]}}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
\emph{Returns:} [logical] \code{TRUE} if the listener wants to take over control of the preview
}
\item{\code{request-page-setup(operation, context, page.nr, setup, user.data)}}{
Emitted once for every page that is printed, to give
the application a chance to modify the page setup. Any changes
done to \code{setup} will be in force only for printing this page.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{context}}{the \code{\link{GtkPrintContext}} for the current operation}
\item{\code{page.nr}}{the number of the currently printed page (0-based)}
\item{\code{setup}}{the \code{\link{GtkPageSetup}}}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{status-changed(operation, user.data)}}{
Emitted at between the various phases of the print operation.
See \code{\link{GtkPrintStatus}} for the phases that are being discriminated.
Use \code{\link{gtkPrintOperationGetStatus}} to find out the current
status.
Since 2.10
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{update-custom-widget(operation, widget, setup, settings, user.data)}}{
Emitted after change of selected printer. The actual page setup and
print settings are passed to the custom widget, which can actualize
itself according to this change.
Since 2.18
\describe{
\item{\code{operation}}{the \code{\link{GtkPrintOperation}} on which the signal was emitted}
\item{\code{widget}}{the custom widget added in create-custom-widget}
\item{\code{setup}}{actual page setup}
\item{\code{settings}}{actual print settings}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{got-page-size(preview, context, page.setup, user.data)}}{
The ::got-page-size signal is emitted once for each page
that gets rendered to the preview.
A handler for this signal should update the \code{context}
according to \code{page.setup} and set up a suitable cairo
context, using \code{\link{gtkPrintContextSetCairoContext}}.
\describe{
\item{\code{preview}}{the object on which the signal is emitted}
\item{\code{context}}{the current \code{\link{GtkPrintContext}}}
\item{\code{page.setup}}{the \code{\link{GtkPageSetup}} for the current page}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
\item{\code{ready(preview, context, user.data)}}{
The ::ready signal gets emitted once per preview operation,
before the first page is rendered.
A handler for this signal can be used for setup tasks.
\describe{
\item{\code{preview}}{the object on which the signal is emitted}
\item{\code{context}}{the current \code{\link{GtkPrintContext}}}
\item{\code{user.data}}{user data set when the signal handler was connected.}
}
}
}}
\section{Properties}{\describe{
\item{\verb{allow-async} [logical : Read / Write]}{
Determines whether the print operation may run asynchronously or not.
Some systems don't support asynchronous printing, but those that do
will return \code{GTK_PRINT_OPERATION_RESULT_IN_PROGRESS} as the status, and
emit the \verb{"done"} signal when the operation is actually
done.
The Windows port does not support asynchronous operation at all (this
is unlikely to change). On other platforms, all actions except for
\code{GTK_PRINT_OPERATION_ACTION_EXPORT} support asynchronous operation.
Default value: FALSE Since 2.10
}
\item{\verb{current-page} [integer : Read / Write]}{
The current page in the document.
If this is set before \code{\link{gtkPrintOperationRun}},
the user will be able to select to print only the current page.
Note that this only makes sense for pre-paginated documents.
Allowed values: >= -1 Default value: -1 Since 2.10
}
\item{\verb{custom-tab-label} [character : * : Read / Write]}{
Used as the label of the tab containing custom widgets.
Note that this property may be ignored on some platforms.
If this is \code{NULL}, GTK+ uses a default label.
Default value: NULL Since 2.10
}
\item{\verb{default-page-setup} [\code{\link{GtkPageSetup}} : * : Read / Write]}{
The \code{\link{GtkPageSetup}} used by default.
This page setup will be used by \code{\link{gtkPrintOperationRun}},
but it can be overridden on a per-page basis by connecting
to the \verb{"request-page-setup"} signal.
Since 2.10
}
\item{\verb{embed-page-setup} [logical : Read / Write]}{
If \code{TRUE}, page size combo box and orientation combo box are embedded into page setup page.
Default value: FALSE Since 2.18
}
\item{\verb{export-filename} [character : * : Read / Write]}{
The name of a file to generate instead of showing the print dialog.
Currently, PDF is the only supported format.
The intended use of this property is for implementing
"Export to PDF" actions.
"Print to PDF" support is independent of this and is done
by letting the user pick the "Print to PDF" item from the
list of printers in the print dialog.
Default value: NULL Since 2.10
}
\item{\verb{has-selection} [logical : Read / Write]}{
Determines whether there is a selection in your application.
This can allow your application to print the selection.
This is typically used to make a "Selection" button sensitive.
Default value: FALSE Since 2.18
}
\item{\verb{job-name} [character : * : Read / Write]}{
A string used to identify the job (e.g. in monitoring
applications like eggcups).
If you don't set a job name, GTK+ picks a default one
by numbering successive print jobs.
Default value: "" Since 2.10
}
\item{\verb{n-pages} [integer : Read / Write]}{
The number of pages in the document.
This \emph{must} be set to a positive number
before the rendering starts. It may be set in a
\verb{"begin-print"} signal hander.
Note that the page numbers passed to the
\verb{"request-page-setup"} and
\verb{"draw-page"} signals are 0-based, i.e. if
the user chooses to print all pages, the last ::draw-page signal
will be for page \code{n.pages} - 1.
Allowed values: >= -1 Default value: -1 Since 2.10
}
\item{\verb{n-pages-to-print} [integer : Read]}{
The number of pages that will be printed.
Note that this value is set during print preparation phase
(\code{GTK_PRINT_STATUS_PREPARING}), so this value should never be
get before the data generation phase (\code{GTK_PRINT_STATUS_GENERATING_DATA}).
You can connect to the \verb{"status-changed"} signal
and call \code{\link{gtkPrintOperationGetNPagesToPrint}} when
print status is \code{GTK_PRINT_STATUS_GENERATING_DATA}.
This is typically used to track the progress of print operation.
Allowed values: >= -1 Default value: -1 Since 2.18
}
\item{\verb{print-settings} [\code{\link{GtkPrintSettings}} : * : Read / Write]}{
The \code{\link{GtkPrintSettings}} used for initializing the dialog.
Setting this property is typically used to re-establish
print settings from a previous print operation, see
\code{\link{gtkPrintOperationRun}}.
Since 2.10
}
\item{\verb{show-progress} [logical : Read / Write]}{
Determines whether to show a progress dialog during the
print operation.
Default value: FALSE Since 2.10
}
\item{\verb{status} [\code{\link{GtkPrintStatus}} : Read]}{
The status of the print operation.
Default value: GTK_PRINT_STATUS_INITIAL Since 2.10
}
\item{\verb{status-string} [character : * : Read]}{
A string representation of the status of the print operation.
The string is translated and suitable for displaying the print
status e.g. in a \code{\link{GtkStatusbar}}.
See the \verb{"status"} property for a status value that
is suitable for programmatic use.
Default value: "" Since 2.10
}
\item{\verb{support-selection} [logical : Read / Write]}{
If \code{TRUE}, the print operation will support print of selection.
This allows the print dialog to show a "Selection" button.
Default value: FALSE Since 2.18
}
\item{\verb{track-print-status} [logical : Read / Write]}{
If \code{TRUE}, the print operation will try to continue report on
the status of the print job in the printer queues and printer.
This can allow your application to show things like "out of paper"
issues, and when the print job actually reaches the printer.
However, this is often implemented using polling, and should
not be enabled unless needed.
Default value: FALSE Since 2.10
}
\item{\verb{unit} [\code{\link{GtkUnit}} : Read / Write]}{
The transformation for the cairo context obtained from
\code{\link{GtkPrintContext}} is set up in such a way that distances
are measured in units of \code{unit}.
Default value: GTK_UNIT_PIXEL Since 2.10
}
\item{\verb{use-full-page} [logical : Read / Write]}{
If \code{TRUE}, the transformation for the cairo context obtained
from \code{\link{GtkPrintContext}} puts the origin at the top left corner
of the page (which may not be the top left corner of the sheet,
depending on page orientation and the number of pages per sheet).
Otherwise, the origin is at the top left corner of the imageable
area (i.e. inside the margins).
Default value: FALSE Since 2.10
}
}}
\references{\url{https://developer-old.gnome.org/gtk2/stable/gtk2-High-level-Printing-API.html}}
\author{Derived by RGtkGen from GTK+ documentation}
\seealso{\code{\link{GtkPrintContext}}}
\keyword{internal}
|
source("includes.R")
########### CONSTANTS #################
outDataFile = "plot_data/geoIP_per_crawl.csv"
processedPeerFiles = "../output_data_crawls/geoIP_processing/"
procCrawls = list.files(processedPeerFiles, pattern=visitedPattern)
CountsPerTS = pblapply(procCrawls, function(pc) {
fdate = extractStartDate(pc)
dat = LoadDT(paste(processedPeerFiles, pc, sep=""), header=T)
dat$ASNO = NULL
dat$ASName = NULL
dat$IP = NULL
dat$agentVersion = NULL
## All peers with only LocalIP
# localIPIndexSet = dat[, .I[.N == 1 && grepl("LocalIP", country, fixed=T)], .(nodeid)][,V1]
localIPIndexSet = dat[, .I[all(grepl("LocalIP", country, fixed=T))], .(nodeid)][,V1]
numLocalIPs = length(unique(dat[localIPIndexSet]$nodeid))
## We want:
## * Count the country if there is one and ignore the LocalIP
## * Take the country with the majority (solve ties)
## * We excluded the localIPs already
## So let's first count the countries for each nodeid
countryCount = dat[(!grepl("LocalIP", country, fixed=T)), .(count =.N), by=c("nodeid", "country", "online")]
## Enter some data.table magic: For each ID, we want the row that
## has the maximum count. .I gives the index in the original data.table
## that fulfills the expression for a given ID.
## This yields a vector of countries which we count with table() and
## give the result back to data.table
ccTmp = countryCount[countryCount[, .I[count == max(count)], by=c("nodeid")][,V1]]
## We resolve duplicates by just taking the first value
ccTmp = ccTmp[ccTmp[, .I[1], .(nodeid, country, online)][,V1]]
tabAll = data.table(table(ccTmp$country))
tabAll = rbindlist(list(tabAll, data.table(V1 = c("LocalIP"), N = c(numLocalIPs))))
tabAll$timestamp = rep(fdate, nrow(tabAll))
tabAll$type = rep("all", nrow(tabAll))
tabReachable = data.table(table(ccTmp[online=="true"]$country))
tabReachable$timestamp = rep(fdate, nrow(tabReachable))
tabReachable$type = rep("reachable", nrow(tabReachable))
# tmpDT = data.table(table(countryCount[countryCount[, .I[count == max(count)], by=c("nodeid")][,V1]]$country))
# tmpDT$timestamp = rep(fdate, nrow(tmpDT))
return(rbindlist(list(tabAll, tabReachable)))
})
## Combine the data tables into one and take the mean+conf int. We deliberately use the number of
## "observations" in terms of time stamps, to not distort the picture.
## By looking at this from a per-crawl-perspective, we avoid over-representation of
## always-on peers. This could happen if we looked at absolute numbers as before.
mcounts = rbindlist(CountsPerTS)
mcounts$N = as.double(mcounts$N)
write.table(mcounts, file=outDataFile, sep=";", row.names=F)
| /eval/geoIP.R | permissive | bonedaddy/ipfs-crawler | R | false | false | 2,673 | r | source("includes.R")
########### CONSTANTS #################
outDataFile = "plot_data/geoIP_per_crawl.csv"
processedPeerFiles = "../output_data_crawls/geoIP_processing/"
procCrawls = list.files(processedPeerFiles, pattern=visitedPattern)
CountsPerTS = pblapply(procCrawls, function(pc) {
fdate = extractStartDate(pc)
dat = LoadDT(paste(processedPeerFiles, pc, sep=""), header=T)
dat$ASNO = NULL
dat$ASName = NULL
dat$IP = NULL
dat$agentVersion = NULL
## All peers with only LocalIP
# localIPIndexSet = dat[, .I[.N == 1 && grepl("LocalIP", country, fixed=T)], .(nodeid)][,V1]
localIPIndexSet = dat[, .I[all(grepl("LocalIP", country, fixed=T))], .(nodeid)][,V1]
numLocalIPs = length(unique(dat[localIPIndexSet]$nodeid))
## We want:
## * Count the country if there is one and ignore the LocalIP
## * Take the country with the majority (solve ties)
## * We excluded the localIPs already
## So let's first count the countries for each nodeid
countryCount = dat[(!grepl("LocalIP", country, fixed=T)), .(count =.N), by=c("nodeid", "country", "online")]
## Enter some data.table magic: For each ID, we want the row that
## has the maximum count. .I gives the index in the original data.table
## that fulfills the expression for a given ID.
## This yields a vector of countries which we count with table() and
## give the result back to data.table
ccTmp = countryCount[countryCount[, .I[count == max(count)], by=c("nodeid")][,V1]]
## We resolve duplicates by just taking the first value
ccTmp = ccTmp[ccTmp[, .I[1], .(nodeid, country, online)][,V1]]
tabAll = data.table(table(ccTmp$country))
tabAll = rbindlist(list(tabAll, data.table(V1 = c("LocalIP"), N = c(numLocalIPs))))
tabAll$timestamp = rep(fdate, nrow(tabAll))
tabAll$type = rep("all", nrow(tabAll))
tabReachable = data.table(table(ccTmp[online=="true"]$country))
tabReachable$timestamp = rep(fdate, nrow(tabReachable))
tabReachable$type = rep("reachable", nrow(tabReachable))
# tmpDT = data.table(table(countryCount[countryCount[, .I[count == max(count)], by=c("nodeid")][,V1]]$country))
# tmpDT$timestamp = rep(fdate, nrow(tmpDT))
return(rbindlist(list(tabAll, tabReachable)))
})
## Combine the data tables into one and take the mean+conf int. We deliberately use the number of
## "observations" in terms of time stamps, to not distort the picture.
## By looking at this from a per-crawl-perspective, we avoid over-representation of
## always-on peers. This could happen if we looked at absolute numbers as before.
mcounts = rbindlist(CountsPerTS)
mcounts$N = as.double(mcounts$N)
write.table(mcounts, file=outDataFile, sep=";", row.names=F)
|
\name{fun.bimodal.fit.ml}
\alias{fun.bimodal.fit.ml}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Finds the final fits using the maximum likelihood estimation for the
bimodal dataset. }
\description{
This is the secondary optimization procedure to evaluate the final bimodal
distribution fits using the maximum likelihood. It usually relies on initial
values found by \code{fun.bimodal.init} function.
}
\usage{
fun.bimodal.fit.ml(data, first.fit, second.fit, prop, param1, param2, selc1,
selc2)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{data}{ Dataset to be fitted.}
\item{first.fit}{ The distribution parameters or the initial values of the
first distribution fit. }
\item{second.fit}{ The distribution parameters or the initial values of the
second distribution fit. }
\item{prop}{ The proportion of the data set, usually obtained from
\code{\link{fun.bimodal.init}}. }
\item{param1}{ Can be either \code{"rs"} or \code{"fmkl"}, depending on the
type of first distribution used. }
\item{param2}{ Can be either \code{"rs"} or \code{"fmkl"}, depending on the
type of second distribution used. }
\item{selc1}{ Selection of initial values for the first distribution, can be
either \code{"rs"}, \code{"fmkl"} or \code{"star"}. Choose initial values from
RPRS (ML), RMFMKL (ML) or STAR method. }
\item{selc2}{ Selection of initial values for the second distribution, can be
either \code{"rs"}, \code{"fmkl"} or \code{"star"}. Choose initial values from
RPRS (ML), RMFMKL (ML) or STAR method. }
}
\details{
This function should be used in tandem with \code{\link{fun.bimodal.init}}.
}
\value{
\item{par}{ The first four numbers are the parameters of the first generalised
lambda distribution, the second four numbers are the parameters of the second
generalised lambda distribution and the last value is the proportion of the
first generalised lambda distribution.}
\item{value}{ The objective value of negative likelihood obtained using the
par above. }
\item{counts}{ A two-element integer vector giving the number of calls to
functions. Gradient is not used in this case. }
\item{convergence}{ An integer code. \code{0} indicates successful convergence.
Error codes are:
\code{1} indicates that the iteration limit 'maxit' had been
reached.
\code{10} indicates degeneracy of the Nelder-Mead simplex. }
\item{message}{ A character string giving any additional information returned
by the optimizer, or \code{NULL}. }
}
\references{ Su (2007). Fitting Single and Mixture of Generalized Lambda
Distributions to Data via Discretized and Maximum Likelihood Methods: GLDEX in
R. Journal of Statistical Software: *21* 9. }
\author{ Steve Su }
\note{ There is currently no guarantee of a global convergence. }
\seealso{ \code{link{fun.bimodal.fit.pml}}, \code{\link{fun.bimodal.init}} }
\examples{
## Extract faithful[,2] into faithful2
# faithful2<-faithful[,2]
## Uses clara clustering method
# clara.faithful2<-fun.class.regime.bi(faithful2, 0.01, clara)
## Save into two different objects
# qqqq1.faithful2.cc<-clara.faithful2$data.a
# qqqq2.faithful2.cc<-clara.faithful2$data.b
## Find the initial values
# result.faithful2.init<-fun.bimodal.init(data1=qqqq1.faithful2.cc,
# data2=qqqq2.faithful2.cc, rs.leap1=3,fmkl.leap1=3,rs.init1 = c(-1.5, 1.5),
# fmkl.init1 = c(-0.25, 1.5), rs.leap2=3,fmkl.leap2=3,rs.init2 = c(-1.5, 1.5),
# fmkl.init2 = c(-0.25, 1.5))
## Find the final fits
# result.faithful2.rsrs<-fun.bimodal.fit.ml(data=faithful2,
# result.faithful2.init[[2]],result.faithful2.init[[3]],
# result.faithful2.init[[1]], param1="rs",param2="rs",selc1="rs",selc2="rs")
## Output
# result.faithful2.rsrs
}
\keyword{smooth} | /man/fun.bimodal.fit.ml.Rd | no_license | nmlemus/GLDEX | R | false | false | 3,907 | rd | \name{fun.bimodal.fit.ml}
\alias{fun.bimodal.fit.ml}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{ Finds the final fits using the maximum likelihood estimation for the
bimodal dataset. }
\description{
This is the secondary optimization procedure to evaluate the final bimodal
distribution fits using the maximum likelihood. It usually relies on initial
values found by \code{fun.bimodal.init} function.
}
\usage{
fun.bimodal.fit.ml(data, first.fit, second.fit, prop, param1, param2, selc1,
selc2)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{data}{ Dataset to be fitted.}
\item{first.fit}{ The distribution parameters or the initial values of the
first distribution fit. }
\item{second.fit}{ The distribution parameters or the initial values of the
second distribution fit. }
\item{prop}{ The proportion of the data set, usually obtained from
\code{\link{fun.bimodal.init}}. }
\item{param1}{ Can be either \code{"rs"} or \code{"fmkl"}, depending on the
type of first distribution used. }
\item{param2}{ Can be either \code{"rs"} or \code{"fmkl"}, depending on the
type of second distribution used. }
\item{selc1}{ Selection of initial values for the first distribution, can be
either \code{"rs"}, \code{"fmkl"} or \code{"star"}. Choose initial values from
RPRS (ML), RMFMKL (ML) or STAR method. }
\item{selc2}{ Selection of initial values for the second distribution, can be
either \code{"rs"}, \code{"fmkl"} or \code{"star"}. Choose initial values from
RPRS (ML), RMFMKL (ML) or STAR method. }
}
\details{
This function should be used in tandem with \code{\link{fun.bimodal.init}}.
}
\value{
\item{par}{ The first four numbers are the parameters of the first generalised
lambda distribution, the second four numbers are the parameters of the second
generalised lambda distribution and the last value is the proportion of the
first generalised lambda distribution.}
\item{value}{ The objective value of negative likelihood obtained using the
par above. }
\item{counts}{ A two-element integer vector giving the number of calls to
functions. Gradient is not used in this case. }
\item{convergence}{ An integer code. \code{0} indicates successful convergence.
Error codes are:
\code{1} indicates that the iteration limit 'maxit' had been
reached.
\code{10} indicates degeneracy of the Nelder-Mead simplex. }
\item{message}{ A character string giving any additional information returned
by the optimizer, or \code{NULL}. }
}
\references{ Su (2007). Fitting Single and Mixture of Generalized Lambda
Distributions to Data via Discretized and Maximum Likelihood Methods: GLDEX in
R. Journal of Statistical Software: *21* 9. }
\author{ Steve Su }
\note{ There is currently no guarantee of a global convergence. }
\seealso{ \code{link{fun.bimodal.fit.pml}}, \code{\link{fun.bimodal.init}} }
\examples{
## Extract faithful[,2] into faithful2
# faithful2<-faithful[,2]
## Uses clara clustering method
# clara.faithful2<-fun.class.regime.bi(faithful2, 0.01, clara)
## Save into two different objects
# qqqq1.faithful2.cc<-clara.faithful2$data.a
# qqqq2.faithful2.cc<-clara.faithful2$data.b
## Find the initial values
# result.faithful2.init<-fun.bimodal.init(data1=qqqq1.faithful2.cc,
# data2=qqqq2.faithful2.cc, rs.leap1=3,fmkl.leap1=3,rs.init1 = c(-1.5, 1.5),
# fmkl.init1 = c(-0.25, 1.5), rs.leap2=3,fmkl.leap2=3,rs.init2 = c(-1.5, 1.5),
# fmkl.init2 = c(-0.25, 1.5))
## Find the final fits
# result.faithful2.rsrs<-fun.bimodal.fit.ml(data=faithful2,
# result.faithful2.init[[2]],result.faithful2.init[[3]],
# result.faithful2.init[[1]], param1="rs",param2="rs",selc1="rs",selc2="rs")
## Output
# result.faithful2.rsrs
}
\keyword{smooth} |
## Matrix inversion is usually a costly computation and there may be some benefit
## to caching the inverse of a matrix rather than computing it repeatedly. With
## this assignment work, a pair of functions are written to cache the inverse of a
## matrix.
## The first function 'makeCacheMatrix' creates a special "matrix" object, which
## is really a list containing a function to
# 1. set the value of the matrix
# 2. get the value of the matrix
# 3. set the value of the matrix's inverse
# 4. get the value of the matrix's inverse
makeCacheMatrix <- function(x = matrix()) {
## Creates a special "matrix" object that can cache its inverse.
inv <- NULL
set <- function(y) {
x <<- y
inv <<- NULL
}
get <- function() x
setinv <- function(i) inv <<- i
getinv <- function() inv
list(set = set,
get = get,
setinv = setinv,
getinv = getinv)
}
## The second function 'cacheSolve' computes the inverse of the matrix in the
## special "object" created with the above function. However, it first checks to
## see if the inverse has already been computed. If so, it get's the inverse
## from the cache and skips the computation. Otherwise, it computes the inverse
## of the matrix and sets the value of the inverse in the cache.
## For inverse computation 'solve' function has been used. For example, if 'x'
## is a square invertible matrix, then this function will return its inverse.
## Note: Given assumption for this assignment is square invertible matrix will
## always be supplied.
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
inv <- x$getinv()
if(!is.null(inv)) {
message("getting cached data")
return(inv)
}
data <- x$get()
inv <- solve(data, ...)
x$setinv(inv)
inv
}
## -----------
## Sample run:
## -----------
## > x <- matrix(c(2,1,1,2,0,1,1,1,1), nrow=3, ncol=3)
## Inversable matrix is defined
## > m <- makeCacheMatrix(x)
## Created the special 'object'
## > m$get()
## [,1] [,2] [,3]
## [1,] 2 2 1
## [2,] 1 0 1
## [3,] 1 1 1
## Matrix is cached
## > m$getinv()
## NULL
## Inverse is not cached
## > cacheSolve(m)
## [,1] [,2] [,3]
## [1,] 1 1 -2
## [2,] 0 -1 1
## [3,] -1 0 2
## With the first call ... computed inverse is returned
## > m$getinv()
## [,1] [,2] [,3]
## [1,] 1 1 -2
## [2,] 0 -1 1
## [3,] -1 0 2
## Computed inverse is now cached
## > cacheSolve(m)
## getting cached data
## [,1] [,2] [,3]
## [1,] 1 1 -2
## [2,] 0 -1 1
## [3,] -1 0 2
## With the second call ... cached inverse is returned | /cachematrix.R | no_license | M24oEtudiant/ProgrammingAssignment2 | R | false | false | 2,804 | r | ## Matrix inversion is usually a costly computation and there may be some benefit
## to caching the inverse of a matrix rather than computing it repeatedly. With
## this assignment work, a pair of functions are written to cache the inverse of a
## matrix.
## The first function 'makeCacheMatrix' creates a special "matrix" object, which
## is really a list containing a function to
# 1. set the value of the matrix
# 2. get the value of the matrix
# 3. set the value of the matrix's inverse
# 4. get the value of the matrix's inverse
makeCacheMatrix <- function(x = matrix()) {
## Creates a special "matrix" object that can cache its inverse.
inv <- NULL
set <- function(y) {
x <<- y
inv <<- NULL
}
get <- function() x
setinv <- function(i) inv <<- i
getinv <- function() inv
list(set = set,
get = get,
setinv = setinv,
getinv = getinv)
}
## The second function 'cacheSolve' computes the inverse of the matrix in the
## special "object" created with the above function. However, it first checks to
## see if the inverse has already been computed. If so, it get's the inverse
## from the cache and skips the computation. Otherwise, it computes the inverse
## of the matrix and sets the value of the inverse in the cache.
## For inverse computation 'solve' function has been used. For example, if 'x'
## is a square invertible matrix, then this function will return its inverse.
## Note: Given assumption for this assignment is square invertible matrix will
## always be supplied.
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
inv <- x$getinv()
if(!is.null(inv)) {
message("getting cached data")
return(inv)
}
data <- x$get()
inv <- solve(data, ...)
x$setinv(inv)
inv
}
## -----------
## Sample run:
## -----------
## > x <- matrix(c(2,1,1,2,0,1,1,1,1), nrow=3, ncol=3)
## Inversable matrix is defined
## > m <- makeCacheMatrix(x)
## Created the special 'object'
## > m$get()
## [,1] [,2] [,3]
## [1,] 2 2 1
## [2,] 1 0 1
## [3,] 1 1 1
## Matrix is cached
## > m$getinv()
## NULL
## Inverse is not cached
## > cacheSolve(m)
## [,1] [,2] [,3]
## [1,] 1 1 -2
## [2,] 0 -1 1
## [3,] -1 0 2
## With the first call ... computed inverse is returned
## > m$getinv()
## [,1] [,2] [,3]
## [1,] 1 1 -2
## [2,] 0 -1 1
## [3,] -1 0 2
## Computed inverse is now cached
## > cacheSolve(m)
## getting cached data
## [,1] [,2] [,3]
## [1,] 1 1 -2
## [2,] 0 -1 1
## [3,] -1 0 2
## With the second call ... cached inverse is returned |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/arg.R
\name{arg_value}
\alias{arg_value}
\title{Get Value Argument from Replacement Call}
\usage{
arg_value(node)
}
\arguments{
\item{node}{(Replacement) A call to a replacement function.}
}
\value{
The \code{value} argument from the call.
}
\description{
This function gets the value argument from calls that replace an object.
}
| /man/arg_value.Rd | no_license | frenkiboy/rstatic | R | false | true | 409 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/arg.R
\name{arg_value}
\alias{arg_value}
\title{Get Value Argument from Replacement Call}
\usage{
arg_value(node)
}
\arguments{
\item{node}{(Replacement) A call to a replacement function.}
}
\value{
The \code{value} argument from the call.
}
\description{
This function gets the value argument from calls that replace an object.
}
|
#' Interpolate a grid using bayesian kriging (MCMC).
#'
#' Using coordinate grid and proper EpiVis dataframe to interpolate the grid with MCMC.
#'
#' @details Note that this method could be significantly slow if too many points are to be calculated,
#' as calculating Covariance matrix is O(n^2). Also, the rate is multiplied by 10000 to avoid machine precision
#' errors.is a relatively low-level function allowing users to customize their model, chains,
#' and iterations. The output model and samples are saved locally for future use.
#'
#' @importFrom rstan sampling
#' @importFrom rstan stan_model
#' @importFrom rstan extract
#'
#' @param epi a proper EpiVis data frame.
#' @param grid an `n x 2` matrix of coordinates that one wish to predict at.
#' @param mod a string denoting the name of the stan file, but with exactly four `data` entries: nobs, npred, rate_obs, coord.
#' @param ... additional parameters passed to rstan::sampling.
#'
#' @return fitted stan model.
#' @export
epi_bayesian_krig <- function(epi, grid, mod, ...){
stan_mod <- stan_model(file=paste0(mod,'.stan'))
data <- list(nobs=nrow(epi), npred=nrow(grid),
## rate is multiplied by 10000 to avoid machine precision errors.
rate_obs=epi$rate*10000, coord=rbind(cbind(epi$x, epi$y),
cbind(grid$x, grid$y)))
stan_fit <- sampling(stan_mod, data=data, ...)
save(stan_fit, stan_mod, file="spatial_stan.Rda")
stan_fit
} | /EpiVis/R/epi_bayesian_krig.R | no_license | evertrustJ/EpiVis | R | false | false | 1,534 | r | #' Interpolate a grid using bayesian kriging (MCMC).
#'
#' Using coordinate grid and proper EpiVis dataframe to interpolate the grid with MCMC.
#'
#' @details Note that this method could be significantly slow if too many points are to be calculated,
#' as calculating Covariance matrix is O(n^2). Also, the rate is multiplied by 10000 to avoid machine precision
#' errors.is a relatively low-level function allowing users to customize their model, chains,
#' and iterations. The output model and samples are saved locally for future use.
#'
#' @importFrom rstan sampling
#' @importFrom rstan stan_model
#' @importFrom rstan extract
#'
#' @param epi a proper EpiVis data frame.
#' @param grid an `n x 2` matrix of coordinates that one wish to predict at.
#' @param mod a string denoting the name of the stan file, but with exactly four `data` entries: nobs, npred, rate_obs, coord.
#' @param ... additional parameters passed to rstan::sampling.
#'
#' @return fitted stan model.
#' @export
epi_bayesian_krig <- function(epi, grid, mod, ...){
stan_mod <- stan_model(file=paste0(mod,'.stan'))
data <- list(nobs=nrow(epi), npred=nrow(grid),
## rate is multiplied by 10000 to avoid machine precision errors.
rate_obs=epi$rate*10000, coord=rbind(cbind(epi$x, epi$y),
cbind(grid$x, grid$y)))
stan_fit <- sampling(stan_mod, data=data, ...)
save(stan_fit, stan_mod, file="spatial_stan.Rda")
stan_fit
} |
#!/usr/bin/Rscript
.libPaths(new = "/hpc/local/CentOS7/dbg_mz/R_libs/3.2.2")
# load required packages
# none
# define parameters
cmd_args <- commandArgs(trailingOnly = TRUE)
for (arg in cmd_args) cat(" ", arg, "\n")
outdir <- cmd_args[1]
scanmode <- cmd_args[2]
db <- cmd_args[3]
ppm <- as.numeric(cmd_args[4])
# Cut up entire HMDB into small parts based on the new binning/breaks
load(db)
load(paste(outdir, "breaks.fwhm.RData", sep = "/"))
outdir_hmdb <- paste(outdir, "hmdb_part", sep = "/")
dir.create(outdir_hmdb, showWarnings = FALSE)
if (scanmode=="negative"){
label = "MNeg"
HMDB_add_iso=HMDB_add_iso.Neg
} else {
label = "Mpos"
HMDB_add_iso=HMDB_add_iso.Pos
}
# filter mass range meassured!!!
HMDB_add_iso = HMDB_add_iso[which(HMDB_add_iso[,label]>=breaks.fwhm[1] & HMDB_add_iso[,label]<=breaks.fwhm[length(breaks.fwhm)]),]
# sort on mass
outlist = HMDB_add_iso[order(as.numeric(HMDB_add_iso[,label])),]
n=dim(outlist)[1]
sub=5000 # max rows per file
end=0
min_1_last=sub
check=0
outlist_part=NULL
if (n < sub) {
outlist_part <- outlist
save(outlist_part, file = paste(outdir_hmdb, paste0(scanmode, "_hmdb.1.RData"), sep = "/"))
} else {
if (n >= sub & (floor(n/sub) - 1) >= 2){
for (i in 2:floor(n/sub) - 1){
start <- -(sub - 1) + i*sub
end <- i*sub
if (i > 1){
outlist_i = outlist[c(start:end),]
n_moved = 0
# Calculate 3ppm and replace border, avoid cut within peakgroup!
while ((as.numeric(outlist_i[1,label]) - as.numeric(outlist_part[min_1_last,label]))*1e+06/as.numeric(outlist_i[1,label]) < ppm) {
outlist_part <- rbind(outlist_part, outlist_i[1,])
outlist_i <- outlist_i[-1,]
n_moved <- n_moved + 1
}
# message(paste("Process", i-1,":", dim(outlist_part)[1]))
save(outlist_part, file = paste(outdir_hmdb, paste(scanmode, paste("hmdb",i-1,"RData", sep="."), sep="_"), sep = "/"))
check <- check + dim(outlist_part)[1]
outlist_part <- outlist_i
min_1_last <- dim(outlist_part)[1]
} else {
outlist_part <- outlist[c(start:end),]
}
}
}
start <- end + 1
end <- n
outlist_i <- outlist[c(start:end),]
n_moved <- 0
if (!is.null(outlist_part)) {
# Calculate 3ppm and replace border, avoid cut within peakgroup!
while ((as.numeric(outlist_i[1,label]) - as.numeric(outlist_part[min_1_last,label]))*1e+06/as.numeric(outlist_i[1,label]) < ppm) {
outlist_part = rbind(outlist_part, outlist_i[1,])
outlist_i = outlist_i[-1,]
n_moved = n_moved + 1
}
# message(paste("Process", i+1-1,":", dim(outlist_part)[1]))
save(outlist_part, file = paste(outdir_hmdb, paste(scanmode, paste("hmdb",i,"RData", sep = "."), sep = "_"), sep = "/"))
check <- check + dim(outlist_part)[1]
}
outlist_part <- outlist_i
# message(paste("Process", i+2-1,":", dim(outlist_part)[1]))
save(outlist_part, file = paste(outdir_hmdb, paste(scanmode, paste("hmdb", i + 1, "RData", sep="."), sep="_"), sep = "/"))
check <- check + dim(outlist_part)[1]
cat("\n", "Check", check == dim(outlist)[1])
} | /pipeline/scripts/hmdb_part.R | permissive | metabdel/DIMS | R | false | false | 3,193 | r | #!/usr/bin/Rscript
.libPaths(new = "/hpc/local/CentOS7/dbg_mz/R_libs/3.2.2")
# load required packages
# none
# define parameters
cmd_args <- commandArgs(trailingOnly = TRUE)
for (arg in cmd_args) cat(" ", arg, "\n")
outdir <- cmd_args[1]
scanmode <- cmd_args[2]
db <- cmd_args[3]
ppm <- as.numeric(cmd_args[4])
# Cut up entire HMDB into small parts based on the new binning/breaks
load(db)
load(paste(outdir, "breaks.fwhm.RData", sep = "/"))
outdir_hmdb <- paste(outdir, "hmdb_part", sep = "/")
dir.create(outdir_hmdb, showWarnings = FALSE)
if (scanmode=="negative"){
label = "MNeg"
HMDB_add_iso=HMDB_add_iso.Neg
} else {
label = "Mpos"
HMDB_add_iso=HMDB_add_iso.Pos
}
# filter mass range meassured!!!
HMDB_add_iso = HMDB_add_iso[which(HMDB_add_iso[,label]>=breaks.fwhm[1] & HMDB_add_iso[,label]<=breaks.fwhm[length(breaks.fwhm)]),]
# sort on mass
outlist = HMDB_add_iso[order(as.numeric(HMDB_add_iso[,label])),]
n=dim(outlist)[1]
sub=5000 # max rows per file
end=0
min_1_last=sub
check=0
outlist_part=NULL
if (n < sub) {
outlist_part <- outlist
save(outlist_part, file = paste(outdir_hmdb, paste0(scanmode, "_hmdb.1.RData"), sep = "/"))
} else {
if (n >= sub & (floor(n/sub) - 1) >= 2){
for (i in 2:floor(n/sub) - 1){
start <- -(sub - 1) + i*sub
end <- i*sub
if (i > 1){
outlist_i = outlist[c(start:end),]
n_moved = 0
# Calculate 3ppm and replace border, avoid cut within peakgroup!
while ((as.numeric(outlist_i[1,label]) - as.numeric(outlist_part[min_1_last,label]))*1e+06/as.numeric(outlist_i[1,label]) < ppm) {
outlist_part <- rbind(outlist_part, outlist_i[1,])
outlist_i <- outlist_i[-1,]
n_moved <- n_moved + 1
}
# message(paste("Process", i-1,":", dim(outlist_part)[1]))
save(outlist_part, file = paste(outdir_hmdb, paste(scanmode, paste("hmdb",i-1,"RData", sep="."), sep="_"), sep = "/"))
check <- check + dim(outlist_part)[1]
outlist_part <- outlist_i
min_1_last <- dim(outlist_part)[1]
} else {
outlist_part <- outlist[c(start:end),]
}
}
}
start <- end + 1
end <- n
outlist_i <- outlist[c(start:end),]
n_moved <- 0
if (!is.null(outlist_part)) {
# Calculate 3ppm and replace border, avoid cut within peakgroup!
while ((as.numeric(outlist_i[1,label]) - as.numeric(outlist_part[min_1_last,label]))*1e+06/as.numeric(outlist_i[1,label]) < ppm) {
outlist_part = rbind(outlist_part, outlist_i[1,])
outlist_i = outlist_i[-1,]
n_moved = n_moved + 1
}
# message(paste("Process", i+1-1,":", dim(outlist_part)[1]))
save(outlist_part, file = paste(outdir_hmdb, paste(scanmode, paste("hmdb",i,"RData", sep = "."), sep = "_"), sep = "/"))
check <- check + dim(outlist_part)[1]
}
outlist_part <- outlist_i
# message(paste("Process", i+2-1,":", dim(outlist_part)[1]))
save(outlist_part, file = paste(outdir_hmdb, paste(scanmode, paste("hmdb", i + 1, "RData", sep="."), sep="_"), sep = "/"))
check <- check + dim(outlist_part)[1]
cat("\n", "Check", check == dim(outlist)[1])
} |
## Title ----
##
## Visualisation of factors on reduced dimension (2D) plots
##
## Description ----
##
## Visualise the single-cells by their coordinates on reduced dimensions
## Cells are colored by cluster, and by the barcode fields
## which typically represent the sequence batch, sample
## and aggregation id.
##
## Details ----
##
##
## Usage ----
##
## See options.
# Libraries ----
stopifnot(
require(ggplot2),
require(reshape2),
require(optparse),
require(tenxutils)
)
# Options ----
option_list <- list(
make_option(
c("--table"),
default="none",
help="A table containing the reduced coordinates and phenotype information"
),
make_option(
c("--method"),
default="tSNE",
help="Normally the type of dimension reduction"
),
make_option(
c("--rdim1"),
default="tSNE_1",
help="The name of the column for reduced dimension one"
),
make_option(
c("--rdim2"),
default="tSNE_2",
help="The name of the column for reduced dimension two"
),
make_option(
c("--shapefactor"),
default="none",
help="A column in the cell metadata to use for deterimining the shape of points on the tSNE"
),
make_option(
c("--colorfactors"),
default="none",
help="Column(s) in the cell metadata to use for deterimining the color of points on the tSNE. One plot will be made per color factor."
),
make_option(
c("--plotdirvar"),
default="tsneDir",
help="latex var holding location of plots"
),
make_option(
c("--pointsize"),
default=0.5,
help="The point size for the tSNE plots"
),
make_option(
c("--pointalpha"),
default=0.8,
help="The alpha setting for the points on the tSNE plots"
),
make_option(
c("--outdir"),
default="seurat.out.dir",
help="outdir"
)
)
opt <- parse_args(OptionParser(option_list=option_list))
cat("Running with options:\n")
print(opt)
##
message("Reading in rdims table")
plot_data <- read.table(opt$table, sep="\t", header=TRUE)
rownames(plot_data) <- plot_data$barcode
color_vars <- strsplit(opt$colorfactors,",")[[1]]
tex = ""
print("Making tSNE plots colored by each of the factor variables")
## Make one whole-page tSNE plot per color variable
for(color_var in color_vars)
{
print(paste("Making",color_var,"tSNE plot"))
## If a variable comprises only integers, coerce it to a character vector
numeric = FALSE
if(is.numeric(plot_data[[color_var]]))
{
if(all(plot_data[[color_var]] == round(plot_data[[color_var]]))
& length(unique(plot_data[[color_var]])) < 50)
{
plot_data[[color_var]] <- as.character(plot_data[[color_var]])
}
else
{
numeric = TRUE
}
}
if(opt$shapefactor=="none" | !(opt$shapefactor %in% colnames(plot_data)))
{
gp <- ggplot(plot_data, aes_string(opt$rdim1, opt$rdim2, color=color_var))
} else {
gp <- ggplot(plot_data, aes_string(opt$rdim1, opt$rdim2,
color=color_var, shape=opt$shapefactor))
}
if(numeric)
{
midpoint <- (max(plot_data[[color_var]]) + min(plot_data[[color_var]]))/2
gp <- gp + scale_color_gradient2(low="black",mid="yellow",high="red",midpoint=midpoint)
}
gp <- gp + geom_point(size=opt$pointsize)
plotfilename = paste(opt$method, color_var, sep=".")
save_ggplots(file.path(opt$outdir, plotfilename),
gp,
width=6,
height=4)
texCaption <- paste(opt$method,"plot colored by",color_var)
tex <- paste(tex,
getSubsectionTex(texCaption),
getFigureTex(plotfilename,texCaption,
plot_dir_var=opt$plotdirvar),
sep="\n")
}
print("Writing out latex snippet")
## write out latex snippet
tex_file <- file.path(opt$outdir,
paste("plot.rdims",
"factor.tex",
sep="."))
writeTex(tex_file, tex)
| /R/plot_rdims_factor.R | permissive | MatthieuRouland/tenx | R | false | false | 4,170 | r | ## Title ----
##
## Visualisation of factors on reduced dimension (2D) plots
##
## Description ----
##
## Visualise the single-cells by their coordinates on reduced dimensions
## Cells are colored by cluster, and by the barcode fields
## which typically represent the sequence batch, sample
## and aggregation id.
##
## Details ----
##
##
## Usage ----
##
## See options.
# Libraries ----
stopifnot(
require(ggplot2),
require(reshape2),
require(optparse),
require(tenxutils)
)
# Options ----
option_list <- list(
make_option(
c("--table"),
default="none",
help="A table containing the reduced coordinates and phenotype information"
),
make_option(
c("--method"),
default="tSNE",
help="Normally the type of dimension reduction"
),
make_option(
c("--rdim1"),
default="tSNE_1",
help="The name of the column for reduced dimension one"
),
make_option(
c("--rdim2"),
default="tSNE_2",
help="The name of the column for reduced dimension two"
),
make_option(
c("--shapefactor"),
default="none",
help="A column in the cell metadata to use for deterimining the shape of points on the tSNE"
),
make_option(
c("--colorfactors"),
default="none",
help="Column(s) in the cell metadata to use for deterimining the color of points on the tSNE. One plot will be made per color factor."
),
make_option(
c("--plotdirvar"),
default="tsneDir",
help="latex var holding location of plots"
),
make_option(
c("--pointsize"),
default=0.5,
help="The point size for the tSNE plots"
),
make_option(
c("--pointalpha"),
default=0.8,
help="The alpha setting for the points on the tSNE plots"
),
make_option(
c("--outdir"),
default="seurat.out.dir",
help="outdir"
)
)
opt <- parse_args(OptionParser(option_list=option_list))
cat("Running with options:\n")
print(opt)
##
message("Reading in rdims table")
plot_data <- read.table(opt$table, sep="\t", header=TRUE)
rownames(plot_data) <- plot_data$barcode
color_vars <- strsplit(opt$colorfactors,",")[[1]]
tex = ""
print("Making tSNE plots colored by each of the factor variables")
## Make one whole-page tSNE plot per color variable
for(color_var in color_vars)
{
print(paste("Making",color_var,"tSNE plot"))
## If a variable comprises only integers, coerce it to a character vector
numeric = FALSE
if(is.numeric(plot_data[[color_var]]))
{
if(all(plot_data[[color_var]] == round(plot_data[[color_var]]))
& length(unique(plot_data[[color_var]])) < 50)
{
plot_data[[color_var]] <- as.character(plot_data[[color_var]])
}
else
{
numeric = TRUE
}
}
if(opt$shapefactor=="none" | !(opt$shapefactor %in% colnames(plot_data)))
{
gp <- ggplot(plot_data, aes_string(opt$rdim1, opt$rdim2, color=color_var))
} else {
gp <- ggplot(plot_data, aes_string(opt$rdim1, opt$rdim2,
color=color_var, shape=opt$shapefactor))
}
if(numeric)
{
midpoint <- (max(plot_data[[color_var]]) + min(plot_data[[color_var]]))/2
gp <- gp + scale_color_gradient2(low="black",mid="yellow",high="red",midpoint=midpoint)
}
gp <- gp + geom_point(size=opt$pointsize)
plotfilename = paste(opt$method, color_var, sep=".")
save_ggplots(file.path(opt$outdir, plotfilename),
gp,
width=6,
height=4)
texCaption <- paste(opt$method,"plot colored by",color_var)
tex <- paste(tex,
getSubsectionTex(texCaption),
getFigureTex(plotfilename,texCaption,
plot_dir_var=opt$plotdirvar),
sep="\n")
}
print("Writing out latex snippet")
## write out latex snippet
tex_file <- file.path(opt$outdir,
paste("plot.rdims",
"factor.tex",
sep="."))
writeTex(tex_file, tex)
|
library(broom)
options(scipen = 999)
source("scripts/month_clustering.R")
#sources
#https://rdrr.io/cran/broom/man/prcomp_tidiers.html
#https://poissonisfish.wordpress.com/2017/01/23/principal-component-analysis-in-r/
#http://rstatistics.net/principal-component-analysis/
set.seed(1234)
df_months %>%
ungroup() %>%
remove_rownames() %>%
column_to_rownames(var = "request_type") -> df_months
df_months %>%
prcomp(scale = TRUE) -> pc
#pc <- prcomp(df_months, scale = TRUE)
# information about rotation
pc %>%
tidy() %>%
head()
#head(tidy(pc))
# information about samples (request types)
pc %>%
tidy("samples") %>%
head()
#head(tidy(pc, "samples"))
# information about PCs
pc %>%
tidy("pcs")
#tidy(pc, "pcs")
pc %>%
augment(data = df_months) -> au
#au <- augment(pc, data = df_months)
head(au)
ggplot(au, aes(.fittedPC1, .fittedPC2)) +
geom_point() +
geom_label(aes(label = .rownames)) +
theme_bw()
ggsave("images/311_request_type_month_proportion_PCA.png", height = 12, width = 12)
| /scripts/months_pca.R | no_license | conorotompkins/pittsburgh_311 | R | false | false | 1,031 | r | library(broom)
options(scipen = 999)
source("scripts/month_clustering.R")
#sources
#https://rdrr.io/cran/broom/man/prcomp_tidiers.html
#https://poissonisfish.wordpress.com/2017/01/23/principal-component-analysis-in-r/
#http://rstatistics.net/principal-component-analysis/
set.seed(1234)
df_months %>%
ungroup() %>%
remove_rownames() %>%
column_to_rownames(var = "request_type") -> df_months
df_months %>%
prcomp(scale = TRUE) -> pc
#pc <- prcomp(df_months, scale = TRUE)
# information about rotation
pc %>%
tidy() %>%
head()
#head(tidy(pc))
# information about samples (request types)
pc %>%
tidy("samples") %>%
head()
#head(tidy(pc, "samples"))
# information about PCs
pc %>%
tidy("pcs")
#tidy(pc, "pcs")
pc %>%
augment(data = df_months) -> au
#au <- augment(pc, data = df_months)
head(au)
ggplot(au, aes(.fittedPC1, .fittedPC2)) +
geom_point() +
geom_label(aes(label = .rownames)) +
theme_bw()
ggsave("images/311_request_type_month_proportion_PCA.png", height = 12, width = 12)
|
print(head(Wholesale.customers.data))
dim(Wholesale.customers.data)
wholesale<-data.frame(Wholesale.customers.data)
table(wholesale$Region)
table(wholesale$Channel)
Cluster_wholesales<-data.frame(wholesale[3:8])
kmeans.result<-kmeans(Cluster_wholesales,3)
print(kmeans.result)
table(wholesale$Channel,kmeans.result$cluster)
table(wholesale$Region,kmeans.result$cluster)
kmeans.result$totss
kmeans.result$tot.withinss
plot(Cluster_wholesales,col=kmeans.result$cluster)
| /kmean_clustering.R | no_license | Rajatverma8960/Data_mining_with_R | R | false | false | 494 | r | print(head(Wholesale.customers.data))
dim(Wholesale.customers.data)
wholesale<-data.frame(Wholesale.customers.data)
table(wholesale$Region)
table(wholesale$Channel)
Cluster_wholesales<-data.frame(wholesale[3:8])
kmeans.result<-kmeans(Cluster_wholesales,3)
print(kmeans.result)
table(wholesale$Channel,kmeans.result$cluster)
table(wholesale$Region,kmeans.result$cluster)
kmeans.result$totss
kmeans.result$tot.withinss
plot(Cluster_wholesales,col=kmeans.result$cluster)
|
#This code summarizes the results
# #Read in the reference file.
# library(googlesheets)
library(dplyr)
# greg <- gs_ls()
# bovids <- gs_url("https://docs.google.com/spreadsheets/d/1KGkTVz5IVuBdtQie0QBdeHwyHVH41MjFdbpluFsDX6k/edit#gid=963640939")
# bovids.df <- bovids %>% gs_read(ws = 1)
# subset(bovids.df, `Tooth Type` == "LM1")
#
########################################################
#For a combination of M, k and scaling, this summarizes the results of the simulation for classifying tribe for all 6 tooth types and both sides. The summary files created here are then used to create the plots and figures in the manuscript.
########################################################
M <- 20
k <- 20
scaled <- TRUE
for (tooth in c("LM1","LM2","LM3","UM1","UM2","UM3")){
for (side in 1:2){print(c(tooth,side))
if (!scaled){load(paste0("/Users/gregorymatthews/Dropbox/shapeanalysisgit/results/results20190610_side=",side,"_k=",k,"_M=",M,"_tooth=",tooth,".RData"))}
if (scaled){load(paste0("/Users/gregorymatthews/Dropbox/shapeanalysisgit/results/results20190610_side=",side,"_k=",k,"_M=",M,"_tooth=",tooth,"scaled.RData"))}
logloss_imputed <- c()
logloss_part <- c()
acc_imputed <- acc_part <- acc_strong_imputed <- acc_strong_part <- c()
for (knn in c(1:4,6:20,30,40,50,60,5)){print(knn)
ids <- names(results_list)
knn_partial_matching <- function(DSCN){
temp <- results_list[[DSCN]]$dist_partial
temp$inv_dist <- 1/temp$dist
temp$Tribe <- factor(temp$Tribe, levels = unique(sort(temp$Tribe)))
dat <- data.frame(t(data.frame(c(table(temp$Tribe[order(temp$dist)][1:knn])/knn))))
row.names(dat) <- NULL
dat$true <- results_list[[DSCN]]$truth$Tribe[1]
dat$DSCN <- DSCN
return(dat)
}
part_match <- lapply(ids, knn_partial_matching)
part_match_df <- do.call(rbind,part_match)
part_match_df$true_pred_prob <- NA
for (i in 1:nrow(part_match_df)){
part_match_df$true_pred_prob[i] <- part_match_df[i,as.character(part_match_df$true[i])]
}
#Now for the imputed teeth
knn_imputed <- function(DSCN){
temp <- results_list[[DSCN]]$dist
temp$Tribe <- factor(temp$Tribe, levels = unique(sort(temp$Tribe)))
dat_list <- list()
for (i in 1:M){
pro <- data.frame(t(data.frame(c(table(temp$Tribe[order(temp[[paste0("V",i)]])][1:knn])/knn))))
row.names(pro) <- NULL
dat_list[[i]] <- pro
}
df <- do.call(rbind,dat_list)
dat <- data.frame(t(data.frame(unlist(apply(df,2,mean)))))
row.names(dat) <- NULL
dat$true <- results_list[[DSCN]]$truth$Tribe[1]
dat$DSCN <- DSCN
return(dat)
}
imputed_match <- lapply(ids, knn_imputed)
imputed_match_df <- do.call(rbind,imputed_match)
imputed_match_df$true_pred_prob <- NA
for (i in 1:nrow(imputed_match_df)){
imputed_match_df$true_pred_prob[i] <- imputed_match_df[i,as.character(imputed_match_df$true[i])]
}
#Note: In order to prevent infinite loss a small positive number was added
logloss_imputed[knn] <- mean(-log(imputed_match_df$true_pred_prob+(10^-12)))
logloss_part[knn] <- mean(-log(part_match_df$true_pred_prob+(10^-12)))
#Predict the class for imputed
imputed_match_df$pred <- names(imputed_match_df[,1:7])[apply(imputed_match_df[,1:7],1,which.max)]
reference <- factor(imputed_match_df$true,levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- factor(imputed_match_df$pred, levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
library(caret)
acc_imputed[knn] <- confusionMatrix(pred,reference)$overall["Accuracy"]
#For partial matching
part_match_df$pred <- names(part_match_df[,1:7])[apply(part_match_df[,1:7],1,which.max)]
reference <- factor(part_match_df$true,levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- factor(part_match_df$pred, levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
library(caret)
acc_part[knn] <- confusionMatrix(pred,reference)$overall["Accuracy"]
#Accuracy of Only "strong" predictions
#Predict the class for imputed
ids <- which(apply(imputed_match_df[,1:7],1,max)>.4)
imputed_match_df$pred <- names(imputed_match_df[,1:7])[apply(imputed_match_df[,1:7],1,which.max)]
reference <- factor(imputed_match_df$true,levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- factor(imputed_match_df$pred, levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- pred[ids]
reference <- reference[ids]
library(caret)
acc_strong_imputed[knn] <- confusionMatrix(pred,reference)$overall["Accuracy"]
#For partial matching
ids <- which(apply(part_match_df[,1:7],1,max)>.4)
part_match_df$pred <- names(part_match_df[,1:7])[apply(part_match_df[,1:7],1,which.max)]
reference <- factor(part_match_df$true,levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- factor(part_match_df$pred, levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- pred[ids]
reference <- reference[ids]
library(caret)
acc_strong_part[knn] <- confusionMatrix(pred,reference)$overall["Accuracy"]
}
if (!scaled){
save(list = c("logloss_imputed","logloss_part","imputed_match_df","part_match_df","acc_imputed","acc_part"),
file = paste0("/Users/gregorymatthews/Dropbox/shapeanalysisgit/results/summaries/results20190610_side=",side,"_k=",k,"_M=",M,"_tooth=",tooth,"_summaries.RData"))
}
if (scaled){
save(list = c("logloss_imputed","logloss_part","imputed_match_df","part_match_df","acc_imputed","acc_part"),
file = paste0("/Users/gregorymatthews/Dropbox/shapeanalysisgit/results/summaries/results20190610_side=",side,"_k=",k,"_M=",M,"_tooth=",tooth,"scaled_summaries.RData"))
}
}}
| /R/1a-summary_code_tribe.R | no_license | gjm112/shape_completion_Matthews_et_al | R | false | false | 6,725 | r | #This code summarizes the results
# #Read in the reference file.
# library(googlesheets)
library(dplyr)
# greg <- gs_ls()
# bovids <- gs_url("https://docs.google.com/spreadsheets/d/1KGkTVz5IVuBdtQie0QBdeHwyHVH41MjFdbpluFsDX6k/edit#gid=963640939")
# bovids.df <- bovids %>% gs_read(ws = 1)
# subset(bovids.df, `Tooth Type` == "LM1")
#
########################################################
#For a combination of M, k and scaling, this summarizes the results of the simulation for classifying tribe for all 6 tooth types and both sides. The summary files created here are then used to create the plots and figures in the manuscript.
########################################################
M <- 20
k <- 20
scaled <- TRUE
for (tooth in c("LM1","LM2","LM3","UM1","UM2","UM3")){
for (side in 1:2){print(c(tooth,side))
if (!scaled){load(paste0("/Users/gregorymatthews/Dropbox/shapeanalysisgit/results/results20190610_side=",side,"_k=",k,"_M=",M,"_tooth=",tooth,".RData"))}
if (scaled){load(paste0("/Users/gregorymatthews/Dropbox/shapeanalysisgit/results/results20190610_side=",side,"_k=",k,"_M=",M,"_tooth=",tooth,"scaled.RData"))}
logloss_imputed <- c()
logloss_part <- c()
acc_imputed <- acc_part <- acc_strong_imputed <- acc_strong_part <- c()
for (knn in c(1:4,6:20,30,40,50,60,5)){print(knn)
ids <- names(results_list)
knn_partial_matching <- function(DSCN){
temp <- results_list[[DSCN]]$dist_partial
temp$inv_dist <- 1/temp$dist
temp$Tribe <- factor(temp$Tribe, levels = unique(sort(temp$Tribe)))
dat <- data.frame(t(data.frame(c(table(temp$Tribe[order(temp$dist)][1:knn])/knn))))
row.names(dat) <- NULL
dat$true <- results_list[[DSCN]]$truth$Tribe[1]
dat$DSCN <- DSCN
return(dat)
}
part_match <- lapply(ids, knn_partial_matching)
part_match_df <- do.call(rbind,part_match)
part_match_df$true_pred_prob <- NA
for (i in 1:nrow(part_match_df)){
part_match_df$true_pred_prob[i] <- part_match_df[i,as.character(part_match_df$true[i])]
}
#Now for the imputed teeth
knn_imputed <- function(DSCN){
temp <- results_list[[DSCN]]$dist
temp$Tribe <- factor(temp$Tribe, levels = unique(sort(temp$Tribe)))
dat_list <- list()
for (i in 1:M){
pro <- data.frame(t(data.frame(c(table(temp$Tribe[order(temp[[paste0("V",i)]])][1:knn])/knn))))
row.names(pro) <- NULL
dat_list[[i]] <- pro
}
df <- do.call(rbind,dat_list)
dat <- data.frame(t(data.frame(unlist(apply(df,2,mean)))))
row.names(dat) <- NULL
dat$true <- results_list[[DSCN]]$truth$Tribe[1]
dat$DSCN <- DSCN
return(dat)
}
imputed_match <- lapply(ids, knn_imputed)
imputed_match_df <- do.call(rbind,imputed_match)
imputed_match_df$true_pred_prob <- NA
for (i in 1:nrow(imputed_match_df)){
imputed_match_df$true_pred_prob[i] <- imputed_match_df[i,as.character(imputed_match_df$true[i])]
}
#Note: In order to prevent infinite loss a small positive number was added
logloss_imputed[knn] <- mean(-log(imputed_match_df$true_pred_prob+(10^-12)))
logloss_part[knn] <- mean(-log(part_match_df$true_pred_prob+(10^-12)))
#Predict the class for imputed
imputed_match_df$pred <- names(imputed_match_df[,1:7])[apply(imputed_match_df[,1:7],1,which.max)]
reference <- factor(imputed_match_df$true,levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- factor(imputed_match_df$pred, levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
library(caret)
acc_imputed[knn] <- confusionMatrix(pred,reference)$overall["Accuracy"]
#For partial matching
part_match_df$pred <- names(part_match_df[,1:7])[apply(part_match_df[,1:7],1,which.max)]
reference <- factor(part_match_df$true,levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- factor(part_match_df$pred, levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
library(caret)
acc_part[knn] <- confusionMatrix(pred,reference)$overall["Accuracy"]
#Accuracy of Only "strong" predictions
#Predict the class for imputed
ids <- which(apply(imputed_match_df[,1:7],1,max)>.4)
imputed_match_df$pred <- names(imputed_match_df[,1:7])[apply(imputed_match_df[,1:7],1,which.max)]
reference <- factor(imputed_match_df$true,levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- factor(imputed_match_df$pred, levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- pred[ids]
reference <- reference[ids]
library(caret)
acc_strong_imputed[knn] <- confusionMatrix(pred,reference)$overall["Accuracy"]
#For partial matching
ids <- which(apply(part_match_df[,1:7],1,max)>.4)
part_match_df$pred <- names(part_match_df[,1:7])[apply(part_match_df[,1:7],1,which.max)]
reference <- factor(part_match_df$true,levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- factor(part_match_df$pred, levels = c("Alcelaphini", "Antilopini", "Tragelaphini", "Neotragini","Bovini", "Reduncini", "Hippotragini" ))
pred <- pred[ids]
reference <- reference[ids]
library(caret)
acc_strong_part[knn] <- confusionMatrix(pred,reference)$overall["Accuracy"]
}
if (!scaled){
save(list = c("logloss_imputed","logloss_part","imputed_match_df","part_match_df","acc_imputed","acc_part"),
file = paste0("/Users/gregorymatthews/Dropbox/shapeanalysisgit/results/summaries/results20190610_side=",side,"_k=",k,"_M=",M,"_tooth=",tooth,"_summaries.RData"))
}
if (scaled){
save(list = c("logloss_imputed","logloss_part","imputed_match_df","part_match_df","acc_imputed","acc_part"),
file = paste0("/Users/gregorymatthews/Dropbox/shapeanalysisgit/results/summaries/results20190610_side=",side,"_k=",k,"_M=",M,"_tooth=",tooth,"scaled_summaries.RData"))
}
}}
|
library(caret)
source("./cmd/fit_svm.R")
source("./cmd/fit_gbm.R")
source("./cmd/evaluate_classification.R")
source("./cmd/viz_confusion_matrix.R")
# metrics used for regression
METRICS_REG <- c()
# metrics used for classification
METRICS_CLS <- c("Precision", "Recall", "F1", "Accuracy", "Kappa")
model_facade <- function(name, type, df, tst_ratio, label) {
#
# Train and evaluate a model on a data set given.
#
# Args:
# name: name of algorithm
# type: regression or classification
# df: data.frame
# tst_ratio: ratio of data.frame used for test
# label: label feature name(s)
#
# Returns:
# Result of evaluation.
#
# split data set
train_idx <- round(nrow(df) * (1 - tst_ratio), 0)
train <- df[1:train_idx, ]
test <- df[(train_idx + 1):nrow(df), ]
# model and predict
if (name == "gbm") {
model <- fit_gbm(train, label, c("datetime", "machineID"))
pred <- as.data.frame(predict(model, test, n.trees = 50, type = "response"))
names(pred) <- gsub(".50", "", names(pred))
pred <- as.factor(colnames(pred)[max.col(pred)])
} else if (name == "svm") {
model <- fit_svm(train, label, c("datetime", "machineID"))
pred <- predict(model, test, type = "response")
} else {
# [TODO] other modeling methods
return(NULL)
}
# evaluation
if (type == "regression") {
stop("Regression is under construction.")
} else {
# get model performance
performance <- evaluate_classification(actual = test$failure, predicted = pred)
# visualize confusion matrix
viz <- viz_confusion_matrix(performance$confusion_matrix)
# return model and evaluation result
return(list(confusion_matrix = performance$confusion_matrix,
metrics = performance$metrics[, METRICS_CLS],
plot = viz))
}
} | /model_facade.R | no_license | watanabe8760/predicto | R | false | false | 1,873 | r | library(caret)
source("./cmd/fit_svm.R")
source("./cmd/fit_gbm.R")
source("./cmd/evaluate_classification.R")
source("./cmd/viz_confusion_matrix.R")
# metrics used for regression
METRICS_REG <- c()
# metrics used for classification
METRICS_CLS <- c("Precision", "Recall", "F1", "Accuracy", "Kappa")
model_facade <- function(name, type, df, tst_ratio, label) {
#
# Train and evaluate a model on a data set given.
#
# Args:
# name: name of algorithm
# type: regression or classification
# df: data.frame
# tst_ratio: ratio of data.frame used for test
# label: label feature name(s)
#
# Returns:
# Result of evaluation.
#
# split data set
train_idx <- round(nrow(df) * (1 - tst_ratio), 0)
train <- df[1:train_idx, ]
test <- df[(train_idx + 1):nrow(df), ]
# model and predict
if (name == "gbm") {
model <- fit_gbm(train, label, c("datetime", "machineID"))
pred <- as.data.frame(predict(model, test, n.trees = 50, type = "response"))
names(pred) <- gsub(".50", "", names(pred))
pred <- as.factor(colnames(pred)[max.col(pred)])
} else if (name == "svm") {
model <- fit_svm(train, label, c("datetime", "machineID"))
pred <- predict(model, test, type = "response")
} else {
# [TODO] other modeling methods
return(NULL)
}
# evaluation
if (type == "regression") {
stop("Regression is under construction.")
} else {
# get model performance
performance <- evaluate_classification(actual = test$failure, predicted = pred)
# visualize confusion matrix
viz <- viz_confusion_matrix(performance$confusion_matrix)
# return model and evaluation result
return(list(confusion_matrix = performance$confusion_matrix,
metrics = performance$metrics[, METRICS_CLS],
plot = viz))
}
} |
##
# Author: Autogenerated on 2013-11-27 18:13:59
# gitHash: c4ad841105ba82f4a3979e4cf1ae7e20a5905e59
# SEED: 4663640625336856642
##
source('./findNSourceUtils.R')
Log.info("======================== Begin Test ===========================")
complexFilterTest_iris_wheader_147 <- function(conn) {
Log.info("A munge-task R unit test on data <iris_wheader> testing the functional unit <['', '==']> ")
Log.info("Uploading iris_wheader")
hex <- h2o.uploadFile(conn, locate("../../smalldata/iris/iris_wheader.csv.gz"), "riris_wheader.hex")
Log.info("Performing compound task ( ( hex[,c(\"petal_len\")] == 2.4241506089 )) on dataset <iris_wheader>")
filterHex <- hex[( ( hex[,c("petal_len")] == 2.4241506089 )) ,]
Log.info("Performing compound task ( ( hex[,c(\"sepal_len\")] == 6.97302946541 )) on dataset iris_wheader, and also subsetting columns.")
filterHex <- hex[( ( hex[,c("sepal_len")] == 6.97302946541 )) , c("petal_wid","sepal_wid","petal_len","sepal_len")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[( ( hex[,c("sepal_len")] == 6.97302946541 )) , c("class")]
}
conn = new("H2OClient", ip=myIP, port=myPort)
tryCatch(test_that("compoundFilterTest_ on data iris_wheader", complexFilterTest_iris_wheader_147(conn)), warning = function(w) WARN(w), error = function(e) FAIL(e))
PASS()
| /R/tests/testdir_autoGen/runit_complexFilterTest_iris_wheader_147.R | permissive | hardikk/h2o | R | false | false | 1,633 | r | ##
# Author: Autogenerated on 2013-11-27 18:13:59
# gitHash: c4ad841105ba82f4a3979e4cf1ae7e20a5905e59
# SEED: 4663640625336856642
##
source('./findNSourceUtils.R')
Log.info("======================== Begin Test ===========================")
complexFilterTest_iris_wheader_147 <- function(conn) {
Log.info("A munge-task R unit test on data <iris_wheader> testing the functional unit <['', '==']> ")
Log.info("Uploading iris_wheader")
hex <- h2o.uploadFile(conn, locate("../../smalldata/iris/iris_wheader.csv.gz"), "riris_wheader.hex")
Log.info("Performing compound task ( ( hex[,c(\"petal_len\")] == 2.4241506089 )) on dataset <iris_wheader>")
filterHex <- hex[( ( hex[,c("petal_len")] == 2.4241506089 )) ,]
Log.info("Performing compound task ( ( hex[,c(\"sepal_len\")] == 6.97302946541 )) on dataset iris_wheader, and also subsetting columns.")
filterHex <- hex[( ( hex[,c("sepal_len")] == 6.97302946541 )) , c("petal_wid","sepal_wid","petal_len","sepal_len")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[( ( hex[,c("sepal_len")] == 6.97302946541 )) , c("class")]
}
conn = new("H2OClient", ip=myIP, port=myPort)
tryCatch(test_that("compoundFilterTest_ on data iris_wheader", complexFilterTest_iris_wheader_147(conn)), warning = function(w) WARN(w), error = function(e) FAIL(e))
PASS()
|
###########################################################################################################
############################ Exploring data for community bid offers ####################################
###########################################################################################################
# This code specifies the descriptive statistics used for the community data for AMO Common. Note the sample size for this
# data is much smaller and so statistical analysis offers less power.
# 1.0 Set working directory and load the data ----
setwd ("C:/Users/wwainwright/Documents/R/Zambia_Analysis")
Zambia <- read.csv("C:/Users/wwainwright/Documents/R/Zambia_Analysis/Data/Community_Final.csv")
View(Zambia)
# Makes all the 0 values in data sheet be N/A
Zambia[Zambia==0]<-NA
# 2.0 Load the packages ----
# Install additional packages
install.packages("psych")
install.packages("corrplot")
# Load packages
library(tidyr)
library(dplyr)
library(ggplot2)
library(readr)
library(gridExtra)
library(scales)
library(psych)
library(corrplot)
# 3.0 Explore the data ----
names(Zambia)
show(Zambia)
# 4.0 summary of all data ----
# Summaries of data
summary(Zambia$ECOREGION)
summary(Zambia$GMA)
summary(Zambia$HA)
summary(Zambia$USD)
summary(Zambia$USDHA)
summary(Zambia$USDPLOT)
# Simple plots of the data
barplot(Zambia$HA)
barplot(Zambia$USD)
# 5.0 Aggregate the data for summary stats----
# GMA / non-GMA sites
aggregate(Zambia[, 3:34], list(Zambia$GMA), mean)
# Ecoregion 1 / Ecoregion 2
aggregate(Zambia[, 3:34], list(Zambia$ECOREGION1), mean)
# 6.0 Subset data into GMA and non-GMA / Ecoregion 1 and Ecoregion 2 ----
GMA <- Zambia[Zambia$GMA == "1" ,]
nonGMA <- Zambia[Zambia$GMA == "0" ,]
Eco1 <- Zambia[Zambia$ECOREGION == "1" ,]
Eco2 <- Zambia[Zambia$ECOREGION == "0" ,]
# 7.0 t-test (Ecoregion and GMA differences) ----
# independent 2-sample t-test for i) ecoregion and ii) GMA differences
t.test(Zambia$ECOREGION1,Zambia$ELEVATION)
t.test(Zambia$ECOREGION1,Zambia$CWRRICHNESS)
t.test(Zambia$ECOREGION1,Zambia$DISTANCEHOTSPOT)
t.test(Zambia$ECOREGION1,Zambia$vigna_unguiculata)
t.test(Zambia$ECOREGION1,Zambia$vigna_juncea)
t.test(Zambia$ECOREGION1,Zambia$eleusine_coracana_subsp.africana)
t.test(Zambia$ECOREGION1,Zambia$HA)
t.test(Zambia$ECOREGION1,Zambia$FARMERS)
t.test(Zambia$ECOREGION1,Zambia$DISTANCECOMMUNITY)
t.test(Zambia$ECOREGION1,Zambia$USD)
t.test(Zambia$ECOREGION1,Zambia$USDHA)
t.test(Zambia$GMA,Zambia$ELEVATION)
t.test(Zambia$GMA,Zambia$CWRRICHNESS)
t.test(Zambia$GMA,Zambia$DISTANCEHOTSPOT)
t.test(Zambia$GMA,Zambia$vigna_unguiculata)
t.test(Zambia$GMA,Zambia$vigna_juncea)
t.test(Zambia$GMA,Zambia$eleusine_coracana_subsp.africana)
t.test(Zambia$GMA,Zambia$HA)
t.test(Zambia$GMA,Zambia$FARMERS)
t.test(Zambia$GMA,Zambia$DISTANCECOMMUNITY)
t.test(Zambia$GMA,Zambia$USD)
t.test(Zambia$GMA,Zambia$USDHA)
# 8.0 Standard deviations of parameters ----
# Ecoregion 1
sd(Eco1$ELEVATION)
sd(Eco1$CWRRICHNESS)
sd(Eco1$vigna_unguiculata)
sd(Eco1$vigna_juncea)
sd(Eco1$eleusine_coracana_subsp.africana)
sd(Eco1$DISTANCEHOTSPOT)
sd(Eco1$HA)
sd(Eco1$FARMERS)
sd(Eco1$DISTANCECOMMUNITY)
sd(Eco1$USD)
sd(Eco1$USDHA)
# Ecoregion 2
sd(Eco2$ELEVATION)
sd(Eco2$CWRRICHNESS)
sd(Eco2$vigna_unguiculata)
sd(Eco2$vigna_juncea)
sd(Eco2$eleusine_coracana_subsp.africana)
sd(Eco2$DISTANCEHOTSPOT)
sd(Eco2$HA)
sd(Eco2$FARMERS)
sd(Eco2$DISTANCECOMMUNITY)
sd(Eco2$USD)
sd(Eco2$USDHA)
# GMA
sd(GMA$ELEVATION)
sd(GMA$CWRRICHNESS)
sd(GMA$vigna_unguiculata)
sd(GMA$vigna_juncea)
sd(GMA$eleusine_coracana_subsp.africana)
sd(GMA$DISTANCEHOTSPOT)
sd(GMA$HA)
sd(GMA$FARMERS)
sd(GMA$DISTANCECOMMUNITY)
sd(GMA$USD)
sd(GMA$USDHA)
# non-GMA
sd(nonGMA$ELEVATION)
sd(nonGMA$CWRRICHNESS)
sd(nonGMA$vigna_unguiculata)
sd(nonGMA$vigna_juncea)
sd(nonGMA$eleusine_coracana_subsp.africana)
sd(nonGMA$DISTANCEHOTSPOT)
sd(nonGMA$HA)
sd(nonGMA$FARMERS)
sd(nonGMA$DISTANCECOMMUNITY)
sd(nonGMA$USD)
sd(nonGMA$USDHA)
# 9.0 Bar Plot community bids as cost per hectare for GMA and non-GMA sites----
# Plot all farmer bids from GMA sites (with confidence intervals)
(wbar1 <- ggplot(GMA, aes(x=reorder(COMMUNITY, USDHA), y=USDHA)) +
geom_bar(position=position_dodge(width=0.1), width = 0.15, stat="identity", colour="black", fill="#00868B") +
geom_smooth(method = "loess", se=TRUE, color="blue", aes(group=1)) +
ylab("Cost per hectare (USD)") +
xlab("Community bid offer GMA sites") +
theme(
panel.border = element_blank(),
panel.background = element_blank(),
axis.text.x=element_text(size=12, angle=45, vjust=1, hjust=1), # making the years at a bit of an angle
axis.text.y=element_text(size=12),
axis.title.x=element_text(size=14, face="plain"),
axis.title.y=element_text(size=14, face="plain"),
#panel.grid.major.x=element_blank(), # Removing the background grid lines
#panel.grid.minor.x=element_blank(),
#panel.grid.minor.y=element_blank(),
#panel.grid.major.y=element_blank(),
plot.margin = unit(c(1,1,1,1), units = , "cm"), # Adding a 1cm margin around the plot
#axis.text.x=element_blank(),
#axis.ticks.x=element_blank(),
panel.grid.minor = element_line(size = 0.1, linetype = 'solid', colour = "black")))
# Plot all farmer bids from nonGMA sites (with confidence intervals)
(wbar2 <- ggplot(nonGMA, aes(x=reorder(COMMUNITY, USDHA), y=USDHA)) +
geom_bar(position=position_dodge(width=0.1), width = 0.15, stat="identity", colour="black", fill="#00868B") +
geom_smooth(method = "loess", se=TRUE, color="blue", aes(group=1)) +
ylab("Cost per hectare (USD)") +
xlab("Community bid offer non-GMA sites") +
theme(
panel.border = element_blank(),
panel.background = element_blank(),
axis.text.x=element_text(size=12, angle=45, vjust=1, hjust=1), # making the years at a bit of an angle
axis.text.y=element_text(size=12),
axis.title.x=element_text(size=14, face="plain"),
axis.title.y=element_text(size=14, face="plain"),
#panel.grid.major.x=element_blank(), # Removing the background grid lines
#panel.grid.minor.x=element_blank(),
#panel.grid.minor.y=element_blank(),
#panel.grid.major.y=element_blank(),
plot.margin = unit(c(1,1,1,1), units = , "cm"), # Adding a 1cm margin around the plot
#axis.text.x=element_blank(),
#axis.ticks.x=element_blank(),
panel.grid.minor = element_line(size = 0.1, linetype = 'solid', colour = "black")))
## Arrange the two plots into a Panel
limits <- c(0, 1700)
breaks <- seq(limits[1], limits[2], by=100)
# assign common axis to both plots
wbar1.common.y <- wbar1 + scale_y_continuous(limits=limits, breaks=breaks)
wbar2.common.y <- wbar2 + scale_y_continuous(limits=limits, breaks=breaks)
# build the plots
wbar1.common.y <- ggplot_gtable(ggplot_build(wbar1.common.y))
wbar2.common.y <- ggplot_gtable(ggplot_build(wbar2.common.y))
# copy the plot height from p1 to p2
wbar1.common.y$heights <- wbar2.common.y$heights
# Display
grid.arrange(wbar1.common.y,wbar2.common.y,ncol=2,widths=c(10,10))
# 10.0 Bar Plot community bids as cost per hectare for Ecoregion1 and Ecoregion2 ----
(wbar3 <- ggplot(Eco1, aes(x=reorder(COMMUNITY, USDHA), y=USDHA)) +
geom_bar(position=position_dodge(width=0.1), width = 0.15, stat="identity", colour="black", fill="#00868B") +
geom_smooth(method = "loess", se=TRUE, color="blue", aes(group=1)) +
ylab("Cost per hectare (USD)") +
xlab("Community bid offer Ecoregion 1") +
theme(
panel.border = element_blank(),
panel.background = element_blank(),
axis.text.x=element_text(size=12, angle=45, vjust=1, hjust=0.8), # making the years at a bit of an angle
axis.text.y=element_text(size=12),
axis.title.x=element_text(size=14, face="plain"),
axis.title.y=element_text(size=14, face="plain"),
#panel.grid.major.x=element_blank(), # Removing the background grid lines
#panel.grid.minor.x=element_blank(),
#panel.grid.minor.y=element_blank(),
#panel.grid.major.y=element_blank(),
plot.margin = unit(c(1,1,1,1), units = , "cm"), # Adding a 1cm margin around the plot
#axis.text.x=element_blank(),
#axis.ticks.x=element_blank(),
panel.grid.minor = element_line(size = 0.1, linetype = 'solid', colour = "black")))
# Plot all farmer bids from nonGMA sites (with confidence intervals)
(wbar4 <- ggplot(Eco2, aes(x=reorder(COMMUNITY, USDHA), y=USDHA)) +
geom_bar(position=position_dodge(width=0.1), width = 0.15, stat="identity", colour="black", fill="#00868B") +
geom_smooth(method = "loess", se=TRUE, color="blue", aes(group=1)) +
ylab("Cost per hectare (USD)") +
xlab("Community bid offer Ecoregion 2") +
theme(
panel.border = element_blank(),
panel.background = element_blank(),
axis.text.x=element_text(size=12, angle=45, vjust=1, hjust=0.8), # making the years at a bit of an angle
axis.text.y=element_text(size=12),
axis.title.x=element_text(size=14, face="plain"),
axis.title.y=element_text(size=14, face="plain"),
#panel.grid.major.x=element_blank(), # Removing the background grid lines
#panel.grid.minor.x=element_blank(),
#panel.grid.minor.y=element_blank(),
#panel.grid.major.y=element_blank(),
plot.margin = unit(c(1,1,1,1), units = , "cm"), # Adding a 1cm margin around the plot
#axis.text.x=element_blank(),
#axis.ticks.x=element_blank(),
panel.grid.minor = element_line(size = 0.1, linetype = 'solid', colour = "black")))
## Arrange the two plots into a Panel
limits <- c(0, 1700)
breaks <- seq(limits[1], limits[2], by=100)
# assign common axis to both plots
wbar3.common.y <- wbar3 + scale_y_continuous(limits=limits, breaks=breaks)
wbar4.common.y <- wbar4 + scale_y_continuous(limits=limits, breaks=breaks)
# build the plots
wbar3.common.y <- ggplot_gtable(ggplot_build(wbar3.common.y))
wbar4.common.y <- ggplot_gtable(ggplot_build(wbar4.common.y))
# copy the plot height from p1 to p2
wbar3.common.y$heights <- wbar4.common.y$heights
# Display
grid.arrange(wbar3.common.y,wbar4.common.y,ncol=2,widths=c(10,10))
# 10.0 Correlation matrix analysis ----
# Create data frame of variables with selected columns using column indices
Zamcor <- Zambia[,c(3,7,12,13,14,15,16,17,33)]
Zamcor1 <- Zambia[,c(3,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,33)] # With extra variables included
# Group correlation test
corr.test(Zamcor[1:9])
corr.test(Zamcor1[1:26])
# Visulisations
pairs.panels(Zamcor[1:9])
# Simple visualisation of correlation analysis effect size including significance
x <- cor(Zamcor[1:9])
colnames (x) <- c("Farmers", "Ecoregion", "CWR Richness", "Elevation", "Distance to hotspot", "Area", "Distance from community", "GMA", "Price")
rownames(x) <- c("Farmers", "Ecoregion", "CWR Richness", "Elevation", "Distance to hotspot", "Area", "Distance from community", "GMA", "Price")
p.mat <- cor.mtest(Zamcor, conf.level = .95)
p.mat <- cor.mtest(Zamcor)$p
corrplot(x, p.mat = res1$p, sig.level = .05)
corrplot(x, type="upper", order="hclust", addrect = 2, p.mat = p.mat, sig.level = 0.05, insig = "blank")
# 11.0 Regression analysis ----
# Overarching regression model using bid offer price for all data (Note small sample size)
mod.1 = lm(formula = USD ~ HA, data = Zambia)
summary(mod.1)
| /Scripts/Exploredata_Communitybids.R | no_license | wainwrigh/Zambia-CWR-Data- | R | false | false | 11,781 | r | ###########################################################################################################
############################ Exploring data for community bid offers ####################################
###########################################################################################################
# This code specifies the descriptive statistics used for the community data for AMO Common. Note the sample size for this
# data is much smaller and so statistical analysis offers less power.
# 1.0 Set working directory and load the data ----
setwd ("C:/Users/wwainwright/Documents/R/Zambia_Analysis")
Zambia <- read.csv("C:/Users/wwainwright/Documents/R/Zambia_Analysis/Data/Community_Final.csv")
View(Zambia)
# Makes all the 0 values in data sheet be N/A
Zambia[Zambia==0]<-NA
# 2.0 Load the packages ----
# Install additional packages
install.packages("psych")
install.packages("corrplot")
# Load packages
library(tidyr)
library(dplyr)
library(ggplot2)
library(readr)
library(gridExtra)
library(scales)
library(psych)
library(corrplot)
# 3.0 Explore the data ----
names(Zambia)
show(Zambia)
# 4.0 summary of all data ----
# Summaries of data
summary(Zambia$ECOREGION)
summary(Zambia$GMA)
summary(Zambia$HA)
summary(Zambia$USD)
summary(Zambia$USDHA)
summary(Zambia$USDPLOT)
# Simple plots of the data
barplot(Zambia$HA)
barplot(Zambia$USD)
# 5.0 Aggregate the data for summary stats----
# GMA / non-GMA sites
aggregate(Zambia[, 3:34], list(Zambia$GMA), mean)
# Ecoregion 1 / Ecoregion 2
aggregate(Zambia[, 3:34], list(Zambia$ECOREGION1), mean)
# 6.0 Subset data into GMA and non-GMA / Ecoregion 1 and Ecoregion 2 ----
GMA <- Zambia[Zambia$GMA == "1" ,]
nonGMA <- Zambia[Zambia$GMA == "0" ,]
Eco1 <- Zambia[Zambia$ECOREGION == "1" ,]
Eco2 <- Zambia[Zambia$ECOREGION == "0" ,]
# 7.0 t-test (Ecoregion and GMA differences) ----
# independent 2-sample t-test for i) ecoregion and ii) GMA differences
t.test(Zambia$ECOREGION1,Zambia$ELEVATION)
t.test(Zambia$ECOREGION1,Zambia$CWRRICHNESS)
t.test(Zambia$ECOREGION1,Zambia$DISTANCEHOTSPOT)
t.test(Zambia$ECOREGION1,Zambia$vigna_unguiculata)
t.test(Zambia$ECOREGION1,Zambia$vigna_juncea)
t.test(Zambia$ECOREGION1,Zambia$eleusine_coracana_subsp.africana)
t.test(Zambia$ECOREGION1,Zambia$HA)
t.test(Zambia$ECOREGION1,Zambia$FARMERS)
t.test(Zambia$ECOREGION1,Zambia$DISTANCECOMMUNITY)
t.test(Zambia$ECOREGION1,Zambia$USD)
t.test(Zambia$ECOREGION1,Zambia$USDHA)
t.test(Zambia$GMA,Zambia$ELEVATION)
t.test(Zambia$GMA,Zambia$CWRRICHNESS)
t.test(Zambia$GMA,Zambia$DISTANCEHOTSPOT)
t.test(Zambia$GMA,Zambia$vigna_unguiculata)
t.test(Zambia$GMA,Zambia$vigna_juncea)
t.test(Zambia$GMA,Zambia$eleusine_coracana_subsp.africana)
t.test(Zambia$GMA,Zambia$HA)
t.test(Zambia$GMA,Zambia$FARMERS)
t.test(Zambia$GMA,Zambia$DISTANCECOMMUNITY)
t.test(Zambia$GMA,Zambia$USD)
t.test(Zambia$GMA,Zambia$USDHA)
# 8.0 Standard deviations of parameters ----
# Ecoregion 1
sd(Eco1$ELEVATION)
sd(Eco1$CWRRICHNESS)
sd(Eco1$vigna_unguiculata)
sd(Eco1$vigna_juncea)
sd(Eco1$eleusine_coracana_subsp.africana)
sd(Eco1$DISTANCEHOTSPOT)
sd(Eco1$HA)
sd(Eco1$FARMERS)
sd(Eco1$DISTANCECOMMUNITY)
sd(Eco1$USD)
sd(Eco1$USDHA)
# Ecoregion 2
sd(Eco2$ELEVATION)
sd(Eco2$CWRRICHNESS)
sd(Eco2$vigna_unguiculata)
sd(Eco2$vigna_juncea)
sd(Eco2$eleusine_coracana_subsp.africana)
sd(Eco2$DISTANCEHOTSPOT)
sd(Eco2$HA)
sd(Eco2$FARMERS)
sd(Eco2$DISTANCECOMMUNITY)
sd(Eco2$USD)
sd(Eco2$USDHA)
# GMA
sd(GMA$ELEVATION)
sd(GMA$CWRRICHNESS)
sd(GMA$vigna_unguiculata)
sd(GMA$vigna_juncea)
sd(GMA$eleusine_coracana_subsp.africana)
sd(GMA$DISTANCEHOTSPOT)
sd(GMA$HA)
sd(GMA$FARMERS)
sd(GMA$DISTANCECOMMUNITY)
sd(GMA$USD)
sd(GMA$USDHA)
# non-GMA
sd(nonGMA$ELEVATION)
sd(nonGMA$CWRRICHNESS)
sd(nonGMA$vigna_unguiculata)
sd(nonGMA$vigna_juncea)
sd(nonGMA$eleusine_coracana_subsp.africana)
sd(nonGMA$DISTANCEHOTSPOT)
sd(nonGMA$HA)
sd(nonGMA$FARMERS)
sd(nonGMA$DISTANCECOMMUNITY)
sd(nonGMA$USD)
sd(nonGMA$USDHA)
# 9.0 Bar Plot community bids as cost per hectare for GMA and non-GMA sites----
# Plot all farmer bids from GMA sites (with confidence intervals)
(wbar1 <- ggplot(GMA, aes(x=reorder(COMMUNITY, USDHA), y=USDHA)) +
geom_bar(position=position_dodge(width=0.1), width = 0.15, stat="identity", colour="black", fill="#00868B") +
geom_smooth(method = "loess", se=TRUE, color="blue", aes(group=1)) +
ylab("Cost per hectare (USD)") +
xlab("Community bid offer GMA sites") +
theme(
panel.border = element_blank(),
panel.background = element_blank(),
axis.text.x=element_text(size=12, angle=45, vjust=1, hjust=1), # making the years at a bit of an angle
axis.text.y=element_text(size=12),
axis.title.x=element_text(size=14, face="plain"),
axis.title.y=element_text(size=14, face="plain"),
#panel.grid.major.x=element_blank(), # Removing the background grid lines
#panel.grid.minor.x=element_blank(),
#panel.grid.minor.y=element_blank(),
#panel.grid.major.y=element_blank(),
plot.margin = unit(c(1,1,1,1), units = , "cm"), # Adding a 1cm margin around the plot
#axis.text.x=element_blank(),
#axis.ticks.x=element_blank(),
panel.grid.minor = element_line(size = 0.1, linetype = 'solid', colour = "black")))
# Plot all farmer bids from nonGMA sites (with confidence intervals)
(wbar2 <- ggplot(nonGMA, aes(x=reorder(COMMUNITY, USDHA), y=USDHA)) +
geom_bar(position=position_dodge(width=0.1), width = 0.15, stat="identity", colour="black", fill="#00868B") +
geom_smooth(method = "loess", se=TRUE, color="blue", aes(group=1)) +
ylab("Cost per hectare (USD)") +
xlab("Community bid offer non-GMA sites") +
theme(
panel.border = element_blank(),
panel.background = element_blank(),
axis.text.x=element_text(size=12, angle=45, vjust=1, hjust=1), # making the years at a bit of an angle
axis.text.y=element_text(size=12),
axis.title.x=element_text(size=14, face="plain"),
axis.title.y=element_text(size=14, face="plain"),
#panel.grid.major.x=element_blank(), # Removing the background grid lines
#panel.grid.minor.x=element_blank(),
#panel.grid.minor.y=element_blank(),
#panel.grid.major.y=element_blank(),
plot.margin = unit(c(1,1,1,1), units = , "cm"), # Adding a 1cm margin around the plot
#axis.text.x=element_blank(),
#axis.ticks.x=element_blank(),
panel.grid.minor = element_line(size = 0.1, linetype = 'solid', colour = "black")))
## Arrange the two plots into a Panel
limits <- c(0, 1700)
breaks <- seq(limits[1], limits[2], by=100)
# assign common axis to both plots
wbar1.common.y <- wbar1 + scale_y_continuous(limits=limits, breaks=breaks)
wbar2.common.y <- wbar2 + scale_y_continuous(limits=limits, breaks=breaks)
# build the plots
wbar1.common.y <- ggplot_gtable(ggplot_build(wbar1.common.y))
wbar2.common.y <- ggplot_gtable(ggplot_build(wbar2.common.y))
# copy the plot height from p1 to p2
wbar1.common.y$heights <- wbar2.common.y$heights
# Display
grid.arrange(wbar1.common.y,wbar2.common.y,ncol=2,widths=c(10,10))
# 10.0 Bar Plot community bids as cost per hectare for Ecoregion1 and Ecoregion2 ----
(wbar3 <- ggplot(Eco1, aes(x=reorder(COMMUNITY, USDHA), y=USDHA)) +
geom_bar(position=position_dodge(width=0.1), width = 0.15, stat="identity", colour="black", fill="#00868B") +
geom_smooth(method = "loess", se=TRUE, color="blue", aes(group=1)) +
ylab("Cost per hectare (USD)") +
xlab("Community bid offer Ecoregion 1") +
theme(
panel.border = element_blank(),
panel.background = element_blank(),
axis.text.x=element_text(size=12, angle=45, vjust=1, hjust=0.8), # making the years at a bit of an angle
axis.text.y=element_text(size=12),
axis.title.x=element_text(size=14, face="plain"),
axis.title.y=element_text(size=14, face="plain"),
#panel.grid.major.x=element_blank(), # Removing the background grid lines
#panel.grid.minor.x=element_blank(),
#panel.grid.minor.y=element_blank(),
#panel.grid.major.y=element_blank(),
plot.margin = unit(c(1,1,1,1), units = , "cm"), # Adding a 1cm margin around the plot
#axis.text.x=element_blank(),
#axis.ticks.x=element_blank(),
panel.grid.minor = element_line(size = 0.1, linetype = 'solid', colour = "black")))
# Plot all farmer bids from nonGMA sites (with confidence intervals)
(wbar4 <- ggplot(Eco2, aes(x=reorder(COMMUNITY, USDHA), y=USDHA)) +
geom_bar(position=position_dodge(width=0.1), width = 0.15, stat="identity", colour="black", fill="#00868B") +
geom_smooth(method = "loess", se=TRUE, color="blue", aes(group=1)) +
ylab("Cost per hectare (USD)") +
xlab("Community bid offer Ecoregion 2") +
theme(
panel.border = element_blank(),
panel.background = element_blank(),
axis.text.x=element_text(size=12, angle=45, vjust=1, hjust=0.8), # making the years at a bit of an angle
axis.text.y=element_text(size=12),
axis.title.x=element_text(size=14, face="plain"),
axis.title.y=element_text(size=14, face="plain"),
#panel.grid.major.x=element_blank(), # Removing the background grid lines
#panel.grid.minor.x=element_blank(),
#panel.grid.minor.y=element_blank(),
#panel.grid.major.y=element_blank(),
plot.margin = unit(c(1,1,1,1), units = , "cm"), # Adding a 1cm margin around the plot
#axis.text.x=element_blank(),
#axis.ticks.x=element_blank(),
panel.grid.minor = element_line(size = 0.1, linetype = 'solid', colour = "black")))
## Arrange the two plots into a Panel
limits <- c(0, 1700)
breaks <- seq(limits[1], limits[2], by=100)
# assign common axis to both plots
wbar3.common.y <- wbar3 + scale_y_continuous(limits=limits, breaks=breaks)
wbar4.common.y <- wbar4 + scale_y_continuous(limits=limits, breaks=breaks)
# build the plots
wbar3.common.y <- ggplot_gtable(ggplot_build(wbar3.common.y))
wbar4.common.y <- ggplot_gtable(ggplot_build(wbar4.common.y))
# copy the plot height from p1 to p2
wbar3.common.y$heights <- wbar4.common.y$heights
# Display
grid.arrange(wbar3.common.y,wbar4.common.y,ncol=2,widths=c(10,10))
# 10.0 Correlation matrix analysis ----
# Create data frame of variables with selected columns using column indices
Zamcor <- Zambia[,c(3,7,12,13,14,15,16,17,33)]
Zamcor1 <- Zambia[,c(3,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,33)] # With extra variables included
# Group correlation test
corr.test(Zamcor[1:9])
corr.test(Zamcor1[1:26])
# Visulisations
pairs.panels(Zamcor[1:9])
# Simple visualisation of correlation analysis effect size including significance
x <- cor(Zamcor[1:9])
colnames (x) <- c("Farmers", "Ecoregion", "CWR Richness", "Elevation", "Distance to hotspot", "Area", "Distance from community", "GMA", "Price")
rownames(x) <- c("Farmers", "Ecoregion", "CWR Richness", "Elevation", "Distance to hotspot", "Area", "Distance from community", "GMA", "Price")
p.mat <- cor.mtest(Zamcor, conf.level = .95)
p.mat <- cor.mtest(Zamcor)$p
corrplot(x, p.mat = res1$p, sig.level = .05)
corrplot(x, type="upper", order="hclust", addrect = 2, p.mat = p.mat, sig.level = 0.05, insig = "blank")
# 11.0 Regression analysis ----
# Overarching regression model using bid offer price for all data (Note small sample size)
mod.1 = lm(formula = USD ~ HA, data = Zambia)
summary(mod.1)
|
testlist <- list(id = c(-65537L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -65533L, 0L, -1241513985L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), x = numeric(0), y = numeric(0))
result <- do.call(ggforce:::enclose_points,testlist)
str(result) | /ggforce/inst/testfiles/enclose_points/libFuzzer_enclose_points/enclose_points_valgrind_files/1610031287-test.R | no_license | akhikolla/updated-only-Issues | R | false | false | 504 | r | testlist <- list(id = c(-65537L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -1L, -65533L, 0L, -1241513985L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), x = numeric(0), y = numeric(0))
result <- do.call(ggforce:::enclose_points,testlist)
str(result) |
\name{dykstra_linealBallDF}
\alias{dykstra_linealBallDF}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
%% ~~function to do ... ~~
}
\description{
%% ~~ A concise (1-5 lines) description of what the function does. ~~
}
\usage{
dykstra_linealBallDF(DF, A = "matrix", b = "vector", r = c(1), centers = matrix(rep(0, length(DF[1, ]) * length(r)), nrow = length(r), ncol = length(DF[1, ])), eq = rep("<=", M), I = matrix(rep(1, dim(DF)[1] * dim(DF)[2]), ncol = dim(DF)[2], nrow = dim(DF)[1]), W = diag(length(DF[1, ])))
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{DF}{
%% ~~Describe \code{DF} here~~
}
\item{A}{
%% ~~Describe \code{A} here~~
}
\item{b}{
%% ~~Describe \code{b} here~~
}
\item{r}{
%% ~~Describe \code{r} here~~
}
\item{centers}{
%% ~~Describe \code{centers} here~~
}
\item{eq}{
%% ~~Describe \code{eq} here~~
}
\item{I}{
%% ~~Describe \code{I} here~~
}
\item{W}{
%% ~~Describe \code{W} here~~
}
}
\details{
%% ~~ If necessary, more details than the description above ~~
}
\value{
%% ~Describe the value returned
%% If it is a LIST, use
%% \item{comp1 }{Description of 'comp1'}
%% \item{comp2 }{Description of 'comp2'}
%% ...
}
\references{
%% ~put references to the literature/web site here ~
}
\author{
%% ~~who you are~~
}
\note{
%% ~~further notes~~
}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
%% ~~objects to See Also as \code{\link{help}}, ~~~
}
\examples{
##---- Should be DIRECTLY executable !! ----
##-- ==> Define data, use random,
##-- or do help(data=index) for the standard data sets.
## The function is currently defined as
function (DF, A = "matrix", b = "vector", r = c(1), centers = matrix(rep(0,
length(DF[1, ]) * length(r)), nrow = length(r), ncol = length(DF[1,
])), eq = rep("<=", M), I = matrix(rep(1, dim(DF)[1] * dim(DF)[2]),
ncol = dim(DF)[2], nrow = dim(DF)[1]), W = diag(length(DF[1,
])))
{
M = length(r) + length(b)
df_checkLineal = check_rows_lineal(DF, A, b, eq[1:length(b)])
df_checkBall = check_rows_ball(DF, r, centers, eq[(length(b) +
1):M])
df_check = merge(df_checkLineal, df_checkBall, all = TRUE)
df_final = data.frame()
df_proyec = data.frame()
for (i in 1:length(DF[, 1])) {
xi = DF[i, ]
ii = which(I[i, ][1:length(I[i, ])] == 1)
if (any(I[i, ] == 1)) {
if (all(I[i, ] == 1)) {
x = dykstra_linealBall(xi, A, b, r, centers,
eq, W)
}
else {
xi_prima = xi[ii]
Wi = t(as.matrix(W[, ii]))[, ii]
A_prima = as.matrix(A[, ii])
bi_prima = b - as.matrix(A[, -ii]) \%*\% as.numeric(xi[-ii])
centers_prima = t(centers[, ii])
r_prima = c()
for (k in 1:length(r)) {
r_prima[k] = sqrt(abs(r^2 - sum((xi[-ii] -
centers[k, ][-ii])^2)))
}
x = dykstra_linealBall(xi_prima, A = A_prima,
b = bi_prima, r = r_prima, centers = centers_prima,
eq, Wi)
}
xi[ii] = x
df_final = rbind(df_final, xi)
if (any(xi != DF[i, ], na.rm = TRUE)) {
df_proyec = rbind(df_proyec, xi)
}
}
else {
df_final = rbind(df_final, xi)
}
}
return(list(df_proyec = df_proyec, df_check = df_check, df_original = DF,
df_final = df_final))
}
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ ~kwd1 }% use one of RShowDoc("KEYWORDS")
\keyword{ ~kwd2 }% __ONLY ONE__ keyword per line
| /adjustRestrictionsDF/man/dykstra_linealBallDF.Rd | no_license | GuilleAbril/DykstraDF | R | false | false | 3,920 | rd | \name{dykstra_linealBallDF}
\alias{dykstra_linealBallDF}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
%% ~~function to do ... ~~
}
\description{
%% ~~ A concise (1-5 lines) description of what the function does. ~~
}
\usage{
dykstra_linealBallDF(DF, A = "matrix", b = "vector", r = c(1), centers = matrix(rep(0, length(DF[1, ]) * length(r)), nrow = length(r), ncol = length(DF[1, ])), eq = rep("<=", M), I = matrix(rep(1, dim(DF)[1] * dim(DF)[2]), ncol = dim(DF)[2], nrow = dim(DF)[1]), W = diag(length(DF[1, ])))
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{DF}{
%% ~~Describe \code{DF} here~~
}
\item{A}{
%% ~~Describe \code{A} here~~
}
\item{b}{
%% ~~Describe \code{b} here~~
}
\item{r}{
%% ~~Describe \code{r} here~~
}
\item{centers}{
%% ~~Describe \code{centers} here~~
}
\item{eq}{
%% ~~Describe \code{eq} here~~
}
\item{I}{
%% ~~Describe \code{I} here~~
}
\item{W}{
%% ~~Describe \code{W} here~~
}
}
\details{
%% ~~ If necessary, more details than the description above ~~
}
\value{
%% ~Describe the value returned
%% If it is a LIST, use
%% \item{comp1 }{Description of 'comp1'}
%% \item{comp2 }{Description of 'comp2'}
%% ...
}
\references{
%% ~put references to the literature/web site here ~
}
\author{
%% ~~who you are~~
}
\note{
%% ~~further notes~~
}
%% ~Make other sections like Warning with \section{Warning }{....} ~
\seealso{
%% ~~objects to See Also as \code{\link{help}}, ~~~
}
\examples{
##---- Should be DIRECTLY executable !! ----
##-- ==> Define data, use random,
##-- or do help(data=index) for the standard data sets.
## The function is currently defined as
function (DF, A = "matrix", b = "vector", r = c(1), centers = matrix(rep(0,
length(DF[1, ]) * length(r)), nrow = length(r), ncol = length(DF[1,
])), eq = rep("<=", M), I = matrix(rep(1, dim(DF)[1] * dim(DF)[2]),
ncol = dim(DF)[2], nrow = dim(DF)[1]), W = diag(length(DF[1,
])))
{
M = length(r) + length(b)
df_checkLineal = check_rows_lineal(DF, A, b, eq[1:length(b)])
df_checkBall = check_rows_ball(DF, r, centers, eq[(length(b) +
1):M])
df_check = merge(df_checkLineal, df_checkBall, all = TRUE)
df_final = data.frame()
df_proyec = data.frame()
for (i in 1:length(DF[, 1])) {
xi = DF[i, ]
ii = which(I[i, ][1:length(I[i, ])] == 1)
if (any(I[i, ] == 1)) {
if (all(I[i, ] == 1)) {
x = dykstra_linealBall(xi, A, b, r, centers,
eq, W)
}
else {
xi_prima = xi[ii]
Wi = t(as.matrix(W[, ii]))[, ii]
A_prima = as.matrix(A[, ii])
bi_prima = b - as.matrix(A[, -ii]) \%*\% as.numeric(xi[-ii])
centers_prima = t(centers[, ii])
r_prima = c()
for (k in 1:length(r)) {
r_prima[k] = sqrt(abs(r^2 - sum((xi[-ii] -
centers[k, ][-ii])^2)))
}
x = dykstra_linealBall(xi_prima, A = A_prima,
b = bi_prima, r = r_prima, centers = centers_prima,
eq, Wi)
}
xi[ii] = x
df_final = rbind(df_final, xi)
if (any(xi != DF[i, ], na.rm = TRUE)) {
df_proyec = rbind(df_proyec, xi)
}
}
else {
df_final = rbind(df_final, xi)
}
}
return(list(df_proyec = df_proyec, df_check = df_check, df_original = DF,
df_final = df_final))
}
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ ~kwd1 }% use one of RShowDoc("KEYWORDS")
\keyword{ ~kwd2 }% __ONLY ONE__ keyword per line
|
library(dplyr, warn.conflicts = F, quietly = T)
source('dt_sim.R')
#weights related functions
source('funs/weight_define_each.R')
source('funs/fun_a.R')
source('funs/fun_b.R')
source('funs/assign_weights.R')
source('funs/pvf_apply.R')
source('funs/pvf.R')
source('funs/mi_weights.R')
#scenario: four weights- BCVA, CST and AEs
#BCVA is defined as a function of BCVA at BL
#AEs are defined as a function of sex
#CST is defined as a function of CST at BL
#Scenario 2: patients care more about PE and ocular AEs than non-ocular AEs or CST
#################
#define weights #
#################
#assume that PE weights are affected only by BCVA at BL
#patients who have lower BCVA at BL would have higher weights on average that patients who have higher
#BCVA values at BL
v1_w1_mu <- c(90, 60, 30)
v1_w1_sd <- rep(7, 3)
#assume that AEs weights are affected by sex, and that women would have lower weights than men
v1_w2_mu <- c(70, 80)
v1_w2_sd <- rep(7, 2)
v1_w3_mu <- c(30, 40)
v1_w3_sd <- rep(7, 2)
#assume that CST weights are affected by CST at BL, patients with higher CST at BL, will give higher
#weights for the CST outcome
v1_w4_mu <- c(15, 30)
v1_w4_sd <- rep(7, 2)
p_miss <- 0.5
x1 <- parallel::mclapply(X = 1:1000,
mc.cores = 24,
FUN = function(i){
#generate simulated data to be used with weights
set.seed(888*i)
dt_out <- dt_sim()
#weights specification
w1_spec <- weight_define_each(data = dt_out, name_weight = 'bcva_48w', br_spec = 'benefit', 'bcvac_bl', w_mu = v1_w1_mu, w_sd = v1_w1_sd)
w2_spec <- weight_define_each(data = dt_out, name_weight = 'ae_oc', br_spec = 'risk', 'sex', w_mu = v1_w2_mu, w_sd = v1_w2_sd)
w3_spec <- weight_define_each(data = dt_out, name_weight = 'ae_noc', br_spec = 'risk', 'sex', w_mu = v1_w3_mu, w_sd = v1_w3_sd)
w4_spec <- weight_define_each(data = dt_out, name_weight = 'cst_16w', br_spec = 'risk', 'cstc_bl', w_mu = v1_w4_mu, w_sd = v1_w4_sd)
#cobmine weights into one list
l <- list(w1_spec, w2_spec, w3_spec, w4_spec)
#assign weights based on the mean/sd specification provided by the user
#for each patient, the highest weight will be assigned 100
dt_w <- assign_weights(data = dt_out, w_spec = l)
#standardize weights and apply utilization function that calculates mcda scores for each patient
dt_final <- pvf_apply(data = dt_w, w_spec = l)
#treatment arms comparison using all the weight, only XX% of the weights
dt_final[, 'miss'] <- stats::rbinom(n = nrow(dt_final), 1, prob = p_miss)
mcda_test_all <- stats::t.test(dt_final$mcda[dt_final$trt=='c'], dt_final$mcda[dt_final$trt=='t'])
mcda_test_obs <- stats::t.test(dt_final$mcda[dt_final$trt=='c' & dt_final$miss == 0],
dt_final$mcda[dt_final$trt=='t' & dt_final$miss == 0])
mcda_test_mi <- mi_weights(data = dt_final,
vars_bl = c('bcva_bl', 'age_bl', 'sex', 'cst_bl', 'srf', 'irf', 'rpe'),
w_spec = l, num_m = 10, mi_method = 'cart',
trunc_range = TRUE)
###########################
#summarise the br results #
###########################
br_comp <- tibble::tibble(meth = 'all',
mean_diff = mcda_test_all$estimate[1] - mcda_test_all$estimate[2],
se_diff = mean_diff/mcda_test_all$statistic)
br_comp[2, 'meth'] <- 'obs'
br_comp[2, 'mean_diff'] <- mcda_test_obs$estimate[1] - mcda_test_obs$estimate[2]
br_comp[2, 'se_diff'] <- (mcda_test_obs$estimate[1] - mcda_test_obs$estimate[2])/
mcda_test_obs$statistic
br_comp[3, 'meth'] <- 'mi'
br_comp[3, 'mean_diff'] <- mcda_test_mi$qbar
br_comp[3, 'se_diff'] <- sqrt(mcda_test_mi$t)
br_comp[3, 'ubar'] <- mcda_test_mi$ubar
br_comp[3, 'b'] <- mcda_test_mi$b
br_result <- tibble::tibble(res = ifelse(mcda_test_all$conf.int[2] < 0, 'benefit', 'no benefit'),
meth = 'all')
br_result[2, 'res'] <- ifelse(mcda_test_obs$conf.int[2] < 0, 'benefit', 'no benefit')
br_result[2, 'meth'] <- 'obs'
br_result[3, 'res'] <- ifelse(mcda_test_mi$qbar + qt(0.975, df = mcda_test_mi$v)*
sqrt(mcda_test_mi$t) < 0, 'benefit', 'no benefit')
br_result[3, 'meth'] <- 'mi'
br_result[, 'sim_id'] <- i
out <- list(br_comp, br_result)%>%purrr::set_names('br_comp', 'br_result')
return(out)
})
saveRDS(x1, sprintf('mcda_results/mcda_c4_sc2_pmiss%d_%s%s.rds', 100*0.5, 'cart', TRUE))
| /pgms_sim/mcda_c4_sc2_pmiss50_cartTRUE.R | no_license | yuliasidi/ch3sim | R | false | false | 4,451 | r |
library(dplyr, warn.conflicts = F, quietly = T)
source('dt_sim.R')
#weights related functions
source('funs/weight_define_each.R')
source('funs/fun_a.R')
source('funs/fun_b.R')
source('funs/assign_weights.R')
source('funs/pvf_apply.R')
source('funs/pvf.R')
source('funs/mi_weights.R')
#scenario: four weights- BCVA, CST and AEs
#BCVA is defined as a function of BCVA at BL
#AEs are defined as a function of sex
#CST is defined as a function of CST at BL
#Scenario 2: patients care more about PE and ocular AEs than non-ocular AEs or CST
#################
#define weights #
#################
#assume that PE weights are affected only by BCVA at BL
#patients who have lower BCVA at BL would have higher weights on average that patients who have higher
#BCVA values at BL
v1_w1_mu <- c(90, 60, 30)
v1_w1_sd <- rep(7, 3)
#assume that AEs weights are affected by sex, and that women would have lower weights than men
v1_w2_mu <- c(70, 80)
v1_w2_sd <- rep(7, 2)
v1_w3_mu <- c(30, 40)
v1_w3_sd <- rep(7, 2)
#assume that CST weights are affected by CST at BL, patients with higher CST at BL, will give higher
#weights for the CST outcome
v1_w4_mu <- c(15, 30)
v1_w4_sd <- rep(7, 2)
p_miss <- 0.5
x1 <- parallel::mclapply(X = 1:1000,
mc.cores = 24,
FUN = function(i){
#generate simulated data to be used with weights
set.seed(888*i)
dt_out <- dt_sim()
#weights specification
w1_spec <- weight_define_each(data = dt_out, name_weight = 'bcva_48w', br_spec = 'benefit', 'bcvac_bl', w_mu = v1_w1_mu, w_sd = v1_w1_sd)
w2_spec <- weight_define_each(data = dt_out, name_weight = 'ae_oc', br_spec = 'risk', 'sex', w_mu = v1_w2_mu, w_sd = v1_w2_sd)
w3_spec <- weight_define_each(data = dt_out, name_weight = 'ae_noc', br_spec = 'risk', 'sex', w_mu = v1_w3_mu, w_sd = v1_w3_sd)
w4_spec <- weight_define_each(data = dt_out, name_weight = 'cst_16w', br_spec = 'risk', 'cstc_bl', w_mu = v1_w4_mu, w_sd = v1_w4_sd)
#cobmine weights into one list
l <- list(w1_spec, w2_spec, w3_spec, w4_spec)
#assign weights based on the mean/sd specification provided by the user
#for each patient, the highest weight will be assigned 100
dt_w <- assign_weights(data = dt_out, w_spec = l)
#standardize weights and apply utilization function that calculates mcda scores for each patient
dt_final <- pvf_apply(data = dt_w, w_spec = l)
#treatment arms comparison using all the weight, only XX% of the weights
dt_final[, 'miss'] <- stats::rbinom(n = nrow(dt_final), 1, prob = p_miss)
mcda_test_all <- stats::t.test(dt_final$mcda[dt_final$trt=='c'], dt_final$mcda[dt_final$trt=='t'])
mcda_test_obs <- stats::t.test(dt_final$mcda[dt_final$trt=='c' & dt_final$miss == 0],
dt_final$mcda[dt_final$trt=='t' & dt_final$miss == 0])
mcda_test_mi <- mi_weights(data = dt_final,
vars_bl = c('bcva_bl', 'age_bl', 'sex', 'cst_bl', 'srf', 'irf', 'rpe'),
w_spec = l, num_m = 10, mi_method = 'cart',
trunc_range = TRUE)
###########################
#summarise the br results #
###########################
br_comp <- tibble::tibble(meth = 'all',
mean_diff = mcda_test_all$estimate[1] - mcda_test_all$estimate[2],
se_diff = mean_diff/mcda_test_all$statistic)
br_comp[2, 'meth'] <- 'obs'
br_comp[2, 'mean_diff'] <- mcda_test_obs$estimate[1] - mcda_test_obs$estimate[2]
br_comp[2, 'se_diff'] <- (mcda_test_obs$estimate[1] - mcda_test_obs$estimate[2])/
mcda_test_obs$statistic
br_comp[3, 'meth'] <- 'mi'
br_comp[3, 'mean_diff'] <- mcda_test_mi$qbar
br_comp[3, 'se_diff'] <- sqrt(mcda_test_mi$t)
br_comp[3, 'ubar'] <- mcda_test_mi$ubar
br_comp[3, 'b'] <- mcda_test_mi$b
br_result <- tibble::tibble(res = ifelse(mcda_test_all$conf.int[2] < 0, 'benefit', 'no benefit'),
meth = 'all')
br_result[2, 'res'] <- ifelse(mcda_test_obs$conf.int[2] < 0, 'benefit', 'no benefit')
br_result[2, 'meth'] <- 'obs'
br_result[3, 'res'] <- ifelse(mcda_test_mi$qbar + qt(0.975, df = mcda_test_mi$v)*
sqrt(mcda_test_mi$t) < 0, 'benefit', 'no benefit')
br_result[3, 'meth'] <- 'mi'
br_result[, 'sim_id'] <- i
out <- list(br_comp, br_result)%>%purrr::set_names('br_comp', 'br_result')
return(out)
})
saveRDS(x1, sprintf('mcda_results/mcda_c4_sc2_pmiss%d_%s%s.rds', 100*0.5, 'cart', TRUE))
|
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.53818252170339e+295, 8.6936633125005e-311, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(5L, 7L)))
result <- do.call(CNull:::communities_individual_based_sampling_beta,testlist)
str(result) | /CNull/inst/testfiles/communities_individual_based_sampling_beta/AFL_communities_individual_based_sampling_beta/communities_individual_based_sampling_beta_valgrind_files/1615829465-test.R | no_license | akhikolla/updatedatatype-list2 | R | false | false | 360 | r | testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.53818252170339e+295, 8.6936633125005e-311, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(5L, 7L)))
result <- do.call(CNull:::communities_individual_based_sampling_beta,testlist)
str(result) |
#' Arrange several plots into a single view
#'
#' @param ... Set of plots to arrange.
#' @param plotlist List of plots
#' @param file Unused
#' @param cols Number of columns to arrange the plots into.
#' @param layout Layout matrix
#'
#'
#' @examples
multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
library(grid)
# Make a list from the ... arguments and plotlist
plots <- c(list(...), plotlist)
numPlots = length(plots)
# If layout is NULL, then use 'cols' to determine layout
if (is.null(layout)) {
# Make the panel
# ncol: Number of columns of plots
# nrow: Number of rows needed, calculated from # of cols
layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
ncol = cols, nrow = ceiling(numPlots/cols))
}
if (numPlots==1) {
print(plots[[1]])
} else {
# Set up the page
grid.newpage()
pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))
# Make each plot, in the correct location
for (i in 1:numPlots) {
# Get the i,j matrix positions of the regions that contain this subplot
matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))
print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
layout.pos.col = matchidx$col))
}
}
}
#' Distribution of xbar and theta
#'
#' @param n Number of samples per iteration
#' @param iter Number of iterations to simulate
#' @param mu Expected value of the variables
#' @param sigma Covariance of the variables
#'
#'
#' @examples
xbarthetadist = function(n,iter,mu,sigma){
library(mvtnorm)
library(ggplot2)
library(viridis)
mat = matrix(NA, nr= iter, nc=3)
colnames(mat)= c("xbar1","xbar2","theta")
for(i in 1:iter){
x = rmvnorm(n,mu,sigma)
mat[i,c(1,2)] <- colMeans(x)
s=cov(x)
eig=eigen(s)
theta = acos(eig$vectors[,1][1])
mat[i,3]<-theta
}
df=as.data.frame(mat)
g = ggplot(df, aes(x=xbar1,y=xbar2)) + coord_equal()
a = ggplot(df, aes(x=theta))
gp = g + geom_point()
#print(gp)
gd = g + stat_density2d(aes(colour=..density..), geom='point', contour=F) + scale_color_viridis()
#print(gd)
ah = a + geom_histogram()
ad = a + geom_density(fill="red")
multiplot(gp, gd, ah, ad, cols=2)
#print(ah)
#print(ad)
#head(mat)
}
# end of function
| /R/distsim.R | no_license | matt-m-herndon/class_exercise_shiny_sample | R | false | false | 2,352 | r | #' Arrange several plots into a single view
#'
#' @param ... Set of plots to arrange.
#' @param plotlist List of plots
#' @param file Unused
#' @param cols Number of columns to arrange the plots into.
#' @param layout Layout matrix
#'
#'
#' @examples
multiplot <- function(..., plotlist=NULL, file, cols=1, layout=NULL) {
library(grid)
# Make a list from the ... arguments and plotlist
plots <- c(list(...), plotlist)
numPlots = length(plots)
# If layout is NULL, then use 'cols' to determine layout
if (is.null(layout)) {
# Make the panel
# ncol: Number of columns of plots
# nrow: Number of rows needed, calculated from # of cols
layout <- matrix(seq(1, cols * ceiling(numPlots/cols)),
ncol = cols, nrow = ceiling(numPlots/cols))
}
if (numPlots==1) {
print(plots[[1]])
} else {
# Set up the page
grid.newpage()
pushViewport(viewport(layout = grid.layout(nrow(layout), ncol(layout))))
# Make each plot, in the correct location
for (i in 1:numPlots) {
# Get the i,j matrix positions of the regions that contain this subplot
matchidx <- as.data.frame(which(layout == i, arr.ind = TRUE))
print(plots[[i]], vp = viewport(layout.pos.row = matchidx$row,
layout.pos.col = matchidx$col))
}
}
}
#' Distribution of xbar and theta
#'
#' @param n Number of samples per iteration
#' @param iter Number of iterations to simulate
#' @param mu Expected value of the variables
#' @param sigma Covariance of the variables
#'
#'
#' @examples
xbarthetadist = function(n,iter,mu,sigma){
library(mvtnorm)
library(ggplot2)
library(viridis)
mat = matrix(NA, nr= iter, nc=3)
colnames(mat)= c("xbar1","xbar2","theta")
for(i in 1:iter){
x = rmvnorm(n,mu,sigma)
mat[i,c(1,2)] <- colMeans(x)
s=cov(x)
eig=eigen(s)
theta = acos(eig$vectors[,1][1])
mat[i,3]<-theta
}
df=as.data.frame(mat)
g = ggplot(df, aes(x=xbar1,y=xbar2)) + coord_equal()
a = ggplot(df, aes(x=theta))
gp = g + geom_point()
#print(gp)
gd = g + stat_density2d(aes(colour=..density..), geom='point', contour=F) + scale_color_viridis()
#print(gd)
ah = a + geom_histogram()
ad = a + geom_density(fill="red")
multiplot(gp, gd, ah, ad, cols=2)
#print(ah)
#print(ad)
#head(mat)
}
# end of function
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/genericFunctions.R
\name{tibbleToNamedMatrix}
\alias{tibbleToNamedMatrix}
\title{Convert a data frame with an id column into a matrix with row names}
\usage{
tibbleToNamedMatrix(tibble, row_names = "transcript_id")
}
\description{
Convert a data frame with an id column into a matrix with row names
}
| /seqUtils/man/tibbleToNamedMatrix.Rd | permissive | kauralasoo/macrophage-tuQTLs | R | false | true | 379 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/genericFunctions.R
\name{tibbleToNamedMatrix}
\alias{tibbleToNamedMatrix}
\title{Convert a data frame with an id column into a matrix with row names}
\usage{
tibbleToNamedMatrix(tibble, row_names = "transcript_id")
}
\description{
Convert a data frame with an id column into a matrix with row names
}
|
standardize <- function(xxxx) {
values(xxxx) <- values(xxxx) + 1
min_x <- min(na.omit(values(xxxx)))
if(min_x < 0) {
values(xxxx) <- values(xxxx) - min_x
}
max_x <- max(na.omit(values(xxxx)))
values(xxxx) <- values(xxxx) / max_x
return(xxxx)
}
| /07_landscape_genetics1/standardize_raster.r | no_license | jdmanthey/MolEcol2019 | R | false | false | 259 | r |
standardize <- function(xxxx) {
values(xxxx) <- values(xxxx) + 1
min_x <- min(na.omit(values(xxxx)))
if(min_x < 0) {
values(xxxx) <- values(xxxx) - min_x
}
max_x <- max(na.omit(values(xxxx)))
values(xxxx) <- values(xxxx) / max_x
return(xxxx)
}
|
# Examples of classification
input <- matrix(runif(1000), 500, 2)
input_valid <- matrix(runif(100), 50, 2)
target <- (cos(rowSums(input + input^2)) > 0.5) * 1
target_valid <- (cos(rowSums(input_valid + input_valid^2)) > 0.5) * 1
# create a new deep neural network for classificaiton
dnn_classification <- new_dnn(
c(2, 50, 50, 20, 1), # The layer structure of the deep neural network.
# The first element is the number of input variables.
# The last element is the number of output variables.
hidden_layer_default = rectified_linear_unit_function, # for hidden layers, use rectified_linear_unit_function
output_layer_default = sigmoidUnitDerivative # for classification, use sigmoidUnitDerivative function
)
dnn_classification <- train_dnn(
dnn_classification,
# training data
input, # input variable for training
target, # target variable for training
input_valid, # input variable for validation
target_valid, # target variable for validation
# training parameters
learn_rate_weight = exp(-8) * 10, # learning rate for weights, higher if use dropout
learn_rate_bias = exp(-8) * 10, # learning rate for biases, hihger if use dropout
learn_rate_gamma = exp(-8) * 10, # learning rate for the gamma factor used
batch_size = 10, # number of observations in a batch during training. Higher for faster training. Lower for faster convergence
batch_normalization = T, # logical value, T to use batch normalization
dropout_input = 0.2, # dropout ratio in input.
dropout_hidden = 0.5, # dropout ratio in hidden layers
momentum_initial = 0.6, # initial momentum in Stochastic Gradient Descent training
momentum_final = 0.9, # final momentum in Stochastic Gradient Descent training
momentum_switch = 100, # after which the momentum is switched from initial to final momentum
num_epochs = 100, # number of iterations in training
# Error function
error_function = crossEntropyErr, # error function to minimize during training. For regression, use crossEntropyErr
report_classification_error = T # whether to print classification error during training
)
# the prediciton by dnn_regression
pred <- predict(dnn_classification)
hist(pred)
# calculate the r-squared of the prediciton
AR(dnn_classification)
# calcualte the r-squared of the prediciton in validation
AR(dnn_classification, input = input_valid, target = target_valid)
# print the layer weights
# this function can print heatmap, histogram, or a surface
print_weight(dnn_regression, 1, type = "heatmap")
print_weight(dnn_regression, 2, type = "surface")
print_weight(dnn_regression, 3, type = "histogram")
| /deeplearning/inst/examples_classification.R | no_license | ingted/R-Examples | R | false | false | 2,730 | r | # Examples of classification
input <- matrix(runif(1000), 500, 2)
input_valid <- matrix(runif(100), 50, 2)
target <- (cos(rowSums(input + input^2)) > 0.5) * 1
target_valid <- (cos(rowSums(input_valid + input_valid^2)) > 0.5) * 1
# create a new deep neural network for classificaiton
dnn_classification <- new_dnn(
c(2, 50, 50, 20, 1), # The layer structure of the deep neural network.
# The first element is the number of input variables.
# The last element is the number of output variables.
hidden_layer_default = rectified_linear_unit_function, # for hidden layers, use rectified_linear_unit_function
output_layer_default = sigmoidUnitDerivative # for classification, use sigmoidUnitDerivative function
)
dnn_classification <- train_dnn(
dnn_classification,
# training data
input, # input variable for training
target, # target variable for training
input_valid, # input variable for validation
target_valid, # target variable for validation
# training parameters
learn_rate_weight = exp(-8) * 10, # learning rate for weights, higher if use dropout
learn_rate_bias = exp(-8) * 10, # learning rate for biases, hihger if use dropout
learn_rate_gamma = exp(-8) * 10, # learning rate for the gamma factor used
batch_size = 10, # number of observations in a batch during training. Higher for faster training. Lower for faster convergence
batch_normalization = T, # logical value, T to use batch normalization
dropout_input = 0.2, # dropout ratio in input.
dropout_hidden = 0.5, # dropout ratio in hidden layers
momentum_initial = 0.6, # initial momentum in Stochastic Gradient Descent training
momentum_final = 0.9, # final momentum in Stochastic Gradient Descent training
momentum_switch = 100, # after which the momentum is switched from initial to final momentum
num_epochs = 100, # number of iterations in training
# Error function
error_function = crossEntropyErr, # error function to minimize during training. For regression, use crossEntropyErr
report_classification_error = T # whether to print classification error during training
)
# the prediciton by dnn_regression
pred <- predict(dnn_classification)
hist(pred)
# calculate the r-squared of the prediciton
AR(dnn_classification)
# calcualte the r-squared of the prediciton in validation
AR(dnn_classification, input = input_valid, target = target_valid)
# print the layer weights
# this function can print heatmap, histogram, or a surface
print_weight(dnn_regression, 1, type = "heatmap")
print_weight(dnn_regression, 2, type = "surface")
print_weight(dnn_regression, 3, type = "histogram")
|
#' Abundance and revenue information of fish caught in Moorea, French Polynesia
#'
#' Calculate the most frequently caught fish in each location, total revenue for each location, total fisheries revenue sum, and if requested a graph of revenue by location and total revenue (as text)
#' @param catch_location_data data frame with columns: fish species, northside, westside, and eastside (sides of the island)
#' @param price_data data frame with fish species and price (Polynesian Franc/kg)
#' @param graph specify TRUE for output of a graph of revenue by location
#' @return returns a list containing most frequently caught fish in each location, revenue by location, revenue by fisheries, total revene, and graph if requested
fish_summary = function(catch_location_data, price_data, graph=FALSE) {
### 1. most frequently caught fish at each location (ie: side of the island)
north_catch <- rep(catch_location_data$fish, catch_location_data$north)
west_catch <- rep(catch_location_data$fish, catch_location_data$west)
east_catch <- rep(catch_location_data$fish, catch_location_data$east)
north_catch <- as.factor(north_catch)
west_catch <- as.factor(west_catch)
east_catch <- as.factor(east_catch)
freq_north <- names(which.max(summary(north_catch)))
freq_west <- names(which.max(summary(west_catch)))
freq_east <- names(which.max(summary(east_catch)))
most_frequent_catch <- data_frame(freq_north, freq_west, freq_east) %>%
magrittr::set_colnames(value = c("freq_north", "freq_west", "freq_east"))
### 2. total revenues by location
if(any(price_data$price < 0)) stop('Potential error: fish prices can not be negative')
revenues_locations <- left_join(catch_location_data, price_data, by = "fish") %>%
mutate(north_rev = north*price) %>%
mutate(west_rev = west*price) %>%
mutate(east_rev = east*price)
north_rev = sum(revenues_locations$north_rev)
west_rev = sum(revenues_locations$west_rev)
east_rev = sum(revenues_locations$east_rev)
total_revenues_locations <- data_frame(north_rev, west_rev, east_rev) %>%
magrittr::set_colnames(value = c("rev_north", "rev_west", "rev_east"))
### 3. total revenues by fishery
total_revenues_by_fishery <- left_join(catch_location_data, price_data, by = "fish") %>%
mutate(totalfish = rowSums(.[2:4])) %>%
mutate(fishrev = totalfish*price) %>%
select("fish", "fishrev") %>%
magrittr::set_colnames(value = c("Fishery", "Total Revenue"))
### 4. total revenue of all fisheries
total_revenue <- sum(north_rev, west_rev, east_rev)
### 5. graph of revenues by location if requested with total revenue printed bottom right
if (graph == TRUE) {
graph <- revenues_locations %>%
magrittr::set_colnames(value = c("fish", "north", "east", "west", "price", "North", "West", "East")) %>%
gather("North", "West", "East", key = "location", value = "price") %>%
group_by(location) %>%
summarize(price=sum(price)) %>%
ungroup()
caption <- c("Total Revenue: PF")
graph_revenue <- ggplot(graph) +
geom_col(aes(x=location, y = price), fill= "deepskyblue4") +
ylab("Price (PF/kg)") +
xlab("Location") +
theme_classic() +
labs(title ="Total Catch Revenues by Location", caption = paste(caption,total_revenue))
graph_revenue
}
return(list(most_frequent_catch, total_revenues_locations, total_revenues_by_fishery, total_revenue, graph_revenue))
} | /Assignment_4/R/calc_fisheries_data.R | no_license | j-verstaen/ESM262 | R | false | false | 3,518 | r | #' Abundance and revenue information of fish caught in Moorea, French Polynesia
#'
#' Calculate the most frequently caught fish in each location, total revenue for each location, total fisheries revenue sum, and if requested a graph of revenue by location and total revenue (as text)
#' @param catch_location_data data frame with columns: fish species, northside, westside, and eastside (sides of the island)
#' @param price_data data frame with fish species and price (Polynesian Franc/kg)
#' @param graph specify TRUE for output of a graph of revenue by location
#' @return returns a list containing most frequently caught fish in each location, revenue by location, revenue by fisheries, total revene, and graph if requested
fish_summary = function(catch_location_data, price_data, graph=FALSE) {
### 1. most frequently caught fish at each location (ie: side of the island)
north_catch <- rep(catch_location_data$fish, catch_location_data$north)
west_catch <- rep(catch_location_data$fish, catch_location_data$west)
east_catch <- rep(catch_location_data$fish, catch_location_data$east)
north_catch <- as.factor(north_catch)
west_catch <- as.factor(west_catch)
east_catch <- as.factor(east_catch)
freq_north <- names(which.max(summary(north_catch)))
freq_west <- names(which.max(summary(west_catch)))
freq_east <- names(which.max(summary(east_catch)))
most_frequent_catch <- data_frame(freq_north, freq_west, freq_east) %>%
magrittr::set_colnames(value = c("freq_north", "freq_west", "freq_east"))
### 2. total revenues by location
if(any(price_data$price < 0)) stop('Potential error: fish prices can not be negative')
revenues_locations <- left_join(catch_location_data, price_data, by = "fish") %>%
mutate(north_rev = north*price) %>%
mutate(west_rev = west*price) %>%
mutate(east_rev = east*price)
north_rev = sum(revenues_locations$north_rev)
west_rev = sum(revenues_locations$west_rev)
east_rev = sum(revenues_locations$east_rev)
total_revenues_locations <- data_frame(north_rev, west_rev, east_rev) %>%
magrittr::set_colnames(value = c("rev_north", "rev_west", "rev_east"))
### 3. total revenues by fishery
total_revenues_by_fishery <- left_join(catch_location_data, price_data, by = "fish") %>%
mutate(totalfish = rowSums(.[2:4])) %>%
mutate(fishrev = totalfish*price) %>%
select("fish", "fishrev") %>%
magrittr::set_colnames(value = c("Fishery", "Total Revenue"))
### 4. total revenue of all fisheries
total_revenue <- sum(north_rev, west_rev, east_rev)
### 5. graph of revenues by location if requested with total revenue printed bottom right
if (graph == TRUE) {
graph <- revenues_locations %>%
magrittr::set_colnames(value = c("fish", "north", "east", "west", "price", "North", "West", "East")) %>%
gather("North", "West", "East", key = "location", value = "price") %>%
group_by(location) %>%
summarize(price=sum(price)) %>%
ungroup()
caption <- c("Total Revenue: PF")
graph_revenue <- ggplot(graph) +
geom_col(aes(x=location, y = price), fill= "deepskyblue4") +
ylab("Price (PF/kg)") +
xlab("Location") +
theme_classic() +
labs(title ="Total Catch Revenues by Location", caption = paste(caption,total_revenue))
graph_revenue
}
return(list(most_frequent_catch, total_revenues_locations, total_revenues_by_fishery, total_revenue, graph_revenue))
} |
context ("Adding a supplementary row")
de_io <- iotable_get()
CO2_coefficients <- data.frame(agriculture_group = 0.2379,
industry_group = 0.5172,
construction = 0.0456,
trade_group = 0.1320,
business_services_group = 0.0127,
other_services_group = 0.0530)
CH4_coefficients <- data.frame(agriculture_group = 0.0349,
industry_group = 0.0011,
construction = 0,
trade_group = 0,
business_services_group = 0,
other_services_group = 0.0021)
CO2 <- cbind (
data.frame ( iotables_row = "CO2_coefficients"),
CO2_coefficients
)
CH4 <- cbind(
data.frame ( iotables_row = "CH4_coefficients"),
CH4_coefficients
)
de_coeff <- input_coefficient_matrix_create ( iotable_get() )
emissions <- rbind ( CO2, CH4 )
supplementary_data <- emissions
extended <- supplementary_add ( data_table = de_io,
supplementary_data = emissions)
# Check against The Eurostat Manual page 494
test_that("correct data is returned", {
expect_equal(extended$construction [ which ( extended[,1] == "CO2_coefficients") ],
0.0456, tolerance=1e-6)
expect_equal(extended$other_services_group[ which ( extended[,1] == "CO2_coefficients" ) ],
0.0530, tolerance=1e-6)
expect_equal(extended$other_services_group[ which ( extended[,1] == "CH4_coefficients" ) ],
0.0021, tolerance=1e-6)
})
| /tests/testthat/test-supplementary_add.R | permissive | cran/iotables | R | false | false | 1,722 | r | context ("Adding a supplementary row")
de_io <- iotable_get()
CO2_coefficients <- data.frame(agriculture_group = 0.2379,
industry_group = 0.5172,
construction = 0.0456,
trade_group = 0.1320,
business_services_group = 0.0127,
other_services_group = 0.0530)
CH4_coefficients <- data.frame(agriculture_group = 0.0349,
industry_group = 0.0011,
construction = 0,
trade_group = 0,
business_services_group = 0,
other_services_group = 0.0021)
CO2 <- cbind (
data.frame ( iotables_row = "CO2_coefficients"),
CO2_coefficients
)
CH4 <- cbind(
data.frame ( iotables_row = "CH4_coefficients"),
CH4_coefficients
)
de_coeff <- input_coefficient_matrix_create ( iotable_get() )
emissions <- rbind ( CO2, CH4 )
supplementary_data <- emissions
extended <- supplementary_add ( data_table = de_io,
supplementary_data = emissions)
# Check against The Eurostat Manual page 494
test_that("correct data is returned", {
expect_equal(extended$construction [ which ( extended[,1] == "CO2_coefficients") ],
0.0456, tolerance=1e-6)
expect_equal(extended$other_services_group[ which ( extended[,1] == "CO2_coefficients" ) ],
0.0530, tolerance=1e-6)
expect_equal(extended$other_services_group[ which ( extended[,1] == "CH4_coefficients" ) ],
0.0021, tolerance=1e-6)
})
|
#import libraries
library(readxl)
library(MCMCglmm)
#read data
bzs1 <- read_xlsx('BZS1_transformation.xlsx')
bzs1 <- as.data.frame(bzs1)
area <- pi*25
bzs1['density'] <- bzs1['road_length']/area
#linear models for each response variable
#responses are scaled to avoid error in MCMCglmm ('Mixed model equations singular: use a (stronger) prior')
#linear model for percent decrease in benzotriazole concentration
bzs_model <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp1 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp2 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp3 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp4 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp5 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp6 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp7 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp8 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp9 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of benzotriazole alanine
BZTalanine_model <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp1 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp2 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp3 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp4 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp5 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp6 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp7 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp8 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of glycosylated benzotriazole
glycosylatedBZT_model <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp1 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp2 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp3 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp4 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp5 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp6 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp7 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp8 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp9 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp10 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + density +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp11 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of benzotriazole acetyl-alanine
BZTacetylalanine_model <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp1 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp2 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp3 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp4 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp5 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp6 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp7 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp8 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp9 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp10 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp11 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of aniline
aniline_model <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp1 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp2 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp3 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp4 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp5 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of methylbenzotriazole
methylBZT_model <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model.temp1 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model.temp2 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of methoxybenzotriazole
methoxyBZT_model <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model.temp1 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of pthalic acid
pthalic_acid_model <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model.temp1 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model.temp2 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model.temp3 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model.temp4 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of hydroxyBZT
hydroxyBZT_model <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp1 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp2 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp3 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp4 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp5 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp6 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp7 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp8 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp9 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp10 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp11 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for duckweed pixel area
px_model <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp1 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp2 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp3 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp4 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp5 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp6 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp7 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Microbes + BZT_init:density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp8 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Microbes + BZT_init:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp9 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp10 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp11 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp12 <- MCMCglmm(scale(px.mn)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp13 <- MCMCglmm(scale(px.mn)~Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for optical density
od_model <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp1 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp2 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp3 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp4 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp5 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp6 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes + Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp7 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp8 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp9 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp10 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp11 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp12 <- MCMCglmm(scale(od.mn)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for optical density, restricting to inoculated wells
odi_model <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density +
BZT_init:Salt + BZT_init:density +
Salt:density +
BZT_init:Salt:density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp1 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density +
BZT_init:Salt + BZT_init:density +
Salt:density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp2 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density +
BZT_init:density +
Salt:density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp3 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density +
Salt:density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp4 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp5 <- MCMCglmm(scale(od.mn)~BZT_init + Salt, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp6 <- MCMCglmm(scale(od.mn)~Salt, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
#linear model for percent decrease in benzotriazole concentration, with distance to city center as the location descriptor
bzs_model_km <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp1 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp2 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp3 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp4 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp5 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp6 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp7 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp8 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp9 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of benzotriazole alanine, with distance to city center as the location descriptor
BZTalanine_model_km <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp1 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp2 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp3 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp4 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp5 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp6 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp7 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of glycosylated benzotriazole, with distance to city center as the location descriptor
glycosylatedBZT_model_km <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp1 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp2 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp3 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp4 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp5 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp6 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp7 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp8 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp9 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp10 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp11 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of benzotriazole acetyl-alanine, with distance to city center as the location descriptor
BZTacetylalanine_model_km <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp1 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp2 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp3 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp4 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp5 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp6 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:km, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp7 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp8 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp9 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp10 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype, data=bzs1,verbose=F)
#linear model for amount of aniline, with distance to city center as the location descriptor
aniline_model_km <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model_km.temp1 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model_km.temp2 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model_km.temp3 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model_km.temp4 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of methylbenzotriazole, with distance to city center as the location descriptor
methylBZT_model_km <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp1 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp2 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp3 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp4 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp5 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp6 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes + BZT_init:km +
Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp7 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:km +
Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp8 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp9 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp10 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp11 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp12 <- MCMCglmm(scale(methylBZT)~Salt + km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp13 <- MCMCglmm(scale(methylBZT)~km, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of methoxybenzotriazole, with distance to city center as the location descriptor
methoxyBZT_model_km <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_km.temp1 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of pthalic acid, with distance to city center as the location descriptor
pthalic_acid_model_km <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp1 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp2 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp3 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp4 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp5 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:km +
Microbes:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp6 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp7 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + km +
BZT_init:Salt + BZT_init:km +
Salt:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of hydroxyBZT, with distance to city center as the location descriptor
hydroxyBZT_model_km <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp1 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp2 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp3 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp4 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp5 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp6<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp7<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp8<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp9<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp10<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp11<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for duckweed pixel area, with distance to city center as the location descriptor
px_model_km <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp1 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp2 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp3 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp4 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp5 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp6 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp7<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes + BZT_init:km +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp8<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes + BZT_init:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp9<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp10<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp11<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp12<- MCMCglmm(scale(px.mn)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp13<- MCMCglmm(scale(px.mn)~Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for optical km, with distance to city center as the location descriptor
od_model_km <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp1 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp2 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp3 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp4 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp5 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp6 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes + Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp7 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp8 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp9 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp10 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp11 <- MCMCglmm(scale(od.mn)~Salt + Microbes + km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp12 <- MCMCglmm(scale(od.mn)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for optical km, restricting to inoculated wells, with distance to city center as the location descriptor
odi_model_km <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km +
BZT_init:Salt + BZT_init:km +
Salt:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp1 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km +
BZT_init:Salt + BZT_init:km +
Salt:km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp2 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km +
BZT_init:km +
Salt:km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp3 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km +
Salt:km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp4 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp5 <- MCMCglmm(scale(od.mn)~BZT_init + Salt, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp6 <- MCMCglmm(scale(od.mn)~Salt, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
#linear model for percent decrease in benzotriazole concentration, both location descriptors
#removed scaling to avoid error message
#some effects not estimable
# bzs_model_both <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:Microbes:km + BZT_init:density:km +
# Salt:Microbes:density + Salt:Microbes:km + Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp1 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:Microbes:km + BZT_init:density:km +
# Salt:Microbes:km + Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp2 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:Microbes:km + BZT_init:density:km +
# Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp3 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:density:km +
# Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp4 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp5 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp6 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp7 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp8 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# density:km +
# BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp9 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# density:km +
# BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp10 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:density + BZT_init:km +
# Salt:density + Salt:km +
# density:km +
# BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp11 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:density + BZT_init:km +
# Salt:density + Salt:km +
# BZT_init:Salt:density + BZT_init:Salt:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# #linear model for amount of BZTalanine, both location descriptors
# #some effects not estimable
# BZTalanine_model_both <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:Microbes:km + BZT_init:density:km +
# Salt:Microbes:density + Salt:Microbes:km + Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#linear model for percent decrease in benzotriazole concentration, without location variable
bzs_model_noloc <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_noloc.temp1 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_noloc.temp2 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes +
BZT_init:Salt +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_noloc.temp3 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_noloc.temp4 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, bzs model without location variable
bzs_model_DIC <- 0
bzs_model_norand_DIC <- 0
for(i in 1:10) {
bzs_model_DIC[i] <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
bzs_model_norand_DIC[i] <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes, data=bzs1,verbose=F)$DIC
}
#linear model for amount of glycoslyated bzt, without location variable
glycosylatedBZT_model_noloc <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_noloc.temp1 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_noloc.temp2 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_noloc.temp3 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_noloc.temp4 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, glycosylatedBZT model without location variable
glycosylatedBZT_model_DIC <- 0
glycosylatedBZT_model_norand_DIC <- 0
for(i in 1:10) {
glycosylatedBZT_model_DIC[i] <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)$DIC
glycosylatedBZT_model_norand_DIC[i] <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt,data=bzs1,verbose=F)$DIC
}
#linear model for amount of BZTalanine, without location variable
BZTalanine_model_noloc <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, BZTalanine model without location variable
BZTalanine_model_DIC <- 0
BZTalanine_model_norand_DIC <- 0
for(i in 1:10) {
BZTalanine_model_DIC[i] <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
BZTalanine_model_norand_DIC[i] <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, data=bzs1,verbose=F)$DIC
}
#linear model for amount of BZTacetylalanine, without location variable
BZTacetylalanine_model_noloc <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_noloc.temp1 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_noloc.temp2 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_noloc.temp3 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_noloc.temp4 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, BZTacetylalanine model without location variable
BZTacetylalanine_model_DIC <- 0
BZTacetylalanine_model_norand_DIC <- 0
for(i in 1:10) {
BZTacetylalanine_model_DIC[i] <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)$DIC
BZTacetylalanine_model_norand_DIC[i] <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt +
BZT_init:Salt, data=bzs1,verbose=F)$DIC
}
#linear model for amount of methylBZT, without location variable
methylBZT_model_noloc <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp1 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp2 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp3 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp4 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp5 <- MCMCglmm(scale(methylBZT)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp6 <- MCMCglmm(scale(methylBZT)~Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, methylBZT model without location variable
methylBZT_model_DIC <- 0
methylBZT_model_norand_DIC <- 0
for(i in 1:10) {
methylBZT_model_DIC[i] <- MCMCglmm(scale(methylBZT)~Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
methylBZT_model_norand_DIC[i] <- MCMCglmm(scale(methylBZT)~Microbes, data=bzs1,verbose=F)$DIC
}
#linear model for amount of methoxyBZT, without location variable
methoxyBZT_model_noloc <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp1 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp2 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp3 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp4 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp5 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp6 <- MCMCglmm(scale(methoxyBZT)~BZT_init, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, methoxyBZT model without location variable
methoxyBZT_model_DIC <- 0
methoxyBZT_model_norand_DIC <- 0
for(i in 1:10) {
methoxyBZT_model_DIC[i] <- MCMCglmm(scale(methoxyBZT)~BZT_init, random = ~ Genotype , data=bzs1,verbose=F)$DIC
methoxyBZT_model_norand_DIC[i] <- MCMCglmm(scale(methoxyBZT)~BZT_init, data=bzs1,verbose=F)$DIC
}
#linear model for amount of aniline, without location variable
aniline_model_noloc <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, aniline model without location variable
aniline_model_DIC <- 0
aniline_model_norand_DIC <- 0
for(i in 1:10) {
aniline_model_DIC[i] <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
aniline_model_norand_DIC[i] <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, data=bzs1,verbose=F)$DIC
}
#linear model for amount of pthalic_acid, without location variable
pthalic_acid_model_noloc <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp1 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp2 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp3 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp4 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp5 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, pthalic_acid model without location variable
pthalic_acid_model_DIC <- 0
pthalic_acid_model_norand_DIC <- 0
for(i in 1:10) {
pthalic_acid_model_DIC[i] <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)$DIC
pthalic_acid_model_norand_DIC[i] <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt +
BZT_init:Salt, data=bzs1,verbose=F)$DIC
}
#linear model for amount of hydroxyBZT, without location variable
hydroxyBZT_model_noloc <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp1 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp2 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp3 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp4 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp5 <- MCMCglmm(scale(hydroxyBZT)~BZT_init +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, hydroxyBZT model without location variable
hydroxyBZT_model_DIC <- 0
hydroxyBZT_model_norand_DIC <- 0
for(i in 1:10) {
hydroxyBZT_model_DIC[i] <- MCMCglmm(scale(hydroxyBZT)~BZT_init +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
hydroxyBZT_model_norand_DIC[i] <- MCMCglmm(scale(hydroxyBZT)~BZT_init +
BZT_init:Microbes, data=bzs1,verbose=F)$DIC
}
#MANOVA
res.man <- manova(cbind(BZT_percent_d, hydroxyBZT, BZTalanine, BZTacetylalanine,
glycosylatedBZT, pthalic_acid, methoxyBZT, methylBZT, aniline) ~ BZT_init*Salt*Microbes, data = bzs1)
| /BZS1.R | no_license | ericyuzhuhao/BZS_duckweed | R | false | false | 96,960 | r | #import libraries
library(readxl)
library(MCMCglmm)
#read data
bzs1 <- read_xlsx('BZS1_transformation.xlsx')
bzs1 <- as.data.frame(bzs1)
area <- pi*25
bzs1['density'] <- bzs1['road_length']/area
#linear models for each response variable
#responses are scaled to avoid error in MCMCglmm ('Mixed model equations singular: use a (stronger) prior')
#linear model for percent decrease in benzotriazole concentration
bzs_model <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp1 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp2 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp3 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp4 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp5 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp6 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
BZT_init:Salt +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp7 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp8 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model.temp9 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + density +
Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of benzotriazole alanine
BZTalanine_model <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp1 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp2 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp3 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp4 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp5 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp6 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp7 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model.temp8 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of glycosylated benzotriazole
glycosylatedBZT_model <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp1 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp2 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp3 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp4 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp5 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp6 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp7 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp8 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp9 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp10 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + density +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model.temp11 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of benzotriazole acetyl-alanine
BZTacetylalanine_model <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp1 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp2 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp3 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp4 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp5 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp6 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp7 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp8 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp9 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + density +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp10 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model.temp11 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of aniline
aniline_model <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp1 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp2 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp3 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp4 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model.temp5 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of methylbenzotriazole
methylBZT_model <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model.temp1 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model.temp2 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of methoxybenzotriazole
methoxyBZT_model <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model.temp1 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of pthalic acid
pthalic_acid_model <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model.temp1 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model.temp2 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model.temp3 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model.temp4 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of hydroxyBZT
hydroxyBZT_model <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp1 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp2 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp3 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp4 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp5 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp6 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp7 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp8 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp9 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + density +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp10 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model.temp11 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for duckweed pixel area
px_model <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp1 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp2 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp3 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp4 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp5 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp6 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp7 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Microbes + BZT_init:density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp8 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Microbes + BZT_init:density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp9 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp10 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + density, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp11 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp12 <- MCMCglmm(scale(px.mn)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model.temp13 <- MCMCglmm(scale(px.mn)~Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for optical density
od_model <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp1 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density +
Salt:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp2 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Salt:Microbes + BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp3 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density +
BZT_init:Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp4 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:Microbes + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp5 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes + Salt:density +
Microbes:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp6 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes + Salt:density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp7 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt + BZT_init:density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp8 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
BZT_init:Salt +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp9 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp10 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + density, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp11 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model.temp12 <- MCMCglmm(scale(od.mn)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for optical density, restricting to inoculated wells
odi_model <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density +
BZT_init:Salt + BZT_init:density +
Salt:density +
BZT_init:Salt:density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp1 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density +
BZT_init:Salt + BZT_init:density +
Salt:density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp2 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density +
BZT_init:density +
Salt:density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp3 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density +
Salt:density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp4 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + density, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp5 <- MCMCglmm(scale(od.mn)~BZT_init + Salt, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model.temp6 <- MCMCglmm(scale(od.mn)~Salt, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
#linear model for percent decrease in benzotriazole concentration, with distance to city center as the location descriptor
bzs_model_km <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp1 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp2 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp3 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp4 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp5 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp6 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
BZT_init:Salt +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp7 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp8 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_km.temp9 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes + km +
Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of benzotriazole alanine, with distance to city center as the location descriptor
BZTalanine_model_km <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp1 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp2 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp3 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp4 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp5 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp6 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTalanine_model_km.temp7 <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of glycosylated benzotriazole, with distance to city center as the location descriptor
glycosylatedBZT_model_km <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp1 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp2 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp3 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp4 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp5 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp6 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp7 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp8 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp9 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp10 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_km.temp11 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of benzotriazole acetyl-alanine, with distance to city center as the location descriptor
BZTacetylalanine_model_km <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp1 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp2 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp3 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp4 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp5 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp6 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
Microbes:km, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp7 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp8 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp9 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes + km +
BZT_init:Salt, random = ~ Genotype, data=bzs1,verbose=F)
BZTacetylalanine_model_km.temp10 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype, data=bzs1,verbose=F)
#linear model for amount of aniline, with distance to city center as the location descriptor
aniline_model_km <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model_km.temp1 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model_km.temp2 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model_km.temp3 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
aniline_model_km.temp4 <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of methylbenzotriazole, with distance to city center as the location descriptor
methylBZT_model_km <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp1 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp2 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp3 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp4 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp5 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp6 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes + BZT_init:km +
Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp7 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
BZT_init:km +
Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp8 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp9 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km +
Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp10 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes + km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp11 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp12 <- MCMCglmm(scale(methylBZT)~Salt + km, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_km.temp13 <- MCMCglmm(scale(methylBZT)~km, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of methoxybenzotriazole, with distance to city center as the location descriptor
methoxyBZT_model_km <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_km.temp1 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of pthalic acid, with distance to city center as the location descriptor
pthalic_acid_model_km <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp1 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp2 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp3 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp4 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp5 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:km +
Microbes:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp6 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_km.temp7 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + km +
BZT_init:Salt + BZT_init:km +
Salt:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for amount of hydroxyBZT, with distance to city center as the location descriptor
hydroxyBZT_model_km <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp1 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp2 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp3 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp4 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp5 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp6<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp7<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp8<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp9<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp10<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_km.temp11<- MCMCglmm(scale(hydroxyBZT)~BZT_init + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for duckweed pixel area, with distance to city center as the location descriptor
px_model_km <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp1 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp2 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp3 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp4 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp5 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp6 <- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp7<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes + BZT_init:km +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp8<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes + BZT_init:km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp9<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp10<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes + km, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp11<- MCMCglmm(scale(px.mn)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp12<- MCMCglmm(scale(px.mn)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
px_model_km.temp13<- MCMCglmm(scale(px.mn)~Salt, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for optical km, with distance to city center as the location descriptor
od_model_km <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Salt:km + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp1 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km +
Salt:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp2 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Salt:Microbes + BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp3 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km +
BZT_init:Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp4 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:Microbes + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp5 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes + Salt:km +
Microbes:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp6 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes + Salt:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp7 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp8 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:Salt + BZT_init:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp9 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km +
BZT_init:km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp10 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + Microbes + km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp11 <- MCMCglmm(scale(od.mn)~Salt + Microbes + km, random = ~ Genotype , data=bzs1,verbose=F)
od_model_km.temp12 <- MCMCglmm(scale(od.mn)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#linear model for optical km, restricting to inoculated wells, with distance to city center as the location descriptor
odi_model_km <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km +
BZT_init:Salt + BZT_init:km +
Salt:km +
BZT_init:Salt:km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp1 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km +
BZT_init:Salt + BZT_init:km +
Salt:km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp2 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km +
BZT_init:km +
Salt:km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp3 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km +
Salt:km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp4 <- MCMCglmm(scale(od.mn)~BZT_init + Salt + km, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp5 <- MCMCglmm(scale(od.mn)~BZT_init + Salt, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
odi_model_km.temp6 <- MCMCglmm(scale(od.mn)~Salt, random = ~ Genotype , data=bzs1[bzs1$Microbes == 'Yes',],verbose=F)
#linear model for percent decrease in benzotriazole concentration, both location descriptors
#removed scaling to avoid error message
#some effects not estimable
# bzs_model_both <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:Microbes:km + BZT_init:density:km +
# Salt:Microbes:density + Salt:Microbes:km + Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp1 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:Microbes:km + BZT_init:density:km +
# Salt:Microbes:km + Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp2 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:Microbes:km + BZT_init:density:km +
# Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp3 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:density:km +
# Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp4 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp5 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp6 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp7 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp8 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# density:km +
# BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp9 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# density:km +
# BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp10 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:density + BZT_init:km +
# Salt:density + Salt:km +
# density:km +
# BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:density:km +
# Salt:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# bzs_model_both.temp11 <- MCMCglmm(BZT_percent_d~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:density + BZT_init:km +
# Salt:density + Salt:km +
# BZT_init:Salt:density + BZT_init:Salt:km, random = ~ Genotype, data=bzs1,verbose=F)
#
# #linear model for amount of BZTalanine, both location descriptors
# #some effects not estimable
# BZTalanine_model_both <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes + density + km +
# BZT_init:Salt + BZT_init:Microbes + BZT_init:density + BZT_init:km +
# Salt:Microbes + Salt:density + Salt:km +
# Microbes:density + Microbes:km +
# density:km +
# BZT_init:Salt:Microbes + BZT_init:Salt:density + BZT_init:Salt:km +
# BZT_init:Microbes:density + BZT_init:Microbes:km + BZT_init:density:km +
# Salt:Microbes:density + Salt:Microbes:km + Salt:density:km +
# Microbes:density:km, random = ~ Genotype, data=bzs1,verbose=F)
#linear model for percent decrease in benzotriazole concentration, without location variable
bzs_model_noloc <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_noloc.temp1 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_noloc.temp2 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes +
BZT_init:Salt +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_noloc.temp3 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
bzs_model_noloc.temp4 <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, bzs model without location variable
bzs_model_DIC <- 0
bzs_model_norand_DIC <- 0
for(i in 1:10) {
bzs_model_DIC[i] <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
bzs_model_norand_DIC[i] <- MCMCglmm(scale(BZT_percent_d)~BZT_init + Salt + Microbes, data=bzs1,verbose=F)$DIC
}
#linear model for amount of glycoslyated bzt, without location variable
glycosylatedBZT_model_noloc <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_noloc.temp1 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_noloc.temp2 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_noloc.temp3 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
glycosylatedBZT_model_noloc.temp4 <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, glycosylatedBZT model without location variable
glycosylatedBZT_model_DIC <- 0
glycosylatedBZT_model_norand_DIC <- 0
for(i in 1:10) {
glycosylatedBZT_model_DIC[i] <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)$DIC
glycosylatedBZT_model_norand_DIC[i] <- MCMCglmm(scale(glycosylatedBZT)~BZT_init + Salt +
BZT_init:Salt,data=bzs1,verbose=F)$DIC
}
#linear model for amount of BZTalanine, without location variable
BZTalanine_model_noloc <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, BZTalanine model without location variable
BZTalanine_model_DIC <- 0
BZTalanine_model_norand_DIC <- 0
for(i in 1:10) {
BZTalanine_model_DIC[i] <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
BZTalanine_model_norand_DIC[i] <- MCMCglmm(scale(BZTalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, data=bzs1,verbose=F)$DIC
}
#linear model for amount of BZTacetylalanine, without location variable
BZTacetylalanine_model_noloc <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_noloc.temp1 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_noloc.temp2 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_noloc.temp3 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
BZTacetylalanine_model_noloc.temp4 <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, BZTacetylalanine model without location variable
BZTacetylalanine_model_DIC <- 0
BZTacetylalanine_model_norand_DIC <- 0
for(i in 1:10) {
BZTacetylalanine_model_DIC[i] <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)$DIC
BZTacetylalanine_model_norand_DIC[i] <- MCMCglmm(scale(BZTacetylalanine)~BZT_init + Salt +
BZT_init:Salt, data=bzs1,verbose=F)$DIC
}
#linear model for amount of methylBZT, without location variable
methylBZT_model_noloc <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp1 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp2 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp3 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp4 <- MCMCglmm(scale(methylBZT)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp5 <- MCMCglmm(scale(methylBZT)~Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methylBZT_model_noloc.temp6 <- MCMCglmm(scale(methylBZT)~Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, methylBZT model without location variable
methylBZT_model_DIC <- 0
methylBZT_model_norand_DIC <- 0
for(i in 1:10) {
methylBZT_model_DIC[i] <- MCMCglmm(scale(methylBZT)~Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
methylBZT_model_norand_DIC[i] <- MCMCglmm(scale(methylBZT)~Microbes, data=bzs1,verbose=F)$DIC
}
#linear model for amount of methoxyBZT, without location variable
methoxyBZT_model_noloc <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp1 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp2 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp3 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp4 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt + Microbes, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp5 <- MCMCglmm(scale(methoxyBZT)~BZT_init + Salt, random = ~ Genotype , data=bzs1,verbose=F)
methoxyBZT_model_noloc.temp6 <- MCMCglmm(scale(methoxyBZT)~BZT_init, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, methoxyBZT model without location variable
methoxyBZT_model_DIC <- 0
methoxyBZT_model_norand_DIC <- 0
for(i in 1:10) {
methoxyBZT_model_DIC[i] <- MCMCglmm(scale(methoxyBZT)~BZT_init, random = ~ Genotype , data=bzs1,verbose=F)$DIC
methoxyBZT_model_norand_DIC[i] <- MCMCglmm(scale(methoxyBZT)~BZT_init, data=bzs1,verbose=F)$DIC
}
#linear model for amount of aniline, without location variable
aniline_model_noloc <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, aniline model without location variable
aniline_model_DIC <- 0
aniline_model_norand_DIC <- 0
for(i in 1:10) {
aniline_model_DIC[i] <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
aniline_model_norand_DIC[i] <- MCMCglmm(scale(aniline)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, data=bzs1,verbose=F)$DIC
}
#linear model for amount of pthalic_acid, without location variable
pthalic_acid_model_noloc <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp1 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp2 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp3 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt + Microbes +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp4 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
pthalic_acid_model_noloc.temp5 <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, pthalic_acid model without location variable
pthalic_acid_model_DIC <- 0
pthalic_acid_model_norand_DIC <- 0
for(i in 1:10) {
pthalic_acid_model_DIC[i] <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt +
BZT_init:Salt, random = ~ Genotype , data=bzs1,verbose=F)$DIC
pthalic_acid_model_norand_DIC[i] <- MCMCglmm(scale(pthalic_acid)~BZT_init + Salt +
BZT_init:Salt, data=bzs1,verbose=F)$DIC
}
#linear model for amount of hydroxyBZT, without location variable
hydroxyBZT_model_noloc <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes +
BZT_init:Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp1 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Salt + BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp2 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Microbes +
Salt:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp3 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt + Microbes +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp4 <- MCMCglmm(scale(hydroxyBZT)~BZT_init + Salt +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
hydroxyBZT_model_noloc.temp5 <- MCMCglmm(scale(hydroxyBZT)~BZT_init +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)
#significance of random effects, hydroxyBZT model without location variable
hydroxyBZT_model_DIC <- 0
hydroxyBZT_model_norand_DIC <- 0
for(i in 1:10) {
hydroxyBZT_model_DIC[i] <- MCMCglmm(scale(hydroxyBZT)~BZT_init +
BZT_init:Microbes, random = ~ Genotype , data=bzs1,verbose=F)$DIC
hydroxyBZT_model_norand_DIC[i] <- MCMCglmm(scale(hydroxyBZT)~BZT_init +
BZT_init:Microbes, data=bzs1,verbose=F)$DIC
}
#MANOVA
res.man <- manova(cbind(BZT_percent_d, hydroxyBZT, BZTalanine, BZTacetylalanine,
glycosylatedBZT, pthalic_acid, methoxyBZT, methylBZT, aniline) ~ BZT_init*Salt*Microbes, data = bzs1)
|
#' Illustrate an F Test graphically.
#'
#' This function plots the density probability distribution of an F statistic, with a vertical cutline at the observed F value specified. A p-value and the observed F value are plotted. Although largely customizable, only three arguments are required (the observed F and the degrees of freedom).
#'
#' @param f A numeric value indicating the observed F statistic. Alternatively, you can pass an object of class \code{lm} created by the function \code{lm()}.
#' @param dfnum A numeric value indicating the degrees of freedom of the numerator. This argument is optional if you are using an \code{lm} object as the \code{f} argument.
#' @param dfdenom A numeric value indicating the degrees of freedom of the denominator. This argument is optional if you are using an \code{lm} object as the \code{f} argument.
#' @param blank A logical that indicates whether to hide (\code{blank = TRUE}) the test statistic value, p value and cutline. The corresponding colors are actually only made transparent when \code{blank = TRUE}, so that the output is scaled exactly the same (this is useful and especially intended for step-by-step explanations).
#' @param xmax A numeric including the maximum for the x-axis. Defaults to \code{"auto"}, which scales the plot automatically (optional).
#' @param title A character or expression indicating a custom title for the plot (optional).
#' @param xlabel A character or expression indicating a custom title for the x axis (optional).
#' @param ylabel A character or expression indicating a custom title for the y axis (optional).
#' @param fontfamily A character indicating the font family of all the titles and labels (e.g. \code{"serif"} (default), \code{"sans"}, \code{"Helvetica"}, \code{"Palatino"}, etc.) (optional).
#' @param colorleft A character indicating the color for the "left" area under the curve (optional).
#' @param colorright A character indicating the color for the "right" area under the curve (optional).
#' @param colorleftcurve A character indicating the color for the "left" part of the curve (optional).
#' @param colorrightcurve A character indicating the color for the "right" part of the curve (optional). By default, for color consistency, this color is also passed to the label, but this can be changed by providing an argument for the \code{colorlabel} parameter.
#' @param colorcut A character indicating the color for the cut line at the observed test statistic (optional).
#' @param colorplabel A character indicating the color for the label of the p-value (optional). By default, for color consistency, this color is the same as color of \code{colorright}.
#' @param theme A character indicating one of the predefined color themes. The themes are \code{"default"} (light blue and red), \code{"blackandwhite"}, \code{"whiteandred"}, \code{"blueandred"}, \code{"greenandred"} and \code{"goldandblue"}) (optional). Supersedes \code{colorleft} and \code{colorright} if another argument than \code{"default"} is provided.
#' @param signifdigitsf A numeric indicating the number of desired significant figures reported for the F (optional).
#' @param curvelinesize A numeric indicating the size of the curve line (optional).
#' @param cutlinesize A numeric indicating the size of the cut line (optional). By default, the size of the curve line is used.
#' @return A plot with the density of probability of F under the null hypothesis, annotated with the observed test statistic and the p-value.
#' @export plotftest
#' @examples
#' #Making an F plot with an F of 3, and degrees of freedom of 1 and 5.
#' plotftest(f = 4, dfnum = 3, dfdenom = 5)
#'
#' #Note that the same can be obtained even quicker with:
#' plotftest(4,3,5)
#'
#' #The same plot without the f or p value
#' plotftest(4,3,5, blank = TRUE)
#'
#' #Passing an "lm" object
#' set.seed(1)
#' x <- rnorm(10) ; y <- x + rnorm(10)
#' fit <- lm(y ~ x)
#' plotftest(fit)
#' plotftest(summary(fit)) # also works
#'
#' #Passing an "anova" F-change test
#' set.seed(1)
#' x <- rnorm(10) ; y <- x + rnorm(10)
#' fit1 <- lm(y ~ x)
#' fit2 <- lm(y ~ poly(x, 2))
#' comp <- anova(fit1, fit2)
#' plotftest(comp)
#'
#' @author Nils Myszkowski <nmyszkowski@pace.edu>
plotftest <- function(f, dfnum = f$fstatistic[2], dfdenom = f$fstatistic[3], blank = FALSE, xmax = "auto", title = "F Test", xlabel = "F", ylabel = "Density of probability\nunder the null hypothesis", fontfamily = "serif", colorleft = "aliceblue", colorright = "firebrick3", colorleftcurve = "black", colorrightcurve = "black", colorcut = "black", colorplabel = colorright, theme = "default", signifdigitsf = 3, curvelinesize = .4, cutlinesize = curvelinesize) {
x=NULL
# If f is an anova() object, take values from it
if ("anova" %in% class(f)) {
dfnum <- f$Df[2]
dfdenom <- f$Res.Df[2]
f <- f$F[2]
}
# If f is a "summary.lm" object, take values from it
if ("summary.lm" %in% class(f)) {
dfnum <- f$fstatistic[2]
dfdenom <- f$fstatistic[3]
f <- f$fstatistic[1]
}
# If f is a "lm" object, take values from it
if ("lm" %in% class(f)) {
dfnum <- summary(f)$fstatistic[2]
dfdenom <- summary(f)$fstatistic[3]
f <- summary(f)$fstatistic[1]
}
# Unname inputs (can cause issues)
f <- unname(f)
dfnum <- unname(dfnum)
dfdenom <- unname(dfdenom)
# Create a function to restrict plotting areas to specific bounds of x
area_range <- function(fun, min, max) {
function(x) {
y <- fun(x)
y[x < min | x > max] <- NA
return(y)
}
}
# Function to format p value
p_value_format <- function(p) {
if (p < .001) {
"< .001"
} else if (p > .999) {
"> .999"
} else {
paste0("= ", substr(sprintf("%.3f", p), 2, 5))
}
}
#Calculate the p value
pvalue <- stats::pf(q = f, df1 = dfnum, df2 = dfdenom, lower.tail = FALSE)
#Label for p value
plab <- paste0("p ", p_value_format(pvalue))
#Label for F value
flab <- paste("F =", signif(f, digits = signifdigitsf), sep = " ")
#Define x axis bounds as the maximum between 1.5*f or 3 (this avoids only the tip of the curve to be plotted when F is small, and keeps a nice t curve shape display)
if (xmax == "auto") {
xbound <- max(1.5*f, 3)
} else {xbound <- xmax}
#To ensure lines plotted by stat_function are smooth
precisionfactor <- 5000
#To define the function to plot in stat_function
density <- function(x) stats::df(x, df1 = dfnum, df2 = dfdenom)
#Use the maximum density (top of the curve) to use as maximum y axis value (start finding maximum at .2 to avoid very high densities values when the density function has a y axis asymptote)
maxdensity <- stats::optimize(density, interval=c(0.2, xbound), maximum=TRUE)$objective
#Use the density corresponding to the given f to place the label above (if this density is too high places the label lower in order to avoid the label being out above the plot)
y_plabel <- min(density(f)+maxdensity*.1, maxdensity*.7)
#To place the p value labels on the x axis, at the middle of the part of the curve they correspond to
x_plabel <- f+(xbound-f)/2
#Define the fill color of the labels as white
colorlabelfill <- "white"
#Theme options
if (theme == "default") {
colorleft <- colorleft
colorright <- colorright
colorplabel <- colorplabel
} else if (theme == "blackandwhite"){
colorleft <- "grey96"
colorright <- "darkgrey"
colorplabel <- "black"
} else if (theme == "whiteandred") {
colorleft <- "grey96"
colorright <- "firebrick3"
colorplabel <- "firebrick3"
} else if (theme == "blueandred") {
colorleft <- "#104E8B"
colorright <- "firebrick3"
colorplabel <- "firebrick3"
} else if (theme == "greenandred") {
colorleft <- "seagreen"
colorright <- "firebrick3"
colorplabel <- "firebrick3"
}else if (theme == "goldandblue") {
colorleft <- "#FFC61E"
colorright <- "#00337F"
colorplabel <- "#00337F"
}else warning("The ",'"', "theme", '"', " argument was not recognized. See documentation for a list of available color themes. Reverting to default.")
#To make some colors transparent when `blank` parameter is TRUE (to only plot de probability density function in that case)
if (blank == TRUE) {
colorright <- grDevices::adjustcolor("white", alpha.f = 0)
colorcut <- grDevices::adjustcolor("white", alpha.f = 0)
colorplabel <- grDevices::adjustcolor("white", alpha.f = 0)
colorlabelfill <- grDevices::adjustcolor("white", alpha.f = 0)
}
else {
#Do nothing
}
#Plotting with ggplot2
ggplot2::ggplot(data.frame(x = c(0, xbound)), ggplot2::aes(x)) +
#Left side area
ggplot2::stat_function(fun = area_range(density, 0, xbound), geom="area", fill=colorleft, n=precisionfactor) +
#Right side area
ggplot2::stat_function(fun = area_range(density, f, xbound), geom="area", fill=colorright, n=precisionfactor) +
#Right side curve
ggplot2::stat_function(fun = density, xlim = c(f,xbound), colour = colorrightcurve,linewidth=curvelinesize, ) +
#Left side curve
ggplot2::stat_function(fun = density, xlim = c(0,f), colour = colorleftcurve, n=1000, linewidth=curvelinesize) +
#Define plotting area for extraspace (proportional to the max y plotted) below the graph to place f label
ggplot2::coord_cartesian(xlim=c(0,xbound),ylim=c(maxdensity*(-.08), maxdensity)) +
#Cut line
ggplot2::geom_vline(xintercept = f*1, colour = colorcut, linewidth = cutlinesize) +
#p label
ggplot2::geom_label(ggplot2::aes(x_plabel,y_plabel,label = plab), colour=colorplabel, fill = colorlabelfill, family=fontfamily) +
#f label
ggplot2::geom_label(ggplot2::aes(f,maxdensity*(-.05),label = flab),colour=colorcut, fill = colorlabelfill, family=fontfamily) +
#Add the title
ggplot2::ggtitle(title) +
#Axis labels
ggplot2::labs(x=xlabel,y=ylabel, size=10) +
#Apply black and white ggplot theme to avoid grey background, etc.
ggplot2::theme_bw() +
#Remove gridlines and pass fontfamily argument to ggplot2
ggplot2::theme(
panel.grid.major = ggplot2::element_blank(),
panel.grid.minor = ggplot2::element_blank(),
panel.border = ggplot2::element_blank(),
axis.title = ggplot2::element_text(family = fontfamily),
axis.text = ggplot2::element_text(family = fontfamily),
axis.text.x = ggplot2::element_text(family = fontfamily),
axis.text.y = ggplot2::element_text(family = fontfamily),
plot.title = ggplot2::element_text(family = fontfamily, hjust = .5),
legend.text = ggplot2::element_text(family = fontfamily),
legend.title = ggplot2::element_text(family = fontfamily))
}
| /R/plot.ftest.R | no_license | cran/nhstplot | R | false | false | 10,614 | r | #' Illustrate an F Test graphically.
#'
#' This function plots the density probability distribution of an F statistic, with a vertical cutline at the observed F value specified. A p-value and the observed F value are plotted. Although largely customizable, only three arguments are required (the observed F and the degrees of freedom).
#'
#' @param f A numeric value indicating the observed F statistic. Alternatively, you can pass an object of class \code{lm} created by the function \code{lm()}.
#' @param dfnum A numeric value indicating the degrees of freedom of the numerator. This argument is optional if you are using an \code{lm} object as the \code{f} argument.
#' @param dfdenom A numeric value indicating the degrees of freedom of the denominator. This argument is optional if you are using an \code{lm} object as the \code{f} argument.
#' @param blank A logical that indicates whether to hide (\code{blank = TRUE}) the test statistic value, p value and cutline. The corresponding colors are actually only made transparent when \code{blank = TRUE}, so that the output is scaled exactly the same (this is useful and especially intended for step-by-step explanations).
#' @param xmax A numeric including the maximum for the x-axis. Defaults to \code{"auto"}, which scales the plot automatically (optional).
#' @param title A character or expression indicating a custom title for the plot (optional).
#' @param xlabel A character or expression indicating a custom title for the x axis (optional).
#' @param ylabel A character or expression indicating a custom title for the y axis (optional).
#' @param fontfamily A character indicating the font family of all the titles and labels (e.g. \code{"serif"} (default), \code{"sans"}, \code{"Helvetica"}, \code{"Palatino"}, etc.) (optional).
#' @param colorleft A character indicating the color for the "left" area under the curve (optional).
#' @param colorright A character indicating the color for the "right" area under the curve (optional).
#' @param colorleftcurve A character indicating the color for the "left" part of the curve (optional).
#' @param colorrightcurve A character indicating the color for the "right" part of the curve (optional). By default, for color consistency, this color is also passed to the label, but this can be changed by providing an argument for the \code{colorlabel} parameter.
#' @param colorcut A character indicating the color for the cut line at the observed test statistic (optional).
#' @param colorplabel A character indicating the color for the label of the p-value (optional). By default, for color consistency, this color is the same as color of \code{colorright}.
#' @param theme A character indicating one of the predefined color themes. The themes are \code{"default"} (light blue and red), \code{"blackandwhite"}, \code{"whiteandred"}, \code{"blueandred"}, \code{"greenandred"} and \code{"goldandblue"}) (optional). Supersedes \code{colorleft} and \code{colorright} if another argument than \code{"default"} is provided.
#' @param signifdigitsf A numeric indicating the number of desired significant figures reported for the F (optional).
#' @param curvelinesize A numeric indicating the size of the curve line (optional).
#' @param cutlinesize A numeric indicating the size of the cut line (optional). By default, the size of the curve line is used.
#' @return A plot with the density of probability of F under the null hypothesis, annotated with the observed test statistic and the p-value.
#' @export plotftest
#' @examples
#' #Making an F plot with an F of 3, and degrees of freedom of 1 and 5.
#' plotftest(f = 4, dfnum = 3, dfdenom = 5)
#'
#' #Note that the same can be obtained even quicker with:
#' plotftest(4,3,5)
#'
#' #The same plot without the f or p value
#' plotftest(4,3,5, blank = TRUE)
#'
#' #Passing an "lm" object
#' set.seed(1)
#' x <- rnorm(10) ; y <- x + rnorm(10)
#' fit <- lm(y ~ x)
#' plotftest(fit)
#' plotftest(summary(fit)) # also works
#'
#' #Passing an "anova" F-change test
#' set.seed(1)
#' x <- rnorm(10) ; y <- x + rnorm(10)
#' fit1 <- lm(y ~ x)
#' fit2 <- lm(y ~ poly(x, 2))
#' comp <- anova(fit1, fit2)
#' plotftest(comp)
#'
#' @author Nils Myszkowski <nmyszkowski@pace.edu>
plotftest <- function(f, dfnum = f$fstatistic[2], dfdenom = f$fstatistic[3], blank = FALSE, xmax = "auto", title = "F Test", xlabel = "F", ylabel = "Density of probability\nunder the null hypothesis", fontfamily = "serif", colorleft = "aliceblue", colorright = "firebrick3", colorleftcurve = "black", colorrightcurve = "black", colorcut = "black", colorplabel = colorright, theme = "default", signifdigitsf = 3, curvelinesize = .4, cutlinesize = curvelinesize) {
x=NULL
# If f is an anova() object, take values from it
if ("anova" %in% class(f)) {
dfnum <- f$Df[2]
dfdenom <- f$Res.Df[2]
f <- f$F[2]
}
# If f is a "summary.lm" object, take values from it
if ("summary.lm" %in% class(f)) {
dfnum <- f$fstatistic[2]
dfdenom <- f$fstatistic[3]
f <- f$fstatistic[1]
}
# If f is a "lm" object, take values from it
if ("lm" %in% class(f)) {
dfnum <- summary(f)$fstatistic[2]
dfdenom <- summary(f)$fstatistic[3]
f <- summary(f)$fstatistic[1]
}
# Unname inputs (can cause issues)
f <- unname(f)
dfnum <- unname(dfnum)
dfdenom <- unname(dfdenom)
# Create a function to restrict plotting areas to specific bounds of x
area_range <- function(fun, min, max) {
function(x) {
y <- fun(x)
y[x < min | x > max] <- NA
return(y)
}
}
# Function to format p value
p_value_format <- function(p) {
if (p < .001) {
"< .001"
} else if (p > .999) {
"> .999"
} else {
paste0("= ", substr(sprintf("%.3f", p), 2, 5))
}
}
#Calculate the p value
pvalue <- stats::pf(q = f, df1 = dfnum, df2 = dfdenom, lower.tail = FALSE)
#Label for p value
plab <- paste0("p ", p_value_format(pvalue))
#Label for F value
flab <- paste("F =", signif(f, digits = signifdigitsf), sep = " ")
#Define x axis bounds as the maximum between 1.5*f or 3 (this avoids only the tip of the curve to be plotted when F is small, and keeps a nice t curve shape display)
if (xmax == "auto") {
xbound <- max(1.5*f, 3)
} else {xbound <- xmax}
#To ensure lines plotted by stat_function are smooth
precisionfactor <- 5000
#To define the function to plot in stat_function
density <- function(x) stats::df(x, df1 = dfnum, df2 = dfdenom)
#Use the maximum density (top of the curve) to use as maximum y axis value (start finding maximum at .2 to avoid very high densities values when the density function has a y axis asymptote)
maxdensity <- stats::optimize(density, interval=c(0.2, xbound), maximum=TRUE)$objective
#Use the density corresponding to the given f to place the label above (if this density is too high places the label lower in order to avoid the label being out above the plot)
y_plabel <- min(density(f)+maxdensity*.1, maxdensity*.7)
#To place the p value labels on the x axis, at the middle of the part of the curve they correspond to
x_plabel <- f+(xbound-f)/2
#Define the fill color of the labels as white
colorlabelfill <- "white"
#Theme options
if (theme == "default") {
colorleft <- colorleft
colorright <- colorright
colorplabel <- colorplabel
} else if (theme == "blackandwhite"){
colorleft <- "grey96"
colorright <- "darkgrey"
colorplabel <- "black"
} else if (theme == "whiteandred") {
colorleft <- "grey96"
colorright <- "firebrick3"
colorplabel <- "firebrick3"
} else if (theme == "blueandred") {
colorleft <- "#104E8B"
colorright <- "firebrick3"
colorplabel <- "firebrick3"
} else if (theme == "greenandred") {
colorleft <- "seagreen"
colorright <- "firebrick3"
colorplabel <- "firebrick3"
}else if (theme == "goldandblue") {
colorleft <- "#FFC61E"
colorright <- "#00337F"
colorplabel <- "#00337F"
}else warning("The ",'"', "theme", '"', " argument was not recognized. See documentation for a list of available color themes. Reverting to default.")
#To make some colors transparent when `blank` parameter is TRUE (to only plot de probability density function in that case)
if (blank == TRUE) {
colorright <- grDevices::adjustcolor("white", alpha.f = 0)
colorcut <- grDevices::adjustcolor("white", alpha.f = 0)
colorplabel <- grDevices::adjustcolor("white", alpha.f = 0)
colorlabelfill <- grDevices::adjustcolor("white", alpha.f = 0)
}
else {
#Do nothing
}
#Plotting with ggplot2
ggplot2::ggplot(data.frame(x = c(0, xbound)), ggplot2::aes(x)) +
#Left side area
ggplot2::stat_function(fun = area_range(density, 0, xbound), geom="area", fill=colorleft, n=precisionfactor) +
#Right side area
ggplot2::stat_function(fun = area_range(density, f, xbound), geom="area", fill=colorright, n=precisionfactor) +
#Right side curve
ggplot2::stat_function(fun = density, xlim = c(f,xbound), colour = colorrightcurve,linewidth=curvelinesize, ) +
#Left side curve
ggplot2::stat_function(fun = density, xlim = c(0,f), colour = colorleftcurve, n=1000, linewidth=curvelinesize) +
#Define plotting area for extraspace (proportional to the max y plotted) below the graph to place f label
ggplot2::coord_cartesian(xlim=c(0,xbound),ylim=c(maxdensity*(-.08), maxdensity)) +
#Cut line
ggplot2::geom_vline(xintercept = f*1, colour = colorcut, linewidth = cutlinesize) +
#p label
ggplot2::geom_label(ggplot2::aes(x_plabel,y_plabel,label = plab), colour=colorplabel, fill = colorlabelfill, family=fontfamily) +
#f label
ggplot2::geom_label(ggplot2::aes(f,maxdensity*(-.05),label = flab),colour=colorcut, fill = colorlabelfill, family=fontfamily) +
#Add the title
ggplot2::ggtitle(title) +
#Axis labels
ggplot2::labs(x=xlabel,y=ylabel, size=10) +
#Apply black and white ggplot theme to avoid grey background, etc.
ggplot2::theme_bw() +
#Remove gridlines and pass fontfamily argument to ggplot2
ggplot2::theme(
panel.grid.major = ggplot2::element_blank(),
panel.grid.minor = ggplot2::element_blank(),
panel.border = ggplot2::element_blank(),
axis.title = ggplot2::element_text(family = fontfamily),
axis.text = ggplot2::element_text(family = fontfamily),
axis.text.x = ggplot2::element_text(family = fontfamily),
axis.text.y = ggplot2::element_text(family = fontfamily),
plot.title = ggplot2::element_text(family = fontfamily, hjust = .5),
legend.text = ggplot2::element_text(family = fontfamily),
legend.title = ggplot2::element_text(family = fontfamily))
}
|
#Analysis of the Irrigation dataset
#Fahad Reda
#17.11.2020
#A small case study
#load package
library(tidyverse)
#being with wide "messy" format:
irrigation <- read.csv("data/irrigation_wide.csv")
#Examine the data :
glimpse(irrigation)
summary(irrigation)
#In 2007, what is the total area under irrigation
#for only the Americas
irrigation %>%
filter (year == 2007) %>%
select( ends_with("erica")) %>% sum()
###
irrigation %>%
filter(year==2007) %>%
select ('N.America', 'S.America')%>% sum()
###
irrigation %>%
filter(year==2007) %>%
select(4,5) %>%
sum()
#how to make tidy data
irrigation_t <- irrigation %>%
pivot_longer(-year, names_to ="region")
irrigation_t
#what is the total areas under irrigation in each year?
irrigation_t %>%
group_by(year) %>%
summarise(total = sum (value))
irrigation_t %>%
group_by(region)%>%
summarise
(diff = value [year==2007] - value[year==1980]) %>%
##
c (0, diff(xx)/xx[-length(xx)])
irrigation_t %>%
group_by (region) %>%
mutate (diff(value))
#what is the rate of change in each region?
irrigation_t %>% arrange(region) %>%
group_by(region) %>%
mutate(rate= c (0, diff(value)/ value [-length(value)]))
#where is the lowest and highest?
irrigation_t[which.max(irrigation_t$rate),]
irrigation_t[which.min(irrigation_t$rate),]
#this will give max rate for each region
####
irrigation_t %>%
slice_max(rate, n=1)
#becaue ... the tibble is still a group_df
#so to get the global answer : ungroup()
#highst
irrigation_t %>%
ungroup() %>%
slice_max(rate, n = 1)
#lowest
irrigation_t %>%
ungroup() %>%
slice_min(rate, n= 1 )
#standardize aginest 1980 (relative change over 1980) (easier)
#which region increased most from 1980 t0 2007?
irrigation_t %>%
group_by(region) %>%
group_split()
summarise(diff = value[year==2007] - value[year==1980])
#plot areas over time for each region?
ggplot(irrigation, aes(x= region, y = area)) +
geom_point()
| /Stage 1/Irrigation-v1.R | no_license | fahadmreda/Stage1-2-MISK | R | false | false | 1,946 | r | #Analysis of the Irrigation dataset
#Fahad Reda
#17.11.2020
#A small case study
#load package
library(tidyverse)
#being with wide "messy" format:
irrigation <- read.csv("data/irrigation_wide.csv")
#Examine the data :
glimpse(irrigation)
summary(irrigation)
#In 2007, what is the total area under irrigation
#for only the Americas
irrigation %>%
filter (year == 2007) %>%
select( ends_with("erica")) %>% sum()
###
irrigation %>%
filter(year==2007) %>%
select ('N.America', 'S.America')%>% sum()
###
irrigation %>%
filter(year==2007) %>%
select(4,5) %>%
sum()
#how to make tidy data
irrigation_t <- irrigation %>%
pivot_longer(-year, names_to ="region")
irrigation_t
#what is the total areas under irrigation in each year?
irrigation_t %>%
group_by(year) %>%
summarise(total = sum (value))
irrigation_t %>%
group_by(region)%>%
summarise
(diff = value [year==2007] - value[year==1980]) %>%
##
c (0, diff(xx)/xx[-length(xx)])
irrigation_t %>%
group_by (region) %>%
mutate (diff(value))
#what is the rate of change in each region?
irrigation_t %>% arrange(region) %>%
group_by(region) %>%
mutate(rate= c (0, diff(value)/ value [-length(value)]))
#where is the lowest and highest?
irrigation_t[which.max(irrigation_t$rate),]
irrigation_t[which.min(irrigation_t$rate),]
#this will give max rate for each region
####
irrigation_t %>%
slice_max(rate, n=1)
#becaue ... the tibble is still a group_df
#so to get the global answer : ungroup()
#highst
irrigation_t %>%
ungroup() %>%
slice_max(rate, n = 1)
#lowest
irrigation_t %>%
ungroup() %>%
slice_min(rate, n= 1 )
#standardize aginest 1980 (relative change over 1980) (easier)
#which region increased most from 1980 t0 2007?
irrigation_t %>%
group_by(region) %>%
group_split()
summarise(diff = value[year==2007] - value[year==1980])
#plot areas over time for each region?
ggplot(irrigation, aes(x= region, y = area)) +
geom_point()
|
context("cc_list_keys")
test_that("cc_list_keys level works - parsing", {
skip_on_cran()
aa <- cc_list_keys()
expect_is(aa, "tbl_df")
expect_named(aa, c("Key", "LastModified", "ETag", "Size", "StorageClass"))
expect_is(aa$Key, "character")
expect_is(aa$ETag, "character")
})
test_that("cc_list_keys - max parameter works", {
skip_on_cran()
aa <- cc_list_keys(max = 1)
bb <- cc_list_keys(max = 3)
cc <- cc_list_keys(max = 7)
dd <- cc_list_keys(max = 0)
expect_is(aa, "tbl_df")
expect_is(bb, "tbl_df")
expect_is(cc, "tbl_df")
expect_is(dd, "tbl_df")
expect_gt(NROW(bb), NROW(aa))
expect_gt(NROW(cc), NROW(aa))
expect_gt(NROW(cc), NROW(bb))
expect_equal(NROW(dd), 0)
})
test_that("cc_list_keys - prefix parameter works", {
skip_on_cran()
pref <- "ccafs/ccafs-climate/data/ipcc_5ar_ciat_downscaled/"
aa <- cc_list_keys(prefix = pref, max = 10)
expect_is(aa, "tbl_df")
expect_true(all(grepl(paste0("^", pref), aa$Key)))
})
test_that("cc_list_keys - marker parameter works", {
skip_on_cran()
aa <- cc_list_keys(max = 3)
bb <- cc_list_keys(marker = aa$Key[3], max = 3)
expect_is(aa, "tbl_df")
expect_is(bb, "tbl_df")
})
| /tests/testthat/test-cc_list_keys.R | permissive | chrisfan24/ccafs | R | false | false | 1,184 | r | context("cc_list_keys")
test_that("cc_list_keys level works - parsing", {
skip_on_cran()
aa <- cc_list_keys()
expect_is(aa, "tbl_df")
expect_named(aa, c("Key", "LastModified", "ETag", "Size", "StorageClass"))
expect_is(aa$Key, "character")
expect_is(aa$ETag, "character")
})
test_that("cc_list_keys - max parameter works", {
skip_on_cran()
aa <- cc_list_keys(max = 1)
bb <- cc_list_keys(max = 3)
cc <- cc_list_keys(max = 7)
dd <- cc_list_keys(max = 0)
expect_is(aa, "tbl_df")
expect_is(bb, "tbl_df")
expect_is(cc, "tbl_df")
expect_is(dd, "tbl_df")
expect_gt(NROW(bb), NROW(aa))
expect_gt(NROW(cc), NROW(aa))
expect_gt(NROW(cc), NROW(bb))
expect_equal(NROW(dd), 0)
})
test_that("cc_list_keys - prefix parameter works", {
skip_on_cran()
pref <- "ccafs/ccafs-climate/data/ipcc_5ar_ciat_downscaled/"
aa <- cc_list_keys(prefix = pref, max = 10)
expect_is(aa, "tbl_df")
expect_true(all(grepl(paste0("^", pref), aa$Key)))
})
test_that("cc_list_keys - marker parameter works", {
skip_on_cran()
aa <- cc_list_keys(max = 3)
bb <- cc_list_keys(marker = aa$Key[3], max = 3)
expect_is(aa, "tbl_df")
expect_is(bb, "tbl_df")
})
|
% Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/make.time.factor.r
\name{make.time.factor}
\alias{make.time.factor}
\title{Make time-varying dummy variables from time-varying factor variable}
\usage{
make.time.factor(x, var.name, times, intercept = NULL, delete = TRUE)
}
\arguments{
\item{x}{dataframe containing set of factor variables with names composed of
var.name prefix and times suffix}
\item{var.name}{prefix for variable names}
\item{times}{numeric suffixes for variable names}
\item{intercept}{the value of the factor variable that will be used for the
intercept}
\item{delete}{if TRUE, the origninal time-varying factor variables are
removed from the returned dataframe}
}
\value{
x: a dataframe containing the original data (with time-varying
factor variables removed if delete=TRUE) and the time-varying dummy
variables added.
}
\description{
Create a new dataframe with time-varying dummy variables from a time-varying
factor variable. The time-varying dummy variables are named appropriately
to be used as a set of time dependent individual covariates in a parameter
specification
}
\details{
An example of the var.name and times is var.name="observer", times=1:5. The
code expects to find observer1,...,observer5 to be factor variables in x. If
there are k unique levels (excluding ".") across the time varying factor
variables, then k-1 dummy variables are created for each of the named factor
variables. They are named with var.name, level[i], times[j] concatenated
together where level[i] is the name of the facto level i. If there a m
times then the new data set will contain m*(k-1) dummy variables. If the
factor variable includes any "." values these are ignored because they are
used to indicate a missing value that is paired with a missing value in the
encounter history. Note that it will create each dummy variable for each
factor even if a particular level is not contained within a factor (eg
observers 1 to 3 used but only 1 and 2 on occasion 1).
}
\examples{
# see example in weta
}
\author{
Jeff Laake
}
\keyword{utility}
| /RMark/man/make.time.factor.Rd | no_license | buddhidayananda/RMark | R | false | false | 2,157 | rd | % Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/make.time.factor.r
\name{make.time.factor}
\alias{make.time.factor}
\title{Make time-varying dummy variables from time-varying factor variable}
\usage{
make.time.factor(x, var.name, times, intercept = NULL, delete = TRUE)
}
\arguments{
\item{x}{dataframe containing set of factor variables with names composed of
var.name prefix and times suffix}
\item{var.name}{prefix for variable names}
\item{times}{numeric suffixes for variable names}
\item{intercept}{the value of the factor variable that will be used for the
intercept}
\item{delete}{if TRUE, the origninal time-varying factor variables are
removed from the returned dataframe}
}
\value{
x: a dataframe containing the original data (with time-varying
factor variables removed if delete=TRUE) and the time-varying dummy
variables added.
}
\description{
Create a new dataframe with time-varying dummy variables from a time-varying
factor variable. The time-varying dummy variables are named appropriately
to be used as a set of time dependent individual covariates in a parameter
specification
}
\details{
An example of the var.name and times is var.name="observer", times=1:5. The
code expects to find observer1,...,observer5 to be factor variables in x. If
there are k unique levels (excluding ".") across the time varying factor
variables, then k-1 dummy variables are created for each of the named factor
variables. They are named with var.name, level[i], times[j] concatenated
together where level[i] is the name of the facto level i. If there a m
times then the new data set will contain m*(k-1) dummy variables. If the
factor variable includes any "." values these are ignored because they are
used to indicate a missing value that is paired with a missing value in the
encounter history. Note that it will create each dummy variable for each
factor even if a particular level is not contained within a factor (eg
observers 1 to 3 used but only 1 and 2 on occasion 1).
}
\examples{
# see example in weta
}
\author{
Jeff Laake
}
\keyword{utility}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/MPSep_consumerrisk.R
\name{MPSep_consumerrisk}
\alias{MPSep_consumerrisk}
\title{Consumer's Risk for Multi-State RDT with Multiple Periods and Criteria for Separate Periods}
\usage{
MPSep_consumerrisk(n, cvec, pivec, Rvec)
}
\arguments{
\item{n}{RDT sample size}
\item{cvec}{Maximum allowable failures for each separate period}
\item{pivec}{Failure probability for each seperate period}
\item{Rvec}{Lower level reliability requirements for each cumulative period from the begining of the test.}
}
\value{
Probability for consumer's risk
}
\description{
Define the consumer risk function hich gets the probability of passing the test when the lower level reliability requirements are not satisfied for any cumulative periods.
The maximum allowable failures for each separate period need to be satisfied to pass the test (for Multi-state RDT, Multiple Periods, Scenario I)
}
\examples{
pi <- pi_MCSim_dirichlet(M = 5000, seed = 10, par = c(1, 1, 1))
MPSep_consumerrisk(n = 10, cvec = c(1,1), pi = pi, Rvec = c(0.8, 0.7))
}
| /OptimalRDTinR/man/MPSep_consumerrisk.Rd | permissive | ericchen12377/OptimalRDT_R | R | false | true | 1,102 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/MPSep_consumerrisk.R
\name{MPSep_consumerrisk}
\alias{MPSep_consumerrisk}
\title{Consumer's Risk for Multi-State RDT with Multiple Periods and Criteria for Separate Periods}
\usage{
MPSep_consumerrisk(n, cvec, pivec, Rvec)
}
\arguments{
\item{n}{RDT sample size}
\item{cvec}{Maximum allowable failures for each separate period}
\item{pivec}{Failure probability for each seperate period}
\item{Rvec}{Lower level reliability requirements for each cumulative period from the begining of the test.}
}
\value{
Probability for consumer's risk
}
\description{
Define the consumer risk function hich gets the probability of passing the test when the lower level reliability requirements are not satisfied for any cumulative periods.
The maximum allowable failures for each separate period need to be satisfied to pass the test (for Multi-state RDT, Multiple Periods, Scenario I)
}
\examples{
pi <- pi_MCSim_dirichlet(M = 5000, seed = 10, par = c(1, 1, 1))
MPSep_consumerrisk(n = 10, cvec = c(1,1), pi = pi, Rvec = c(0.8, 0.7))
}
|
c DCNF-Autarky [version 0.0.1].
c Copyright (c) 2018-2019 Swansea University.
c
c Input Clause Count: 88751
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 88451
c
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 88451
c
c Input Parameter (command line, file):
c input filename QBFLIB/Kronegger-Pfandler-Pichler/bomb/p20-20.pddl_planlen=1.qdimacs
c output filename /tmp/dcnfAutarky.dimacs
c autarky level 1
c conformity level 0
c encoding type 2
c no.of var 740
c no.of clauses 88751
c no.of taut cls 400
c
c Output Parameters:
c remaining no.of clauses 88451
c
c QBFLIB/Kronegger-Pfandler-Pichler/bomb/p20-20.pddl_planlen=1.qdimacs 740 88751 E1 [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 45 46 47 48 49 50 51 52 54 55 56 59 60 61 62 63 64 65 66 67 68 69 71 73 74 75 76 77 78 79 80 82 83 84 85 86 88 89 91 92 93 94 95 96 97 98 99 100 102 103 104 105 106 107 110 111 112 113 114 116 117 118 119 122 123 124 125 126 127 128 129 130 131 132 133 134 135 137 138 139 140 141 143 144 145 147 148 149 150 151 152 153 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740] 400 20 440 88451 RED
| /code/dcnf-ankit-optimized/Results/QBFLIB-2018/E1/Experiments/Kronegger-Pfandler-Pichler/bomb/p20-20.pddl_planlen=1/p20-20.pddl_planlen=1.R | no_license | arey0pushpa/dcnf-autarky | R | false | false | 1,782 | r | c DCNF-Autarky [version 0.0.1].
c Copyright (c) 2018-2019 Swansea University.
c
c Input Clause Count: 88751
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 88451
c
c Performing E1-Autarky iteration.
c Remaining clauses count after E-Reduction: 88451
c
c Input Parameter (command line, file):
c input filename QBFLIB/Kronegger-Pfandler-Pichler/bomb/p20-20.pddl_planlen=1.qdimacs
c output filename /tmp/dcnfAutarky.dimacs
c autarky level 1
c conformity level 0
c encoding type 2
c no.of var 740
c no.of clauses 88751
c no.of taut cls 400
c
c Output Parameters:
c remaining no.of clauses 88451
c
c QBFLIB/Kronegger-Pfandler-Pichler/bomb/p20-20.pddl_planlen=1.qdimacs 740 88751 E1 [1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 45 46 47 48 49 50 51 52 54 55 56 59 60 61 62 63 64 65 66 67 68 69 71 73 74 75 76 77 78 79 80 82 83 84 85 86 88 89 91 92 93 94 95 96 97 98 99 100 102 103 104 105 106 107 110 111 112 113 114 116 117 118 119 122 123 124 125 126 127 128 129 130 131 132 133 134 135 137 138 139 140 141 143 144 145 147 148 149 150 151 152 153 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740] 400 20 440 88451 RED
|
#-------------------------------------------------------------------------------
# STRAVA API STREAM DATA EXTRACTION & PROCESSING
# Extract stream data for different activity types and visualize
# AUTHOR: Francine Stephens
# DATE CREATED: 4/14/21
# LAST UPDATED DATE: 4/14/21
#-------------------------------------------------------------------------------
## INITIALIZE-------------------------------------------------------------------
#devtools::install_github('fawda123/rStrava')
packages <- c(
"tidyverse",
"rStrava",
"sp",
"ggmap",
"raster",
"mapproj",
"lubridate",
"leaflet",
"rStrava",
"extrafont",
"hrbrthemes",
"wesanderson",
"ggtext"
)
lapply(packages, library, character.only = T)
# KEY PARAMETERS
app_name <- 'Francine Stephens' # chosen by user
app_client_id <- '#####' # an integer, assigned by Strava
app_secret <- 'XXXXXXXXXXXXXXXXXXXXXX#################' # an alphanumeric secret, assigned by Strava
mykey <- 'XXXXXXXXXXXXXXXXXXXXXXXXXX' # Google API key
register_google(mykey)
# CONFIG
# stoken <- httr::config(
# token = strava_oauth(
# app_name,
# app_client_id,
# app_secret,
# app_scope="activity:read_all",
# cache=TRUE)
# )
stoken <- httr::config(token = readRDS('.httr-oauth')[[1]])
## EXTRACT DATA-----------------------------------------------------------------
# Download Strava data
myinfo <- get_athlete(stoken, id = '37259397')
routes <- get_activity_list(stoken)
length(routes) # GET count of activities
# CONVERT ACTIVITIES FROM LIST TO DATAFRAME
# Set units to imperial to convert to miles.
# To get a subset specify a slice as below.
# Can also filter to get a subset using dplyr filter.
activities_data <- compile_activities(routes, units = "imperial")
saveRDS(activities_data, file = "strava_activities_041421.rds")
# RUNS
runs <- activities_data %>%
filter(
type == "Run" & !is.na(start_latitude)
)
## DO SUBSETS (N=40) OF THE RUNS BC ACTIVITY STREAMS CANNOT HANDLE ALL RUNS AT A TIME
run_ids <- runs %>%
slice(1:39) %>%
pull(id)
run_dates <- runs %>%
dplyr::select(id, start_date) %>%
mutate(date = as.Date(str_sub(start_date, end = 10))
)
run_streams_first <- get_activity_streams(act_data=routes,
stoken,
id=run_ids,
types="latlng",
units="imperial")
run_streams_recent <- get_activity_streams(act_data=routes,
stoken,
id=run_ids,
types="latlng",
units="imperial")
all_runs <- rbind(run_streams_first, run_streams_recent) %>%
mutate(location = as.factor(
if_else(lng < -120,
"Stanford",
"Rockwall Ranch")
)
) %>%
left_join(., run_dates, by = "id") %>%
mutate(group_no = group_indices_(., .dots="id"),
group_rem = group_no / 5,
group_str = as.character(group_rem),
group_col = case_when(
str_detect(group_str, ".2") ~ "1",
str_detect(group_str, ".4") ~ "2",
str_detect(group_str, ".6") ~ "3",
str_detect(group_str, ".8") ~ "4",
TRUE ~ "5"
)
)
## VISUALIZATION----------------------------------------------------------------
##########
# RUNS
##########
# SET COLORS AND THEMES
stanford_cardinal <- "#8C1515"
dark_green <- "#006400"
theme_runs <- theme_void(base_family = "Rockwell",
base_size = 20) +
theme(panel.spacing = unit(0, "lines"),
strip.background = element_blank(),
strip.text = element_blank(),
plot.margin = unit(rep(1, 4), "cm"),
legend.position = "none",
plot.title = element_text(hjust = 0.5, vjust = 3)
)
theme_by_location <- theme_void(base_family = "Rockwell",
base_size = 13) +
theme(panel.spacing = unit(0, "lines"),
strip.background = element_blank(),
strip.text = element_blank(),
plot.margin = unit(rep(1, 4), "cm"),
legend.position = "none",
plot.title = element_markdown(hjust = 0.5, vjust = 3)
)
colors_runs <- scale_color_manual(values = c(stanford_cardinal,
dark_green)
)
# MAP
runs_by_location <- ggplot(all_runs) +
geom_path(aes(lng, lat, group = id, color = location), size = 0.35, lineend = "round") +
facet_wrap(~location, scales = 'free') +
labs(title = "Francine's
<span style='color:#8C1515'>Stanford</span> & <span style='color:#006400'>Rockwall Ranch, NBTX</span> Runs",
caption = "Runs as of April 14, 2021") +
theme_by_location +
colors_runs
ggsave("runs_041421_by_location.png", plot = runs_by_location)
runs_moonrise <- ggplot(all_runs) +
geom_path(aes(lng, lat, group = id, color = group_col), size = 0.35, lineend = "round") +
facet_wrap(~id, scales = 'free') +
labs(title = "Francine's Runs") +
theme_runs +
scale_color_manual(values=wes_palette(n=5, name="Moonrise3"))
ggsave("runs_041421_moonrise.png", plot = runs_moonrise)
| /Strava/strava_activity_stream_facet_plots.R | no_license | francine-stephens/Exercise | R | false | false | 5,202 | r | #-------------------------------------------------------------------------------
# STRAVA API STREAM DATA EXTRACTION & PROCESSING
# Extract stream data for different activity types and visualize
# AUTHOR: Francine Stephens
# DATE CREATED: 4/14/21
# LAST UPDATED DATE: 4/14/21
#-------------------------------------------------------------------------------
## INITIALIZE-------------------------------------------------------------------
#devtools::install_github('fawda123/rStrava')
packages <- c(
"tidyverse",
"rStrava",
"sp",
"ggmap",
"raster",
"mapproj",
"lubridate",
"leaflet",
"rStrava",
"extrafont",
"hrbrthemes",
"wesanderson",
"ggtext"
)
lapply(packages, library, character.only = T)
# KEY PARAMETERS
app_name <- 'Francine Stephens' # chosen by user
app_client_id <- '#####' # an integer, assigned by Strava
app_secret <- 'XXXXXXXXXXXXXXXXXXXXXX#################' # an alphanumeric secret, assigned by Strava
mykey <- 'XXXXXXXXXXXXXXXXXXXXXXXXXX' # Google API key
register_google(mykey)
# CONFIG
# stoken <- httr::config(
# token = strava_oauth(
# app_name,
# app_client_id,
# app_secret,
# app_scope="activity:read_all",
# cache=TRUE)
# )
stoken <- httr::config(token = readRDS('.httr-oauth')[[1]])
## EXTRACT DATA-----------------------------------------------------------------
# Download Strava data
myinfo <- get_athlete(stoken, id = '37259397')
routes <- get_activity_list(stoken)
length(routes) # GET count of activities
# CONVERT ACTIVITIES FROM LIST TO DATAFRAME
# Set units to imperial to convert to miles.
# To get a subset specify a slice as below.
# Can also filter to get a subset using dplyr filter.
activities_data <- compile_activities(routes, units = "imperial")
saveRDS(activities_data, file = "strava_activities_041421.rds")
# RUNS
runs <- activities_data %>%
filter(
type == "Run" & !is.na(start_latitude)
)
## DO SUBSETS (N=40) OF THE RUNS BC ACTIVITY STREAMS CANNOT HANDLE ALL RUNS AT A TIME
run_ids <- runs %>%
slice(1:39) %>%
pull(id)
run_dates <- runs %>%
dplyr::select(id, start_date) %>%
mutate(date = as.Date(str_sub(start_date, end = 10))
)
run_streams_first <- get_activity_streams(act_data=routes,
stoken,
id=run_ids,
types="latlng",
units="imperial")
run_streams_recent <- get_activity_streams(act_data=routes,
stoken,
id=run_ids,
types="latlng",
units="imperial")
all_runs <- rbind(run_streams_first, run_streams_recent) %>%
mutate(location = as.factor(
if_else(lng < -120,
"Stanford",
"Rockwall Ranch")
)
) %>%
left_join(., run_dates, by = "id") %>%
mutate(group_no = group_indices_(., .dots="id"),
group_rem = group_no / 5,
group_str = as.character(group_rem),
group_col = case_when(
str_detect(group_str, ".2") ~ "1",
str_detect(group_str, ".4") ~ "2",
str_detect(group_str, ".6") ~ "3",
str_detect(group_str, ".8") ~ "4",
TRUE ~ "5"
)
)
## VISUALIZATION----------------------------------------------------------------
##########
# RUNS
##########
# SET COLORS AND THEMES
stanford_cardinal <- "#8C1515"
dark_green <- "#006400"
theme_runs <- theme_void(base_family = "Rockwell",
base_size = 20) +
theme(panel.spacing = unit(0, "lines"),
strip.background = element_blank(),
strip.text = element_blank(),
plot.margin = unit(rep(1, 4), "cm"),
legend.position = "none",
plot.title = element_text(hjust = 0.5, vjust = 3)
)
theme_by_location <- theme_void(base_family = "Rockwell",
base_size = 13) +
theme(panel.spacing = unit(0, "lines"),
strip.background = element_blank(),
strip.text = element_blank(),
plot.margin = unit(rep(1, 4), "cm"),
legend.position = "none",
plot.title = element_markdown(hjust = 0.5, vjust = 3)
)
colors_runs <- scale_color_manual(values = c(stanford_cardinal,
dark_green)
)
# MAP
runs_by_location <- ggplot(all_runs) +
geom_path(aes(lng, lat, group = id, color = location), size = 0.35, lineend = "round") +
facet_wrap(~location, scales = 'free') +
labs(title = "Francine's
<span style='color:#8C1515'>Stanford</span> & <span style='color:#006400'>Rockwall Ranch, NBTX</span> Runs",
caption = "Runs as of April 14, 2021") +
theme_by_location +
colors_runs
ggsave("runs_041421_by_location.png", plot = runs_by_location)
runs_moonrise <- ggplot(all_runs) +
geom_path(aes(lng, lat, group = id, color = group_col), size = 0.35, lineend = "round") +
facet_wrap(~id, scales = 'free') +
labs(title = "Francine's Runs") +
theme_runs +
scale_color_manual(values=wes_palette(n=5, name="Moonrise3"))
ggsave("runs_041421_moonrise.png", plot = runs_moonrise)
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.