% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/AllGenerics.R
\name{summarize}
\alias{summarize}
\title{Return programmatically useful summary of a fit}
\usage{
summarize(object, ...)
}
\arguments{
\item{object}{LMlike or subclass}
\item{...}{other arguments}
}
\value{
list of parameters characterizing fit
}
\description{
Return programmatically useful summary of a fit
}
% File: /man/summarize.Rd | repo: QinLab/MAST | license: no_license
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/BayesCTDesigncode.R
\name{simple_sim}
\alias{simple_sim}
\title{Two Arm Bayesian Clinical Trial Simulation without Historical Data}
\usage{
simple_sim(
trial_reps = 100,
outcome_type = "weibull",
subj_per_arm = c(50, 100, 150, 200, 250),
effect_vals = c(0.6, 1, 1.4),
control_parms = NULL,
time_vec = NULL,
censor_value = NULL,
alpha = 0.05,
get_var = FALSE,
get_bias = FALSE,
get_mse = FALSE,
seedval = NULL,
quietly = TRUE
)
}
\arguments{
\item{trial_reps}{Number of trials to replicate within each combination of
\code{subj_per_arm} and \code{effect_vals}. As the number of trials increases,
the precision of the estimates will increase. Default is 100.}
\item{outcome_type}{Outcome distribution. Must be equal to \code{weibull},
\code{lognormal}, \code{pwe} (Piecewise Exponential), \code{gaussian},
\code{bernoulli}, or \code{poisson}. Default is \code{weibull}.}
\item{subj_per_arm}{A vector of sample sizes, all of which must be positive
integers. Default is \code{c(50, 100, 150, 200, 250)}.}
\item{effect_vals}{A vector of effects that should be reasonable for the
\code{outcome_type} being studied: hazard ratios for Weibull, odds ratios for
Bernoulli, mean ratios for Poisson, etc. When \code{effect_vals} contains
the null effect for a given \code{outcome_type}, the \code{power} component
of \code{data} will contain an estimate of Type One Error. To obtain
a good set of Type One Error estimates, \code{trial_reps} needs to be
at least 10,000. In such a case, if the total number of combinations
made up from \code{subj_per_arm} and \code{effect_vals} is very large,
the time to complete the simulation
can be substantial. Default is \code{c(0.6, 1, 1.4)}.}
\item{control_parms}{A vector of parameter values defining the outcome
distribution for randomized controls. See Details for what is required for
each \code{outcome_type}.}
\item{time_vec}{A vector of time values that are used to create time periods
within which the exponential hazard is constant. Only used for piecewise
exponential models. Default is \code{NULL}.}
\item{censor_value}{A single value at which right censoring occurs when
simulating randomized subject outcomes. Used with survival outcomes.
Default is \code{NULL}, where \code{NULL} implies no right censoring.}
\item{alpha}{A number ranging between 0 and 1 that defines the acceptable Type 1
error rate. Default is 0.05.}
\item{get_var}{A TRUE/FALSE indicator of whether an array of variance
estimates will be returned. Default is \code{FALSE}.}
\item{get_bias}{A TRUE/FALSE indicator of whether an array of bias
estimates will be returned. Default is \code{FALSE}.}
\item{get_mse}{A TRUE/FALSE indicator of whether an array of MSE
estimates will be returned. Default is \code{FALSE}.}
\item{seedval}{A seed value for pseudo-random number generation.}
\item{quietly}{A TRUE/FALSE indicator of whether notes are printed
to output about simulation progress as the simulation runs. If
running interactively in RStudio or running in the R console,
\code{quietly} can be set to FALSE. If running in a Notebook or
knitr document, \code{quietly} needs to be set to TRUE. Otherwise
each note will be printed on a separate line and it will take up
a lot of output space. Default is \code{TRUE}.}
}
\value{
\code{simple_sim()} returns an S3 object of class \code{bayes_ctd_array}.
As noted in Details, an object of class \code{bayes_ctd_array} has 6 elements: a
list containing simulation results (\code{data}), copies of the four simulation
dimensions \code{subj_per_arm}, \code{a0_vals}, \code{effect_vals}, and
\code{rand_control_diff}, and finally \code{objtype} indicating that \code{simple_sim()}
was used. See Details for a discussion about the contents of
\code{data}. Results from the simulation contained in the \code{bayes_ctd_array}
object can be printed or plotted using the \code{print()} and
\code{plot()} methods. The results can also be accessed using basic list
element identification and array slicing. For example, to get the power results
from a simulation, one could use the code \code{bayes_ctd_array$data$power}, where
\code{bayes_ctd_array} is replaced with the name of the variable containing the
\code{bayes_ctd_array} object. Even though this is a 4-dimensional array, the power
results only occupy a single 2-dimensional table. To print this 2-dimensional table,
one would use the code \code{bayes_ctd_array$data$power[,1,,1]}, where
\code{bayes_ctd_array} is replaced with the name of the variable containing the
\code{bayes_ctd_array} object.
}
\description{
\code{simple_sim()} returns an S3 object of class \code{bayes_ctd_array}, which
will contain simulation results for power, effect estimation, bias, variance,
and MSE, as requested by the user.
}
\details{
The object \code{bayes_ctd_array} has 6 elements: a list containing simulation
results (\code{data}), copies of the four simulation dimensions \code{subj_per_arm},
\code{a0_vals}, \code{effect_vals}, and \code{rand_control_diff}, and finally
an \code{objtype} value indicating that \code{simple_sim()} was used. Each element of
\code{data} is a four-dimensional array, where each dimension is determined by the
length of parameters \code{subj_per_arm}, \code{a0_vals}, \code{effect_vals}, and
\code{rand_control_diff}. The size of \code{data} depends on which results are
requested by the user. At a minimum, at least one of \code{subj_per_arm},
\code{a0_vals}, \code{effect_vals}, or \code{rand_control_diff} must contain at
least 2 values, while the other three must contain at least 1 value. The \code{data}
list will always contain two elements: an array of power results (\code{power}) and
an array of estimation results (\code{est}). In addition to \code{power} and
\code{est}, \code{data} may also contain elements \code{var}, \code{bias}, or
\code{mse}, depending on the values of \code{get_var}, \code{get_bias}, and
\code{get_mse}. The values returned in \code{est} are in the form of hazard ratios,
mean ratios, odds ratios, or mean differences depending on the value of
\code{outcome_type}. For a Gaussian outcome, the estimation results are
differences in group means (experimental group minus control group). For a
logistic outcome, the estimation results are odds ratios (experimental group over
control group). For lognormal and Poisson outcomes, the estimation results are mean
ratios (experimental group over control group). For a piecewise exponential or a
Weibull outcome, the estimation results are hazard ratios (experimental group over
control group). The values returned in \code{bias}, \code{var}, and \code{mse} are
on the scale of the values returned in \code{est}.
The object \code{bayes_ctd_array} has two primary methods, \code{print()} and
\code{plot()}, for printing and plotting slices of the arrays contained in
\code{bayes_ctd_array$data}.
As the dimensions of the four-dimensional arrays increase, the time required to complete
the simulation will increase; however, it will be faster than a similar simulation
based on repeated calls to MCMC routines to analyze each simulated trial.
The meaning of the estimation results, and the test used to generate power results,
depends on the outcome used. In all cases, power is based on a two-sided test
involving a (1-alpha)100\% credible interval, where the interval is used to determine
if the null hypothesis should be rejected (null value outside of the interval) or
not rejected (null value inside the interval). For a Gaussian outcome, the 95\%
credible interval is an interval for the difference in group means
(experimental group minus control group), and the test determines if 0 is in or
outside of the interval. For a Bernoulli outcome, the 95\% credible interval
is an interval for the odds ratio (experimental group over control group),
and the test determines if 1 is in or outside of the interval. For a lognormal or
a Poisson outcome, the 95\% credible interval is an interval for the mean ratio
(experimental group over control group), and the test determines if 1 is in or
outside of the interval. Finally, for a piecewise exponential or a Weibull outcome,
the 95\% credible interval is an interval for the hazard ratio (experimental group
over control group), and the test determines if 1 is in or outside of the interval.
For a Gaussian outcome, the \code{control_parms} values should be \code{(mean, sd)},
where mean is the mean parameter for the control group used in a call to \code{rnorm()},
and sd is the common sd parameter for both groups used in a call to \code{rnorm()}.
For a Bernoulli outcome, the \code{control_parms} values should be \code{(prob)}, where
prob is the event probability for the control group used in a call to \code{rbinom()}.
For a lognormal outcome, the \code{control_parms} values should be \code{(meanlog, sdlog)},
where meanlog is the meanlog parameter for the control group used in a call to
\code{rlnorm()}, and sdlog is the common sdlog parameter for both groups used in
a call to \code{rlnorm()}.
For a Poisson outcome, the \code{control_parms} value should be \code{(lambda)}, where
lambda is the lambda parameter for the control group used in a call to \code{rpois()} and
is equal to the mean of a Poisson distribution.
For a Weibull outcome, the \code{control_parms} values should be \code{(scale, shape)},
where scale is the scale parameter for the control group used in a call to
\code{rweibull()}, and shape is the common shape parameter for both groups used in
a call to \code{rweibull()}.
For a piecewise exponential outcome, the \code{control_parms} values should be a vector
of lambdas used in a call to \code{eha::rpch()}. Each element in \code{control_parms}
is a hazard for an interval defined by the \code{time_vec} parameter.
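As a hedged illustration (not package code), control-arm Weibull outcomes
would be drawn from \code{control_parms} under the documented parameter
order roughly as follows:
\preformatted{
## Assumed sketch: parameter order taken from the descriptions above,
## not from the package source.
control_parms <- c(2.82487, 3)   # Weibull: (scale, shape)
y <- rweibull(50, shape = control_parms[2], scale = control_parms[1])
}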
Please refer to the examples for illustration of package use.
}
\examples{
#Run a Weibull simulation, using simple_sim().
#For meaningful results, trial_reps needs to be much larger than 2.
weibull_test <- simple_sim(trial_reps = 2, outcome_type = "weibull",
                           subj_per_arm = c(50, 100, 150, 200),
                           effect_vals = c(0.6, 1, 1.4),
                           control_parms = c(2.82487,3), time_vec = NULL,
                           censor_value = NULL, alpha = 0.05,
                           get_var = TRUE, get_bias = TRUE, get_mse = TRUE,
                           seedval=123, quietly=TRUE)
#Tabulate the simulation results for power.
test_table <- print(x=weibull_test, measure="power",
                    tab_type=NULL, subj_per_arm_val=NULL, a0_val=NULL,
                    effect_val=NULL, rand_control_diff_val=NULL)
print(test_table)
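#Alternatively, slice the power array directly; for simple_sim() the a0
#and control-difference dimensions are singletons, as noted in Value.
weibull_test$data$power[, 1, , 1]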
#Create a plot of the power simulation results.
plot(x=weibull_test, measure="power", tab_type=NULL,
     smooth=FALSE, plot_out=TRUE)
#Create a plot of the estimated hazard ratio simulation results.
plot(x=weibull_test, measure="est", tab_type=NULL,
     smooth=FALSE, plot_out=TRUE)
#Create a plot of the hazard ratio variance simulation results.
plot(x=weibull_test, measure="var", tab_type=NULL,
     smooth=FALSE, plot_out=TRUE)
#Create a plot of the hazard ratio bias simulation results.
plot(x=weibull_test, measure="bias", tab_type=NULL,
     smooth=FALSE, plot_out=TRUE)
#Create a plot of the hazard ratio mse simulation results.
plot(x=weibull_test, measure="mse", tab_type=NULL,
     smooth=FALSE, plot_out=TRUE)
}
% File: /man/simple_sim.Rd | repo: begglest/BayesCTDesign | license: no_license
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/watcher.R
\name{watch}
\alias{watch}
\title{Watch a directory for changes (additions, deletions & modifications).}
\usage{
watch(path, callback, pattern = NULL, hash = TRUE)
}
\arguments{
\item{path}{character vector of paths to watch. Omit trailing backslash.}
\item{callback}{function called every time a change occurs. It should
have three parameters: added, deleted, modified, and should return
TRUE to keep watching, or FALSE to stop.}
\item{pattern}{file pattern passed to \code{\link[=dir]{dir()}}}
\item{hash}{hashes are more accurate at detecting changes, but are slower
for large files. When FALSE, uses modification time stamps.}
}
\description{
This is used to power the \code{\link[=auto_test]{auto_test()}} and
\code{\link[=auto_test_package]{auto_test_package()}} functions which are used to rerun tests
whenever source code changes.
}
\details{
Use Ctrl + Break (Windows), Esc (Mac GUI), or Ctrl + C (command line) to
stop the watcher.
}
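\examples{
# Hedged sketch, not taken from the package: a callback that reports
# each change and stops after the first one, based on the argument
# descriptions above. The watched path is hypothetical.
\dontrun{
watch("tests/testthat", function(added, deleted, modified) {
  message("added: ", length(added),
          " | deleted: ", length(deleted),
          " | modified: ", length(modified))
  FALSE  # return FALSE to stop watching
})
}
}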
\keyword{internal}
% File: /man/watch.Rd | repo: r-lib/testthat | license: permissive
seed <- 342
log.wt <- 0.0
penalty <- 2.8115950178536287e-8
intervals.send <- c()
intervals.recv <- c(56, 112, 225, 450, 900, 1800, 3600, 7200, 14400, 28800, 57600, 115200, 230400, 460800, 921600, 1843200, 3686400, 7372800, 14745600, 29491200, 58982400)
dev.null <- 358759.0022669336
df.null <- 35567
dev.resid <- 226167.00235044342
df.resid <- 35402
df <- 165
coefs <- c(6.790845106117373, 5.737098439427963, 5.675782005325002, 5.344653068837055, 5.064705848361622, 4.76500165672019, 4.755429533798208, 4.676561337150504, 4.336734659718355, 4.250245483418125, 4.235804531329551, 4.14638616337982, 3.9839180700026238, 3.945053900124883, 3.7360592293308814, 3.515971547856865, 3.2447737282323055, 2.9448232180854643, 2.4594870808987808, 2.0748797444896754, 1.5216415497294673, 0.9521954179342118, 0.9970221815202441, 0.5079245766326584, 0.204612324952474, -0.9255738647451008, -0.1599709286944946, 1.1229700010531105, 1.0997867611491268, -1.6889468710986568, -2.623383108237885, -2.8036643856726147, -0.35882027893364077, 0.7709098590359476, 1.2336333858737705, -1.1317325047760587, 0.8384422010398739, -1.4331500374507786, 0.23069116676404602, -1.3183210426399556, 1.0195960167600944, 0.8631677168441598, -0.8643596281771939, -2.6625812918775815, -0.8752538165861484, -0.5992808795922524, -0.5906134047994189, 9.944830192057938e-2, 0.4379305048368973, -0.5457675343514404, -0.2707086168073308, 0.7473080181553704, -2.784960777269633, 1.6453134799744773, 0.78480370609113, 1.1247977726147083, -2.1410910219767705, -0.34568287859261165, -0.6918285214733937, 1.2939594221307043, 0.8985938545311535, 0.8971174589940984, -2.0370230848455315, -0.8986636100917562, -0.5665209844082988, 0.25971414717104224, 0.6612809317496768, -7.643986687273446e-2, -1.5838099593994726, -0.5840553960420741, -1.9540264706110246, -0.6139848335789257, 0.4763487419156967, 1.0698581571262944, 0.8301553492593408, -0.45739731488427415, -1.3148867312810297, -1.5491758909274695, 0.14281887936245397, 0.7455872730075669, 1.287517859212263, 2.78556126352994e-2, 0.25175807315216037, -1.6179606007219347, -0.946114544926569, 0.42542149498717774, 1.2710158343817568, 0.48096324380564026, 0.9176889428195724, -1.862826414647925, 0.5181095287408145, 0.8472167617956549, 0.90197106387007, 0.3651519409339762, 0.19417561935858016, 1.4080213903953154, -0.3421518305774009, 9.413935320900704e-3, 
-0.12758711341393145, -4.121365228629297e-2, 0.23272121025577047, -0.122439973969776, 0.9656475131814233, 0.40729628629759834, 0.5545987384945829, 0.863799642328108, 1.0450603945342596, -0.6246812040210127, -0.6755073221344826, -0.9783290506141454, 0.4050413622988412, 0.55382235951153, 1.6430830490858397, -0.8032538749275443, -0.38175167049647635, -1.1279904421530256, 0.8597609475028455, -0.22592519882820508, 0.5462324486566863, 0.6650577980071637, -0.23881301393731066, -0.4711469026187611, -1.1626988442188138, -0.2567285239055263, 0.34883415449412325, 0.9515135049333393, 9.460948604516535e-2, 1.0565959767686024, -0.5864338880880218, -0.33718161458250573, 0.2983935427998611, 0.9596231131229661, 0.9227898889517011, 0.4810596983455916, 4.666896825990316e-2, 1.091127101514808, -0.2790199199591993, 1.1042343582279388, 0.7506814263685173, 1.0747271562378828, 0.8927328463575381, -0.740893879942483, -1.1086987637761554, 0.7439283296973215, 0.5207691915284437, 0.5792379438702503, -0.2490648858703582, -0.7942053087800607, -1.868268123437045, 1.3494342890793056, 0.16339080513787094, 1.2887268750471617, -0.20400147430858892, -0.14689262528741534, -0.22110848523559484, -1.6896037888088262, -2.096899996180469, 0.8918587011359251, 1.2234807928864462, -0.22966127145260323, 1.5519464017904743, -0.1467344706589123, -0.21184786976551484, 0.12572788480685362, 1.2528289838553173)
# File: /analysis/boot/boot342.R | repo: patperry/interaction-proc | license: no_license
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/lactin2_1995.R
\name{lactin2_1995}
\alias{lactin2_1995}
\title{Lactin2 model for fitting thermal performance curves}
\usage{
lactin2_1995(temp, a, b, tmax, delta_t)
}
\arguments{
\item{temp}{temperature in degrees centigrade}
\item{a}{constant that determines the steepness of the rising portion of the curve}
\item{b}{constant that determines the height of the overall curve}
\item{tmax}{the temperature at which the curve begins to decelerate beyond the optimum (ºC)}
\item{delta_t}{thermal safety margin (ºC)}
}
\value{
a numeric vector of rate values based on the temperatures and parameter values provided to the function
}
\description{
Lactin2 model for fitting thermal performance curves
}
\details{
Equation:
\deqn{rate = e^{a \cdot temp} - e^{a \cdot t_{max} - \bigg(\frac{t_{max} - temp}{\delta _{t}}\bigg)} + b}{%
rate = exp(a.temp) - exp(a.tmax - ((tmax - temp) / delta_t)) + b}
Start values in \code{get_start_vals} are derived from the data or sensible values from the literature.
Limits in \code{get_lower_lims} and \code{get_upper_lims} are derived from the data or based on extreme values that are unlikely to occur in ecological settings.
}
\note{
Generally we found this model easy to fit.
}
\examples{
# load in ggplot2
library(ggplot2)
# subset for the first TPC curve
data('chlorella_tpc')
d <- subset(chlorella_tpc, curve_id == 1)
# get start values
start_vals <- get_start_vals(d$temp, d$rate, model_name = 'lactin2_1995')
# fit model
mod <- nls.multstart::nls_multstart(rate~lactin2_1995(temp = temp, a, b, tmax, delta_t),
data = d,
iter = c(3,3,3,3),
start_lower = start_vals - 10,
start_upper = start_vals + 10,
lower = get_lower_lims(d$temp, d$rate, model_name = 'lactin2_1995'),
upper = get_upper_lims(d$temp, d$rate, model_name = 'lactin2_1995'),
supp_errors = 'Y',
convergence_count = FALSE)
# look at model fit
summary(mod)
# get predictions
preds <- data.frame(temp = seq(min(d$temp), max(d$temp), length.out = 100))
preds <- broom::augment(mod, newdata = preds)
# plot
ggplot(preds) +
geom_point(aes(temp, rate), d) +
geom_line(aes(temp, .fitted), col = 'blue') +
theme_bw()
}
\references{
Lactin, D.J., Holliday, N.J., Johnson, D.L. & Craigen, R. Improved rate models of temperature-dependent development by arthropods. Environmental Entomology 24, 69-75 (1995)
}
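The equation in \details{} drops straight into R. A minimal standalone sketch of the documented formula (not the packaged rTPC implementation, whose internals may differ):

```r
# Lactin-2 (Lactin et al. 1995): rate = exp(a*temp) - exp(a*tmax - (tmax - temp)/delta_t) + b
lactin2_sketch <- function(temp, a, b, tmax, delta_t) {
  exp(a * temp) - exp(a * tmax - (tmax - temp) / delta_t) + b
}

# At temp == tmax the two exponential terms cancel, so the rate there is exactly b
lactin2_sketch(temp = 35, a = 0.12, b = -0.3, tmax = 35, delta_t = 5)  # -0.3
```

This makes the role of b concrete: it sets the rate at tmax, while delta_t controls how sharply the curve collapses beyond the optimum.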
 | /man/lactin2_1995.Rd | no_license | padpadpadpad/rTPC | R | false | true | 2,408 | rd |
#'countsAll
#'
#'@docType data
#'@name countsAll
#'@format data.frame
NULL | /R/countsAll.R | no_license | mi2-warsaw/directRNAExplorer | R | false | false | 74 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/redshift-utils.R
\name{temp_table_name}
\alias{temp_table_name}
\title{Generate names for temp tables}
\usage{
temp_table_name(n = 1, prefix = "tt_")
}
\arguments{
\item{n}{integer. The number of names to generate}
\item{prefix}{character. The prefix to use for the table name, defaults to tt_}
}
\value{
a character vector of temp table names
}
\description{
Generates uuids (without dashes) to use as names for temp tables and adds a prefix.
}
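A base-R sketch of the behaviour described above. The real helper presumably calls a UUID generator; here a hypothetical `rand_hex` stand-in keeps the sketch dependency-free:

```r
# Generate n prefixed, dash-free temp table names.
# rand_hex() stands in for a real UUID generator such as uuid::UUIDgenerate().
temp_table_name_sketch <- function(n = 1, prefix = "tt_") {
  rand_hex <- function() paste(sample(c(0:9, letters[1:6]), 32, replace = TRUE), collapse = "")
  paste0(prefix, vapply(seq_len(n), function(i) rand_hex(), character(1)))
}

temp_table_name_sketch(2)  # e.g. "tt_3f9a..." "tt_b07c..."
```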
 | /man/temp_table_name.Rd | permissive | zapier/redshiftTools | R | false | true | 487 | rd |
#' Calculate Black-Scholes implied volatility
#'
#' @param price (vector of) option price
#' @param strike (vector of) strike price
#' @param spot (vector of) spot price
#' @param texp (vector of) time to expiry
#' @param intr interest rate
#' @param divr dividend rate
#' @param cp call/put sign. \code{1} for call, \code{-1} for put.
#' @param forward forward price. If given, \code{forward} overrides \code{spot}
#' @param df discount factor. If given, \code{df} overrides \code{intr}
#' @return Black-Scholes implied volatility
#'
#' @references Giner, G., & Smyth, G. K. (2016). statmod: Probability Calculations
#' for the Inverse Gaussian Distribution. The R Journal, 8(1), 339-351.
#' \url{https://doi.org/10.32614/RJ-2016-024}
#'
#' @export
#'
#' @examples
#' spot <- 100
#' strike <- 100
#' texp <- 1.2
#' sigma <- 0.2
#' intr <- 0.05
#' price <- 20
#' FER::BlackScholesImpvol(price, strike, spot, texp, intr=intr)
#'
#' @seealso \code{\link{BlackScholesPrice}}
#'
BlackScholesImpvol <- function(
price, strike=forward, spot, texp=1,
intr=0, divr=0, cp=1L,
forward=spot*exp(-divr*texp)/df, df=exp(-intr*texp)
){
optval <- (price/df - pmax(cp*(forward-strike), 0))/pmin(forward, strike)
  # invert via the quantile function of the inverse Gaussian distribution (Giner & Smyth, 2016)
  mu <- 2/abs(log(strike/forward))
  x <- statmod::qinvgauss(optval, mean=mu, lower.tail=FALSE)
sig <- 2/sqrt(x*texp)
return( sig )
}
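As a cross-check on the closed-form inversion above, implied volatility can also be recovered by brute-force root-finding on the Black-Scholes price. A self-contained sketch, restricted to calls with zero rates and dividends (so forward == spot):

```r
# Black-Scholes call price with zero rates/dividends; pnorm is base R
bs_call <- function(sigma, spot, strike, texp) {
  d1 <- (log(spot / strike) + 0.5 * sigma^2 * texp) / (sigma * sqrt(texp))
  d2 <- d1 - sigma * sqrt(texp)
  spot * pnorm(d1) - strike * pnorm(d2)
}

# Numerical inversion by root-finding, as an alternative to the qinvgauss route above
impvol_bisect <- function(price, spot, strike, texp) {
  uniroot(function(s) bs_call(s, spot, strike, texp) - price,
          interval = c(1e-6, 5), tol = 1e-10)$root
}

p <- bs_call(0.2, spot = 100, strike = 100, texp = 1.2)
impvol_bisect(p, 100, 100, 1.2)  # recovers 0.2
```

Root-finding is slower but makes a handy unit test for the closed-form inversion.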
 | /pkg/R/blackscholes_impvol.R | permissive | daifengqi/FE-R | R | false | false | 1,398 | r |
#!/usr/bin/env Rscript
library(data.table)
library(GagnonMR)
library(tidyverse)
library(furrr)
setwd("/mnt/sda/gagelo01/Projects/small_MR_exploration/replication_will_clean")
gwasvcf::set_bcftools()
gwasvcf::set_plink()
ldref <-"/home/couchr02/Mendel_Commun/Christian/LDlocal/EUR_rs"
harm <- fread("Data/Modified/harm_class.txt")
harm <- harm[class != "WHRonly-"] #there is no value at doing the analysis on WHRonly-
all_mr_methods_safely <- safely(GagnonMR::all_mr_methods)
options(future.globals.maxSize= 5e9)
plan(multisession, workers = 6, gc = TRUE) #I should try using multicore
res <- future_map(split(harm, harm$exposure_outcome),function(x) {all_mr_methods(x)}, .options = furrr_options(seed = TRUE)) %>%
rbindlist(.,fill = TRUE)
fwrite(res, "Data/Modified/res_class.txt")
message("This script finished without errors") | /Analysis/2bb_reswinkler.R | no_license | LaboArsenault/BMI_WC_NAFLD | R | false | false | 836 | r |
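The script's split / future_map / rbindlist pipeline, sketched dependency-free with base R (lapply standing in for furrr::future_map, do.call(rbind, ...) for data.table::rbindlist); the column names are toy stand-ins for the harmonised MR data:

```r
# Split a table by group, map a function over the pieces, re-bind the results
dat <- data.frame(exposure_outcome = rep(c("A", "B"), each = 3), beta = 1:6)
pieces <- split(dat, dat$exposure_outcome)
res <- do.call(rbind, lapply(pieces, function(x) {
  data.frame(exposure_outcome = x$exposure_outcome[1], mean_beta = mean(x$beta))
}))
res$mean_beta  # 2 and 5
```

Swapping lapply for future_map parallelises the per-group work without changing the shape of the result.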
# Programmed by Daniel Fuller
# Canada Research Chair in Population Physical Activity
# School of Human Kinetics and Recreation
# Memorial University of Newfoundland
# dfuller [AT] mun [DOT] ca | www.walkabilly.ca/home/
# Modified by Javad Rahimipour Anaraki on 28/11/17 updated 21/06/18
# Ph.D. Candidate
# Department of Computer Science
# Memorial University of Newfoundland
# jra066 [AT] mun [DOT] ca | www.cs.mun.ca/~jra066
# Modified by Hui (Henry) Luan
# Postdoc Fellow
# School of Human Kinetics and Recreation
# Memorial University of Newfoundland
# hluan [AT] mun [DOT] ca
# input: GENEActiv raw CSV data files
# output: Clean and collapsed to second level CSV data files
rm(list = ls())
#========================Libraries=========================
list.of.packages <-
c("lubridate",
"stringr",
"data.table",
"dplyr",
"car",
"kimisc")
new.packages <-
list.of.packages[!(list.of.packages %in% installed.packages()[, "Package"])]
if (length(new.packages))
install.packages(new.packages)
library(lubridate)
library(stringr)
library(data.table)
library(dplyr)
library(car)
library(kimisc)
#=========================Variables========================
OS <- Sys.info()
if (OS["sysname"] == "Windows") {
path <-
"Z:/HeroX/data/GENEActiv/"
intrPath <-
"Z:/HeroX/data/"
} else {
path <-
"/HeroX/data/GENEActiv/"
intrPath <-
"/HeroX/data/"
}
setwd(path)
wear_loc <- "wrist"
#Timezone
timeZone <- "America/St_Johns"
Sys.setenv(TZ = timeZone)
#Required user id to be processed
participants <-
list.dirs(path = path,
full.names = FALSE,
recursive = FALSE)
#====================Read in data files====================
for (j in 1:length(participants)) {
uid <- participants[j]
print(uid)
filenames <-
dir(paste0(path, uid),
pattern = "([0-9].csv)",
full.names = TRUE)
intervals <-
read.csv(paste(intrPath, "intervals.csv", sep = ""), sep = ",")
#=====================Data preparation======================
nFiles <- length(filenames)
if (nFiles > 0) {
for (i in 1:nFiles) {
if (file.size(filenames[i]) == 0 |
file.exists(paste0(path, uid, "/", unlist(strsplit(
basename(filenames[i]), "[.]"
))[1], "_labeled.csv"))) {
print(paste0("Skipping ", uid, ": raw file empty or labeled file already exists"))
next
}
accel_data <-
fread(
filenames[i],
skip = 99,
header = FALSE,
col.names = c(
"time",
"x_axis",
"y_axis",
"z_axis",
"lux",
"button",
"temp"
),
select = 1:7,
sep = ",",
data.table = FALSE,
nThread = 4
)
#Filtering data out based on uid and start and end date
usrInfo <- intervals[intervals[, "userid"] == uid,]
startDate <- as.character(usrInfo[, "start"])
endDate <- as.character(usrInfo[, "end"])
#Cut data based on start and end dates
accel_data <-
accel_data[(substring(format.Date(accel_data[, "time"]), 1, 10) >= format.Date(startDate)),]
accel_data <-
accel_data[(substring(format.Date(accel_data[, "time"]), 1, 10) <= format.Date(endDate)),]
fileName <-
paste0(path, uid, "/", unlist(strsplit(basename(filenames[i]), "[.]"))[1], "_labeled.csv")
if (nrow(accel_data) == 0) {
print(paste0("No data from ", startDate, " to ", endDate))
file.create(fileName)
next
}
#Correct time
accel_data$time1 <-
str_sub(accel_data$time, 1, str_length(accel_data$time) - 4) ## Remove the milliseconds
accel_data$time2 <-
ymd_hms(accel_data$time1) ## Create a POSIXct formated variable
accel_data$day <-
day(accel_data$time2) ## Create a day variable
accel_data$hour <-
hour(accel_data$time2) ## Create an hour variable
accel_data$minute <-
minute(accel_data$time2) ## Create a minute variable
accel_data$second <-
second(accel_data$time2) ## Create a second variable
#Calculate vector magnitude
accel_data$vec_mag = sqrt(accel_data$x_axis ^ 2 + accel_data$y_axis ^
2 + accel_data$z_axis ^ 2)
accel_data$vec_mag_g = accel_data$vec_mag - 1
#Collapse data to one second
accel_sec_genea <- accel_data %>%
group_by(day, hour, minute, second) %>%
summarise(
time = first(time),
m_x_axis = mean(x_axis),
m_y_axis = mean(y_axis),
m_z_axis = mean(z_axis),
vec_mag = mean(vec_mag),
vec_mag_g = mean(abs(vec_mag_g)),
vec_mag_median = median(vec_mag),
vec_mag_g_median = median(abs(vec_mag_g)),
sd_x_axis = sd(x_axis),
sd_y_axis = sd(y_axis),
sd_z_axis = sd(z_axis)
)
accel_sec_genea$activity_mean <-
car::recode(
accel_sec_genea$vec_mag_g,
"lo:0.190='1.Sedentary';
0.190:0.314='2.Light';
0.314:0.9989='3.Moderate';
0.9989:hi='4.Vigorous';",
as.factor = TRUE
)
accel_sec_genea$activity_median <-
car::recode(
accel_sec_genea$vec_mag_g_median,
"lo:0.190='1.Sedentary';
0.190:0.314='2.Light';
0.314:0.9989='3.Moderate';
0.9989:hi='4.Vigorous';",
as.factor = TRUE
)
#===========================Sleep==========================
x <- accel_sec_genea$m_x_axis
y <- accel_sec_genea$m_y_axis
z <- accel_sec_genea$m_z_axis
accel_sec_genea$angle <-
atan(z / sqrt(x ^ 2 + y ^ 2)) * 180 / pi
start_id <- seq.int(1, nrow(accel_sec_genea) - 1, by = 5)
end_id <-
c((start_id - 1)[2:length(start_id)], nrow(accel_sec_genea))
mean_angle5s <-
sapply(1:length(start_id), function(i) {
mean(accel_sec_genea$angle[start_id[i]:end_id[i]])
})
flag_s <- seq(1, length(mean_angle5s))
flag_e <- flag_s + 1
flag_e <- flag_e[-length(flag_e)]
flag <-
sapply(1:length(flag_e), function(i) {
ifelse(abs(mean_angle5s[flag_e[i]] - mean_angle5s[flag_s[i]]) <= 5, 1, 0)
})
flag_final <- c(1, flag)
zero_IDs <- which(flag_final == 0)
zero_s <- zero_IDs[-length(zero_IDs)]
zero_e <- zero_IDs[-1]
for (i in 1:length(zero_s)) {
if ((zero_e[i] - zero_s[i]) < 121 && (zero_e[i] - zero_s[i]) > 1)
flag_final[(zero_s[i] + 1):(zero_e[i] - 1)] = 0
}
# Reverse to second-level
flag_second <- rep(flag_final, each = 5)
sleep_id <- which(flag_second == 1)
activity_final <- as.character(accel_sec_genea$activity_mean)
activity_final[sleep_id] <- "0.Sleep"
accel_sec_genea$activity_final <-
activity_final[1:nrow(accel_sec_genea)]
table(accel_sec_genea$activity_final)
#=========================Exporting========================
#Save the results as a CSV file
write.csv(accel_sec_genea, fileName,
row.names = FALSE)
}
}
}
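The sleep heuristic above — 5-s mean arm angle, with epochs whose angle changes by no more than 5 degrees from the previous epoch flagged as potential sleep — can be exercised on synthetic data. A compact sketch of that core logic (not a drop-in replacement for the block above, which additionally bridges short non-still gaps under 121 epochs):

```r
# Arm angle from mean axis values, exactly as computed in the script
arm_angle <- function(x, y, z) atan(z / sqrt(x^2 + y^2)) * 180 / pi
arm_angle(0, 0, 1)  # 90: z-axis pointing straight up

# Flag 5-sample epochs whose mean angle changed <= 5 degrees from the previous epoch
flag_still_epochs <- function(angle, epoch = 5, tol = 5) {
  n_epoch <- floor(length(angle) / epoch)
  means <- colMeans(matrix(angle[1:(n_epoch * epoch)], nrow = epoch))
  c(1, as.integer(abs(diff(means)) <= tol))  # first epoch defaults to "still", as in the script
}

still  <- rep(45, 50)                         # motionless wrist: constant angle
moving <- 45 + 20 * sin(seq_len(50) / 2)      # oscillating angle
out <- flag_still_epochs(c(still, moving))
out  # epochs 1-10 (motionless) flagged 1; most movement epochs flagged 0
```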
 | /GENEActiv/GENEActivDataPrep.R | no_license | walkabillylab/wearable_device_classification | R | false | false | 7,536 | r |
####################
## Function for visualizing univariate relations
####################
VisualizeRelation <- function(data=deer[["summer"]],model=GLMMs[["summer"]],predvar="dist_to_water",type="RF"){
len <- 100
isfac <- is.factor(data[[predvar]])
dataclasses <- sapply(data,class)
if(!isfac){
if(type=="GLMM"){
standvar <- sprintf("stand_%s",predvar)
}else{
standvar <- predvar
}
dim <- data[,standvar]
range <- seq(min(dim),max(dim),length=len)
realmean <- mean(data[,predvar])
realsd <- sd(data[,predvar])
newdata <- data.frame(temp=range)
# head(newdata,50)
names(newdata) <- c(standvar)
if(type=="GLMM"){
allvars <- names(model@frame)
}else{
allvars <- pred.names
}
othervars <- allvars[!allvars%in%c(standvar,"used")]
}else{
faclevs <- levels(data[[predvar]])
newdata <- data.frame(temp=factor(faclevs,levels=faclevs))
names(newdata) <- c(predvar)
if(type=="GLMM"){
allvars <- names(model@frame)
}else{
allvars <- pred.names
}
othervars <- allvars[!allvars%in%c(predvar,"used")]
}
for(var in othervars){
thisvar <- data[,var]
if(is.factor(thisvar)){
tab <- table(thisvar)
vals <- names(tab)
levs <- levels(thisvar)
mostcom <- vals[which.max(tab)]
newvec <- factor(rep(mostcom,times=nrow(newdata)),levels=levs)
newdata[,var] <- newvec
}else{
newdata[,var] <- mean(thisvar)
}
}
if(type=="GLMM"){
pred <- plogis(predict(model,newdata))
}else{
for(i in pred.names){
if(dataclasses[i]=="integer") newdata[,i] <- as.integer(round(newdata[,i]))
}
pred <- numeric(nrow(newdata))
for(i in 1:nrow(newdata)){
pred[i]<-as.numeric(predict(model,newdata[i,],OOB=TRUE,type="prob")[[1]][,2])
}
}
if(!isfac){
plot(range,pred,xlab=predictorNames[pred.names==predvar],ylab="Use probability",type="l",lwd=2,xaxt="n")
ats <- seq(min(range),max(range),length=6)
if(type=="GLMM"){
axis(1,ats,labels = round(realmean+ats*realsd))
}else{
axis(1,ats,labels = round(ats))
}
rug(jitter(data[seq(1,nrow(data),50),standvar]), ticksize = 0.03, side = 1, lwd = 0.5, col = par("fg"))
}else{
par(mai=c(1.5,1,1,.2))
plot(pred~newdata[,1],xlab="",main=predictorNames[pred.names==predvar],ylab="Use probability",lwd=2,las=2)
}
}
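VisualizeRelation() labels the x-axis by un-standardizing the predictor: with stand_x = (x - mean(x)) / sd(x), the original scale is recovered as mean(x) + stand_x * sd(x). A quick check with hypothetical values:

```r
x <- c(120, 340, 560, 780, 1000)           # hypothetical dist_to_water values
stand_x <- (x - mean(x)) / sd(x)           # the stand_ prefix convention used above
x_back <- mean(x) + stand_x * sd(x)        # what axis(1, ats, labels = ...) computes
all.equal(x_back, x)                       # TRUE
```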
####################
## Function for visualizing interactions
####################
VisualizeInteraction <- function(data=deer[["summer"]],model=GLMMs[["summer"]],var1="dist_to_water",var2="elevation",type="GLMM"){
len <- 50
if(type=="GLMM"){
standvar1 <- sprintf("stand_%s",var1)
standvar2 <- sprintf("stand_%s",var2)
realmean1 <- mean(data[,var1])
realsd1 <- sd(data[,var1])
realmean2 <- mean(data[,var2])
realsd2 <- sd(data[,var2])
}else{
standvar1 <- var1
standvar2 <- var2
}
firstdim <- data[,standvar1]
seconddim <- data[,standvar2]
range1 <- seq(min(firstdim),max(firstdim),length=len)
range2 <- seq(min(seconddim),max(seconddim),length=len)
newdata <- expand.grid(range1,range2)
# head(newdata,50)
names(newdata) <- c(standvar1,standvar2)
if(type=="GLMM"){
allvars <- names(model@frame)
}else{
allvars <- pred.names
}
othervars <- allvars[!allvars%in%c(standvar1,standvar2,"used")]
for(var in othervars){
thisvar <- data[,var]
if(is.factor(thisvar)){
tab <- table(thisvar)
vals <- names(tab)
levs <- levels(thisvar)
mostcom <- vals[which.max(tab)]
newvec <- factor(rep(mostcom,times=nrow(newdata)),levels=levs)
newdata[,var] <- newvec
}else{
newdata[,var] <- mean(thisvar)
}
}
if(type=="GLMM"){
pred <- plogis(predict(model,newdata))
}else{
dataclasses <- sapply(data, class)  # needed below; unlike VisualizeRelation(), not defined earlier in this function
for(i in pred.names){
if(dataclasses[i]=="integer") newdata[,i] <- as.integer(round(newdata[,i]))
}
pred <- numeric(nrow(newdata))
for(i in 1:nrow(newdata)){
pred[i]<-as.numeric(predict(model,newdata[i,],OOB=TRUE,type="prob")[[1]][,2])
}
}
predmat <- matrix(pred,nrow=len,ncol=len)
par(mai=c(0,0,0,0))
if(type=="GLMM"){
persp(realmean1+realsd1*range1,realmean2+realsd2*range2,predmat,xlab=var1,ylab=var2,theta = 55, phi = 40, r = sqrt(10), d = 3,
ticktype = "detailed", mgp = c(4, 1, 0))
}else{
persp(range1,range2,predmat,xlab=var1,ylab=var2,theta = 55, phi = 40, r = sqrt(10), d = 3,
ticktype = "detailed", mgp = c(4, 1, 0))
}
}
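Both Visualize* helpers build their newdata the same way: vary the focal predictor(s) over a grid while holding every other covariate at its mean (numeric) or most common level (factor). The core of that logic, sketched with toy data:

```r
# Most common level of a factor, as both helpers compute via table()/which.max()
most_common <- function(f) names(which.max(table(f)))

d <- data.frame(elev = c(100, 200, 300, 400),
                veg  = factor(c("shrub", "shrub", "forest", "shrub")))
grid <- data.frame(elev = seq(min(d$elev), max(d$elev), length.out = 5))
grid$veg <- factor(most_common(d$veg), levels = levels(d$veg))
grid  # elev varies; veg held at "shrub"
```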
#####################
## CROSS VALIDATION
#####################
n.folds=3
type= "GLMM" #"RF"
season="summer"
fullmodel=GLMMs[[season]] #RFs[[season]]
CrossValidateByDeer <- function(n.folds,season="summer",type="RF",plot=F){
uniquedeer <- as.character(unique(deer[[season]]$altid))
ndeer <- length(uniquedeer)
folds_df <- data.frame(
deer = uniquedeer,
fold = rep_len(1:n.folds,ndeer)
)
foldVector <- folds_df$fold[match(as.character(deer[[season]]$altid),folds_df$deer)]
predictCols <- which(names(deer[[season]])%in%pred.names)
if(type=="RF"){
fullmodel<-RFs[[season]]
}else{
fullmodel <- GLMMs[[season]]
}
CVresults <- list()
CVresults$CVpred <- numeric(nrow(deer[[season]]))
CVresults$realpred <- numeric(nrow(deer[[season]]))
CVresults$observed <- numeric(nrow(deer[[season]]))
if(type=="RF"){
response="used_fac" #"resp_factor"
}else{
response="used" #"resp_factor"
}
counter = 1
for(i in 1:n.folds){
if(type=="RF"){
model <- cforest(formula1, data = deer[[season]][which(foldVector!=i),], controls=cforestControl)
predict_CV <- predict(model,newdata=deer[[season]][which(foldVector==i),],type="prob")
predict_real <- predict(fullmodel,newdata=deer[[season]][which(foldVector==i),],type="prob")
REAL <- deer[[season]]$used[which(foldVector==i)]
for(j in 1:length(which(foldVector==i))){
CVresults$CVpred[counter] <- as.numeric(predict_CV[[j]][,2])
CVresults$observed[counter] <- as.numeric(REAL[j])
CVresults$realpred[counter] <- as.numeric(predict_real[[j]][,2])
counter = counter + 1
}
}else{
if(season=="summer"){
model <- glmer(used~stand_dist_to_water + stand_cos_aspect + stand_sin_aspect +
stand_elevation + stand_slope + veg_class + stand_elevation:stand_slope +
stand_dist_to_water:stand_slope + stand_dist_to_water:stand_elevation +
(1|altid), family="binomial", data=deer[[season]][which(foldVector!=i),],na.action="na.fail")
}else{
model <- glmer(used~stand_dist_to_water + stand_cos_aspect + stand_sin_aspect +
stand_elevation + stand_slope + veg_class + stand_elevation:stand_slope +
stand_dist_to_water:stand_slope +
(1|altid), family="binomial", data=deer[[season]][which(foldVector!=i),],na.action="na.fail")
}
CVresults$CVpred[which(foldVector==i)] <- plogis(predict(model,newdata=deer[[season]][which(foldVector==i),],allow.new.levels = TRUE))
CVresults$realpred[which(foldVector==i)] <- predict(fullmodel,newdata=deer[[season]][which(foldVector==i),],allow.new.levels = TRUE)
CVresults$observed[which(foldVector==i)] <- deer[[season]]$used[which(foldVector==i)]
}
}
CVresults$CV_RMSE = sqrt(mean((CVresults$observed-CVresults$CVpred)^2)) # root mean squared error for holdout samples in n-fold cross-validation
CVresults$real_RMSE = sqrt(mean((CVresults$observed-CVresults$realpred)^2)) # root mean squared error for predictions from the final model
# realprediction <- predict(rf_model1,newdata=df,type="prob")
if(plot){
graphics.off()
par(mfrow=c(2,1))
par(ask=T)
pred <- prediction(CVresults$CVpred,CVresults$observed) # for holdout samples in cross-validation
perf <- performance(pred,"tpr","fpr")
auc <- performance(pred,"auc")
plot(perf, main=sprintf("%s Cross Validation",type))
text(.9,.1,paste("AUC = ",round(auc@y.values[[1]],2),sep=""))
pred <- prediction(CVresults$realpred,CVresults$observed) # for final model
perf <- performance(pred,"tpr","fpr")
auc <- performance(pred,"auc")
plot(perf, main=sprintf("%s Full Model",type))
text(.9,.1,paste("AUC = ",round(auc@y.values[[1]],2),sep=""))
}else{
pred <- prediction(CVresults$CVpred,CVresults$observed) # for holdout samples in cross-validation
perf <- performance(pred,"tpr","fpr")
CVresults$auc_CV <- performance(pred,"auc")
pred <- prediction(CVresults$realpred,CVresults$observed) # for final model
perf <- performance(pred,"tpr","fpr")
CVresults$auc_full <- performance(pred,"auc")
}
# COHEN KAPPA statistics
thresholds <- seq(0.01,0.99,length=101) # "artificial" extinction thresholds across which to examine performance
kappa <- numeric(length(thresholds))
for(i in 1:length(thresholds)){
trueLabels <- CVresults$observed
predLabels <- ifelse(CVresults$CVpred>=thresholds[i],1,0)
tot <- length(CVresults$observed)
tp <- length(which((trueLabels==1)&(predLabels==1)))
tn <- length(which((trueLabels==0)&(predLabels==0)))
fp <- length(which((trueLabels==0)&(predLabels==1)))
fn <- length(which((trueLabels==1)&(predLabels==0)))
pr_agree <- (tp+tn)/tot # overall agreement, or accuracy
pr_agree_rand <- ((tp+fn)/tot)*((tp+fp)/tot)+((fn+tn)/tot)*((fp+tn)/tot)
kappa[i] <- (pr_agree-pr_agree_rand)/(1-pr_agree_rand)
}
#plot(thresholds,kappa,type="l",xlab="Threshold", ylab="Cohen's Kappa", main="Holdout sample performance")
# find threshold value associated with highest Kappa for C-V data
CVresults$cutoff_CV <- thresholds[which.max(kappa)] # max kappa cutoff
kappa <- numeric(length(thresholds))
for(i in 1:length(thresholds)){
trueLabels <- CVresults$observed
predLabels <- ifelse(CVresults$realpred>=thresholds[i],1,0)
tot <- length(CVresults$observed)
tp <- length(which((trueLabels==1)&(predLabels==1)))
tn <- length(which((trueLabels==0)&(predLabels==0)))
fp <- length(which((trueLabels==0)&(predLabels==1)))
fn <- length(which((trueLabels==1)&(predLabels==0)))
pr_agree <- (tp+tn)/tot # overall agreement, or accuracy
pr_agree_rand <- ((tp+fn)/tot)*((tp+fp)/tot)+((fn+tn)/tot)*((fp+tn)/tot)
kappa[i] <- (pr_agree-pr_agree_rand)/(1-pr_agree_rand)
}
#plot(thresholds,kappa,type="l",xlab="Threshold", ylab="Cohen's Kappa", main="Performance: full model")
CVresults$cutoff_full <- thresholds[which.max(kappa)] # max kappa cutoff
### display confusion matrix and kappa for a single threshold
trueLabels <- CVresults$observed
predLabels <- ifelse(CVresults$CVpred>=CVresults$cutoff_CV,1,0)
tot <- length(CVresults$observed)
tp <- length(which((trueLabels==1)&(predLabels==1)))
tn <- length(which((trueLabels==0)&(predLabels==0)))
fp <- length(which((trueLabels==0)&(predLabels==1)))
fn <- length(which((trueLabels==1)&(predLabels==0)))
pr_agree <- (tp+tn)/tot # overall agreement, or accuracy
pr_agree_rand <- ((tp+fn)/tot)*((tp+fp)/tot)+((fn+tn)/tot)*((fp+tn)/tot)
CVresults$bestkappa_CV <- (pr_agree-pr_agree_rand)/(1-pr_agree_rand)
CVresults$confusionmat <- matrix(c(tn,fn,fp,tp),nrow=2,ncol=2)
rownames(CVresults$confusionmat) <- c("Actual not used","Actual used")
colnames(CVresults$confusionmat) <- c("Predicted not used","Predicted used")
CVresults$sensitivity <- tp/(tp+fn)
CVresults$specificity <- tn/(tn+fp)
CVresults$toterror <- (fn+fp)/tot
CVresults$CVpred[which(CVresults$CVpred==1)] <- 0.999999
CVresults$CVpred[which(CVresults$CVpred==0)] <- 0.000001
CVresults$realpred[which(CVresults$realpred==1)] <- 0.999999
CVresults$realpred[which(CVresults$realpred==0)] <- 0.000001
realdata = CVresults$observed
fit_deviance_CV <- mean(-2*(dbinom(CVresults$observed,1,CVresults$CVpred,log=T)-dbinom(realdata,1,realdata,log=T)))
fit_deviance_real <- mean(-2*(dbinom(CVresults$observed,1,CVresults$realpred,log=T)-dbinom(realdata,1,realdata,log=T)))
null_deviance <- mean(-2*(dbinom(CVresults$observed,1,mean(CVresults$observed),log=T)-dbinom(realdata,1,realdata,log=T)))
CVresults$deviance_explained_CV <- (null_deviance-fit_deviance_CV)/null_deviance # based on holdout samples
CVresults$deviance_explained_real <- (null_deviance-fit_deviance_real)/null_deviance # based on full model...
return(CVresults)
}
PlotPerformance <- function(CVresults){
graphics.off()
par(mfrow=c(2,1))
#par(ask=T)
pred <- prediction(CVresults$CVpred,CVresults$observed) # for holdout samples in cross-validation
perf <- performance(pred,"tpr","fpr")
auc <- performance(pred,"auc")
plot(perf, main=sprintf("%s Cross Validation",type))
text(.9,.1,paste("AUC = ",round(auc@y.values[[1]],2),sep=""))
pred <- prediction(CVresults$realpred,CVresults$observed) # for final model
perf <- performance(pred,"tpr","fpr")
auc <- performance(pred,"auc")
plot(perf, main=sprintf("%s Full Model",type))
text(.9,.1,paste("AUC = ",round(auc@y.values[[1]],2),sep=""))
}
| /METHODS_PAPER_ALLFUNCTIONS.R | no_license | bniebuhr/Mule_Deer_RFvsRSF | R | false | false | 13,416 | r | ####################
## Function for visualizing univariate relations
####################
## Packages used by the functions in this file (assumed to be attached
## elsewhere in the original repo; loaded here so the file is self-contained):
library(lme4)   # glmer
library(party)  # cforest
library(ROCR)   # prediction, performance
VisualizeRelation <- function(data=deer[["summer"]],model=GLMMs[["summer"]],predvar="dist_to_water",type="RF"){
len <- 100
isfac <- is.factor(data[[predvar]])
dataclasses <- sapply(data,class)
if(!isfac){
if(type=="GLMM"){
standvar <- sprintf("stand_%s",predvar)
}else{
standvar <- predvar
}
dim <- data[,standvar]
range <- seq(min(dim),max(dim),length=len)
realmean <- mean(data[,predvar])
realsd <- sd(data[,predvar])
newdata <- data.frame(temp=range)
# head(newdata,50)
names(newdata) <- c(standvar)
if(type=="GLMM"){
allvars <- names(model@frame)
}else{
allvars <- pred.names
}
othervars <- allvars[!allvars%in%c(standvar,"used")]
}else{
faclevs <- levels(data[[predvar]])
newdata <- data.frame(temp=factor(faclevs,levels=faclevs))
names(newdata) <- c(predvar)
if(type=="GLMM"){
allvars <- names(model@frame)
}else{
allvars <- pred.names
}
othervars <- allvars[!allvars%in%c(predvar,"used")]
}
var = othervars[2]
for(var in othervars){
thisvar <- data[,var]
if(is.factor(thisvar)){
tab <- table(thisvar)
vals <- names(tab)
levs <- levels(thisvar)
mostcom <- vals[which.max(tab)]
newvec <- factor(rep(mostcom,times=nrow(newdata)),levels=levs)
newdata[,var] <- newvec
}else{
newdata[,var] <- mean(thisvar)
}
}
if(type=="GLMM"){
pred <- plogis(predict(model,newdata))
}else{
i=pred.names[3]
for(i in pred.names){
if(dataclasses[i]=="integer") newdata[,i] <- as.integer(round(newdata[,i]))
}
pred <- numeric(nrow(newdata))
i=1
for(i in 1:nrow(newdata)){
pred[i]<-as.numeric(predict(model,newdata[i,],OOB=TRUE,type="prob")[[1]][,2])
}
}
if(!isfac){
plot(range,pred,xlab=predictorNames[pred.names==predvar],ylab="Use probability",type="l",lwd=2,xaxt="n")
ats <- seq(min(range),max(range),length=6)
if(type=="GLMM"){
axis(1,ats,labels = round(realmean+ats*realsd))
}else{
axis(1,ats,labels = round(ats))
}
rug(jitter(data[seq(1,nrow(data),50),standvar]), ticksize = 0.03, side = 1, lwd = 0.5, col = par("fg"))
}else{
par(mai=c(1.5,1,1,.2))
plot(pred~newdata[,1],xlab="",main=predictorNames[pred.names==predvar],ylab="Use probability",lwd=2,las=2)
}
}
####################
## Function for visualizing interactions
####################
VisualizeInteraction <- function(data=deer[["summer"]],model=GLMMs[["summer"]],var1="dist_to_water",var2="elevation",type="GLMM"){
len <- 50
dataclasses <- sapply(data, class) # defined here; the original relied on a leftover global from VisualizeRelation
if(type=="GLMM"){
standvar1 <- sprintf("stand_%s",var1)
standvar2 <- sprintf("stand_%s",var2)
realmean1 <- mean(data[,var1])
realsd1 <- sd(data[,var1])
realmean2 <- mean(data[,var2])
realsd2 <- sd(data[,var2])
}else{
standvar1 <- var1
standvar2 <- var2
}
firstdim <- data[,standvar1]
seconddim <- data[,standvar2]
range1 <- seq(min(firstdim),max(firstdim),length=len)
range2 <- seq(min(seconddim),max(seconddim),length=len)
newdata <- expand.grid(range1,range2)
# head(newdata,50)
names(newdata) <- c(standvar1,standvar2)
if(type=="GLMM"){
allvars <- names(model@frame)
}else{
allvars <- pred.names
}
othervars <- allvars[!allvars%in%c(standvar1,standvar2,"used")]
var = othervars[2]
for(var in othervars){
thisvar <- data[,var]
if(is.factor(thisvar)){
tab <- table(thisvar)
vals <- names(tab)
levs <- levels(thisvar)
mostcom <- vals[which.max(tab)]
newvec <- factor(rep(mostcom,times=nrow(newdata)),levels=levs)
newdata[,var] <- newvec
}else{
newdata[,var] <- mean(thisvar)
}
}
if(type=="GLMM"){
pred <- plogis(predict(model,newdata))
}else{
i=pred.names[3]
for(i in pred.names){
if(dataclasses[i]=="integer") newdata[,i] <- as.integer(round(newdata[,i]))
}
pred <- numeric(nrow(newdata))
i=1
for(i in 1:nrow(newdata)){
pred[i]<-as.numeric(predict(model,newdata[i,],OOB=TRUE,type="prob")[[1]][,2])
}
}
predmat <- matrix(pred,nrow=len,ncol=len)
par(mai=c(0,0,0,0))
if(type=="GLMM"){
persp(realmean1+realsd1*range1,realmean2+realsd2*range2,predmat,xlab=var1,ylab=var2,theta = 55, phi = 40, r = sqrt(10), d = 3,
ticktype = "detailed", mgp = c(4, 1, 0))
}else{
persp(range1,range2,predmat,xlab=var1,ylab=var2,theta = 55, phi = 40, r = sqrt(10), d = 3,
ticktype = "detailed", mgp = c(4, 1, 0))
}
}
#####################
## CROSS VALIDATION
#####################
n.folds=3
type= "GLMM" #"RF"
season="summer"
fullmodel=GLMMs[[season]] #RFs[[season]]
CrossValidateByDeer <- function(n.folds,season="summer",type="RF",plot=F){
uniquedeer <- as.character(unique(deer[[season]]$altid))
ndeer <- length(uniquedeer)
folds_df <- data.frame(
deer = uniquedeer,
fold = rep_len(1:n.folds,ndeer)
)
foldVector <- folds_df$fold[match(as.character(deer[[season]]$altid),folds_df$deer)]
predictCols <- which(names(deer[[season]])%in%pred.names)
if(type=="RF"){
fullmodel<-RFs[[season]]
}else{
fullmodel <- GLMMs[[season]]
}
CVresults <- list()
CVresults$CVpred <- numeric(nrow(deer[[season]]))
CVresults$realpred <- numeric(nrow(deer[[season]]))
CVresults$observed <- numeric(nrow(deer[[season]]))
if(type=="RF"){
response="used_fac" #"resp_factor"
}else{
response="used" #"resp_factor"
}
counter = 1
i=1
for(i in 1:n.folds){
if(type=="RF"){
model <- cforest(formula1, data = deer[[season]][which(foldVector!=i),], controls=cforestControl)
predict_CV <- predict(model,newdata=deer[[season]][which(foldVector==i),],type="prob")
predict_real <- predict(fullmodel,newdata=deer[[season]][which(foldVector==i),],type="prob")
REAL <- deer[[season]]$used[which(foldVector==i)]
j=1
for(j in 1:length(which(foldVector==i))){
CVresults$CVpred[counter] <- as.numeric(predict_CV[[j]][,2])
CVresults$observed[counter] <- as.numeric(REAL[j])
CVresults$realpred[counter] <- as.numeric(predict_real[[j]][,2])
counter = counter + 1
}
}else{
if(season=="summer"){
model <- glmer(used~stand_dist_to_water + stand_cos_aspect + stand_sin_aspect +
stand_elevation + stand_slope + veg_class + stand_elevation:stand_slope +
stand_dist_to_water:stand_slope + stand_dist_to_water:stand_elevation +
(1|altid), family="binomial", data=deer[[season]][which(foldVector!=i),],na.action="na.fail")
}else{
model <- glmer(used~stand_dist_to_water + stand_cos_aspect + stand_sin_aspect +
stand_elevation + stand_slope + veg_class + stand_elevation:stand_slope +
stand_dist_to_water:stand_slope +
(1|altid), family="binomial", data=deer[[season]][which(foldVector!=i),],na.action="na.fail")
}
CVresults$CVpred[which(foldVector==i)] <- plogis(predict(model,newdata=deer[[season]][which(foldVector==i),],allow.new.levels = TRUE))
CVresults$realpred[which(foldVector==i)] <- predict(fullmodel,newdata=deer[[season]][which(foldVector==i),],allow.new.levels = TRUE)
CVresults$observed[which(foldVector==i)] <- deer[[season]]$used[which(foldVector==i)]
}
}
CVresults$CV_RMSE = sqrt(mean((CVresults$observed-CVresults$CVpred)^2)) # root mean squared error for holdout samples in the n.folds-fold cross-validation
CVresults$real_RMSE = sqrt(mean((CVresults$observed-CVresults$realpred)^2)) # root mean squared error for residuals from final model
# realprediction <- predict(rf_model1,newdata=df,type="prob")
if(plot){
graphics.off()
par(mfrow=c(2,1))
par(ask=T)
pred <- prediction(CVresults$CVpred,CVresults$observed) # for holdout samples in cross-validation
perf <- performance(pred,"tpr","fpr")
auc <- performance(pred,"auc")
plot(perf, main=sprintf("%s Cross Validation",type))
text(.9,.1,paste("AUC = ",round(auc@y.values[[1]],2),sep=""))
pred <- prediction(CVresults$realpred,CVresults$observed) # for final model
perf <- performance(pred,"tpr","fpr")
auc <- performance(pred,"auc")
plot(perf, main=sprintf("%s Full Model",type))
text(.9,.1,paste("AUC = ",round(auc@y.values[[1]],2),sep=""))
}else{
pred <- prediction(CVresults$CVpred,CVresults$observed) # for holdout samples in cross-validation
perf <- performance(pred,"tpr","fpr")
CVresults$auc_CV <- performance(pred,"auc")
pred <- prediction(CVresults$realpred,CVresults$observed) # for final model
perf <- performance(pred,"tpr","fpr")
CVresults$auc_full <- performance(pred,"auc")
}
# COHEN KAPPA statistics
thresholds <- seq(0.01,0.99,length=101) # classification thresholds across which to examine performance
kappa <- numeric(length(thresholds))
for(i in 1:length(thresholds)){
trueLabels <- CVresults$observed
predLabels <- ifelse(CVresults$CVpred>=thresholds[i],1,0)
tot <- length(CVresults$observed)
tp <- length(which((trueLabels==1)&(predLabels==1)))
tn <- length(which((trueLabels==0)&(predLabels==0)))
fp <- length(which((trueLabels==0)&(predLabels==1)))
fn <- length(which((trueLabels==1)&(predLabels==0)))
pr_agree <- (tp+tn)/tot # overall agreement, or accuracy
pr_agree_rand <- ((tp+fn)/tot)*((tp+fp)/tot)+((fn+tn)/tot)*((fp+tn)/tot)
kappa[i] <- (pr_agree-pr_agree_rand)/(1-pr_agree_rand)
}
#plot(thresholds,kappa,type="l",xlab="Threshold", ylab="Cohen's Kappa", main="Holdout sample performance")
# find threshold value associated with highest Kappa for C-V data
CVresults$cutoff_CV <- thresholds[which.max(kappa)] # max kappa cutoff
kappa <- numeric(length(thresholds))
for(i in 1:length(thresholds)){
trueLabels <- CVresults$observed
predLabels <- ifelse(CVresults$realpred>=thresholds[i],1,0)
tot <- length(CVresults$observed)
tp <- length(which((trueLabels==1)&(predLabels==1)))
tn <- length(which((trueLabels==0)&(predLabels==0)))
fp <- length(which((trueLabels==0)&(predLabels==1)))
fn <- length(which((trueLabels==1)&(predLabels==0)))
pr_agree <- (tp+tn)/tot # overall agreement, or accuracy
pr_agree_rand <- ((tp+fn)/tot)*((tp+fp)/tot)+((fn+tn)/tot)*((fp+tn)/tot)
kappa[i] <- (pr_agree-pr_agree_rand)/(1-pr_agree_rand)
}
#plot(thresholds,kappa,type="l",xlab="Threshold", ylab="Cohen's Kappa", main="Performance: full model")
CVresults$cutoff_full <- thresholds[which.max(kappa)] # max kappa cutoff
### display confusion matrix and kappa for a single threshold
trueLabels <- CVresults$observed
predLabels <- ifelse(CVresults$CVpred>=CVresults$cutoff_CV,1,0)
tot <- length(CVresults$observed)
tp <- length(which((trueLabels==1)&(predLabels==1)))
tn <- length(which((trueLabels==0)&(predLabels==0)))
fp <- length(which((trueLabels==0)&(predLabels==1)))
fn <- length(which((trueLabels==1)&(predLabels==0)))
pr_agree <- (tp+tn)/tot # overall agreement, or accuracy
pr_agree_rand <- ((tp+fn)/tot)*((tp+fp)/tot)+((fn+tn)/tot)*((fp+tn)/tot)
CVresults$bestkappa_CV <- (pr_agree-pr_agree_rand)/(1-pr_agree_rand)
CVresults$confusionmat <- matrix(c(tn,fn,fp,tp),nrow=2,ncol=2)
rownames(CVresults$confusionmat) <- c("Actual not used","Actual used")
colnames(CVresults$confusionmat) <- c("Predicted not used","Predicted used")
CVresults$sensitivity <- tp/(tp+fn)
CVresults$specificity <- tn/(tn+fp)
CVresults$toterror <- (fn+fp)/tot
CVresults$CVpred[which(CVresults$CVpred==1)] <- 0.999999
CVresults$CVpred[which(CVresults$CVpred==0)] <- 0.000001
CVresults$realpred[which(CVresults$realpred==1)] <- 0.999999
CVresults$realpred[which(CVresults$realpred==0)] <- 0.000001
realdata = CVresults$observed
fit_deviance_CV <- mean(-2*(dbinom(CVresults$observed,1,CVresults$CVpred,log=T)-dbinom(realdata,1,realdata,log=T)))
fit_deviance_real <- mean(-2*(dbinom(CVresults$observed,1,CVresults$realpred,log=T)-dbinom(realdata,1,realdata,log=T)))
null_deviance <- mean(-2*(dbinom(CVresults$observed,1,mean(CVresults$observed),log=T)-dbinom(realdata,1,realdata,log=T)))
CVresults$deviance_explained_CV <- (null_deviance-fit_deviance_CV)/null_deviance # based on holdout samples
CVresults$deviance_explained_real <- (null_deviance-fit_deviance_real)/null_deviance # based on full model...
return(CVresults)
}
PlotPerformance <- function(CVresults, type = "Model"){ # 'type' was an undefined free variable inside this function in the original
graphics.off()
par(mfrow=c(2,1))
#par(ask=T)
pred <- prediction(CVresults$CVpred,CVresults$observed) # for holdout samples in cross-validation
perf <- performance(pred,"tpr","fpr")
auc <- performance(pred,"auc")
plot(perf, main=sprintf("%s Cross Validation",type))
text(.9,.1,paste("AUC = ",round(auc@y.values[[1]],2),sep=""))
pred <- prediction(CVresults$realpred,CVresults$observed) # for final model
perf <- performance(pred,"tpr","fpr")
auc <- performance(pred,"auc")
plot(perf, main=sprintf("%s Full Model",type))
text(.9,.1,paste("AUC = ",round(auc@y.values[[1]],2),sep=""))
}
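A hypothetical driver for the functions above (a sketch, not part of the original file; it assumes the global objects the functions already reference — `deer`, `GLMMs`, `RFs`, `pred.names`, `formula1`, `cforestControl` — have been created earlier in the analysis):

```r
# Sketch only -- the object names below come from the function defaults above.
cv_glmm <- CrossValidateByDeer(n.folds = 3, season = "summer", type = "GLMM", plot = FALSE)
cv_glmm$CV_RMSE          # cross-validated root mean squared error
cv_glmm$confusionmat     # confusion matrix at the max-kappa cutoff
PlotPerformance(cv_glmm) # ROC curves for cross-validated and full-model predictions
```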
#Download and unzip data
if(!file.exists("data.zip")){
print("Downloading data")
url<-"https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2FNEI_data.zip"
download.file(url,"data.zip",method="curl")
unzip("data.zip")
}
#Read data in
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
#Total emissions in Baltimore from 1999 to 2008 by type.
te<-tapply(NEI[NEI$fips %in% "24510",]$Emissions, # column is "Emissions"; the original's $Emission worked only via partial matching
list(NEI[NEI$fips %in% "24510", ]$year,NEI[NEI$fips %in% "24510", ]$type),sum)
#Plot of te using ggplot2 system
require(ggplot2)
require(reshape)
ggplot(melt(te),aes(X2,value,fill=factor(X1)))+
geom_bar(stat="identity", position="dodge") +
xlab("Type") + ylab(expression(PM[2.5])) +
labs(title="Emissions in Baltimore by type and year, in tons") +
scale_fill_discrete(name="Year")+
theme(plot.title=element_text(size=rel(1.4)))
#Save plot
dev.copy(png,"plots/plot3.png",units="px",height=480,width=480)
dev.off()
| /Project2/plot3.R | no_license | KobaKhit/Exploratory-Data-Analysis-Projects | R | false | false | 976 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/SOcrs.R
\name{SOcrs}
\alias{SOcrs}
\title{SOmap coordinate system}
\usage{
SOcrs(crs = NULL)
}
\arguments{
\item{crs}{provide PROJ string to set the value}
}
\description{
Set or return the coordinate system currently in use.
}
\details{
If argument `crs` is NULL, the function returns the current value (which may be `NULL`).
}
\examples{
\dontrun{
SOmap()
SOcrs()
}
}
| /man/SOcrs.Rd | no_license | xichaow/SOmap | R | false | true | 449 | rd |
library(phyloseq)
library(dada2)
library(ggplot2)
library(Biostrings)
## Reading files ##
path <- "/Users/fay-weili/Desktop/Hornwort_amplicon/dada2/sample_fastq" # CHANGE ME to location of the fastq file
fns <- list.files(path, pattern="fastq.gz", full.names=TRUE)
#fn <- file.path(path, "JNP2_6_F01_ccs_demux.BCF1--BCR1.fastq.gz")
sample.names <- sapply(strsplit(basename(fns), ".fastq.gz"), '[', 1) # get sample names from fastq file names
cw <- "CGTAGCTTCCGGTGGTATCCACGT" # primer 1
cx <- "GGGGCAGGTAAGAAAGGGTTTCGTA" # primer 2
rc <- dada2:::rc
theme_set(theme_bw())
## Remove primers ##
nop <- file.path(paste(path,'_deprimers', sep=''), basename(fns))
prim <- removePrimers(fns, nop, primer.fwd=cw, primer.rev=dada2:::rc(cx), orient=TRUE, verbose=TRUE)
lens.fn <- lapply(nop, function(fn) nchar(getSequences(fn))) # plot len distribution
lens <- do.call(c, lens.fn)
hist(lens, 100)
## Filter ##
filt <- file.path(paste(path,'_deprimers_lenfiltered', sep=''), basename(fns))
track <- filterAndTrim(nop, filt, minQ=3, minLen=400, maxLen=1200, maxN=0, rm.phix=FALSE, maxEE=2, verbose=TRUE)
## DADA2 ##
drp <- derepFastq(filt, verbose=TRUE, qualityType="FastqQuality") # dereplicate
err <- learnErrors(drp, BAND_SIZE=32, multithread=TRUE, errorEstimationFunction=PacBioErrfun) # learn error
plotErrors(err)
dd <- dada(drp, err=err, BAND_SIZE=32, multithread=TRUE)
cbind(ccs=prim[,1], primers=prim[,2], filtered=track[,2], denoised=sapply(dd, function(x) sum(x$denoised)))
seqtab <- makeSequenceTable(dd); dim(seqtab)
## Save seqtab file ##
saveRDS(seqtab, 'rds') # note: this writes to a file literally named 'rds' (no basename)
## Check chimera ##
bim <- isBimeraDenovo(seqtab, minFoldParentOverAbundance=3.5, multithread=TRUE)
table(bim)
seqtab.nochim <- removeBimeraDenovo(seqtab, method="consensus", multithread=TRUE, verbose=TRUE)
dim(seqtab.nochim)
sum(seqtab.nochim)/sum(seqtab)
## Read metadata ##
#sample_df <- read.table("/Users/fayweili/Desktop/Hornwort_microbiome/dada2_test/TimeSeriesMeta_lib1.csv", header=TRUE, row.names=1, sep=",", stringsAsFactors=FALSE)
## Make phyloseq object ##
ps <- phyloseq(otu_table(seqtab.nochim, taxa_are_rows=FALSE)) #,sample_data(sample_df))
ps <- prune_samples(sample_names(ps) != "mock_community_5taxa_JNP1_5_E01.fastq.gz", ps)
ps <- prune_samples(sample_names(ps) != "negative_control_JNP1_5_E01.fastq.gz", ps)
## Rename taxa names from sequences to ASV ##
dna <- DNAStringSet(taxa_names(ps))
names(dna) <- taxa_names(ps)
ps <- merge_phyloseq(ps, dna)
taxa_names(ps) <- paste0("ASV", seq(ntaxa(ps)))
writeXStringSet((refseq(ps)), 'ASV.fa') # write sequences to fasta
## Filter out non-rbcLX ASVs based on usearch (ran elsewhere) ##
ASV_off_target <- c("ASV1182", "ASV1542", "ASV1614", "ASV1859", "ASV1858", "ASV1278", "ASV1915", "ASV1761", "ASV1760", "ASV1833", "ASV1911", "ASV1912", "ASV1609", "ASV1919", "ASV1252", "ASV1850", "ASV1525", "ASV1527", "ASV1335", "ASV1905", "ASV1409", "ASV1490", "ASV1781", "ASV1393", "ASV1783", "ASV853", "ASV1866", "ASV1541", "ASV1502", "ASV1687", "ASV1568", "ASV1457", "ASV1383", "ASV1907", "ASV1693", "ASV1826", "ASV1605", "ASV1638", "ASV1340", "ASV1779", "ASV1432", "ASV1940", "ASV1828", "ASV1829", "ASV1909", "ASV1867", "ASV1906", "ASV1458", "ASV1825", "ASV1279", "ASV1047", "ASV1606", "ASV1455", "ASV1712", "ASV1910", "ASV1341", "ASV1692", "ASV1218")
ps_ontarget <- prune_taxa(!(taxa_names(ps) %in% ASV_off_target), ps)
OTU = as(otu_table(ps_ontarget), "matrix")
OTUdf = as.data.frame(OTU)
write.csv(OTUdf, "ASV_on_target_table.csv") # write OTU table
## Transform otu table
#ps.prop <- transform_sample_counts(ps, function(otu) otu/sum(otu))
#ord.nmds.bray <- ordinate(ps.prop, method="NMDS", distance="bray")
#plot_ordination(ps.prop, ord.nmds.bray, color="Site", title="Bray NMDS")
#plot_ordination(ps.prop, ord.nmds.bray, color="Quadrat", title="Bray NMDS")
| /scripts/dada2.R | no_license | fayweili/hornwort_cyano_interaction | R | false | false | 3,795 | r |
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
# question 1
d1999<-NEI[NEI[,6]==1999,]
d2002<-NEI[NEI[,6]==2002,]
d2005<-NEI[NEI[,6]==2005,]
d2008<-NEI[NEI[,6]==2008,]
total<-c(sum(d1999[,4]),sum(d2002[,4]),sum(d2005[,4]),sum(d2008[,4]))
png("plot1.png")
plot(total~c(1999,2002,2005,2008),type='l',ylab="total PM2.5",xlab="year")
dev.off()
| /plot1.R | no_license | yzhao13tx/ExData_Plotting1 | R | false | false | 383 | r |
NEI <- readRDS("summarySCC_PM25.rds")
SCC <- readRDS("Source_Classification_Code.rds")
# question 1
d1999<-NEI[NEI[,6]==1999,]
d2002<-NEI[NEI[,6]==2002,]
d2005<-NEI[NEI[,6]==2005,]
d2008<-NEI[NEI[,6]==2008,]
total<-c(sum(d1999[,4]),sum(d2002[,4]),sum(d2005[,4]),sum(d2008[,4]))
png("plot1.png")
plot(total~c(1999,2002,2005,2008),type='l',ylab="total PM2.5",xlab="year")
dev.off()
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/Student.R
\name{n_dist,Student-method}
\alias{n_dist,Student-method}
\title{Distribution of the Sample Size}
\usage{
\S4method{n_dist}{Student}(
design,
n1,
nuisance,
summary = TRUE,
plot = FALSE,
iters = 10000,
seed = NULL,
range = 0,
allocation = c("approximate", "exact"),
...
)
}
\arguments{
\item{design}{Object of class \code{Student} created by \code{setupStudent}.}
\item{n1}{Either the sample size of the first stage (if
\code{recalculation = TRUE}) or the total sample size (if
\code{recalculation = FALSE}).}
\item{nuisance}{Value of the nuisance parameter. For the
Student's t-test this is the variance.}
\item{summary}{logical - is a summary of the sample size distribution desired?
Otherwise, a vector with sample sizes is returned.}
\item{plot}{Should a plot of the sample size distribution
be drawn?}
\item{iters}{Number of simulation iterations.}
\item{seed}{Random seed for simulation.}
\item{range}{This determines how far the plot whiskers extend out from the box.
If range is positive, the whiskers extend to the most extreme data point
which is no more than range times the interquartile range from the box.
A value of zero causes the whiskers to extend to the data extremes.}
\item{allocation}{Whether the allocation ratio should be preserved
exactly (\code{exact}) or approximately (\code{approximate}).}
\item{...}{Further optional arguments.}
}
\value{
Summary and/or plot of the sample size distribution for
every nuisance parameter and every value of n1.
}
\description{
Calculates the distribution of the total sample sizes of designs
with blinded sample size recalculation for different values of the
nuisance parameter or of n1.
}
\details{
The method is only vectorized in either \code{nuisance}
or \code{n1}.
}
\examples{
d <- setupStudent(alpha = .025, beta = .2, r = 1, delta = 3.5, delta_NI = 0,
alternative = "greater", n_max = 156)
n_dist(d, n1 = 20, nuisance = 5.5, summary = TRUE, plot = FALSE, seed = 2020)
}
| /blindrecalc/man/n_dist.Student.Rd | no_license | akhikolla/ClusterTests | R | false | true | 2,146 | rd |
# ------------------------------------------------------------------------------
# Project: SRM - Stochastische Risikomodellierung und statistische Methoden
# ------------------------------------------------------------------------------
# Quantlet: SRM_fig2.14
# ------------------------------------------------------------------------------
# Description: Produces the QQ plots for four simulated samples of
# N(0,1), N(5,1), N(0,9) and N(5,9). QQ-plots compare empirical
# quantiles of a distribution with theoretical quantiles of the
# standard normal distribution.
# ------------------------------------------------------------------------------
# Keywords: qq-plot, simulation, normal, normal distribution, plot,
# graphical representation
# ------------------------------------------------------------------------------
# See also:
# ------------------------------------------------------------------------------
# Author: Wellisch
# ------------------------------------------------------------------------------
## clear history
rm(list = ls(all = TRUE))
graphics.off()
## Normal Q-Q plots with reference lines
## The seed is reset before each draw for reproducibility, so the four
## samples are exact location/scale transforms of one another
## (x = w + 5, y = 3*w, z = 3*w + 5)
set.seed(1234)
w = rnorm(100)
set.seed(1234)
x = rnorm(100, mean = 5)
set.seed(1234)
y = rnorm(100, sd = 3)
set.seed(1234)
z = rnorm(100, mean = 5, sd = 3)
par(mfrow = c(2, 2))
qqnorm(w, main = "Normal Q-Q plot of sample w", xlab = "theoretical standard normal quantiles",
    ylab = "empirical quantiles")
qqline(w)
qqnorm(x, main = "Normal Q-Q plot of sample x", xlab = "theoretical standard normal quantiles",
    ylab = "empirical quantiles")
qqline(x)
qqnorm(y, main = "Normal Q-Q plot of sample y", xlab = "theoretical standard normal quantiles",
    ylab = "empirische quantiles")
qqline(y)
qqnorm(z, main = "Normal Q-Q plot of sample z", xlab = "theoretical standard normal quantiles",
    ylab = "empirical quantiles")
qqline(z)
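As an aside (not part of the original figure script), the construction `qqnorm` performs can be reproduced by hand: plot the sorted sample against the standard-normal quantiles of the plotting positions. The names `theo` and `emp` below are illustrative, not from the original code.

```r
# Manual Q-Q construction: qqnorm(w) essentially plots
# sort(w) against qnorm(ppoints(length(w))).
set.seed(1234)
w <- rnorm(100)
theo <- qnorm(ppoints(length(w)))  # theoretical N(0,1) quantiles
emp  <- sort(w)                    # empirical quantiles
# For an N(0,1) sample the points lie near the identity line:
fit <- lm(emp ~ theo)
round(coef(fit), 2)  # intercept near 0, slope near 1
```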
| /SRM_fig2.14/SRM_fig2.14.R | no_license | SHIccc/SRM | R | false | false | 1,996 | r |
# Economy & Budget Frame Dictionary (fr_eco)
# Validated and thus recommended to be used for text annotation of lemmatized English migration-related texts
library(data.table)
library(stringi)
library(dplyr)
# Instruction: Please replace "all_text_mt_lemma" with the name of your text column, and "sample_ID" with the name of your text ID column.
# A text is coded as economy- and budget-related if at least one sentence contains both a migration-related word and an economy- and budget-related word.
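A minimal toy illustration of this coding rule (invented sentence, not from the corpus; base `grepl()` stands in for `stri_count_regex()` to keep the sketch dependency-free):

```r
# One lower-cased toy sentence: coded 1 only if BOTH a migration-related
# pattern and an economy/budget-related pattern match.
sent <- "the refugee programme will strain the state budget"
mig_hit <- grepl("(?:^|\\W)refugee", sent, perl = TRUE)
eco_hit <- grepl("(?:^|\\W)budget",  sent, perl = TRUE)
fr_eco  <- as.integer(mig_hit & eco_hit)  # 1
```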
#################
# read and prepare text data
##############
setwd('') # directory
corpus <- fread("") # your dataset
# lower case text column
corpus$all_text_mt_lemma <- tolower(corpus$all_text_mt_lemma)
###########
## dictionary components
###########
# migration-related keywords
migration <- c('(?:^|\\W)asyl', '(?:^|\\W)immigrant', '(?:^|\\W)immigrat', '(?:^|\\W)migrant', '(?:^|\\W)migrat', '(?:^|\\W)refugee', '(?:^|\\W)people\\W*on\\W*the\\W*run',
'(?:^|\\W)foreigner', '(?:^|\\W)foreign\\W*background',
'(?:^|\\W)(paperless|undocumented)', '(?:^|\\W)sans\\W*pap(ie|e)r', '(?:^|\\W)guest\\W*worker', '(?:^|\\W)foreign\\W*worker',
'(?:^|\\W)emigra','(?:^|\\W)brain\\W*drain',
'(?:^|\\W)free\\W*movement', '(?:^|\\W)freedom\\W*of\\W*movement', '(?:^|\\W)movement\\W*of\\W*person',
'(?:^|\\W)unaccompanied\\W*child', '(?:^|\\W)unaccompanied\\W*minor')
budget <- '(?:^|\\W)budget'
cost <- '(?:^|\\W)cost\\W'
exclude_cost <- c('(?:^|\\W)human\\W*cost\\W', '(?:^|\\W)humanitarian\\W*cost\\W')
econom <- '(?:^|\\W)econom'
financ <- '(?:^|\\W)financ(?!ial\\stimes)'
fund <- '(?:^|\\W)(funding|funded|funds|fund)\\W' # trailing \\W so e.g. "fundamentally" is not matched
exclude_fund <- c('(?:^|\\W)charit', '(?:^|\\W)non\\W*governmental\\W*organi(s|z)ation', '(?:^|\\W)ngo\\s', '(?:^|\\W)donat', '(?:^|\\W)humanitarian\\W')
gdp <- c('(?:^|\\W)gdp', '(?:^|\\W)gross\\W*domestic\\W*product', '(?:^|\\W)global\\W*domestic\\W*product','(?:^|\\W)gross\\W*national\\W*product')
exclude_gdp <- "(?:^|\\W)police\\W*union"
money <- c('(?:^|\\W)money', '(?:^|\\W)monetar', '(?:^|\\W)public\\W*money')
numbers <- c("(?:^|\\W)(eur|€|\\$|£|gbp|dm|kr|sek|ft|huf|zł|pln|l|rol|ron)\\W*[0-9]{1,6}\\W*(thousand|million|billion|mrd|m)\\W",
"(?:^|\\W)(eur|€|\\$|£|gbp|dm|kr|sek|ft|huf|zł|pln|l|rol|ron)\\W*[0-9]{0,6}.?[0-9]{0,3}\\W*(thousand|million|billion|mrd|m)\\W",
"(?:^|\\W)(eur|€|\\$|£|gbp|dm|kr|sek|ft|huf|zł|pln|l|rol|ron)\\W*[0-9]{0,6}\\W*(thousand|million|billion|mrd|m)\\W",
"(?:^|\\W)(thousand|million|billion|mrd|m)\\W*(euro|dollar|pound|mark|krona|kronor|peseta|zloty|forint|romanian\\Wleu|kronor)\\W",
"(?:^|\\W)[0-9]{0,6}.?[0-9]{1,3}\\W*(euro|dollar|pound|mark|krona|kronor|peseta|zloty|forint|romanian\\Wleu|kronor)\\W")
tax <- '(?:^|\\W)tax(?!i)'
exclude_tax <- '(?:^|\\W)taxi'
other <- c('(?:^|\\W)government\\W*spend\\s*', '(?:^|\\W)resource', '(?:^|\\W)public\\W*spend', '(?:^|\\W)remittance')
exclude <- c("(?:^|\\W)traffick", "(?:^|\\W)smugg", "(?:^|\\W)arrest", "(?:^|\\W)fraud") # more general crime/criminal/illegal terms do not really help
###########
## create dictionary
###########
dict_name <- c("migration", "budget", "cost", "exclude_cost",
"econom", "financ", "fund", "exclude_fund", "gdp",
"exclude_gdp", "money", "numbers","tax", "exclude_tax",
"other", "exclude")
###########
## sentence splitting
###########
corpus$all_text_mt_lemma <- as.character(corpus$all_text_mt_lemma)
corpus$all_text_mt_lemma <- gsub("(www)\\.(.+?)\\.([a-z]+)", "\\1\\2\\3",corpus$all_text_mt_lemma) ## remove the dots from web addresses
corpus$all_text_mt_lemma <- gsub("([a-zA-Z])\\.([a-zA-Z])", "\\1. \\2",corpus$all_text_mt_lemma) ## insert a space after remaining dots between letters
corpus$all_text_mt_lemma <- as.character(corpus$all_text_mt_lemma)
list_sent <- strsplit(corpus$all_text_mt_lemma, "(!\\s|\\.\\s|\\?\\s)")
corpus_sentence <- data.frame(sample_ID=rep(corpus$sample_ID, sapply(list_sent, length)), sentence=unlist(list_sent))
#######
## search pattern of dictionary in text corpus
#######
n <- length(dict_name)
evaluation_sent <- vector("list", n)
count <- 0
for (i in dict_name) {
count <- count + 1
print(count)
print(i)
match <- stri_count_regex(corpus_sentence$sentence, paste(get(i), collapse='|'))
evaluation_sent[[i]] <- data.frame(name=match)
}
evaluation_sent <- evaluation_sent[-c(1:n)]  # drop the n empty preallocated slots; only the named per-dictionary results remain
count_per_dict_sentence <- do.call("cbind", evaluation_sent)
cols <- names(count_per_dict_sentence) == "name"
names(count_per_dict_sentence)[cols] <- paste0("name", seq.int(sum(cols)))
oldnames <- colnames(count_per_dict_sentence)
newnames <- names(evaluation_sent)
setnames(count_per_dict_sentence, old = oldnames, new = newnames)
head(count_per_dict_sentence)
colnames(count_per_dict_sentence)
###########
# some recoding
###########
count_per_dict_sentence$budget_combi <- case_when(
count_per_dict_sentence$budget >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$budget_combi[is.na(count_per_dict_sentence$budget_combi)] <- 0
count_per_dict_sentence$cost_combi <- case_when(
  count_per_dict_sentence$cost >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude_cost >=1~ 0,
count_per_dict_sentence$cost >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude >=1~ 0,
count_per_dict_sentence$cost >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$cost_combi[is.na(count_per_dict_sentence$cost_combi)] <- 0
count_per_dict_sentence$econom_combi <- case_when(
count_per_dict_sentence$econom >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$econom_combi[is.na(count_per_dict_sentence$econom_combi)] <- 0
count_per_dict_sentence$financ_combi <- case_when(
count_per_dict_sentence$financ >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude >=1~ 0,
count_per_dict_sentence$financ >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$financ_combi[is.na(count_per_dict_sentence$financ_combi)] <- 0
count_per_dict_sentence$fund_combi <- case_when(
count_per_dict_sentence$fund >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude_fund >=1~ 0,
count_per_dict_sentence$fund >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$fund_combi[is.na(count_per_dict_sentence$fund_combi)] <- 0
count_per_dict_sentence$gdp_combi <- case_when(
count_per_dict_sentence$gdp >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude_gdp >=1~ 0,
count_per_dict_sentence$gdp >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$gdp_combi[is.na(count_per_dict_sentence$gdp_combi)] <- 0
count_per_dict_sentence$money_combi <- case_when(
count_per_dict_sentence$money >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude >=1~ 0,
count_per_dict_sentence$money >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude_fund >=1~ 0,
count_per_dict_sentence$money >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$money_combi[is.na(count_per_dict_sentence$money_combi)] <- 0
count_per_dict_sentence$numbers_combi <- case_when(
count_per_dict_sentence$numbers >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude >=1~ 0,
count_per_dict_sentence$numbers >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$numbers_combi[is.na(count_per_dict_sentence$numbers_combi)] <- 0
count_per_dict_sentence$tax_combi <- case_when(
count_per_dict_sentence$tax >=1 & count_per_dict_sentence$migration >=1 & count_per_dict_sentence$exclude_tax >=1~ 0,
count_per_dict_sentence$tax >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$tax_combi[is.na(count_per_dict_sentence$tax_combi)] <- 0
count_per_dict_sentence$other_combi <- case_when(
count_per_dict_sentence$other >=1 & count_per_dict_sentence$migration >=1 ~ 1)
count_per_dict_sentence$other_combi[is.na(count_per_dict_sentence$other_combi)] <- 0
#check results on sentence level
# add sentence id for merging (order is correct)
count_per_dict_sentence$articleid_sent_id <- rownames(count_per_dict_sentence)
corpus_sentence$articleid_sent_id <- rownames(corpus_sentence)
#merge sentence and hits
corpus_sentence_with_dict_hits <- merge(corpus_sentence, count_per_dict_sentence, by = "articleid_sent_id")
#aggregate the sentence level hit results on article level (sum up results of several variables per article (doc_id))#all those that were searched for on sentence level
df_sent_results_agg <- corpus_sentence_with_dict_hits %>%
group_by(sample_ID) %>%
summarise_at(vars("migration", "budget_combi", "cost_combi", "exclude_cost", "econom_combi", "financ_combi", "fund_combi", "gdp_combi", "money_combi", "numbers_combi", "tax_combi", "other_combi", "exclude"), sum)
corpus_new <- merge(df_sent_results_agg, corpus, by = "sample_ID", all =T)
#recode combination of dictionary components that make up the dictionary
#calc fr_eco variable
corpus_sentence_with_dict_hits$fr_eco <- case_when(corpus_sentence_with_dict_hits$budget_combi >= 1 | corpus_sentence_with_dict_hits$cost_combi >= 1 | corpus_sentence_with_dict_hits$econom_combi >= 1 |
                                                     corpus_sentence_with_dict_hits$financ_combi >= 1 | corpus_sentence_with_dict_hits$fund_combi >= 1 | corpus_sentence_with_dict_hits$gdp_combi >= 1 |
                                                     corpus_sentence_with_dict_hits$numbers_combi >= 1 | corpus_sentence_with_dict_hits$money_combi >= 1 |corpus_sentence_with_dict_hits$tax_combi >= 1 | corpus_sentence_with_dict_hits$other_combi >= 1 ~ 1,
                                                   TRUE ~ 0)
#aggregate the sentence level hit results on article level:
#an article is coded 1 if at least one of its sentences is coded 1 (the coding rule stated above)
corpus_with_fr_agg <- corpus_sentence_with_dict_hits %>%
  group_by(sample_ID) %>%
  summarise_at(vars("fr_eco"), max)
corpus_new <- merge(corpus_with_fr_agg, corpus, by = "sample_ID", all =T)
corpus_new <- subset(corpus_new, select = c(sample_ID, fr_eco))
corpus_new <- corpus_new %>% mutate(fr_eco = if_else(is.na(fr_eco), 0, fr_eco)) #set NA to 0 (necessary for sum within group later)
table(corpus_new$fr_eco) # descriptive overview
######
#save annotated dataset
###########
write.csv(corpus_new, "fr_eco.csv") #
| /Economy_Dictionary_Annotation.R | no_license | juliajung11/MultilingualTextAnalysis | R | false | false | 10,554 | r |
set.seed(1)
sims = 3
outmat = sapply(1:sims,function(x){
N0 = 1
times = 50
N = vector(length = times)
bi = dbinom(0:2,2,1-exp(-100/(2*N0)))
N[1] = N0*(sample(c(0,1,2),1,replace = FALSE,bi))
  # sum(replicate(100,sample(c(0,1,2),1,replace = FALSE,bi)))  # exploratory; result was unused
for (t in 2:times) {
bi = dbinom(0:2,2,1-exp(-100/(2*N[t-1])))
N[t] = sum(replicate(N[t-1],(sample(c(0,1,2),1,replace = FALSE,bi))))
}
N
})
matplot(1:nrow(outmat), outmat, type = "l", las = 1, ylab = "Population", xlab = "seasons")  # 'times' is local to the sapply() function, so index by nrow(outmat)
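Since summing `N[t-1]` independent Binomial(2, p) offspring draws is itself a Binomial(2*N[t-1], p) draw, the inner `replicate`/`sample` step above can be collapsed into one `rbinom()` call per season — an equivalent but much faster sketch (variable names here are new, not from the original):

```r
set.seed(1)
times <- 50
N  <- numeric(times)
N0 <- 1
p <- function(n) 1 - exp(-100 / (2 * n))  # per-seed establishment probability
N[1] <- rbinom(1, 2 * N0, p(N0))
for (t in 2:times) {
  if (N[t - 1] == 0) { N[t] <- 0; next }  # extinct populations stay extinct
  # sum of N[t-1] iid Binomial(2, p) draws == one Binomial(2*N[t-1], p) draw
  N[t] <- rbinom(1, 2 * N[t - 1], p(N[t - 1]))
}
```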
| /StocasticGrowthOfAPlantPopulation.R | no_license | Geoffrey-Harper/StocasticPlantGrowthModel | R | false | false | 504 | r |
# group by individual and measure fraction of each gender not counting the individual
library(dplyr)    # %>%, group_by, summarise_at
library(ggplot2)  # plotting
library(binom)    # binom.confint
# Getting a uniquified view of thread metrics for plotting
thread_info = data_thread %>%
filter(UniqueContributors>5) %>%
select(Year,
Title,
Link,
Type,
ThreadId,
DebateSize,
Female_Contributions,
FemaleParticipation,
Live,
UniqueContributors,
UniqueFemaleContributors,
UniqueFemaleParticipation,
starts_with('Thread_'),
-starts_with('Thread_Text')) %>%
unique()
# test whether unique female participation is segmented to threads in a significant way
prop.test(thread_info$UniqueFemaleContributors,thread_info$UniqueContributors)
# display results of the proportion test
female_confint0 = binom.confint(sum(thread_info$UniqueFemaleContributors),sum(thread_info$UniqueContributors),methods='wilson')[5:6]
female_confint = cbind(as.character(thread_info$ThreadId),thread_info$DebateSize,thread_info$Title,binom.confint(thread_info$UniqueFemaleContributors,thread_info$UniqueContributors,methods='wilson')[,4:6])
names(female_confint)[1]='Thread'
names(female_confint)[2]='ThreadSize'
names(female_confint)[3]='Title'
ggplot(female_confint,aes(reorder(Thread,ThreadSize),mean))+
geom_point()+
geom_errorbar(aes(ymin=lower, ymax=upper))+
geom_hline(yintercept=female_confint0$lower)+
geom_hline(yintercept=female_confint0$upper)+
ylab('Female Fraction')+
xlab('Thread')+
theme(axis.text.x = element_text(angle = 45, hjust=1))
# same, but for all female contributions (not just unique contributors)
prop.test(thread_info$Female_Contributions,thread_info$DebateSize)
# display result
female_confint0 = binom.confint(sum(thread_info$Female_Contributions),sum(thread_info$DebateSize),methods='wilson')[5:6]
female_confint = cbind(as.character(thread_info$ThreadId),binom.confint(thread_info$Female_Contributions,thread_info$DebateSize,methods='wilson')[,4:6])
names(female_confint)[1]='Thread'
ggplot(female_confint,aes(Thread,mean))+
geom_point()+
geom_errorbar(aes(ymin=lower, ymax=upper))+
geom_hline(yintercept=female_confint0$lower)+
geom_hline(yintercept=female_confint0$upper)
#
| /edge_thread_compare_female.R | no_license | lots-of-things/edge_forum | R | false | false | 2,178 | r |
###############################################
requiredPackages = c('ggplot2','scales', 'gridExtra')
package.check <- lapply(
requiredPackages,
FUN = function(x) {
if (!require(x, character.only = TRUE)) {
install.packages(x, dependencies = TRUE)
library(x, character.only = TRUE)
}
}
)
###############################################
setwd("~/COVID19-Singapore-master/Data") # Setting Working Directory
COVID <- read.csv('SortedRecoveryData.csv', sep=",", header = TRUE, fileEncoding="UTF-8-BOM")
pdf("~/COVID19-Singapore-master/Loess-Curve.pdf", width = 10, height = 6)
# Loess Smoothing
p <- qplot(data=COVID, x=as.Date(COVID_Confirm_Date), y=DaysInHospital, label = PatientID, geom = "point")+
geom_text(check_overlap = TRUE, size = 3, hjust = 0, nudge_x = 0.05)+
scale_x_date(breaks = as.Date(COVID$COVID_Confirm_Date), labels = date_format("%m/%d")) +
theme(axis.text.x = element_text(angle = 90)) +
geom_vline(xintercept=c(as.numeric(as.Date("2020-01-23")),as.numeric(as.Date("2020-02-03")), as.numeric(as.Date("2020-03-16"))),linetype=5, size =1, colour="red") +
xlab("Case-wise Date of +ve Confirmation") +
ylab("#Days from +ve-Confirmation to Clinical Recovery") +
#ggtitle ("Clinical Recovery") +
theme_bw() +
theme(axis.text.x = element_text(face="bold", color="black", size=8, angle = 90),
axis.text.y = element_text(face="bold", color="black", size=8),
axis.title=element_text(size=12,face="bold")
)
print(p)
p + stat_smooth(color = "#FC4E07", fill = "#FC4E07", method = "loess")
p + stat_smooth(aes(outfit=fit<<-..y..), color = "#FC4E07", fill = "#FC4E07", method = "loess")  # aes(outfit = ...) captures the loess fitted values into 'fit' via <<-
dev.off()
rm(list=ls())
############################################### | /Scripts/R_Scripts/Loess_SmoothCurve.R | no_license | GVCL/COVID19-Singapore | R | false | false | 1,900 | r | ###############################################
requiredPackages = c('ggplot2','scales', 'gridExtra')
package.check <- lapply(
requiredPackages,
FUN = function(x) {
if (!require(x, character.only = TRUE)) {
install.packages(x, dependencies = TRUE)
library(x, character.only = TRUE)
}
}
)
###############################################
setwd("~/COVID19-Singapore-master/Data") # Setting Working Directory
COVID <- read.csv('SortedRecoveryData.csv', sep=",", header = TRUE, fileEncoding="UTF-8-BOM")
pdf("~/COVID19-Singapore-master/Loess-Curve.pdf", width = 10, height = 6)
# Loess Smoothing
p <- qplot(data=COVID,x=as.Date(COVID_Confirm_Date),y=DaysInHospital, label = PatientID, geom=("point"), hjust=0, vjust=0)+
geom_text(check_overlap = TRUE, size = 3, hjust = 0, nudge_x = 0.05)+
scale_x_date(breaks = as.Date(COVID$COVID_Confirm_Date), labels = date_format("%m/%d")) +
theme(axis.text.x = element_text(angle = 90)) +
geom_vline(xintercept=c(as.numeric(as.Date("2020-01-23")),as.numeric(as.Date("2020-02-03")), as.numeric(as.Date("2020-03-16"))),linetype=5, size =1, colour="red") +
xlab("Case-wise Date of +ve Confirmation") +
ylab("#Days from +ve-Confirmation to Clinical Recovery") +
#ggtitle ("Clinical Recovery") +
theme_bw() +
theme(axis.text.x = element_text(face="bold", color="black", size=8, angle = 90),
axis.text.y = element_text(face="bold", color="black", size=8),
axis.title=element_text(size=12,face="bold")
)
print(p)
p + stat_smooth(color = "#FC4E07", fill = "#FC4E07", method = "loess")
# the aes(outfit=fit<<-..y..) trick captures the loess fitted values into the global variable `fit`
p + stat_smooth(aes(outfit=fit<<-..y..), color = "#FC4E07", fill = "#FC4E07", method = "loess")
dev.off()
rm(list=ls())
############################################### |
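The `aes(outfit=fit<<-..y..)` call in the script above captures ggplot2's internal loess fitted values by global assignment. A sketch of a more explicit alternative, assuming the same `COVID` data frame (columns `COVID_Confirm_Date`, `DaysInHospital`) loaded earlier in the script:

```r
# Sketch: fit the smoother directly with stats::loess instead of capturing ..y..
COVID$day <- as.numeric(as.Date(COVID$COVID_Confirm_Date))
lo <- loess(DaysInHospital ~ day, data = COVID)
COVID$smoothed <- predict(lo) # fitted values, comparable to stat_smooth's curve
```

This keeps the fitted values alongside the data and avoids the `<<-` side effect.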
linearStrand1 <- function(g,maxBetti=vcount(g))
{
x <- rep(0,vcount(g))
d <- degree(g)
for(i in 0:min(maxBetti,vcount(g))) {
x[i+1] <- sum(sapply(d,choose,i+1))-length(cliques(g,i+2,i+2))
if(x[i+1]==0) break
}
a <- matrix(c(0,x[x>0]),nrow=1)
a <- list(graded=cbind(c(1,0),rbind(rep(0,ncol(a)),a)))
class(a) <- c("linear.strand","exact")
a
}
linearStrand2 <- function(g,maxBetti=vcount(g))
{
n <- vcount(g)
x <- rep(0,n)
h <- graph.complementer(g)
for(i in 1:min(maxBetti,(n-1))) {
S <- combn(n,i+1)-1
for(j in 1:ncol(S)){
k <- no.clusters(subgraph(h,S[,j]))-1
x[i+1] <- x[i+1] + k
}
if(x[i+1]==0) break
}
a <- matrix(c(0,x[x>0]),nrow=1)
a <- list(graded=cbind(c(1,0),rbind(rep(0,ncol(a)),a)))
class(a) <- c("linear.strand","exact")
a
}
linearStrandLower <- function(g,maxBetti=vcount(g))
{
x <- rep(0,vcount(g))
d <- degree(g)
x[1] <- ecount(g)
if(maxBetti==1) return(x[1])
x[2] <- count.angles(g)+2*count.triangles(g)
if(maxBetti==2) return(x[1:2])
n <- vcount(g)
A <- matrix(NA,nrow=100,ncol=100)
for(i in 2:min(n,maxBetti)) {
x[i+1] <- sum(sapply(d,choose,i+1))-length(cliques(g,i+2,i+2))
j <- floor((i+2)/2)
for(a in 2:j){
z <- max(c(a,i+2-a))
if(z>nrow(A)){
B <- matrix(NA,nrow=z,ncol=z)
B[1:nrow(A),1:ncol(A)] <- A
A <- B
}
if(is.na(A[a,i+2-a])){
b <- count.bipartite(g,a,i+2-a)
A[a,i+2-a] <- b
A[i+2-a,a] <- b
} else {
b <- A[a,i+2-a]
}
x[i+1] <- x[i+1]+b
}
if(x[i+1]==0) break
}
a <- matrix(c(0,x[x>0]),nrow=1)
a <- list(graded=cbind(c(1,0),rbind(rep(0,ncol(a)),a)))
class(a) <- c("linear.strand","lower.bound")
a
}
linearStrand <- function(g,maxBetti=vcount(g),exact=FALSE)
{
if(ecount(g)==0) return(0)
if(is.chordal(graph.complementer(g))){
a <- mfr(g)
class(a) <- c("linear.strand","exact")
return(a)
}
if(count.squares(g)>0) {
if(exact) {
a <- linearStrand2(g,maxBetti=maxBetti)
} else {
a <- linearStrandLower(g,maxBetti=maxBetti)
}
} else {
a <- linearStrand1(g,maxBetti=maxBetti)
}
a
}
betti2 <- function(g) ecount(g)
betti3 <- function(g)
{
x <- rep(0,2)
x[1] <- count.angles(g)+2*count.triangles(g)
x[2] <- count.bars(g)
x
}
print.linear.strand <- function(x,...)
{
cat("Linear Strand of a Minimal Free Resolution:\n")
cat(x$graded,"\n")
if(inherits(x,"exact")) cat("Betti numbers are exact\n")
if(inherits(x,"lower.bound")) cat("Betti numbers are a lower bound\n")
}
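A hypothetical usage sketch for the functions above; the test graph, the old igraph API, and the availability of the package's internal helpers (`count.squares`, `count.angles`, etc.) are all assumptions:

```r
# Sketch only: linear strand of the edge ideal of a small graph
library(igraph)
g <- graph.famous("Bull")  # any small test graph
ls1 <- linearStrand(g)     # exact when complement(g) is chordal or g is square-free
print(ls1)                 # dispatches to print.linear.strand defined above
```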
| /R/partial.R | no_license | cran/mfr | R | false | false | 2,460 | r |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gcx.r
\name{gcx}
\alias{gcx}
\title{GCx of a list of genes}
\usage{
gcx(x, pos = 1)
}
\arguments{
\item{x}{a list of KZsqns objects.}
\item{pos}{the position of the codon for which GCx is calculated. 1: GC1; 2:GC2; 3: GC3; 4:GC12}
}
\value{
a matrix of the GCx of the list of genes
}
\description{
Calculates the GC1,2,3 and GC12 contents of a list of genes
}
\examples{
}
| /man/gcx.Rd | permissive | HVoltBb/kodonz | R | false | true | 453 | rd | % Generated by roxygen2: do not edit by hand
|
| /R/~old/R/themePublication.R | permissive | BaderLab/POPPATHR | R | false | false | 2,668 | r | theme_Publication <- function(base_size=14, base_family="Helvetica") {
library(grid)
library(ggthemes)
(theme_foundation(base_size=base_size, base_family=base_family)
+ theme(plot.title = element_text(face = "bold",
size = rel(1.2), hjust = 0.5),
text = element_text(),
panel.background = element_rect(colour = NA),
plot.background = element_rect(colour = NA),
panel.border = element_rect(colour = NA),
axis.title = element_text(face = "bold",size = rel(1)),
axis.title.y = element_text(angle=90,vjust =2),
axis.title.x = element_text(vjust = -0.2),
# axis.title.x=element_blank(),
# axis.text.x = element_text(angle=45, hjust=1),
axis.text = element_text(),
# axis.text.x = element_blank(),
axis.line = element_line(colour="black"),
axis.ticks = element_line(),
# axis.ticks.x = element_blank(),
panel.grid.major = element_line(colour="#f0f0f0"),
panel.grid.minor = element_blank(),
legend.key = element_rect(colour = NA),
legend.position = "bottom",
# legend.position = "right",
legend.direction = "horizontal",
# legend.direction = "vertical",
legend.key.size = unit(0.2, "cm"),
legend.margin = unit(0, "cm"),
# legend.title = element_text(face="italic"),
legend.title = element_blank(),
strip.background = element_rect(colour="#f0f0f0",fill="#f0f0f0"),
strip.text = element_text(face="bold"),
plot.margin = unit(c(10,5,5,5),"mm"),
# plot.margin = unit(c(0.5,4,0.5,8),"cm")
))
}
scale_fill_Publication <- function(...){
library(scales)
discrete_scale("fill", "Publication",
manual_pal(values=c("#fb9a99", "#386cb0", "#984ea3", "#ffff33", "#fdb462",
"#7fc97f", "#ef3b2c", "#662506", "#a6cee3"
)), ...)
}
scale_colour_Publication <- function(...){
library(scales)
discrete_scale("colour", "Publication",
manual_pal(values=c("#fb9a99","#386cb0", "#984ea3", "#ffff33", "#fdb462",
"#7fc97f", "#ef3b2c", "#662506", "#a6cee3"
)), ...)
}
#fdb462 - dark yellow
#984ea3 - purple
#386cb0 - blue
#7fc97f - green
#ef3b2c - red
#a6cee3 - light blue
#ffff33 - bright yellow
#662506 - brown
#fb9a99 - pink
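A hedged usage sketch, assuming the functions above are sourced; note that on ggplot2 >= 2.2 `legend.margin` expects a `margin()` object, so the theme as written targets older ggplot2 releases:

```r
library(ggplot2)
p <- ggplot(mtcars, aes(factor(cyl), mpg, fill = factor(cyl))) +
  geom_boxplot() +
  scale_fill_Publication() + # custom palette defined above
  theme_Publication()        # custom theme defined above
print(p)
```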
|
#' Read in SIBER format data and generate the SIBER object
#'
#' This function takes raw isotope data and creates a SIBER object which
#' contains information in a structured manner that enables other functions to
#' loop over groups and communities, fit Bayesian ellipses, and afterwards,
#' generate various plots, and additional analyses on the posterior
#' distributions.
#'
#' @param data.in Specified in a basic R data.frame or matrix comprising 4
#' columns. The first two of which are typically isotope tracers, then the
#' third is a column that indicates the group membership, and the fourth
#' column indicates the community membership of an observation. Communities
#' labels should be entered as sequential numbers. As of v2.0.1 group labels
#' can be entered as strings and/or numbers and need not be sequential.
#' @return A siber list object that contains data to help with various model
#'   fitting and plotting. \describe{
#'   \item{original.data}{The original data as passed into this function}
#'   \item{iso.summary}{The max, min, mean and median of the isotope data, useful for plotting}
#'   \item{sample.sizes}{The number of observations tabulated by group and community}
#'   \item{raw.data}{A list object of length equal to the number of communities}
#'   }
#' @examples
#' data(demo.siber.data)
#' my.siber.data <- createSiberObject(demo.siber.data)
#' names(my.siber.data)
#'
#' @export
createSiberObject <- function (data.in) {
# Check that the column headers have exactly the correct string
if (!all(names(data.in) == c("iso1", "iso2", "group", "community"))){
stop('The header names in your data file must match
c("iso1", "iso2", "group", "community") exactly')
}
# error if community is not a sequential numeric vector
# if (!is.numeric(data.in$community)){
# stop('The community column must be a sequential numeric vector
# indicating the community membership of each observation.')
# }
# force group and community variable to be factors
data.in$group <- as.factor(data.in$group)
data.in$community <- as.factor(data.in$community)
# create an object that is a list, into which the raw data,
# its transforms, and various calculated metrics can be stored.
siber <- list()
# keep the original data in its original format just in case
# In fact, it can be easier to work with this format, as tapply
  # works well on it. I half wonder if I could just keep all the data like this
  # if I were able to use tapply more effectively on multiple inputs.
siber$original.data <- data.in
# store all the grouping labels
siber$all.groups <- levels(data.in$group)
# store all the community labels
siber$all.communities <- levels(data.in$community)
# some basic summary stats helpful for plotting
my.summary <- function(x){
c(min = min(x), max = max(x), mean = mean(x), median = stats::median(x))
}
siber$iso.summary <- apply(data.in[,1:2], 2, my.summary)
siber$sample.sizes <- tapply(data.in[,1],
list(data.in[,4], data.in[,3]),
length)
if (any(siber$sample.sizes < 5, na.rm = TRUE)){
warning("At least one of your groups has less than 5 observations.
The absolute minimum sample size for each group is 3 in order
for the various ellipses and corresponding metrics to be
calculated. More reasonably though, a minimum of 5 data points
are required to calculate the two means and the 2x2 covariance
matrix and not run out of degrees of freedom. Check the item
named 'sample.sizes' in the object returned by this function
in order to locate the offending group. Bear in mind that NAs in
the sample.size matrix simply indicate groups that are not
present in that community, and is an acceptable data structure
for these analyses.")
}
# store the raw data as list split by the community variable
# and rename the list components
siber$raw.data <- split(data.in, data.in$community)
#names(siber$raw.data) <- paste("community",
# unique(data.in$community), sep = "")
# how many communities are there
siber$n.communities <- length(unique(data.in$community))
# now many groups are in each community
siber$n.groups <- matrix(NA, 2, length(siber$raw.data),
dimnames = list(c("community", "n.groups"),
rep("", siber$n.communities)))
siber$n.groups[1, ] <- unique(data.in$community)
siber$n.groups[2, ] <- tapply(data.in$group, data.in$community,
function(x){length(unique(x))})
# ------------------------------------------------------------------------------
# create empty arrays in which to store the Maximum Likelihood estimates
# of the means and covariances for each group.
siber$ML.mu <- list()
siber$ML.cov <- list()
siber$group.names <- list()
for (i in 1:siber$n.communities){
siber$ML.mu[[i]] <- array(NA, dim=c(1, 2, siber$n.groups[2,i]) )
siber$ML.cov[[i]] <- array(NA, dim=c(2, 2, siber$n.groups[2,i]) )
}
# loop over each community and extract the Maximum Likelihood estimates
# of the location (their simple independent means) and covariance
# matrix of each group that comprises the community.
  # I have done this as a loop, as I can't see how to be clever
  # and use some of the apply() or analogous functions.
for (i in 1:siber$n.communities) {
siber$group.names[[i]] <- unique(siber$raw.data[[i]]$group)
for (j in 1:siber$n.groups[2,i]) {
# AJ - issue - (group == j)
tmp <- subset(siber$raw.data[[i]],
siber$raw.data[[i]]$group == siber$group.names[[i]][j])
mu.tmp <- colMeans(cbind(tmp$iso1, tmp$iso2))
cov.tmp <- stats::cov(cbind(tmp$iso1, tmp$iso2))
siber$ML.mu[[i]][,,j] <- mu.tmp
siber$ML.cov[[i]][,,j] <- cov.tmp
} # end of loop over groups
# Add names to the dimensions of the array
dimnames(siber$ML.mu[[i]]) <- list(NULL,
c("iso1", "iso2"),
siber$group.names[[i]])
dimnames(siber$ML.cov[[i]]) <- list(c("iso1", "iso2"),
c("iso1", "iso2"),
siber$group.names[[i]])
} # end of loop over communities
names(siber$ML.mu) <- siber$all.communities
names(siber$ML.cov) <- siber$all.communities
# ------------------------------------------------------------------------------
# create z-score transformed versions of the raw data.
# we can then use the saved information in the mean and
# covariances for each group stored in ML.mu and ML.cov
# to backtransform them later.
# first create a copy of the raw data into a list zscore.data
siber$zscore.data <- siber$raw.data
for (i in 1:siber$n.communities) {
# apply z-score transform to each group within the community via tapply()
# using the function scale()
siber$zscore.data[[i]][,1] <- unlist(tapply(siber$raw.data[[i]]$iso1,
siber$raw.data[[i]]$group,
scale))
siber$zscore.data[[i]][,2] <- unlist(tapply(siber$raw.data[[i]]$iso2,
siber$raw.data[[i]]$group,
scale))
}
return(siber)
} # end of function
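The comments above note that `ML.mu` and `ML.cov` allow the z-scored data to be back-transformed later. A minimal sketch of that inversion for one group, using the package's bundled `demo.siber.data` (the group/community indices are illustrative assumptions):

```r
# Sketch: undo the per-group z-score transform using the stored ML estimates
data(demo.siber.data)
siber <- createSiberObject(demo.siber.data)
mu  <- siber$ML.mu[[1]][1, , 1]              # group 1 means, community 1
sdv <- sqrt(diag(siber$ML.cov[[1]][, , 1]))  # group 1 standard deviations
z   <- subset(siber$zscore.data[[1]],
              group == siber$group.names[[1]][1])
raw <- sweep(sweep(as.matrix(z[, 1:2]), 2, sdv, "*"), 2, mu, "+")  # recovers iso1, iso2
```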
| /R/createSiberObject.R | no_license | bsharris381/SIBER | R | false | false | 7,361 | r | #' Read in SIBER format data and generate the SIBER object
#============ DYNAMIC PLOTS ======================
#++++++++++++++++++++++++++++++++++++++++++++++++++
plotShiny <- function(eval, pointOfInterest){
data <- eval$thresholdSummary[eval$thresholdSummary$Eval%in%c('test','validation'),]
  # pointOfInterest is a row index into the threshold summary; extract that row
pointOfInterest <- data[pointOfInterest,]
rocobject <- plotly::plot_ly(x = 1-c(0,data$specificity,1)) %>%
plotly::add_lines(y = c(1,data$sensitivity,0),name = "hv",
text = paste('Risk Threshold:',c(0,data$predictionThreshold,1)),
line = list(shape = "hv",
color = 'rgb(22, 96, 167)'),
fill = 'tozeroy') %>%
plotly::add_trace(x= c(0,1), y = c(0,1),mode = 'lines',
line = list(dash = "dash"), color = I('black'),
type='scatter') %>%
plotly::add_trace(x= 1-pointOfInterest$specificity, y=pointOfInterest$sensitivity,
mode = 'markers', symbols='x') %>% # change the colour of this!
plotly::add_lines(x=c(1-pointOfInterest$specificity, 1-pointOfInterest$specificity),
y = c(0,1),
line = list(dash ='solid',
color = 'black')) %>%
plotly::layout(title = "ROC Plot",
xaxis = list(title = "1-specificity"),
yaxis = list (title = "Sensitivity"),
showlegend = FALSE)
popAv <- data$trueCount[1]/(data$trueCount[1] + data$falseCount[1])
probject <- plotly::plot_ly(x = data$sensitivity) %>%
plotly::add_lines(y = data$positivePredictiveValue, name = "hv",
text = paste('Risk Threshold:',data$predictionThreshold),
line = list(shape = "hv",
color = 'rgb(22, 96, 167)'),
fill = 'tozeroy') %>%
plotly::add_trace(x= c(0,1), y = c(popAv,popAv),mode = 'lines',
line = list(dash = "dash"), color = I('red'),
type='scatter') %>%
plotly::add_trace(x= pointOfInterest$sensitivity, y=pointOfInterest$positivePredictiveValue,
mode = 'markers', symbols='x') %>%
plotly::add_lines(x=c(pointOfInterest$sensitivity, pointOfInterest$sensitivity),
y = c(0,1),
line = list(dash ='solid',
color = 'black')) %>%
plotly::layout(title = "PR Plot",
xaxis = list(title = "Recall"),
yaxis = list (title = "Precision"),
showlegend = FALSE)
# add F1 score
f1object <- plotly::plot_ly(x = data$predictionThreshold) %>%
plotly::add_lines(y = data$f1Score, name = "hv",
text = paste('Risk Threshold:',data$predictionThreshold),
line = list(shape = "hv",
color = 'rgb(22, 96, 167)'),
fill = 'tozeroy') %>%
plotly::add_trace(x= pointOfInterest$predictionThreshold, y=pointOfInterest$f1Score,
mode = 'markers', symbols='x') %>%
plotly::add_lines(x=c(pointOfInterest$predictionThreshold, pointOfInterest$predictionThreshold),
y = c(0,1),
line = list(dash ='solid',
color = 'black')) %>%
plotly::layout(title = "F1-Score Plot",
xaxis = list(title = "Prediction Threshold"),
yaxis = list (title = "F1-Score"),
showlegend = FALSE)
# create 2x2 table with TP, FP, TN, FN and threshold
threshold <- pointOfInterest$predictionThreshold
TP <- pointOfInterest$truePositiveCount
TN <- pointOfInterest$trueNegativeCount
FP <- pointOfInterest$falsePositiveCount
FN <- pointOfInterest$falseNegativeCount
preferenceThreshold <- pointOfInterest$preferenceThreshold
return(list(roc = rocobject,
pr = probject,
f1score = f1object,
threshold = threshold, prefthreshold=preferenceThreshold,
TP = TP, TN=TN,
FP= FP, FN=FN))
}
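A hypothetical call sketch; the shape of `eval` (an object whose `thresholdSummary` holds per-threshold counts with an `Eval` column) is assumed from the code above:

```r
# Sketch: render the threshold-linked plots for row 10 of the threshold summary
res <- plotShiny(eval, pointOfInterest = 10)
res$roc        # plotly ROC curve with the selected threshold marked
res$threshold  # prediction threshold at that row
c(res$TP, res$FP, res$TN, res$FN)  # confusion counts at that threshold
```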
plotCovariateSummary <- function(covariateSummary){
#writeLines(paste(colnames(covariateSummary)))
#writeLines(paste(covariateSummary[1,]))
# remove na values
covariateSummary$CovariateMeanWithNoOutcome[is.na(covariateSummary$CovariateMeanWithNoOutcome)] <- 0
covariateSummary$CovariateMeanWithOutcome[is.na(covariateSummary$CovariateMeanWithOutcome)] <- 0
if(!'covariateValue'%in%colnames(covariateSummary)){
covariateSummary$covariateValue <- 1
}
if(sum(is.na(covariateSummary$covariateValue))>0){
covariateSummary$covariateValue[is.na(covariateSummary$covariateValue)] <- 0
}
  # SPEED EDIT: remove the non-model variables (zero coefficients)
covariateSummary <- covariateSummary[covariateSummary$covariateValue!=0,]
# save dots based on coef value
covariateSummary$size <- abs(covariateSummary$covariateValue)
covariateSummary$size[is.na(covariateSummary$size)] <- 4
covariateSummary$size <- 4+4*covariateSummary$size/max(covariateSummary$size)
# color based on analysis id
covariateSummary$color <- sapply(covariateSummary$covariateName, function(x) ifelse(is.na(x), '', strsplit(as.character(x), ' ')[[1]][1]))
l <- list(x = 0.01, y = 1,
font = list(
family = "sans-serif",
size = 10,
color = "#000"),
bgcolor = "#E2E2E2",
bordercolor = "#FFFFFF",
borderwidth = 1)
#covariateSummary$annotation <- sapply(covariateSummary$covariateName, getName)
covariateSummary$annotation <- covariateSummary$covariateName
ind <- covariateSummary$CovariateMeanWithNoOutcome <=1 & covariateSummary$CovariateMeanWithOutcome <= 1
# create two plots -1 or less or g1
binary <- plotly::plot_ly(x = covariateSummary$CovariateMeanWithNoOutcome[ind],
#size = covariateSummary$size[ind],
showlegend = F) %>%
plotly::add_markers(y = covariateSummary$CovariateMeanWithOutcome[ind],
color=factor(covariateSummary$color[ind]),
text = paste(covariateSummary$annotation[ind]),
showlegend = T
) %>%
plotly::add_trace(x= c(0,1), y = c(0,1),mode = 'lines',
line = list(dash = "dash"), color = I('black'),
type='scatter', showlegend = FALSE) %>%
plotly::layout(#title = 'Prevalance of baseline predictors in persons with and without outcome',
xaxis = list(title = "Prevalance in persons without outcome",
range = c(0, 1)),
yaxis = list(title = "Prevalance in persons with outcome",
range = c(0, 1)),
legend = l, showlegend = T)
if(sum(!ind)>0){
maxValue <- max(c(covariateSummary$CovariateMeanWithNoOutcome[!ind],
covariateSummary$CovariateMeanWithOutcome[!ind]), na.rm = T)
meas <- plotly::plot_ly(x = covariateSummary$CovariateMeanWithNoOutcome[!ind] ) %>%
plotly::add_markers(y = covariateSummary$CovariateMeanWithOutcome[!ind],
text = paste(covariateSummary$annotation[!ind])) %>%
plotly::add_trace(x= c(0,maxValue), y = c(0,maxValue),mode = 'lines',
line = list(dash = "dash"), color = I('black'),
type='scatter', showlegend = FALSE) %>%
plotly::layout(#title = 'Prevalance of baseline predictors in persons with and without outcome',
xaxis = list(title = "Mean in persons without outcome"),
yaxis = list(title = "Mean in persons with outcome"),
showlegend = FALSE)
} else {
meas <- NULL
}
return(list(binary=binary,
meas = meas))
}
plotPredictedPDF <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$thresholdSummary$Eval==type
x<- evaluation$thresholdSummary[ind,c('predictionThreshold','truePositiveCount','trueNegativeCount',
'falsePositiveCount','falseNegativeCount')]
x<- x[order(x$predictionThreshold,-x$truePositiveCount, -x$falsePositiveCount),]
x$out <- c(x$truePositiveCount[-length(x$truePositiveCount)]-x$truePositiveCount[-1], x$truePositiveCount[length(x$truePositiveCount)])
x$nout <- c(x$falsePositiveCount[-length(x$falsePositiveCount)]-x$falsePositiveCount[-1], x$falsePositiveCount[length(x$falsePositiveCount)])
vals <- c()
for(i in 1:length(x$predictionThreshold)){
if(i!=length(x$predictionThreshold)){
upper <- x$predictionThreshold[i+1]} else {upper <- min(x$predictionThreshold[i]+0.01,1)}
val <- x$predictionThreshold[i]+runif(x$out[i])*(upper-x$predictionThreshold[i])
vals <- c(val, vals)
}
  vals <- vals[!is.na(vals)]
vals2 <- c()
for(i in 1:length(x$predictionThreshold)){
if(i!=length(x$predictionThreshold)){
upper <- x$predictionThreshold[i+1]} else {upper <- min(x$predictionThreshold[i]+0.01,1)}
val2 <- x$predictionThreshold[i]+runif(x$nout[i])*(upper-x$predictionThreshold[i])
vals2 <- c(val2, vals2)
}
  vals2 <- vals2[!is.na(vals2)]
x <- rbind(data.frame(variable=rep('outcome',length(vals)), value=vals),
data.frame(variable=rep('No outcome',length(vals2)), value=vals2)
)
  plot <- ggplot2::ggplot(x, ggplot2::aes(x = value,
                                          group = variable,
                                          fill = variable)) +
    ggplot2::geom_density(alpha = .3) +
    ggplot2::scale_x_continuous("Prediction Threshold") + #, limits=c(0,1)) +
    ggplot2::scale_y_continuous("Density") +
    ggplot2::guides(fill = ggplot2::guide_legend(title = "Class"))
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 4.5, dpi = 400)
return(plot)
}
plotPreferencePDF <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$thresholdSummary$Eval==type
x<- evaluation$thresholdSummary[ind,c('preferenceThreshold','truePositiveCount','trueNegativeCount',
'falsePositiveCount','falseNegativeCount')]
x<- x[order(x$preferenceThreshold,-x$truePositiveCount),]
x$out <- c(x$truePositiveCount[-length(x$truePositiveCount)]-x$truePositiveCount[-1], x$truePositiveCount[length(x$truePositiveCount)])
x$nout <- c(x$falsePositiveCount[-length(x$falsePositiveCount)]-x$falsePositiveCount[-1], x$falsePositiveCount[length(x$falsePositiveCount)])
vals <- c()
for(i in 1:length(x$preferenceThreshold)){
if(i!=length(x$preferenceThreshold)){
upper <- x$preferenceThreshold[i+1]} else {upper <- 1}
val <- x$preferenceThreshold[i]+runif(x$out[i])*(upper-x$preferenceThreshold[i])
vals <- c(val, vals)
}
vals[!is.na(vals)]
vals2 <- c()
for(i in 1:length(x$preferenceThreshold)){
if(i!=length(x$preferenceThreshold)){
upper <- x$preferenceThreshold[i+1]} else {upper <- 1}
val2 <- x$preferenceThreshold[i]+runif(x$nout[i])*(upper-x$preferenceThreshold[i])
vals2 <- c(val2, vals2)
}
vals2[!is.na(vals2)]
x <- rbind(data.frame(variable=rep('outcome',length(vals)), value=vals),
data.frame(variable=rep('No outcome',length(vals2)), value=vals2)
)
plot <- ggplot2::ggplot(x, ggplot2::aes(x=x$value,
group=x$variable,
fill=x$variable)) +
ggplot2::geom_density(ggplot2::aes(x=x$value, fill=x$variable), alpha=.3) +
ggplot2::scale_x_continuous("Preference Threshold")+#, limits=c(0,1)) +
ggplot2::scale_y_continuous("Density") +
ggplot2::guides(fill=ggplot2::guide_legend(title="Class"))
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 4.5, dpi = 400)
return(plot)
}
plotDemographicSummary <- function(evaluation, type='test', fileName=NULL){
if (!all(is.na(evaluation$demographicSummary$averagePredictedProbability))){
ind <- evaluation$demographicSummary$Eval==type
x<- evaluation$demographicSummary[ind,colnames(evaluation$demographicSummary)%in%c('ageGroup','genGroup','averagePredictedProbability',
'PersonCountAtRisk', 'PersonCountWithOutcome')]
# remove -1 values:
x <- x[x$PersonCountWithOutcome != -1,]
if(nrow(x)==0){
return(NULL)
}
# end remove -1 values
x$observed <- x$PersonCountWithOutcome/x$PersonCountAtRisk
x <- x[,colnames(x)%in%c('ageGroup','genGroup','averagePredictedProbability','observed')]
# if age or gender missing add
if(sum(colnames(x)=='ageGroup')==1 && sum(colnames(x)=='genGroup')==0 ){
x$genGroup = rep('Non', nrow(x))
evaluation$demographicSummary$genGroup = rep('Non', nrow(evaluation$demographicSummary))
}
if(sum(colnames(x)=='ageGroup')==0 && sum(colnames(x)=='genGroup')==1 ){
x$ageGroup = rep('-1', nrow(x))
evaluation$demographicSummary$ageGroup = rep('-1', nrow(evaluation$demographicSummary))
}
x <- reshape2::melt(x, id.vars=c('ageGroup','genGroup'))
# 1.96*StDevPredictedProbability
ci <- evaluation$demographicSummary[,colnames(evaluation$demographicSummary)%in%c('ageGroup','genGroup','averagePredictedProbability','StDevPredictedProbability')]
ci$StDevPredictedProbability[is.na(ci$StDevPredictedProbability)] <- 1
ci$lower <- ci$averagePredictedProbability-1.96*ci$StDevPredictedProbability
ci$lower[ci$lower <0] <- 0
ci$upper <- ci$averagePredictedProbability+1.96*ci$StDevPredictedProbability
ci$upper[ci$upper >1] <- max(ci$upper[ci$upper <1])
x$age <- gsub('Age group:','', x$ageGroup)
x$age <- factor(x$age,levels=c(" 0-4"," 5-9"," 10-14",
" 15-19"," 20-24"," 25-29"," 30-34"," 35-39"," 40-44",
" 45-49"," 50-54"," 55-59"," 60-64"," 65-69"," 70-74",
" 75-79"," 80-84"," 85-89"," 90-94"," 95-99","-1"),ordered=TRUE)
x <- merge(x, ci[,c('ageGroup','genGroup','lower','upper')], by=c('ageGroup','genGroup'))
plot <- ggplot2::ggplot(data=x,
ggplot2::aes(x=age, group=variable*genGroup)) +
ggplot2::geom_line(ggplot2::aes(y=value, group=variable,
color=variable,
linetype = variable))+
ggplot2::geom_ribbon(data=x[x$variable!='observed',],
ggplot2::aes(ymin=lower, ymax=upper
, group=genGroup),
fill="blue", alpha="0.2") +
ggplot2::facet_grid(.~ genGroup, scales = "free") +
ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 90, hjust = 1)) +
#ggplot2::coord_flip() +
ggplot2::scale_y_continuous("Fraction") +
ggplot2::scale_x_discrete("Age") +
ggplot2::scale_color_manual(values = c("royalblue4","red"),
guide = ggplot2::guide_legend(title = NULL),
labels = c("Expected", "Observed")) +
ggplot2::guides(linetype=FALSE)
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 7, height = 4.5, dpi = 400)
return(plot)
}
}
#=============
# CALIBRATIONSUMMARY PLOTS
#=============
#' Plot the calibration
#'
#' @details
#' Create a plot showing the calibration
#' #'
#' @param evaluation A prediction object as generated using the
#' \code{\link{runPlp}} function.
#' @param type options: 'train' or test'
#' @param fileName Name of the file where the plot should be saved, for example
#' 'plot.png'. See the function \code{ggsave} in the ggplot2 package for
#' supported file formats.
#'
#' @return
#' A ggplot object. Use the \code{\link[ggplot2]{ggsave}} function to save to file in a different
#' format.
#'
#' @export
plotSparseCalibration <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$calibrationSummary$Eval==type
x<- evaluation$calibrationSummary[ind,c('averagePredictedProbability','observedIncidence')]
maxVal <- max(x$averagePredictedProbability,x$observedIncidence)
model <- stats::lm(observedIncidence~averagePredictedProbability, data=x)
res <- model$coefficients
names(res) <- c('Intercept','Gradient')
# confidence int
interceptConf <- stats::confint(model)[1,]
gradientConf <- stats::confint(model)[2,]
cis <- data.frame(lci = interceptConf[1]+seq(0,1,length.out = nrow(x))*gradientConf[1],
uci = interceptConf[2]+seq(0,1,length.out = nrow(x))*gradientConf[2],
x=seq(0,1,length.out = nrow(x)))
x <- cbind(x, cis)
# TODO: CHECK INPUT
plot <- ggplot2::ggplot(data=x,
ggplot2::aes(x=averagePredictedProbability, y=observedIncidence
)) +
ggplot2::geom_ribbon(ggplot2::aes(ymin=lci,ymax=uci, x=x),
fill="blue", alpha="0.2") +
ggplot2::geom_point(size=1, color='darkblue') +
ggplot2::coord_cartesian(ylim = c(0, maxVal), xlim =c(0,maxVal)) +
ggplot2::geom_abline(intercept = 0, slope = 1, linetype = 2, size=1,
show.legend = TRUE) +
ggplot2::geom_abline(intercept = res['Intercept'], slope = res['Gradient'],
linetype = 1,show.legend = TRUE,
color='darkblue') +
ggplot2::scale_x_continuous("Average Predicted Probability") +
ggplot2::scale_y_continuous("Observed Fraction With Outcome")
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 3.5, dpi = 400)
return(plot)
}
#=============
# CALIBRATIONSUMMARY PLOTS 2
#=============
#' Plot the conventional calibration
#'
#' @details
#' Create a plot showing the calibration
#' #'
#' @param evaluation A prediction object as generated using the
#' \code{\link{runPlp}} function.
#' @param type options: 'train' or test'
#' @param fileName Name of the file where the plot should be saved, for example
#' 'plot.png'. See the function \code{ggsave} in the ggplot2 package for
#' supported file formats.
#'
#' @return
#' A ggplot object. Use the \code{\link[ggplot2]{ggsave}} function to save to file in a different
#' format.
#'
#' @export
plotSparseCalibration2 <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$calibrationSummary$Eval==type
x<- evaluation$calibrationSummary[ind,c('averagePredictedProbability','observedIncidence', 'PersonCountAtRisk')]
cis <- apply(x, 1, function(x) binom.test(x[2]*x[3], x[3], alternative = c("two.sided"), conf.level = 0.95)$conf.int)
x$lci <- cis[1,]
x$uci <- cis[2,]
maxes <- max(max(x$averagePredictedProbability), max(x$observedIncidence))*1.1
# TODO: CHECK INPUT
limits <- ggplot2::aes(ymax = x$uci, ymin= x$lci)
plot <- ggplot2::ggplot(data=x,
ggplot2::aes(x=averagePredictedProbability, y=observedIncidence
)) +
ggplot2::geom_point(size=2, color='black') +
ggplot2::geom_errorbar(limits) +
#ggplot2::geom_smooth(method=lm, se=F, colour='darkgrey') +
ggplot2::geom_line(colour='darkgrey') +
ggplot2::geom_abline(intercept = 0, slope = 1, linetype = 5, size=0.4,
show.legend = TRUE) +
ggplot2::scale_x_continuous("Average Predicted Probability") +
ggplot2::scale_y_continuous("Observed Fraction With Outcome") +
ggplot2::coord_cartesian(xlim = c(0, maxes), ylim=c(0,maxes))
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 3.5, dpi = 400)
return(plot)
}
plotPredictionDistribution <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$predictionDistribution$Eval==type
x<- evaluation$predictionDistribution[ind,]
#(x=Class, y=predictedProbabllity sequence: min->P05->P25->Median->P75->P95->max)
plot <- ggplot2::ggplot(x, ggplot2::aes(x=as.factor(class),
ymin=MinPredictedProbability,
lower=P25PredictedProbability,
middle=MedianPredictedProbability,
upper=P75PredictedProbability,
ymax=MaxPredictedProbability,
color=as.factor(class))) +
ggplot2::coord_flip() +
ggplot2::geom_boxplot(stat="identity") +
ggplot2::scale_x_discrete("Class") +
ggplot2::scale_y_continuous("Predicted Probability") +
ggplot2::theme(legend.position="none") +
ggplot2::geom_segment(ggplot2::aes(x = 0.9, y = x$P05PredictedProbability[x$class==0],
xend = 1.1, yend = x$P05PredictedProbability[x$class==0]), color='red') +
ggplot2::geom_segment(ggplot2::aes(x = 0.9, y = x$P95PredictedProbability[x$class==0],
xend = 1.1, yend = x$P95PredictedProbability[x$class==0]), color='red') +
ggplot2::geom_segment(ggplot2::aes(x = 1.9, y = x$P05PredictedProbability[x$class==1],
xend = 2.1, yend = x$P05PredictedProbability[x$class==1])) +
ggplot2::geom_segment(ggplot2::aes(x = 1.9, y = x$P95PredictedProbability[x$class==1],
xend = 2.1, yend = x$P95PredictedProbability[x$class==1]))
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 4.5, dpi = 400)
return(plot)
} | /OhdsiEurope2019/plots.R | no_license | OHDSI/ShinyDeploy | R | false | false | 22,096 | r | #============ DYNAMIC PLOTS ======================
#++++++++++++++++++++++++++++++++++++++++++++++++++
plotShiny <- function(eval, pointOfInterest){
data <- eval$thresholdSummary[eval$thresholdSummary$Eval%in%c('test','validation'),]
# pointOfInterest is a row index selecting one threshold row from the summary
pointOfInterest <- data[pointOfInterest,]
rocobject <- plotly::plot_ly(x = 1-c(0,data$specificity,1)) %>%
plotly::add_lines(y = c(1,data$sensitivity,0),name = "hv",
text = paste('Risk Threshold:',c(0,data$predictionThreshold,1)),
line = list(shape = "hv",
color = 'rgb(22, 96, 167)'),
fill = 'tozeroy') %>%
plotly::add_trace(x= c(0,1), y = c(0,1),mode = 'lines',
line = list(dash = "dash"), color = I('black'),
type='scatter') %>%
plotly::add_trace(x= 1-pointOfInterest$specificity, y=pointOfInterest$sensitivity,
mode = 'markers', symbols='x') %>% # change the colour of this!
plotly::add_lines(x=c(1-pointOfInterest$specificity, 1-pointOfInterest$specificity),
y = c(0,1),
line = list(dash ='solid',
color = 'black')) %>%
plotly::layout(title = "ROC Plot",
xaxis = list(title = "1-specificity"),
yaxis = list (title = "Sensitivity"),
showlegend = FALSE)
popAv <- data$trueCount[1]/(data$trueCount[1] + data$falseCount[1])
probject <- plotly::plot_ly(x = data$sensitivity) %>%
plotly::add_lines(y = data$positivePredictiveValue, name = "hv",
text = paste('Risk Threshold:',data$predictionThreshold),
line = list(shape = "hv",
color = 'rgb(22, 96, 167)'),
fill = 'tozeroy') %>%
plotly::add_trace(x= c(0,1), y = c(popAv,popAv),mode = 'lines',
line = list(dash = "dash"), color = I('red'),
type='scatter') %>%
plotly::add_trace(x= pointOfInterest$sensitivity, y=pointOfInterest$positivePredictiveValue,
mode = 'markers', symbols='x') %>%
plotly::add_lines(x=c(pointOfInterest$sensitivity, pointOfInterest$sensitivity),
y = c(0,1),
line = list(dash ='solid',
color = 'black')) %>%
plotly::layout(title = "PR Plot",
xaxis = list(title = "Recall"),
yaxis = list (title = "Precision"),
showlegend = FALSE)
# add F1 score
f1object <- plotly::plot_ly(x = data$predictionThreshold) %>%
plotly::add_lines(y = data$f1Score, name = "hv",
text = paste('Risk Threshold:',data$predictionThreshold),
line = list(shape = "hv",
color = 'rgb(22, 96, 167)'),
fill = 'tozeroy') %>%
plotly::add_trace(x= pointOfInterest$predictionThreshold, y=pointOfInterest$f1Score,
mode = 'markers', symbols='x') %>%
plotly::add_lines(x=c(pointOfInterest$predictionThreshold, pointOfInterest$predictionThreshold),
y = c(0,1),
line = list(dash ='solid',
color = 'black')) %>%
plotly::layout(title = "F1-Score Plot",
xaxis = list(title = "Prediction Threshold"),
yaxis = list (title = "F1-Score"),
showlegend = FALSE)
# create 2x2 table with TP, FP, TN, FN and threshold
threshold <- pointOfInterest$predictionThreshold
TP <- pointOfInterest$truePositiveCount
TN <- pointOfInterest$trueNegativeCount
FP <- pointOfInterest$falsePositiveCount
FN <- pointOfInterest$falseNegativeCount
preferenceThreshold <- pointOfInterest$preferenceThreshold
return(list(roc = rocobject,
pr = probject,
f1score = f1object,
threshold = threshold, prefthreshold=preferenceThreshold,
TP = TP, TN=TN,
FP= FP, FN=FN))
}
plotCovariateSummary <- function(covariateSummary){
#writeLines(paste(colnames(covariateSummary)))
#writeLines(paste(covariateSummary[1,]))
# remove na values
covariateSummary$CovariateMeanWithNoOutcome[is.na(covariateSummary$CovariateMeanWithNoOutcome)] <- 0
covariateSummary$CovariateMeanWithOutcome[is.na(covariateSummary$CovariateMeanWithOutcome)] <- 0
if(!'covariateValue'%in%colnames(covariateSummary)){
covariateSummary$covariateValue <- 1
}
if(sum(is.na(covariateSummary$covariateValue))>0){
covariateSummary$covariateValue[is.na(covariateSummary$covariateValue)] <- 0
}
# SPEED EDIT: remove the non-model variables
covariateSummary <- covariateSummary[covariateSummary$covariateValue!=0,]
# scale dot size by coefficient magnitude
covariateSummary$size <- abs(covariateSummary$covariateValue)
covariateSummary$size[is.na(covariateSummary$size)] <- 4
covariateSummary$size <- 4+4*covariateSummary$size/max(covariateSummary$size)
# color based on analysis id
covariateSummary$color <- sapply(covariateSummary$covariateName, function(x) ifelse(is.na(x), '', strsplit(as.character(x), ' ')[[1]][1]))
l <- list(x = 0.01, y = 1,
font = list(
family = "sans-serif",
size = 10,
color = "#000"),
bgcolor = "#E2E2E2",
bordercolor = "#FFFFFF",
borderwidth = 1)
#covariateSummary$annotation <- sapply(covariateSummary$covariateName, getName)
covariateSummary$annotation <- covariateSummary$covariateName
ind <- covariateSummary$CovariateMeanWithNoOutcome <=1 & covariateSummary$CovariateMeanWithOutcome <= 1
# create two plots -1 or less or g1
binary <- plotly::plot_ly(x = covariateSummary$CovariateMeanWithNoOutcome[ind],
#size = covariateSummary$size[ind],
showlegend = F) %>%
plotly::add_markers(y = covariateSummary$CovariateMeanWithOutcome[ind],
color=factor(covariateSummary$color[ind]),
text = paste(covariateSummary$annotation[ind]),
showlegend = T
) %>%
plotly::add_trace(x= c(0,1), y = c(0,1),mode = 'lines',
line = list(dash = "dash"), color = I('black'),
type='scatter', showlegend = FALSE) %>%
plotly::layout(#title = 'Prevalance of baseline predictors in persons with and without outcome',
xaxis = list(title = "Prevalance in persons without outcome",
range = c(0, 1)),
yaxis = list(title = "Prevalance in persons with outcome",
range = c(0, 1)),
legend = l, showlegend = T)
if(sum(!ind)>0){
maxValue <- max(c(covariateSummary$CovariateMeanWithNoOutcome[!ind],
covariateSummary$CovariateMeanWithOutcome[!ind]), na.rm = T)
meas <- plotly::plot_ly(x = covariateSummary$CovariateMeanWithNoOutcome[!ind] ) %>%
plotly::add_markers(y = covariateSummary$CovariateMeanWithOutcome[!ind],
text = paste(covariateSummary$annotation[!ind])) %>%
plotly::add_trace(x= c(0,maxValue), y = c(0,maxValue),mode = 'lines',
line = list(dash = "dash"), color = I('black'),
type='scatter', showlegend = FALSE) %>%
plotly::layout(#title = 'Prevalance of baseline predictors in persons with and without outcome',
xaxis = list(title = "Mean in persons without outcome"),
yaxis = list(title = "Mean in persons with outcome"),
showlegend = FALSE)
} else {
meas <- NULL
}
return(list(binary=binary,
meas = meas))
}
plotPredictedPDF <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$thresholdSummary$Eval==type
x<- evaluation$thresholdSummary[ind,c('predictionThreshold','truePositiveCount','trueNegativeCount',
'falsePositiveCount','falseNegativeCount')]
x<- x[order(x$predictionThreshold,-x$truePositiveCount, -x$falsePositiveCount),]
x$out <- c(x$truePositiveCount[-length(x$truePositiveCount)]-x$truePositiveCount[-1], x$truePositiveCount[length(x$truePositiveCount)])
x$nout <- c(x$falsePositiveCount[-length(x$falsePositiveCount)]-x$falsePositiveCount[-1], x$falsePositiveCount[length(x$falsePositiveCount)])
vals <- c()
for(i in 1:length(x$predictionThreshold)){
if(i!=length(x$predictionThreshold)){
upper <- x$predictionThreshold[i+1]} else {upper <- min(x$predictionThreshold[i]+0.01,1)}
val <- x$predictionThreshold[i]+runif(x$out[i])*(upper-x$predictionThreshold[i])
vals <- c(val, vals)
}
vals <- vals[!is.na(vals)]
vals2 <- c()
for(i in 1:length(x$predictionThreshold)){
if(i!=length(x$predictionThreshold)){
upper <- x$predictionThreshold[i+1]} else {upper <- min(x$predictionThreshold[i]+0.01,1)}
val2 <- x$predictionThreshold[i]+runif(x$nout[i])*(upper-x$predictionThreshold[i])
vals2 <- c(val2, vals2)
}
vals2 <- vals2[!is.na(vals2)]
x <- rbind(data.frame(variable=rep('outcome',length(vals)), value=vals),
data.frame(variable=rep('No outcome',length(vals2)), value=vals2)
)
plot <- ggplot2::ggplot(x, ggplot2::aes(x=x$value,
group=x$variable,
fill=x$variable)) +
ggplot2::geom_density(ggplot2::aes(x=x$value, fill=x$variable), alpha=.3) +
ggplot2::scale_x_continuous("Prediction Threshold")+#, limits=c(0,1)) +
ggplot2::scale_y_continuous("Density") +
ggplot2::guides(fill=ggplot2::guide_legend(title="Class"))
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 4.5, dpi = 400)
return(plot)
}
plotPreferencePDF <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$thresholdSummary$Eval==type
x<- evaluation$thresholdSummary[ind,c('preferenceThreshold','truePositiveCount','trueNegativeCount',
'falsePositiveCount','falseNegativeCount')]
x<- x[order(x$preferenceThreshold,-x$truePositiveCount),]
x$out <- c(x$truePositiveCount[-length(x$truePositiveCount)]-x$truePositiveCount[-1], x$truePositiveCount[length(x$truePositiveCount)])
x$nout <- c(x$falsePositiveCount[-length(x$falsePositiveCount)]-x$falsePositiveCount[-1], x$falsePositiveCount[length(x$falsePositiveCount)])
vals <- c()
for(i in 1:length(x$preferenceThreshold)){
if(i!=length(x$preferenceThreshold)){
upper <- x$preferenceThreshold[i+1]} else {upper <- 1}
val <- x$preferenceThreshold[i]+runif(x$out[i])*(upper-x$preferenceThreshold[i])
vals <- c(val, vals)
}
vals <- vals[!is.na(vals)]
vals2 <- c()
for(i in 1:length(x$preferenceThreshold)){
if(i!=length(x$preferenceThreshold)){
upper <- x$preferenceThreshold[i+1]} else {upper <- 1}
val2 <- x$preferenceThreshold[i]+runif(x$nout[i])*(upper-x$preferenceThreshold[i])
vals2 <- c(val2, vals2)
}
vals2 <- vals2[!is.na(vals2)]
x <- rbind(data.frame(variable=rep('outcome',length(vals)), value=vals),
data.frame(variable=rep('No outcome',length(vals2)), value=vals2)
)
plot <- ggplot2::ggplot(x, ggplot2::aes(x=x$value,
group=x$variable,
fill=x$variable)) +
ggplot2::geom_density(ggplot2::aes(x=x$value, fill=x$variable), alpha=.3) +
ggplot2::scale_x_continuous("Preference Threshold")+#, limits=c(0,1)) +
ggplot2::scale_y_continuous("Density") +
ggplot2::guides(fill=ggplot2::guide_legend(title="Class"))
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 4.5, dpi = 400)
return(plot)
}
plotDemographicSummary <- function(evaluation, type='test', fileName=NULL){
if (!all(is.na(evaluation$demographicSummary$averagePredictedProbability))){
ind <- evaluation$demographicSummary$Eval==type
x<- evaluation$demographicSummary[ind,colnames(evaluation$demographicSummary)%in%c('ageGroup','genGroup','averagePredictedProbability',
'PersonCountAtRisk', 'PersonCountWithOutcome')]
# remove -1 values:
x <- x[x$PersonCountWithOutcome != -1,]
if(nrow(x)==0){
return(NULL)
}
# end remove -1 values
x$observed <- x$PersonCountWithOutcome/x$PersonCountAtRisk
x <- x[,colnames(x)%in%c('ageGroup','genGroup','averagePredictedProbability','observed')]
# if age or gender missing add
if(sum(colnames(x)=='ageGroup')==1 && sum(colnames(x)=='genGroup')==0 ){
x$genGroup = rep('Non', nrow(x))
evaluation$demographicSummary$genGroup = rep('Non', nrow(evaluation$demographicSummary))
}
if(sum(colnames(x)=='ageGroup')==0 && sum(colnames(x)=='genGroup')==1 ){
x$ageGroup = rep('-1', nrow(x))
evaluation$demographicSummary$ageGroup = rep('-1', nrow(evaluation$demographicSummary))
}
x <- reshape2::melt(x, id.vars=c('ageGroup','genGroup'))
# 1.96*StDevPredictedProbability
ci <- evaluation$demographicSummary[,colnames(evaluation$demographicSummary)%in%c('ageGroup','genGroup','averagePredictedProbability','StDevPredictedProbability')]
ci$StDevPredictedProbability[is.na(ci$StDevPredictedProbability)] <- 1
ci$lower <- ci$averagePredictedProbability-1.96*ci$StDevPredictedProbability
ci$lower[ci$lower <0] <- 0
ci$upper <- ci$averagePredictedProbability+1.96*ci$StDevPredictedProbability
ci$upper[ci$upper >1] <- max(ci$upper[ci$upper <1])
x$age <- gsub('Age group:','', x$ageGroup)
x$age <- factor(x$age,levels=c(" 0-4"," 5-9"," 10-14",
" 15-19"," 20-24"," 25-29"," 30-34"," 35-39"," 40-44",
" 45-49"," 50-54"," 55-59"," 60-64"," 65-69"," 70-74",
" 75-79"," 80-84"," 85-89"," 90-94"," 95-99","-1"),ordered=TRUE)
x <- merge(x, ci[,c('ageGroup','genGroup','lower','upper')], by=c('ageGroup','genGroup'))
plot <- ggplot2::ggplot(data=x,
ggplot2::aes(x=age, group=interaction(variable, genGroup))) +
ggplot2::geom_line(ggplot2::aes(y=value, group=variable,
color=variable,
linetype = variable))+
ggplot2::geom_ribbon(data=x[x$variable!='observed',],
ggplot2::aes(ymin=lower, ymax=upper
, group=genGroup),
fill="blue", alpha=0.2) +
ggplot2::facet_grid(.~ genGroup, scales = "free") +
ggplot2::theme(axis.text.x = ggplot2::element_text(angle = 90, hjust = 1)) +
#ggplot2::coord_flip() +
ggplot2::scale_y_continuous("Fraction") +
ggplot2::scale_x_discrete("Age") +
ggplot2::scale_color_manual(values = c("royalblue4","red"),
guide = ggplot2::guide_legend(title = NULL),
labels = c("Expected", "Observed")) +
ggplot2::guides(linetype = "none")
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 7, height = 4.5, dpi = 400)
return(plot)
}
}
#=============
# CALIBRATIONSUMMARY PLOTS
#=============
#' Plot the calibration
#'
#' @details
#' Create a plot showing the calibration
#'
#' @param evaluation A prediction object as generated using the
#' \code{\link{runPlp}} function.
#' @param type options: 'train' or 'test'
#' @param fileName Name of the file where the plot should be saved, for example
#' 'plot.png'. See the function \code{ggsave} in the ggplot2 package for
#' supported file formats.
#'
#' @return
#' A ggplot object. Use the \code{\link[ggplot2]{ggsave}} function to save to file in a different
#' format.
#'
#' @export
plotSparseCalibration <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$calibrationSummary$Eval==type
x<- evaluation$calibrationSummary[ind,c('averagePredictedProbability','observedIncidence')]
maxVal <- max(x$averagePredictedProbability,x$observedIncidence)
model <- stats::lm(observedIncidence~averagePredictedProbability, data=x)
res <- model$coefficients
names(res) <- c('Intercept','Gradient')
# confidence int
interceptConf <- stats::confint(model)[1,]
gradientConf <- stats::confint(model)[2,]
cis <- data.frame(lci = interceptConf[1]+seq(0,1,length.out = nrow(x))*gradientConf[1],
uci = interceptConf[2]+seq(0,1,length.out = nrow(x))*gradientConf[2],
x=seq(0,1,length.out = nrow(x)))
x <- cbind(x, cis)
# TODO: CHECK INPUT
plot <- ggplot2::ggplot(data=x,
ggplot2::aes(x=averagePredictedProbability, y=observedIncidence
)) +
ggplot2::geom_ribbon(ggplot2::aes(ymin=lci,ymax=uci, x=x),
fill="blue", alpha=0.2) +
ggplot2::geom_point(size=1, color='darkblue') +
ggplot2::coord_cartesian(ylim = c(0, maxVal), xlim =c(0,maxVal)) +
ggplot2::geom_abline(intercept = 0, slope = 1, linetype = 2, size=1,
show.legend = TRUE) +
ggplot2::geom_abline(intercept = res['Intercept'], slope = res['Gradient'],
linetype = 1,show.legend = TRUE,
color='darkblue') +
ggplot2::scale_x_continuous("Average Predicted Probability") +
ggplot2::scale_y_continuous("Observed Fraction With Outcome")
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 3.5, dpi = 400)
return(plot)
}
#=============
# CALIBRATIONSUMMARY PLOTS 2
#=============
#' Plot the conventional calibration
#'
#' @details
#' Create a plot showing the calibration
#'
#' @param evaluation A prediction object as generated using the
#' \code{\link{runPlp}} function.
#' @param type options: 'train' or 'test'
#' @param fileName Name of the file where the plot should be saved, for example
#' 'plot.png'. See the function \code{ggsave} in the ggplot2 package for
#' supported file formats.
#'
#' @return
#' A ggplot object. Use the \code{\link[ggplot2]{ggsave}} function to save to file in a different
#' format.
#'
#' @export
plotSparseCalibration2 <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$calibrationSummary$Eval==type
x<- evaluation$calibrationSummary[ind,c('averagePredictedProbability','observedIncidence', 'PersonCountAtRisk')]
cis <- apply(x, 1, function(x) binom.test(round(x[2]*x[3]), x[3], alternative = c("two.sided"), conf.level = 0.95)$conf.int)
x$lci <- cis[1,]
x$uci <- cis[2,]
maxes <- max(max(x$averagePredictedProbability), max(x$observedIncidence))*1.1
# TODO: CHECK INPUT
limits <- ggplot2::aes(ymax = x$uci, ymin= x$lci)
plot <- ggplot2::ggplot(data=x,
ggplot2::aes(x=averagePredictedProbability, y=observedIncidence
)) +
ggplot2::geom_point(size=2, color='black') +
ggplot2::geom_errorbar(limits) +
#ggplot2::geom_smooth(method=lm, se=F, colour='darkgrey') +
ggplot2::geom_line(colour='darkgrey') +
ggplot2::geom_abline(intercept = 0, slope = 1, linetype = 5, size=0.4,
show.legend = TRUE) +
ggplot2::scale_x_continuous("Average Predicted Probability") +
ggplot2::scale_y_continuous("Observed Fraction With Outcome") +
ggplot2::coord_cartesian(xlim = c(0, maxes), ylim=c(0,maxes))
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 3.5, dpi = 400)
return(plot)
}
plotPredictionDistribution <- function(evaluation, type='test', fileName=NULL){
ind <- evaluation$predictionDistribution$Eval==type
x<- evaluation$predictionDistribution[ind,]
#(x=Class, y=predictedProbability sequence: min->P05->P25->Median->P75->P95->max)
plot <- ggplot2::ggplot(x, ggplot2::aes(x=as.factor(class),
ymin=MinPredictedProbability,
lower=P25PredictedProbability,
middle=MedianPredictedProbability,
upper=P75PredictedProbability,
ymax=MaxPredictedProbability,
color=as.factor(class))) +
ggplot2::coord_flip() +
ggplot2::geom_boxplot(stat="identity") +
ggplot2::scale_x_discrete("Class") +
ggplot2::scale_y_continuous("Predicted Probability") +
ggplot2::theme(legend.position="none") +
ggplot2::geom_segment(ggplot2::aes(x = 0.9, y = x$P05PredictedProbability[x$class==0],
xend = 1.1, yend = x$P05PredictedProbability[x$class==0]), color='red') +
ggplot2::geom_segment(ggplot2::aes(x = 0.9, y = x$P95PredictedProbability[x$class==0],
xend = 1.1, yend = x$P95PredictedProbability[x$class==0]), color='red') +
ggplot2::geom_segment(ggplot2::aes(x = 1.9, y = x$P05PredictedProbability[x$class==1],
xend = 2.1, yend = x$P05PredictedProbability[x$class==1])) +
ggplot2::geom_segment(ggplot2::aes(x = 1.9, y = x$P95PredictedProbability[x$class==1],
xend = 2.1, yend = x$P95PredictedProbability[x$class==1]))
if (!is.null(fileName))
ggplot2::ggsave(fileName, plot, width = 5, height = 4.5, dpi = 400)
return(plot)
} |
MBF=function(data,group){
n=tapply(data, group, length)
k=length(tapply(data, group, length))
xbar=tapply(data, group, mean)
var=tapply(data, group, var)
N=sum(n);
B=sum(n*(xbar-mean(data))^2)/sum((1-(n/sum(n)))*var);
df11=(sum((1-(n/sum(n)))*var)^2);
df12=sum(((1-(n/sum(n)))^2*(var)^2)/(n-1));
df2=(df11/df12);
df1=((sum((1-n/N)*var))^2)/((sum(var^2))+(sum(n*var/N))^2-(2*sum(n*var^2/N)));
pvalue=1-pf(B,df1,df2);
result=matrix(c(round(B,digits=4),round(df1),round(df2),round(pvalue,digits=4)))
rownames(result)=c("Test Statistic","df1","df2","p-value")
colnames(result)=c("Modified Welch")
return(t(result))
}
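A minimal usage sketch with hypothetical data (the groups, means and sizes below are illustrative only), assuming the `MBF` definition above:

```r
# Hypothetical three-group example with unequal variances and sample sizes.
set.seed(1)
y <- c(rnorm(10, mean = 0,   sd = 1),
       rnorm(12, mean = 0.5, sd = 2),
       rnorm(8,  mean = 1,   sd = 3))
g <- rep(c("A", "B", "C"), times = c(10, 12, 8))
MBF(y, g)
# Returns a 1x4 matrix holding the test statistic, rounded df1 and df2,
# and the p-value, with row name "Modified Welch".
```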
| /R/modifiedBrownForsythe.R | no_license | cran/doex | R | false | false | 646 | r |
######################################################################################
######## SENTIMENTAL ANALYSIS FOR NAMED ENTITIES COLLECTED FROM API-1 & API-2 ########
#Reference Source website : https://github.com/rOpenGov/rtimes
######################################################################################
#Importing the common positive and negative words lists into data sets
pos.words = scan('C:/DESKTOP/Home/twitter-sentiment-analysis-tutorial-201107-master/twitter-sentiment-analysis-tutorial-201107-master/data/opinion-lexicon-English/positive-words.txt', what='character', comment.char=';')
neg.words = scan('C:/DESKTOP/Home/twitter-sentiment-analysis-tutorial-201107-master/twitter-sentiment-analysis-tutorial-201107-master/data/opinion-lexicon-English/negative-words.txt', what='character', comment.char=';')
#Adding more common words to positive and negative databases
#Note: sentences are lowercased before matching, so lexicon entries must be lowercase
pos.words=c(pos.words, 'congrats', 'prizes', 'prize', 'thanks', 'thnx', 'grt', 'gr8', 'plz', 'trending', 'recovering', 'brainstorm', 'leader')
neg.words = c(neg.words, 'fight', 'fighting', 'wtf', 'arrest', 'no', 'not')
#list of named entities available from API 1
#View(person_entity_API_1)
#View(location_entity_API_1)
#View(organization_entity_API_1)
#View(date_entity_API_1)
#View(money_entity_API_1)
#list of named entities available from API 2
#View(person_entity_API_2_final)
#View(location_entity_API_2_final)
#View(organization_entity_API_2_final)
#View(money_entity_API_2_final)
#View(date_entity_API_2_final)
# Initialising the variables
sentences <- NULL
sentence <- NULL
#Function to score each sentence by the frequency of positive and negative words
score.sentiment = function(sentences, pos.words, neg.words, .progress='none')
{
require(plyr)
require(stringr)
list=lapply(sentences, function(sentence, pos.words, neg.words)
{
sentence = gsub('[[:punct:]]',' ',sentence)
sentence = gsub('[[:cntrl:]]','',sentence)
sentence = gsub('\\d+','',sentence)
sentence = gsub('\n','',sentence)
sentence = tolower(sentence)
word.list = str_split(sentence, '\\s+')
words = unlist(word.list)
pos.matches = match(words, pos.words)
neg.matches = match(words, neg.words)
pos.matches = !is.na(pos.matches)
neg.matches = !is.na(neg.matches)
pp=sum(pos.matches)
nn = sum(neg.matches)
score = sum(pos.matches) - sum(neg.matches)
list1=c(score, pp, nn)
return (list1)
}, pos.words, neg.words)
score_new=lapply(list, `[[`, 1)
pp1=score=lapply(list, `[[`, 2)
nn1=score=lapply(list, `[[`, 3)
scores.df = data.frame(score=score_new, text=sentences)
positive.df = data.frame(Positive=pp1, text=sentences)
negative.df = data.frame(Negative=nn1, text=sentences)
list_df=list(scores.df, positive.df, negative.df)
return(list_df)
}
##########################################################################################################
########################### SENTIMENTAL ANALYSIS FOR NAMED ENTITY FROM API - 1 ###########################
##########################################################################################################
#Assigning the sentiment obtained for each named entity from API-1
person_entity_API_1_result = score.sentiment(person_entity_API_1, pos.words, neg.words)
location_entity_API_1_result = score.sentiment(location_entity_API_1, pos.words, neg.words)
organization_entity_API_1_result = score.sentiment(organization_entity_API_1, pos.words, neg.words)
date_entity_API_1_result = score.sentiment(date_entity_API_1, pos.words, neg.words)
money_entity_API_1_result = score.sentiment(money_entity_API_1, pos.words, neg.words)
#View(person_entity_API_1_result)
#View(location_entity_API_1_result)
#View(organization_entity_API_1_result)
#View(date_entity_API_1_result)
#View(money_entity_API_1_result)
#list of named entities available from API 1
#View(person_entity_API_1)
#View(location_entity_API_1)
#View(organization_entity_API_1)
#View(date_entity_API_1)
#View(money_entity_API_1)
###########################################
###### SENTIMENT ANALYSIS FOR PERSON ######
###########################################
#Creating a copy of result data frame
person_entity_API_1_result_Column_1=person_entity_API_1_result[[1]]
person_entity_API_1_result_Column_2=person_entity_API_1_result[[2]]
person_entity_API_1_result_Column_3=person_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
person_entity_API_1_result_Score=person_entity_API_1_result_Column_1[1,]
person_entity_API_1_result_Positive=person_entity_API_1_result_Column_2[1,]
person_entity_API_1_result_Negative=person_entity_API_1_result_Column_3[1,]
person_entity_API_1_result_Score_1=melt(person_entity_API_1_result_Score, ,var='Score')
person_entity_API_1_result_Positive_2=melt(person_entity_API_1_result_Positive, ,var='Positive')
person_entity_API_1_result_Negative_3=melt(person_entity_API_1_result_Negative, ,var='Negative')
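#Note on the melt() calls above: they reshape a one-row wide data frame into
#long (variable/value) form; 'var' partially matches reshape2's variable.name
#argument. A minimal illustration (values here are made up):
# wide = data.frame(a = 1, b = 2)
# melt(wide, ,var = 'Score')
#  -> two rows, columns 'Score' (a, b) and 'value' (1, 2)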
person_entity_API_1_result_Score_1['Score'] = NULL
person_entity_API_1_result_Positive_2['Positive'] = NULL
person_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
person_entity_API_1_result_table1 = data.frame(Text=person_entity_API_1_result[1], Score=person_entity_API_1_result_Score_1)
person_entity_API_1_result_table2 = data.frame(Text=person_entity_API_1_result[2], Score=person_entity_API_1_result_Positive_2)
person_entity_API_1_result_table3 = data.frame(Text=person_entity_API_1_result[3], Score=person_entity_API_1_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
person_entity_API_1_result_table_final=data.frame(Text=person_entity_API_1_result_table1$Text.person.1, Score=person_entity_API_1_result_table1$Score.value, Positive=person_entity_API_1_result_table2$Score.value, Negative=person_entity_API_1_result_table3$Score.value)
#View(person_entity_API_1_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR LOCATION ######
###########################################
#Creating a copy of result data frame
location_entity_API_1_result_Column_1=location_entity_API_1_result[[1]]
location_entity_API_1_result_Column_2=location_entity_API_1_result[[2]]
location_entity_API_1_result_Column_3=location_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
location_entity_API_1_result_Score=location_entity_API_1_result_Column_1[1,]
location_entity_API_1_result_Positive=location_entity_API_1_result_Column_2[1,]
location_entity_API_1_result_Negative=location_entity_API_1_result_Column_3[1,]
location_entity_API_1_result_Score_1=melt(location_entity_API_1_result_Score, ,var='Score')
location_entity_API_1_result_Positive_2=melt(location_entity_API_1_result_Positive, ,var='Positive')
location_entity_API_1_result_Negative_3=melt(location_entity_API_1_result_Negative, ,var='Negative')
location_entity_API_1_result_Score_1['Score'] = NULL
location_entity_API_1_result_Positive_2['Positive'] = NULL
location_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
location_entity_API_1_result_table1 = data.frame(Text=location_entity_API_1_result[1], Score=location_entity_API_1_result_Score_1)
location_entity_API_1_result_table2 = data.frame(Text=location_entity_API_1_result[2], Score=location_entity_API_1_result_Positive_2)
location_entity_API_1_result_table3 = data.frame(Text=location_entity_API_1_result[3], Score=location_entity_API_1_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
location_entity_API_1_result_table_final=data.frame(Text=location_entity_API_1_result_table1$Text.location.1, Score=location_entity_API_1_result_table1$Score.value, Positive=location_entity_API_1_result_table2$Score.value, Negative=location_entity_API_1_result_table3$Score.value)
#View(location_entity_API_1_result_table_final)
#################################################
###### SENTIMENT ANALYSIS FOR ORGANIZATION ######
#################################################
#Creating a copy of result data frame
organization_entity_API_1_result_Column_1=organization_entity_API_1_result[[1]]
organization_entity_API_1_result_Column_2=organization_entity_API_1_result[[2]]
organization_entity_API_1_result_Column_3=organization_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
organization_entity_API_1_result_Score=organization_entity_API_1_result_Column_1[1,]
organization_entity_API_1_result_Positive=organization_entity_API_1_result_Column_2[1,]
organization_entity_API_1_result_Negative=organization_entity_API_1_result_Column_3[1,]
organization_entity_API_1_result_Score_1=melt(organization_entity_API_1_result_Score, ,var='Score')
organization_entity_API_1_result_Positive_2=melt(organization_entity_API_1_result_Positive, ,var='Positive')
organization_entity_API_1_result_Negative_3=melt(organization_entity_API_1_result_Negative, ,var='Negative')
organization_entity_API_1_result_Score_1['Score'] = NULL
organization_entity_API_1_result_Positive_2['Positive'] = NULL
organization_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
organization_entity_API_1_result_table1 = data.frame(Text=organization_entity_API_1_result[1], Score=organization_entity_API_1_result_Score_1)
organization_entity_API_1_result_table2 = data.frame(Text=organization_entity_API_1_result[2], Score=organization_entity_API_1_result_Positive_2)
organization_entity_API_1_result_table3 = data.frame(Text=organization_entity_API_1_result[3], Score=organization_entity_API_1_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
organization_entity_API_1_result_table_final=data.frame(Text=organization_entity_API_1_result_table1$Text.organization.1, Score=organization_entity_API_1_result_table1$Score.value, Positive=organization_entity_API_1_result_table2$Score.value, Negative=organization_entity_API_1_result_table3$Score.value)
#View(organization_entity_API_1_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR MONEY ######
###########################################
#Creating a copy of result data frame
money_entity_API_1_result_Column_1=money_entity_API_1_result[[1]]
money_entity_API_1_result_Column_2=money_entity_API_1_result[[2]]
money_entity_API_1_result_Column_3=money_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
money_entity_API_1_result_Score=money_entity_API_1_result_Column_1[1,]
money_entity_API_1_result_Positive=money_entity_API_1_result_Column_2[1,]
money_entity_API_1_result_Negative=money_entity_API_1_result_Column_3[1,]
money_entity_API_1_result_Score_1=melt(money_entity_API_1_result_Score, ,var='Score')
money_entity_API_1_result_Positive_2=melt(money_entity_API_1_result_Positive, ,var='Positive')
money_entity_API_1_result_Negative_3=melt(money_entity_API_1_result_Negative, ,var='Negative')
money_entity_API_1_result_Score_1['Score'] = NULL
money_entity_API_1_result_Positive_2['Positive'] = NULL
money_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
money_entity_API_1_result_table1 = data.frame(Text=money_entity_API_1_result[1], Score=money_entity_API_1_result_Score_1)
money_entity_API_1_result_table2 = data.frame(Text=money_entity_API_1_result[2], Score=money_entity_API_1_result_Positive_2)
money_entity_API_1_result_table3 = data.frame(Text=money_entity_API_1_result[3], Score=money_entity_API_1_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
money_entity_API_1_result_table_final=data.frame(Text=money_entity_API_1_result_table1$Text.money.1, Score=money_entity_API_1_result_table1$Score.value, Positive=money_entity_API_1_result_table2$Score.value, Negative=money_entity_API_1_result_table3$Score.value)
#View(money_entity_API_1_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR DATE ######
###########################################
#Creating a copy of result data frame
date_entity_API_1_result_Column_1=date_entity_API_1_result[[1]]
date_entity_API_1_result_Column_2=date_entity_API_1_result[[2]]
date_entity_API_1_result_Column_3=date_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
date_entity_API_1_result_Score=date_entity_API_1_result_Column_1[1,]
date_entity_API_1_result_Positive=date_entity_API_1_result_Column_2[1,]
date_entity_API_1_result_Negative=date_entity_API_1_result_Column_3[1,]
date_entity_API_1_result_Score_1=melt(date_entity_API_1_result_Score, ,var='Score')
date_entity_API_1_result_Positive_2=melt(date_entity_API_1_result_Positive, ,var='Positive')
date_entity_API_1_result_Negative_3=melt(date_entity_API_1_result_Negative, ,var='Negative')
date_entity_API_1_result_Score_1['Score'] = NULL
date_entity_API_1_result_Positive_2['Positive'] = NULL
date_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
date_entity_API_1_result_table1 = data.frame(Text=date_entity_API_1_result[1], Score=date_entity_API_1_result_Score_1)
date_entity_API_1_result_table2 = data.frame(Text=date_entity_API_1_result[2], Score=date_entity_API_1_result_Positive_2)
date_entity_API_1_result_table3 = data.frame(Text=date_entity_API_1_result[3], Score=date_entity_API_1_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
date_entity_API_1_result_table_final=data.frame(Text=date_entity_API_1_result_table1$Text.date.1, Score=date_entity_API_1_result_table1$Score.value, Positive=date_entity_API_1_result_table2$Score.value, Negative=date_entity_API_1_result_table3$Score.value)
#View(date_entity_API_1_result_table_final)
##########################################################################################################
########################### SENTIMENT ANALYSIS FOR NAMED ENTITIES FROM API - 2 ###########################
##########################################################################################################
#Assigning the sentiment obtained for each named entity from API-2
person_entity_API_2_result = score.sentiment(person_entity_API_2, pos.words, neg.words)
location_entity_API_2_result = score.sentiment(location_entity_API_2, pos.words, neg.words)
organization_entity_API_2_result = score.sentiment(organization_entity_API_2, pos.words, neg.words)
date_entity_API_2_result = score.sentiment(date_entity_API_2, pos.words, neg.words)
money_entity_API_2_result = score.sentiment(money_entity_API_2, pos.words, neg.words)
#View(person_entity_API_2_result)
#View(location_entity_API_2_result)
#View(organization_entity_API_2_result)
#View(date_entity_API_2_result)
#View(money_entity_API_2_result)
#list of named entities available from API 2
#View(person_entity_API_2)
#View(location_entity_API_2)
#View(organization_entity_API_2)
#View(date_entity_API_2)
#View(money_entity_API_2)
###########################################
###### SENTIMENT ANALYSIS FOR PERSON ######
###########################################
#Creating a copy of result data frame
person_entity_API_2_result_Column_1=person_entity_API_2_result[[1]]
person_entity_API_2_result_Column_2=person_entity_API_2_result[[2]]
person_entity_API_2_result_Column_3=person_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
person_entity_API_2_result_Score=person_entity_API_2_result_Column_1[1,]
person_entity_API_2_result_Positive=person_entity_API_2_result_Column_2[1,]
person_entity_API_2_result_Negative=person_entity_API_2_result_Column_3[1,]
person_entity_API_2_result_Score_1=melt(person_entity_API_2_result_Score, ,var='Score')
person_entity_API_2_result_Positive_2=melt(person_entity_API_2_result_Positive, ,var='Positive')
person_entity_API_2_result_Negative_3=melt(person_entity_API_2_result_Negative, ,var='Negative')
person_entity_API_2_result_Score_1['Score'] = NULL
person_entity_API_2_result_Positive_2['Positive'] = NULL
person_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
person_entity_API_2_result_table1 = data.frame(Text=person_entity_API_2_result[1], Score=person_entity_API_2_result_Score_1)
person_entity_API_2_result_table2 = data.frame(Text=person_entity_API_2_result[2], Score=person_entity_API_2_result_Positive_2)
person_entity_API_2_result_table3 = data.frame(Text=person_entity_API_2_result[3], Score=person_entity_API_2_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
person_entity_API_2_result_table_final=data.frame(Text=person_entity_API_2_result_table1$Text.person.1, Score=person_entity_API_2_result_table1$Score.value, Positive=person_entity_API_2_result_table2$Score.value, Negative=person_entity_API_2_result_table3$Score.value)
#View(person_entity_API_2_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR LOCATION ######
###########################################
#Creating a copy of result data frame
location_entity_API_2_result_Column_1=location_entity_API_2_result[[1]]
location_entity_API_2_result_Column_2=location_entity_API_2_result[[2]]
location_entity_API_2_result_Column_3=location_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
location_entity_API_2_result_Score=location_entity_API_2_result_Column_1[1,]
location_entity_API_2_result_Positive=location_entity_API_2_result_Column_2[1,]
location_entity_API_2_result_Negative=location_entity_API_2_result_Column_3[1,]
location_entity_API_2_result_Score_1=melt(location_entity_API_2_result_Score, ,var='Score')
location_entity_API_2_result_Positive_2=melt(location_entity_API_2_result_Positive, ,var='Positive')
location_entity_API_2_result_Negative_3=melt(location_entity_API_2_result_Negative, ,var='Negative')
location_entity_API_2_result_Score_1['Score'] = NULL
location_entity_API_2_result_Positive_2['Positive'] = NULL
location_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
location_entity_API_2_result_table1 = data.frame(Text=location_entity_API_2_result[1], Score=location_entity_API_2_result_Score_1)
location_entity_API_2_result_table2 = data.frame(Text=location_entity_API_2_result[2], Score=location_entity_API_2_result_Positive_2)
location_entity_API_2_result_table3 = data.frame(Text=location_entity_API_2_result[3], Score=location_entity_API_2_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
location_entity_API_2_result_table_final=data.frame(Text=location_entity_API_2_result_table1$Text.location.1, Score=location_entity_API_2_result_table1$Score.value, Positive=location_entity_API_2_result_table2$Score.value, Negative=location_entity_API_2_result_table3$Score.value)
#View(location_entity_API_2_result_table_final)
#################################################
###### SENTIMENT ANALYSIS FOR ORGANIZATION ######
#################################################
#Creating a copy of result data frame
organization_entity_API_2_result_Column_1=organization_entity_API_2_result[[1]]
organization_entity_API_2_result_Column_2=organization_entity_API_2_result[[2]]
organization_entity_API_2_result_Column_3=organization_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
organization_entity_API_2_result_Score=organization_entity_API_2_result_Column_1[1,]
organization_entity_API_2_result_Positive=organization_entity_API_2_result_Column_2[1,]
organization_entity_API_2_result_Negative=organization_entity_API_2_result_Column_3[1,]
organization_entity_API_2_result_Score_1=melt(organization_entity_API_2_result_Score, ,var='Score')
organization_entity_API_2_result_Positive_2=melt(organization_entity_API_2_result_Positive, ,var='Positive')
organization_entity_API_2_result_Negative_3=melt(organization_entity_API_2_result_Negative, ,var='Negative')
organization_entity_API_2_result_Score_1['Score'] = NULL
organization_entity_API_2_result_Positive_2['Positive'] = NULL
organization_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
organization_entity_API_2_result_table1 = data.frame(Text=organization_entity_API_2_result[1], Score=organization_entity_API_2_result_Score_1)
organization_entity_API_2_result_table2 = data.frame(Text=organization_entity_API_2_result[2], Score=organization_entity_API_2_result_Positive_2)
organization_entity_API_2_result_table3 = data.frame(Text=organization_entity_API_2_result[3], Score=organization_entity_API_2_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
organization_entity_API_2_result_table_final=data.frame(Text=organization_entity_API_2_result_table1$Text.organization.1, Score=organization_entity_API_2_result_table1$Score.value, Positive=organization_entity_API_2_result_table2$Score.value, Negative=organization_entity_API_2_result_table3$Score.value)
#View(organization_entity_API_2_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR MONEY ######
###########################################
#Creating a copy of result data frame
money_entity_API_2_result_Column_1=money_entity_API_2_result[[1]]
money_entity_API_2_result_Column_2=money_entity_API_2_result[[2]]
money_entity_API_2_result_Column_3=money_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
money_entity_API_2_result_Score=money_entity_API_2_result_Column_1[1,]
money_entity_API_2_result_Positive=money_entity_API_2_result_Column_2[1,]
money_entity_API_2_result_Negative=money_entity_API_2_result_Column_3[1,]
money_entity_API_2_result_Score_1=melt(money_entity_API_2_result_Score, ,var='Score')
money_entity_API_2_result_Positive_2=melt(money_entity_API_2_result_Positive, ,var='Positive')
money_entity_API_2_result_Negative_3=melt(money_entity_API_2_result_Negative, ,var='Negative')
money_entity_API_2_result_Score_1['Score'] = NULL
money_entity_API_2_result_Positive_2['Positive'] = NULL
money_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
money_entity_API_2_result_table1 = data.frame(Text=money_entity_API_2_result[1], Score=money_entity_API_2_result_Score_1)
money_entity_API_2_result_table2 = data.frame(Text=money_entity_API_2_result[2], Score=money_entity_API_2_result_Positive_2)
money_entity_API_2_result_table3 = data.frame(Text=money_entity_API_2_result[3], Score=money_entity_API_2_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
money_entity_API_2_result_table_final=data.frame(Text=money_entity_API_2_result_table1$Text.money.1, Score=money_entity_API_2_result_table1$Score.value, Positive=money_entity_API_2_result_table2$Score.value, Negative=money_entity_API_2_result_table3$Score.value)
#View(money_entity_API_2_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR DATE ######
###########################################
#Creating a copy of result data frame
date_entity_API_2_result_Column_1=date_entity_API_2_result[[1]]
date_entity_API_2_result_Column_2=date_entity_API_2_result[[2]]
date_entity_API_2_result_Column_3=date_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Extracting the first row (which holds the sentiment scores) from each data frame
date_entity_API_2_result_Score=date_entity_API_2_result_Column_1[1,]
date_entity_API_2_result_Positive=date_entity_API_2_result_Column_2[1,]
date_entity_API_2_result_Negative=date_entity_API_2_result_Column_3[1,]
date_entity_API_2_result_Score_1=melt(date_entity_API_2_result_Score, ,var='Score')
date_entity_API_2_result_Positive_2=melt(date_entity_API_2_result_Positive, ,var='Positive')
date_entity_API_2_result_Negative_3=melt(date_entity_API_2_result_Negative, ,var='Negative')
date_entity_API_2_result_Score_1['Score'] = NULL
date_entity_API_2_result_Positive_2['Positive'] = NULL
date_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
date_entity_API_2_result_table1 = data.frame(Text=date_entity_API_2_result[1], Score=date_entity_API_2_result_Score_1)
date_entity_API_2_result_table2 = data.frame(Text=date_entity_API_2_result[2], Score=date_entity_API_2_result_Positive_2)
date_entity_API_2_result_table3 = data.frame(Text=date_entity_API_2_result[3], Score=date_entity_API_2_result_Negative_3)
#Merging the three data frames into one (entity text plus Score, Positive and Negative values)
date_entity_API_2_result_table_final=data.frame(Text=date_entity_API_2_result_table1$Text.date.1, Score=date_entity_API_2_result_table1$Score.value, Positive=date_entity_API_2_result_table2$Score.value, Negative=date_entity_API_2_result_table3$Score.value)
#View(date_entity_API_2_result_table_final)
#####################################
#Pie Charts for Sentiment Analysis API-1
#####################################
#Pie Chart for Sentiment Analysis for person_entity_API_1
person_entity_API_1_result_table_final_slices <- c(sum(person_entity_API_1_result_table_final$Positive), sum(person_entity_API_1_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(person_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
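#Optional variant (illustrative): label the slices with percentages so the
#chart is readable at a glance; reuses the slice vector computed above.
# pct = round(100 * person_entity_API_1_result_table_final_slices /
#             sum(person_entity_API_1_result_table_final_slices), 1)
# pie3D(person_entity_API_1_result_table_final_slices,
#       labels = paste0(labels, ' ', pct, '%'),
#       col = rainbow(length(labels)), main = 'Sentiment Analysis - Person (API 1)')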
#Pie Chart for Sentiment Analysis for location_entity_API_1
location_entity_API_1_result_table_final_slices <- c(sum(location_entity_API_1_result_table_final$Positive), sum(location_entity_API_1_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(location_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
#Pie Chart for Sentiment Analysis for organization_entity_API_1
organization_entity_API_1_result_table_final_slices <- c(sum(organization_entity_API_1_result_table_final$Positive), sum(organization_entity_API_1_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(organization_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
#Pie Chart for Sentiment Analysis for money_entity_API_1
money_entity_API_1_result_table_final_slices <- c(sum(money_entity_API_1_result_table_final$Positive), sum(money_entity_API_1_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(money_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
#Pie Chart for Sentiment Analysis for date_entity_API_1
date_entity_API_1_result_table_final_slices <- c(sum(date_entity_API_1_result_table_final$Positive), sum(date_entity_API_1_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(date_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
#####################################
#Pie Charts for Sentiment Analysis API-2
#####################################
#Pie Chart for Sentiment Analysis for person_entity_API_2
person_entity_API_2_result_table_final_slices <- c(sum(person_entity_API_2_result_table_final$Positive), sum(person_entity_API_2_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(person_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
#Pie Chart for Sentiment Analysis for location_entity_API_2
location_entity_API_2_result_table_final_slices <- c(sum(location_entity_API_2_result_table_final$Positive), sum(location_entity_API_2_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(location_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
#Pie Chart for Sentiment Analysis for organization_entity_API_2
organization_entity_API_2_result_table_final_slices <- c(sum(organization_entity_API_2_result_table_final$Positive), sum(organization_entity_API_2_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(organization_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
#Pie Chart for Sentiment Analysis for money_entity_API_2
money_entity_API_2_result_table_final_slices <- c(sum(money_entity_API_2_result_table_final$Positive), sum(money_entity_API_2_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(money_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
#Pie Chart for Sentiment Analysis for date_entity_API_2
date_entity_API_2_result_table_final_slices <- c(sum(date_entity_API_2_result_table_final$Positive), sum(date_entity_API_2_result_table_final$Negative))
labels <- c("Positive", "Negative")
pie3D(date_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)),explode=0.00, main="Sentiment Analysis")
| /03_SENTIMENT_ANALYSIS.R | no_license | adash92/Sentimental-Analysis-Twitter | R | false | false | 32,503 | r | ######################################################################################
######## SENTIMENTAL ANALYSIS FOR NAMED ENTITIES COLLECTED FROM API-1 & API-2 ########
#Reference Source website : https://github.com/rOpenGov/rtimes
######################################################################################
#Importing the common positive and negative words lists into data sets
pos.words = scan('C:/DESKTOP/Home/twitter-sentiment-analysis-tutorial-201107-master/twitter-sentiment-analysis-tutorial-201107-master/data/opinion-lexicon-English/positive-words.txt', what='character', comment.char=';')
neg.words = scan('C:/DESKTOP/Home/twitter-sentiment-analysis-tutorial-201107-master/twitter-sentiment-analysis-tutorial-201107-master/data/opinion-lexicon-English/negative-words.txt', what='character', comment.char=';')
#Adding more common words to positive and negative databases
pos.words=c(pos.words, 'Congrats', 'prizes', 'prize', 'thanks', 'thnx', 'Grt', 'gr8', 'plz', 'trending', 'recovering', 'brainstorm', 'leader')
neg.words = c(neg.words, 'Fight', 'fighting', 'wtf', 'arrest', 'no', 'not')
#list of named entities available from API 1
#View(person_entity_API_1)
#View(location_entity_API_1)
#View(organization_entity_API_1)
#View(date_entity_API_1)
#View(money_entity_API_1)
#list of named entities available from API 2
#View(person_entity_API_2_final)
#View(location_entity_API_2_final)
#View(organization_entity_API_2_final)
#View(money_entity_API_2_final)
#View(date_entity_API_2_final)
# Initialising the variables
sentences <- NULL
sentence <- NULL
#Function to provide score of a sentense as per the frequency of negative and positive words
score.sentiment = function(sentences, pos.words, neg.words, .progress='none')
{
require(plyr)
require(stringr)
list=lapply(sentences, function(sentence, pos.words, neg.words)
{
sentence = gsub('[[:punct:]]',' ',sentence)
sentence = gsub('[[:cntrl:]]','',sentence)
sentence = gsub('\\d+','',sentence)
sentence = gsub('\n','',sentence)
sentence = tolower(sentence)
word.list = str_split(sentence, '\\s+')
words = unlist(word.list)
pos.matches = match(words, pos.words)
neg.matches = match(words, neg.words)
pos.matches = !is.na(pos.matches)
neg.matches = !is.na(neg.matches)
pp=sum(pos.matches)
nn = sum(neg.matches)
score = sum(pos.matches) - sum(neg.matches)
list1=c(score, pp, nn)
return (list1)
}, pos.words, neg.words)
score_new=lapply(list, `[[`, 1)
pp1=score=lapply(list, `[[`, 2)
nn1=score=lapply(list, `[[`, 3)
scores.df = data.frame(score=score_new, text=sentences)
positive.df = data.frame(Positive=pp1, text=sentences)
negative.df = data.frame(Negative=nn1, text=sentences)
list_df=list(scores.df, positive.df, negative.df)
return(list_df)
}
##########################################################################################################
########################### SENTIMENTAL ANALYSIS FOR NAMED ENTITY FROM API - 1 ###########################
##########################################################################################################
#Assigning the sentiment obtained for each named entity from API-1
person_entity_API_1_result = score.sentiment(person_entity_API_1, pos.words, neg.words)
location_entity_API_1_result = score.sentiment(location_entity_API_1, pos.words, neg.words)
organization_entity_API_1_result = score.sentiment(organization_entity_API_1, pos.words, neg.words)
date_entity_API_1_result = score.sentiment(date_entity_API_1, pos.words, neg.words)
money_entity_API_1_result = score.sentiment(money_entity_API_1, pos.words, neg.words)
#View(person_entity_API_1_result)
#View(location_entity_API_1_result)
#View(organization_entity_API_1_result)
#View(date_entity_API_1_result)
#View(money_entity_API_1_result)
#list of named entities available from API 1
#View(person_entity_API_1)
#View(location_entity_API_1)
#View(organization_entity_API_1)
#View(date_entity_API_1)
#View(money_entity_API_1)
###########################################
###### SENTIMENT ANALYSIS FOR PERSON ######
###########################################
#Creating a copy of result data frame
person_entity_API_1_result_Column_1=person_entity_API_1_result[[1]]
person_entity_API_1_result_Column_2=person_entity_API_1_result[[2]]
person_entity_API_1_result_Column_3=person_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
person_entity_API_1_result_Score=person_entity_API_1_result_Column_1[1,]
person_entity_API_1_result_Positive=person_entity_API_1_result_Column_2[1,]
person_entity_API_1_result_Negative=person_entity_API_1_result_Column_3[1,]
person_entity_API_1_result_Score_1=melt(person_entity_API_1_result_Score, ,var='Score')
person_entity_API_1_result_Positive_2=melt(person_entity_API_1_result_Positive, ,var='Positive')
person_entity_API_1_result_Negative_3=melt(person_entity_API_1_result_Negative, ,var='Negative')
person_entity_API_1_result_Score_1['Score'] = NULL
person_entity_API_1_result_Positive_2['Positive'] = NULL
person_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
person_entity_API_1_result_table1 = data.frame(Text=person_entity_API_1_result[1], Score=person_entity_API_1_result_Score_1)
person_entity_API_1_result_table2 = data.frame(Text=person_entity_API_1_result[2], Score=person_entity_API_1_result_Positive_2)
person_entity_API_1_result_table3 = data.frame(Text=person_entity_API_1_result[3], Score=person_entity_API_1_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
person_entity_API_1_result_table_final=data.frame(Text=person_entity_API_1_result_table1$Text.person.1, Score=person_entity_API_1_result_table1$Score.value, Positive=person_entity_API_1_result_table2$Score.value, Negative=person_entity_API_1_result_table3$Score.value)
#View(person_entity_API_1_result_table_final)
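#The ten near-identical blocks in this section could be collapsed into one
#helper along these lines (an untested sketch, not wired into the script;
#melt() is the same reshape melt used above, and text_col names the text
#column as it appears after merging, e.g. 'Text.person.1'):
#  build_result_table = function(result, text_col) {
#    value_of = function(i, nm) melt(result[[i]][1, ], ,var = nm)$value
#    data.frame(Text = data.frame(Text = result[1])[[text_col]],
#               Score = value_of(1, 'Score'),
#               Positive = value_of(2, 'Positive'),
#               Negative = value_of(3, 'Negative'))
#  }
#  #e.g. person_entity_API_1_result_table_final =
#  #       build_result_table(person_entity_API_1_result, 'Text.person.1')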
###########################################
###### SENTIMENT ANALYSIS FOR LOCATION ######
###########################################
#Creating a copy of result data frame
location_entity_API_1_result_Column_1=location_entity_API_1_result[[1]]
location_entity_API_1_result_Column_2=location_entity_API_1_result[[2]]
location_entity_API_1_result_Column_3=location_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
location_entity_API_1_result_Score=location_entity_API_1_result_Column_1[1,]
location_entity_API_1_result_Positive=location_entity_API_1_result_Column_2[1,]
location_entity_API_1_result_Negative=location_entity_API_1_result_Column_3[1,]
location_entity_API_1_result_Score_1=melt(location_entity_API_1_result_Score, ,var='Score')
location_entity_API_1_result_Positive_2=melt(location_entity_API_1_result_Positive, ,var='Positive')
location_entity_API_1_result_Negative_3=melt(location_entity_API_1_result_Negative, ,var='Negative')
location_entity_API_1_result_Score_1['Score'] = NULL
location_entity_API_1_result_Positive_2['Positive'] = NULL
location_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
location_entity_API_1_result_table1 = data.frame(Text=location_entity_API_1_result[1], Score=location_entity_API_1_result_Score_1)
location_entity_API_1_result_table2 = data.frame(Text=location_entity_API_1_result[2], Score=location_entity_API_1_result_Positive_2)
location_entity_API_1_result_table3 = data.frame(Text=location_entity_API_1_result[3], Score=location_entity_API_1_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
location_entity_API_1_result_table_final=data.frame(Text=location_entity_API_1_result_table1$Text.location.1, Score=location_entity_API_1_result_table1$Score.value, Positive=location_entity_API_1_result_table2$Score.value, Negative=location_entity_API_1_result_table3$Score.value)
#View(location_entity_API_1_result_table_final)
#################################################
###### SENTIMENT ANALYSIS FOR ORGANIZATION ######
#################################################
#Creating a copy of result data frame
organization_entity_API_1_result_Column_1=organization_entity_API_1_result[[1]]
organization_entity_API_1_result_Column_2=organization_entity_API_1_result[[2]]
organization_entity_API_1_result_Column_3=organization_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
organization_entity_API_1_result_Score=organization_entity_API_1_result_Column_1[1,]
organization_entity_API_1_result_Positive=organization_entity_API_1_result_Column_2[1,]
organization_entity_API_1_result_Negative=organization_entity_API_1_result_Column_3[1,]
organization_entity_API_1_result_Score_1=melt(organization_entity_API_1_result_Score, ,var='Score')
organization_entity_API_1_result_Positive_2=melt(organization_entity_API_1_result_Positive, ,var='Positive')
organization_entity_API_1_result_Negative_3=melt(organization_entity_API_1_result_Negative, ,var='Negative')
organization_entity_API_1_result_Score_1['Score'] = NULL
organization_entity_API_1_result_Positive_2['Positive'] = NULL
organization_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
organization_entity_API_1_result_table1 = data.frame(Text=organization_entity_API_1_result[1], Score=organization_entity_API_1_result_Score_1)
organization_entity_API_1_result_table2 = data.frame(Text=organization_entity_API_1_result[2], Score=organization_entity_API_1_result_Positive_2)
organization_entity_API_1_result_table3 = data.frame(Text=organization_entity_API_1_result[3], Score=organization_entity_API_1_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
organization_entity_API_1_result_table_final=data.frame(Text=organization_entity_API_1_result_table1$Text.organization.1, Score=organization_entity_API_1_result_table1$Score.value, Positive=organization_entity_API_1_result_table2$Score.value, Negative=organization_entity_API_1_result_table3$Score.value)
#View(organization_entity_API_1_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR MONEY ######
###########################################
#Creating a copy of result data frame
money_entity_API_1_result_Column_1=money_entity_API_1_result[[1]]
money_entity_API_1_result_Column_2=money_entity_API_1_result[[2]]
money_entity_API_1_result_Column_3=money_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
money_entity_API_1_result_Score=money_entity_API_1_result_Column_1[1,]
money_entity_API_1_result_Positive=money_entity_API_1_result_Column_2[1,]
money_entity_API_1_result_Negative=money_entity_API_1_result_Column_3[1,]
money_entity_API_1_result_Score_1=melt(money_entity_API_1_result_Score, ,var='Score')
money_entity_API_1_result_Positive_2=melt(money_entity_API_1_result_Positive, ,var='Positive')
money_entity_API_1_result_Negative_3=melt(money_entity_API_1_result_Negative, ,var='Negative')
money_entity_API_1_result_Score_1['Score'] = NULL
money_entity_API_1_result_Positive_2['Positive'] = NULL
money_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
money_entity_API_1_result_table1 = data.frame(Text=money_entity_API_1_result[1], Score=money_entity_API_1_result_Score_1)
money_entity_API_1_result_table2 = data.frame(Text=money_entity_API_1_result[2], Score=money_entity_API_1_result_Positive_2)
money_entity_API_1_result_table3 = data.frame(Text=money_entity_API_1_result[3], Score=money_entity_API_1_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
money_entity_API_1_result_table_final=data.frame(Text=money_entity_API_1_result_table1$Text.money.1, Score=money_entity_API_1_result_table1$Score.value, Positive=money_entity_API_1_result_table2$Score.value, Negative=money_entity_API_1_result_table3$Score.value)
#View(money_entity_API_1_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR DATE ######
###########################################
#Creating a copy of result data frame
date_entity_API_1_result_Column_1=date_entity_API_1_result[[1]]
date_entity_API_1_result_Column_2=date_entity_API_1_result[[2]]
date_entity_API_1_result_Column_3=date_entity_API_1_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
date_entity_API_1_result_Score=date_entity_API_1_result_Column_1[1,]
date_entity_API_1_result_Positive=date_entity_API_1_result_Column_2[1,]
date_entity_API_1_result_Negative=date_entity_API_1_result_Column_3[1,]
date_entity_API_1_result_Score_1=melt(date_entity_API_1_result_Score, ,var='Score')
date_entity_API_1_result_Positive_2=melt(date_entity_API_1_result_Positive, ,var='Positive')
date_entity_API_1_result_Negative_3=melt(date_entity_API_1_result_Negative, ,var='Negative')
date_entity_API_1_result_Score_1['Score'] = NULL
date_entity_API_1_result_Positive_2['Positive'] = NULL
date_entity_API_1_result_Negative_3['Negative'] = NULL
#Creating data frame
date_entity_API_1_result_table1 = data.frame(Text=date_entity_API_1_result[1], Score=date_entity_API_1_result_Score_1)
date_entity_API_1_result_table2 = data.frame(Text=date_entity_API_1_result[2], Score=date_entity_API_1_result_Positive_2)
date_entity_API_1_result_table3 = data.frame(Text=date_entity_API_1_result[3], Score=date_entity_API_1_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
date_entity_API_1_result_table_final=data.frame(Text=date_entity_API_1_result_table1$Text.date.1, Score=date_entity_API_1_result_table1$Score.value, Positive=date_entity_API_1_result_table2$Score.value, Negative=date_entity_API_1_result_table3$Score.value)
#View(date_entity_API_1_result_table_final)
##########################################################################################################
########################### SENTIMENT ANALYSIS FOR NAMED ENTITIES FROM API - 2 ###########################
##########################################################################################################
#Assigning the sentiment obtained for each named entity from API-2
person_entity_API_2_result = score.sentiment(person_entity_API_2, pos.words, neg.words)
location_entity_API_2_result = score.sentiment(location_entity_API_2, pos.words, neg.words)
organization_entity_API_2_result = score.sentiment(organization_entity_API_2, pos.words, neg.words)
date_entity_API_2_result = score.sentiment(date_entity_API_2, pos.words, neg.words)
money_entity_API_2_result = score.sentiment(money_entity_API_2, pos.words, neg.words)
#View(person_entity_API_2_result)
#View(location_entity_API_2_result)
#View(organization_entity_API_2_result)
#View(date_entity_API_2_result)
#View(money_entity_API_2_result)
#list of named entities available from API 2
#View(person_entity_API_2)
#View(location_entity_API_2)
#View(organization_entity_API_2)
#View(date_entity_API_2)
#View(money_entity_API_2)
###########################################
###### SENTIMENT ANALYSIS FOR PERSON ######
###########################################
#Creating a copy of result data frame
person_entity_API_2_result_Column_1=person_entity_API_2_result[[1]]
person_entity_API_2_result_Column_2=person_entity_API_2_result[[2]]
person_entity_API_2_result_Column_3=person_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
person_entity_API_2_result_Score=person_entity_API_2_result_Column_1[1,]
person_entity_API_2_result_Positive=person_entity_API_2_result_Column_2[1,]
person_entity_API_2_result_Negative=person_entity_API_2_result_Column_3[1,]
person_entity_API_2_result_Score_1=melt(person_entity_API_2_result_Score, ,var='Score')
person_entity_API_2_result_Positive_2=melt(person_entity_API_2_result_Positive, ,var='Positive')
person_entity_API_2_result_Negative_3=melt(person_entity_API_2_result_Negative, ,var='Negative')
person_entity_API_2_result_Score_1['Score'] = NULL
person_entity_API_2_result_Positive_2['Positive'] = NULL
person_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
person_entity_API_2_result_table1 = data.frame(Text=person_entity_API_2_result[1], Score=person_entity_API_2_result_Score_1)
person_entity_API_2_result_table2 = data.frame(Text=person_entity_API_2_result[2], Score=person_entity_API_2_result_Positive_2)
person_entity_API_2_result_table3 = data.frame(Text=person_entity_API_2_result[3], Score=person_entity_API_2_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
person_entity_API_2_result_table_final=data.frame(Text=person_entity_API_2_result_table1$Text.person.1, Score=person_entity_API_2_result_table1$Score.value, Positive=person_entity_API_2_result_table2$Score.value, Negative=person_entity_API_2_result_table3$Score.value)
#View(person_entity_API_2_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR LOCATION ######
###########################################
#Creating a copy of result data frame
location_entity_API_2_result_Column_1=location_entity_API_2_result[[1]]
location_entity_API_2_result_Column_2=location_entity_API_2_result[[2]]
location_entity_API_2_result_Column_3=location_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
location_entity_API_2_result_Score=location_entity_API_2_result_Column_1[1,]
location_entity_API_2_result_Positive=location_entity_API_2_result_Column_2[1,]
location_entity_API_2_result_Negative=location_entity_API_2_result_Column_3[1,]
location_entity_API_2_result_Score_1=melt(location_entity_API_2_result_Score, ,var='Score')
location_entity_API_2_result_Positive_2=melt(location_entity_API_2_result_Positive, ,var='Positive')
location_entity_API_2_result_Negative_3=melt(location_entity_API_2_result_Negative, ,var='Negative')
location_entity_API_2_result_Score_1['Score'] = NULL
location_entity_API_2_result_Positive_2['Positive'] = NULL
location_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
location_entity_API_2_result_table1 = data.frame(Text=location_entity_API_2_result[1], Score=location_entity_API_2_result_Score_1)
location_entity_API_2_result_table2 = data.frame(Text=location_entity_API_2_result[2], Score=location_entity_API_2_result_Positive_2)
location_entity_API_2_result_table3 = data.frame(Text=location_entity_API_2_result[3], Score=location_entity_API_2_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
location_entity_API_2_result_table_final=data.frame(Text=location_entity_API_2_result_table1$Text.location.1, Score=location_entity_API_2_result_table1$Score.value, Positive=location_entity_API_2_result_table2$Score.value, Negative=location_entity_API_2_result_table3$Score.value)
#View(location_entity_API_2_result_table_final)
#################################################
###### SENTIMENT ANALYSIS FOR ORGANIZATION ######
#################################################
#Creating a copy of result data frame
organization_entity_API_2_result_Column_1=organization_entity_API_2_result[[1]]
organization_entity_API_2_result_Column_2=organization_entity_API_2_result[[2]]
organization_entity_API_2_result_Column_3=organization_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
organization_entity_API_2_result_Score=organization_entity_API_2_result_Column_1[1,]
organization_entity_API_2_result_Positive=organization_entity_API_2_result_Column_2[1,]
organization_entity_API_2_result_Negative=organization_entity_API_2_result_Column_3[1,]
organization_entity_API_2_result_Score_1=melt(organization_entity_API_2_result_Score, ,var='Score')
organization_entity_API_2_result_Positive_2=melt(organization_entity_API_2_result_Positive, ,var='Positive')
organization_entity_API_2_result_Negative_3=melt(organization_entity_API_2_result_Negative, ,var='Negative')
organization_entity_API_2_result_Score_1['Score'] = NULL
organization_entity_API_2_result_Positive_2['Positive'] = NULL
organization_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
organization_entity_API_2_result_table1 = data.frame(Text=organization_entity_API_2_result[1], Score=organization_entity_API_2_result_Score_1)
organization_entity_API_2_result_table2 = data.frame(Text=organization_entity_API_2_result[2], Score=organization_entity_API_2_result_Positive_2)
organization_entity_API_2_result_table3 = data.frame(Text=organization_entity_API_2_result[3], Score=organization_entity_API_2_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
organization_entity_API_2_result_table_final=data.frame(Text=organization_entity_API_2_result_table1$Text.organization.1, Score=organization_entity_API_2_result_table1$Score.value, Positive=organization_entity_API_2_result_table2$Score.value, Negative=organization_entity_API_2_result_table3$Score.value)
#View(organization_entity_API_2_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR MONEY ######
###########################################
#Creating a copy of result data frame
money_entity_API_2_result_Column_1=money_entity_API_2_result[[1]]
money_entity_API_2_result_Column_2=money_entity_API_2_result[[2]]
money_entity_API_2_result_Column_3=money_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
money_entity_API_2_result_Score=money_entity_API_2_result_Column_1[1,]
money_entity_API_2_result_Positive=money_entity_API_2_result_Column_2[1,]
money_entity_API_2_result_Negative=money_entity_API_2_result_Column_3[1,]
money_entity_API_2_result_Score_1=melt(money_entity_API_2_result_Score, ,var='Score')
money_entity_API_2_result_Positive_2=melt(money_entity_API_2_result_Positive, ,var='Positive')
money_entity_API_2_result_Negative_3=melt(money_entity_API_2_result_Negative, ,var='Negative')
money_entity_API_2_result_Score_1['Score'] = NULL
money_entity_API_2_result_Positive_2['Positive'] = NULL
money_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
money_entity_API_2_result_table1 = data.frame(Text=money_entity_API_2_result[1], Score=money_entity_API_2_result_Score_1)
money_entity_API_2_result_table2 = data.frame(Text=money_entity_API_2_result[2], Score=money_entity_API_2_result_Positive_2)
money_entity_API_2_result_table3 = data.frame(Text=money_entity_API_2_result[3], Score=money_entity_API_2_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
money_entity_API_2_result_table_final=data.frame(Text=money_entity_API_2_result_table1$Text.money.1, Score=money_entity_API_2_result_table1$Score.value, Positive=money_entity_API_2_result_table2$Score.value, Negative=money_entity_API_2_result_table3$Score.value)
#View(money_entity_API_2_result_table_final)
###########################################
###### SENTIMENT ANALYSIS FOR DATE ######
###########################################
#Creating a copy of result data frame
date_entity_API_2_result_Column_1=date_entity_API_2_result[[1]]
date_entity_API_2_result_Column_2=date_entity_API_2_result[[2]]
date_entity_API_2_result_Column_3=date_entity_API_2_result[[3]]
#Creating three different data frames for Score, Positive and Negative
#Storing the first row (which contains the sentiment scores) of each column
date_entity_API_2_result_Score=date_entity_API_2_result_Column_1[1,]
date_entity_API_2_result_Positive=date_entity_API_2_result_Column_2[1,]
date_entity_API_2_result_Negative=date_entity_API_2_result_Column_3[1,]
date_entity_API_2_result_Score_1=melt(date_entity_API_2_result_Score, ,var='Score')
date_entity_API_2_result_Positive_2=melt(date_entity_API_2_result_Positive, ,var='Positive')
date_entity_API_2_result_Negative_3=melt(date_entity_API_2_result_Negative, ,var='Negative')
date_entity_API_2_result_Score_1['Score'] = NULL
date_entity_API_2_result_Positive_2['Positive'] = NULL
date_entity_API_2_result_Negative_3['Negative'] = NULL
#Creating data frame
date_entity_API_2_result_table1 = data.frame(Text=date_entity_API_2_result[1], Score=date_entity_API_2_result_Score_1)
date_entity_API_2_result_table2 = data.frame(Text=date_entity_API_2_result[2], Score=date_entity_API_2_result_Positive_2)
date_entity_API_2_result_table3 = data.frame(Text=date_entity_API_2_result[3], Score=date_entity_API_2_result_Negative_3)
#Merging the three data frames into one (keeping the text and score/value columns)
date_entity_API_2_result_table_final=data.frame(Text=date_entity_API_2_result_table1$Text.date.1, Score=date_entity_API_2_result_table1$Score.value, Positive=date_entity_API_2_result_table2$Score.value, Negative=date_entity_API_2_result_table3$Score.value)
#View(date_entity_API_2_result_table_final)
#####################################
#Pie Charts for Sentiment Analysis API-1
#####################################
#pie3D() comes from the plotrix package (assumed to be loaded earlier in the script)
labels <- c("Positive", "Negative")
#Pie chart of sentiment for person_entity_API_1
person_entity_API_1_result_table_final_slices <- c(sum(person_entity_API_1_result_table_final$Positive), sum(person_entity_API_1_result_table_final$Negative))
pie3D(person_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
#Pie chart of sentiment for location_entity_API_1
location_entity_API_1_result_table_final_slices <- c(sum(location_entity_API_1_result_table_final$Positive), sum(location_entity_API_1_result_table_final$Negative))
pie3D(location_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
#Pie chart of sentiment for organization_entity_API_1
organization_entity_API_1_result_table_final_slices <- c(sum(organization_entity_API_1_result_table_final$Positive), sum(organization_entity_API_1_result_table_final$Negative))
pie3D(organization_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
#Pie chart of sentiment for money_entity_API_1
money_entity_API_1_result_table_final_slices <- c(sum(money_entity_API_1_result_table_final$Positive), sum(money_entity_API_1_result_table_final$Negative))
pie3D(money_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
#Pie chart of sentiment for date_entity_API_1
date_entity_API_1_result_table_final_slices <- c(sum(date_entity_API_1_result_table_final$Positive), sum(date_entity_API_1_result_table_final$Negative))
pie3D(date_entity_API_1_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
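#Optional sketch: percentage labels make small slices easier to compare
#(illustrative only; reuses the person_entity slices computed above):
#  pct = round(100 * person_entity_API_1_result_table_final_slices /
#                sum(person_entity_API_1_result_table_final_slices), 1)
#  pie3D(person_entity_API_1_result_table_final_slices,
#        labels = paste0(labels, " (", pct, "%)"),
#        col = rainbow(length(labels)), main = "Sentiment Analysis")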
#####################################
#Pie Charts for Sentiment Analysis API-2
#####################################
labels <- c("Positive", "Negative")
#Pie chart of sentiment for person_entity_API_2
person_entity_API_2_result_table_final_slices <- c(sum(person_entity_API_2_result_table_final$Positive), sum(person_entity_API_2_result_table_final$Negative))
pie3D(person_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
#Pie chart of sentiment for location_entity_API_2
location_entity_API_2_result_table_final_slices <- c(sum(location_entity_API_2_result_table_final$Positive), sum(location_entity_API_2_result_table_final$Negative))
pie3D(location_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
#Pie chart of sentiment for organization_entity_API_2
organization_entity_API_2_result_table_final_slices <- c(sum(organization_entity_API_2_result_table_final$Positive), sum(organization_entity_API_2_result_table_final$Negative))
pie3D(organization_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
#Pie chart of sentiment for money_entity_API_2
money_entity_API_2_result_table_final_slices <- c(sum(money_entity_API_2_result_table_final$Positive), sum(money_entity_API_2_result_table_final$Negative))
pie3D(money_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
#Pie chart of sentiment for date_entity_API_2
date_entity_API_2_result_table_final_slices <- c(sum(date_entity_API_2_result_table_final$Positive), sum(date_entity_API_2_result_table_final$Negative))
pie3D(date_entity_API_2_result_table_final_slices, labels = labels, col=rainbow(length(labels)), explode=0.00, main="Sentiment Analysis")
|
#List all the downloaded files; if the table has been updated, a new file will
#have been downloaded, so delete the old files to save space
#(list.files() patterns are regular expressions, hence the anchored "^Job-")
annotation.Names = list.files(".", "^Job-")
#delete.files <- annotation.Names[-which.max(file.mtime(annotation.Names))]
#unlink(delete.files)
unlink(annotation.Names)
#static content
grantdf.Rdata <- synGet("syn5574249")
load(grantdf.Rdata@filePath)
#Static content
#grant.df <- read.csv("all_grants_posterior_prob.csv",stringsAsFactors = F)
#grant.df <- grant.df[!duplicated(grant.df$AwardTitle),]
#Change to yes and no
grant.df$Metastasis_YN[grant.df$Metastasis_YN == 'y'] <- 'yes'
grant.df$Metastasis_YN[grant.df$Metastasis_YN == 'n'] <- 'no'
grant.df$Predicted_metaYN[grant.df$Predicted_metaYN == 'y'] <- 'yes'
grant.df$Predicted_metaYN[grant.df$Predicted_metaYN == 'n'] <- 'no'
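#The four recodes above follow one pattern; a small lookup expresses it once
#(an equivalent sketch, assuming the columns only hold 'y'/'n'; not used below):
#  recode_yn = function(x) unname(c(y = 'yes', n = 'no')[x])
#  #grant.df$Metastasis_YN = recode_yn(grant.df$Metastasis_YN)
#  #grant.df$Predicted_metaYN = recode_yn(grant.df$Predicted_metaYN)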
#grant.df$ROW_INDEX <- paste(grant.df$ROW_ID,grant.df$ROW_VERSION,sep="_")
Breast <- as.numeric(grant.df$Breast_Cancer)
Breast[is.na(Breast)] <- 0
grant.MBC <- grant.df[grant.df$Metastasis_YN == 'yes' & Breast >= 50,]
allTitles <- grant.MBC$AwardTitle
allPathways <- grant.MBC$Pathway
allmetaStage <- grant.MBC$Metastasis_stage
| /load.R | permissive | Sage-Bionetworks/AvonMBCShiny | R | false | false | 1,147 | r |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/flowlayers.R
\name{get_bp}
\alias{get_bp}
\title{get_bp}
\usage{
get_bp(city)
}
\arguments{
\item{city}{Name of city for which to obtain bounding polygon}
}
\value{
Matrix of coordinates of bounding polygon
}
\description{
Get bounding polygons for cities (Accra, Kathmandu).
}
| /man/get_bp.Rd | no_license | atfutures-labs/flowlayers2 | R | false | true | 355 | rd |
|
#' @param Costs A vector containing the costs each coalition has to pay.
| /CoopGame/man-roxygen/param/Costs.R | no_license | anwanjohannes/CoopGame | R | false | false | 75 | r |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/svm.r
\name{svm}
\alias{svm}
\title{svm}
\usage{
svm(x, y, maxiter = 500)
}
\arguments{
\item{x, y}{The input data \code{x} and response \code{y}. Each must be a shaq, and
each must be distributed in an identical fashion. See the details section
for more information.}
\item{maxiter}{The maximum number of iterations.}
}
\value{
The return is the output of an \code{optim()} call.
}
\description{
Support vector machine. The internals are nearly identical to that of the
logistic regression fitter, except that here we use the hinge loss.
}
\details{
The optimization uses Nelder-Mead.
Both of \code{x} and \code{y} must be distributed in an identical fashion.
This means that the number of rows owned by each MPI rank should match, and
the data rows \code{x} and response rows \code{y} should be aligned.
Additionally, each MPI rank should own at least one row. Ideally they should
be load balanced, so that each MPI rank owns roughly the same amount of data.
}
\section{Communication}{
The communication consists of an allreduce of 1 double (the local
cost/objective function value) at each iteration of the optimization.
}
\examples{
\dontrun{
library(kazaam)
comm.set.seed(1234, diff=TRUE)
x = ranshaq(rnorm, 10, 3)
y = ranshaq(function(i) sample(0:1, size=i, replace=TRUE), 10)
fit = svm(x, y)
comm.print(fit)
finalize()
}
}
\references{
Efron, B. and Hastie, T., 2016. Computer Age Statistical Inference (Vol. 5).
Cambridge University Press.
}
\seealso{
\code{\link{glm}}
}
 | /man/svm.Rd | permissive | RBigData/kazaam | R | false | true | 1,573 | rd |
|
library(ggplot2)
library(plyr)
library(RColorBrewer)
library(reshape2)
options(digits=15)
setwd("/Users/stephanf/SCP/randomSplitResults")
files <- list.files()
data <- data.frame()
for (file in files) {
tmp <- read.table(file, sep=";", header=T)
tmp$cls <- "rf"
data <- rbind(data, tmp)
}
data$metric <- gsub("_mean", "", data$metric)
data$ratio <- paste(data$c_miss, data$c_action, sep=":")
base_size <- 36
line_size <- 1
point_size <- 4
data <- subset(data, dataset != "bpic2018")
data <- subset(data, dataset != "uwv")
data$method <- as.character(data$method)
data$dataset <- as.character(data$dataset)
#data$method[data$method=="single_threshold"] <- "Single Alarm"
data$dataset[data$dataset=="uwv_all"] <- "uwv"
data$dataset[data$dataset=="traffic_fines_1"] <- "traffic_fines"
data$ratio <- as.factor(data$ratio)
data$ratio_com <- as.factor(data$c_com)
data <- subset(data, cls=="rf")
head(data)
dt_multi <- subset(data, metric=="cost_avg" & method=="same_time_nonmyopic")
dt_alarm1 <- subset(data, metric=="cost_avg" & method=="opt_threshold")
dt_alarm2 <- subset(data, metric=="cost_avg" & method=="alarm2")
dt_single <- merge(dt_alarm1, dt_alarm2, by=c("dataset", "c_miss", "c_action", "c_postpone", "c_com", "early_type", "cls", "ratio"), suffixes=c("_alarm1", "_alarm2"))
dt_single$value_single <- dt_single$value_alarm1
dt_single$value_single <- ifelse(dt_single$value_single > dt_single$value_alarm2, dt_single$value_alarm2 ,dt_single$value_single)
dt_merged <- merge(dt_multi, dt_single, by=c("dataset", "c_miss", "c_action", "c_postpone", "c_com", "early_type", "cls", "ratio"), suffixes=c("_multi", "_single"))
print(subset(dt_merged,c_com > 19))
dt_merged$benefit <- dt_merged$value/dt_merged$value_single
min_value = 0.9
max_value = 1.1
dt_merged$benefit <- ifelse(dt_merged$benefit < min_value,min_value,dt_merged$benefit)
dt_merged$benefit <- ifelse(dt_merged$benefit > max_value,max_value,dt_merged$benefit)
dt_merged$benefit <- ifelse(((dt_merged$c_action * 1.2) + (dt_merged$c_com * 0.5)) < 0.9*(dt_merged$c_action + dt_merged$c_com), dt_merged$benefit, 3000)
ggplot(subset(dt_merged, cls=="rf" & c_com > 0), aes(factor(ratio_com_alarm1), factor(ratio))) + geom_tile(aes(fill = benefit), colour = "black") +
theme_bw(base_size=base_size) + scale_fill_gradientn(colours=c("green4","green3","green2","white","red2","red3","red4"),breaks=c(0.95,1.0,1.05), limits=c(min_value,max_value),name="ratio") +
xlab("c_com") + ylab("c_out : c_in") + theme(axis.text.x = element_text(size=20)) + facet_wrap( ~ dataset, ncol=3)
ggsave("/Users/stephanf/Dropbox/Dokumente/Masterstudium/Masterthesis/testresults/MultiAlarm/result_cost_avg_randomSplit_same_time_nonmyopic_vs_single.pdf", width = 20, height = 13)
 | /plot_1_vs_1_results.R | permissive | samadeusfp/prescriptiveProcessMonitoring | R | false | false | 2,750 | r |
|
testlist <- list(m = NULL, repetitions = 0L, in_m = structure(c(2.31584307392677e+77, 9.53818252170341e+295, 4.0343042887248e-305, 4.12396251261199e-221, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(10L, 3L)))
result <- do.call(CNull:::communities_individual_based_sampling_alpha,testlist)
str(result) | /CNull/inst/testfiles/communities_individual_based_sampling_alpha/AFL_communities_individual_based_sampling_alpha/communities_individual_based_sampling_alpha_valgrind_files/1615780497-test.R | no_license | akhikolla/updatedatatype-list2 | R | false | false | 347 | r |
#' List to array.
#'
#' Reduce/simplify a list of homogeneous objects to an array
#'
#' @param res list of input data
#' @param labels a data frame of labels, one row for each element of res
#' @param .drop should extra dimensions be dropped (TRUE) or preserved (FALSE)
#' @family list simplification functions
#' @keywords internal
list_to_array <- function(res, labels = NULL, .drop = FALSE) {
if (length(res) == 0) return(vector())
n <- length(res)
atomic <- sapply(res, is.atomic)
if (all(atomic)) {
# Atomics need to be same size
dlength <- unique.default(llply(res, dims))
if (length(dlength) != 1)
stop("Results must have the same number of dimensions.")
dims <- unique(do.call("rbind", llply(res, amv_dim)))
if (is.null(dims) || !all(dims > 0))
stop("Results must have one or more dimensions.", call. = FALSE)
if (nrow(dims) != 1)
stop("Results must have the same dimensions.", call. = FALSE)
res_dim <- amv_dim(res[[1]])
res_labels <- amv_dimnames(res[[1]])
res_index <- expand.grid(res_labels)
res <- unname(unlist(res))
} else {
# Lists are degenerate case where every element is a singleton
res_index <- as.data.frame(matrix(0, 1, 0))
res_dim <- numeric()
res_labels <- NULL
attr(res, "split_type") <- NULL
attr(res, "split_labels") <- NULL
class(res) <- class(res)[2]
}
if (is.null(labels)) {
labels <- data.frame(X = seq_len(n))
in_labels <- list(NULL)
in_dim <- n
} else {
in_labels <- lapply(labels,
function(x) if(is.factor(x)) levels(x) else sort(unique(x)))
in_dim <- sapply(in_labels, length)
}
# Work out where each result should go in the new array
index_old <- rep(id(rev(labels)), each = nrow(res_index))
index_new <- rep(id(rev(res_index)), nrow(labels))
index <- (index_new - 1) * prod(in_dim) + index_old
out_dim <- unname(c(in_dim, res_dim))
out_labels <- c(in_labels, res_labels)
n <- prod(out_dim)
if (length(index) < n) {
overall <- match(1:n, index, nomatch = NA)
} else {
overall <- order(index)
}
out_array <- res[overall]
dim(out_array) <- out_dim
dimnames(out_array) <- out_labels
if (.drop) reduce_dim(out_array) else out_array
}
 | /R/simplify-array.r | no_license | vspinu/plyr | R | false | false | 2,276 | r |
|
context("sampling")
library(tidyverse)
library(metasim)
big <- runif(3, 5, 100)
small <- runif(3, 0.1, 0.9)
test_sample_norm <-
sim_sample(10, 0, "norm", list(mean = 20, sd = 1))
test_sample_norm_another <-
sim_sample(10, 0, "norm", list(mean = 104, sd = 0.3))
test_sample_pareto <- sim_sample(10, 0, "pareto", list(1, 2))
test_that("samples are plausible", {
expect_is(test_sample_norm, "numeric")
expect_lt(test_sample_norm %>% mean(), 50)
expect_gt(test_sample_norm %>% mean(), 5)
expect_lt(test_sample_norm %>% mean(), 22)
expect_gt(test_sample_norm %>% mean(), 18)
expect_is(test_sample_norm_another, "numeric")
expect_lt(test_sample_norm_another %>% mean(), 200)
expect_gt(test_sample_norm_another %>% mean(), 5)
expect_lt(test_sample_norm_another %>% mean(), 106)
expect_gt(test_sample_norm_another %>% mean(), 102)
expect_is(test_sample_pareto, "numeric")
expect_lt(test_sample_pareto %>% mean(), 100)
expect_gt(test_sample_pareto %>% mean(), 0)
  # test the exponential
expect_is(sim_sample(rdist = "norm", par = list(mean = 437, sd = 0.7)),
"numeric")
expect_is(sim_sample(rdist = "exp", par = list(rate = 2)), "numeric")
expect_is(sim_sample(rdist = "lnorm", par = list(meanlog = 49, sd = 0.2)),
"numeric")
expect_gt(sim_sample(rdist = "norm", par = list(mean = 437, sd = 0.7)) %>%
length, 2)
expect_gt(sim_sample(rdist = "exp", par = list(rate = 2)) %>% length, 2)
expect_gt(sim_sample(rdist = "lnorm", par = list(meanlog = 49, sd = 0.2)) %>%
length,
2)
expect_gt(sim_sample(rdist = "norm", par = list(mean = 437, sd = 0.7)) %>%
unique %>% length, 2)
expect_gt(sim_sample(rdist = "exp", par = list(rate = 2)) %>%
unique %>% length, 2)
expect_gt(sim_sample(rdist = "lnorm", par = list(meanlog = 49, sd = 0.2)) %>%
unique %>% length, 2)
expect_is(sim_sample(rdist = "lnorm",
par = list(meanlog = big[[1]], sd = small[1])),
"numeric")
expect_gt(sim_sample(rdist = "lnorm",
par = list(meanlog = big[[1]], sd = small[1])) %>%
length, 2)
})
test_that("sim stats gives non-empty dataframe",{
expect_is(sim_stats(), "data.frame")
expect_gt(sim_stats() %>% nrow(), 2)
# test distributions
# norm
expect_is(sim_stats(rdist = "norm", par = list(mean = 57, sd = 0.2)), "data.frame")
expect_gt(sim_stats(rdist = "norm", par = list(mean = 57, sd = 0.2)) %>% nrow(), 2)
expect_is(sim_stats(rdist = "norm", par = list(mean = big[[1]], sd = small[[1]])), "data.frame")
expect_gt(sim_stats(rdist = "norm", par = list(mean = big[[1]], sd = small[[1]])) %>% nrow(), 2)
# lnorm
expect_is(sim_stats(rdist = "lnorm", par = list(mean = 57, sd = 0.2)), "data.frame")
expect_gt(sim_stats(rdist = "lnorm", par = list(mean = 57, sd = 0.2)) %>% nrow(), 2)
expect_is(sim_stats(rdist = "lnorm", par = list(mean = big[[1]], sd = small[[1]])), "data.frame")
expect_gt(sim_stats(rdist = "lnorm", par = list(mean = big[[1]], sd = small[[1]])) %>% nrow(), 2)
# exp
expect_is(sim_stats(rdist = "exp", par = list(rate = 3)), "data.frame")
expect_gt(sim_stats(rdist = "exp", par = list(rate = 3)) %>% nrow(), 2)
expect_is(sim_stats(rdist = "exp", par = list(rate = round(big[[1]]))), "data.frame")
expect_gt(sim_stats(rdist = "exp", par = list(rate = round(big[[1]]))) %>% nrow(), 2)
# pareto
expect_is(sim_stats(rdist = "pareto", par = list(shape = 3, scale = 2)), "data.frame")
expect_gt(sim_stats(rdist = "pareto", par = list(shape = 3, scale = 2)) %>% nrow(), 2)
})
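## Extra sanity check (hypothetical addition; it reuses only the
## sim_sample() signature already exercised above with the pareto
## distribution, so it makes no new assumptions about the metasim API).
test_that("pareto samples are numeric and varied", {
  s <- sim_sample(10, 0, "pareto", list(1, 2))
  expect_is(s, "numeric")
  expect_gt(s %>% unique() %>% length(), 2)
})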
 | /tests/testthat/test-sampling.R | no_license | kylehamilton/metasim | R | false | false | 3,679 | r |
|
library(QCSimulator)
### Name: CNOT4_03
### Title: 4 qubit CNOT gate (control-0,target-3)
### Aliases: CNOT4_03
### ** Examples
# Initialize global variables
init()
CNOT4_03(q1001_)
CNOT4_03(I16)
 | /data/genthat_extracted_code/QCSimulator/examples/CNOT4_03.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 203 | r |
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/model_predict.R
\name{h2o_predict_binary}
\alias{h2o_predict_binary}
\title{H2O Predict using Binary file}
\usage{
h2o_predict_binary(df, model_path, sample = NA)
}
\arguments{
\item{df}{Dataframe. Data to insert into the model.}
\item{model_path}{Character. Relative model path directory or zip file.}
\item{sample}{Integer. How many rows should the function predict?}
}
\value{
vector with predicted results.
}
\description{
This function lets the user predict using the h2o binary file.
Note that it works with the files generated when using the
function export_results(). Recommendation: use the
h2o_predict_MOJO() function when possible - it lets you change
h2o's version without problems.
}
\seealso{
Other Machine Learning:
\code{\link{ROC}()},
\code{\link{conf_mat}()},
\code{\link{export_results}()},
\code{\link{gain_lift}()},
\code{\link{h2o_automl}()},
\code{\link{h2o_predict_API}()},
\code{\link{h2o_predict_MOJO}()},
\code{\link{h2o_predict_model}()},
\code{\link{h2o_selectmodel}()},
\code{\link{impute}()},
\code{\link{iter_seeds}()},
\code{\link{lasso_vars}()},
\code{\link{model_metrics}()},
\code{\link{model_preprocess}()},
\code{\link{msplit}()}
Other H2O:
\code{\link{h2o_predict_API}()},
\code{\link{h2o_predict_MOJO}()},
\code{\link{h2o_predict_model}()}
}
\concept{H2O}
\concept{Machine Learning}
 | /man/h2o_predict_binary.Rd | no_license | laresbernardo/lares | R | false | true | 1,406 | rd |
|
#' Liver protein expression dataset
#'
#' Data for mediation analysis of liver protein expression.
#' The dataset is a list of objects: target, mediator, annotation, covar (covariates),
#' and driver.
#' There are 192 diversity outbred mice and 8050 genes with measured mRNA expression.
#' The 'target' is the level of expression of gene 'Tmem68'.
#' The mRNA transcript expression is measured for 8050 genes in 'mediator'.
#' The 'annotation' has positional information for each of the genes.
#' The 'covar' has Sex and Diet information for the 192 mice.
#' The 'driver' object has the driver as allele probabilities for the 8 founder genotypes for the 192 mice
#' at the location of the Tmem68 gene.
#'
#' @docType data
#'
#' @usage data(Tmem68)
#'
#' @format An object of class \code{"cross"}; see \code{\link[qtl]{read.cross}}.
#'
#' @keywords datasets
#'
#' @references Chick et al. (2016)
#' (\href{https://dx.doi.org/10.1038/nature18270}{Nature 534: 500-505})
#'
#' @source \href{https://github.com/churchill-lab/intermediate}{Churchill GitHub}
#'
#' @examples
#' data(Tmem68)
#' str(Tmem68)
"Tmem68" | /R/Tmem68.R | no_license | byandell/Tmem68 | R | false | false | 1,102 | r |
library(assertthat)
library(dplyr)
library(readxl)
library(stringr)
library(tibble)
library(tidyr)
## Data source: Ministry of Internal Affairs and Communications (Soumu-sho)
## [Totals] 2018 Basic Resident Register population and households; 2017 population dynamics (by municipality)
## https://www.soumu.go.jp/menu_news/s-news/01gyosei02_02000177.html
## https://www.soumu.go.jp/main_content/000563140.xls
## (excerpted from the file above)
## Extract the populations of the government-designated cities in Kanagawa Prefecture
in_filename <- 'city/incoming/000563140.xls'
all_df <- readxl::read_excel(in_filename)
all_df <- all_df[5:NROW(all_df), c(2,3,6)]
names(all_df) <- c('pref', 'city_ward', 'population')
all_df <- as_tibble(all_df)
all_df$population <- as.integer(all_df$population)
kanagawa_df <- all_df %>%
dplyr::filter(pref=='神奈川県')
city_df <- kanagawa_df[!is.na(kanagawa_df$city_ward %>% stringr::str_extract('^.+市.+区$')),] %>%
tidyr::separate('city_ward', into=c('city', 'ward'), sep='市') %>%
dplyr::mutate(city=paste0(city, '市'))
## Populations of Yokohama, Kawasaki, and Sagamihara
expected <- c(3737845, 1488031, 718192)
## Hard-code the column names (as symbols)
actual <- city_df %>% dplyr::select(-ward) %>%
dplyr::group_by(pref, city) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
summary_df <- actual
## Pass a string literal to select
actual <- city_df %>% dplyr::select(-'ward') %>%
dplyr::group_by(pref, city) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
## Pass a string variable to select. Without this, everything has to be hard-coded, which is painful.
name <- 'ward'
actual <- city_df %>% dplyr::select(-(!!name)) %>%
dplyr::group_by(pref, city) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
name_set <- c('pref', 'ward')
actual <- city_df %>% dplyr::select(-(!!(name_set))) %>%
dplyr::group_by(city) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
## Pass string variables to group_by. Without this, everything has to be hard-coded, which is painful.
name <- c('city')
actual <- city_df %>% dplyr::select(-c('pref', 'ward')) %>%
dplyr::group_by(!!(rlang::sym(name))) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
name_set <- c('pref', 'city')
actual <- city_df %>% dplyr::select(-'ward') %>%
dplyr::group_by(!!!(rlang::syms(name_set))) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
## Hard-code the destination column name of mutate (as a symbol)
actual <- summary_df %>% dplyr::mutate(n=population)
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
## Pass a string variable as the destination column name of mutate. Without this, everything has to be hard-coded, which is painful.
name <- c('n')
actual <- summary_df %>% dplyr::mutate(!!(name):=population)
assertthat::assert_that(assertthat::are_equal(expected, actual$n))
x_name <- c('population')
y_name <- c('y')
actual <- summary_df %>% dplyr::mutate(!!(y_name):=!!(rlang::sym(x_name)))
assertthat::assert_that(assertthat::are_equal(expected, actual$y))
## Extract specific elements from the given column
pattern <- '横浜市.+区'
n_wards <- 18
## Hard-code the column name
actual <- na.omit(kanagawa_df$city_ward %>% str_extract(pattern))
assertthat::assert_that(n_wards == NROW(actual))
actual <- na.omit(kanagawa_df[['city_ward']] %>% str_extract(pattern))
assertthat::assert_that(n_wards == NROW(actual))
name <- 'city_ward'
actual <- na.omit(kanagawa_df[[name]] %>% str_extract(pattern))
assertthat::assert_that(n_wards == NROW(actual))
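## The !!rlang::sym() / !!!rlang::syms() idioms above can also be written
## with tidyselect helpers and the .data pronoun. A sketch under the
## assumption of dplyr >= 1.0 (where across(), all_of() and .data are
## available); the results should match the versions above.
name_set <- c('pref', 'city')
actual <- city_df %>% dplyr::select(-dplyr::all_of('ward')) %>%
    dplyr::group_by(dplyr::across(dplyr::all_of(name_set))) %>%
    dplyr::summarize_each(list(sum)) %>%
    ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
x_name <- 'population'
y_name <- 'y'
actual <- summary_df %>% dplyr::mutate(!!(y_name) := .data[[x_name]])
assertthat::assert_that(assertthat::are_equal(expected, actual$y))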
| /scripts/stock_price/variable_column_name.R | permissive | zettsu-t/cPlusPlusFriend | R | false | false | 4,127 | r | library(assertthat)
library(dplyr)
library(readxl)
library(stringr)
library(tibble)
library(tidyr)
## 出典 data source : 総務省
## 【総計】平成30年住民基本台帳人口・世帯数、平成29年人口動態(市区町村別)
## https://www.soumu.go.jp/menu_news/s-news/01gyosei02_02000177.html
## https://www.soumu.go.jp/main_content/000563140.xls
## から抜粋
## 神奈川県の政令指定都市の人口を取り出す
in_filename <- 'city/incoming/000563140.xls'
all_df <- readxl::read_excel(in_filename)
all_df <- all_df[5:NROW(all_df), c(2,3,6)]
names(all_df) <- c('pref', 'city_ward', 'population')
all_df <- as_tibble(all_df)
all_df$population <- as.integer(all_df$population)
kanagawa_df <- all_df %>%
dplyr::filter(pref=='神奈川県')
city_df <- kanagawa_df[!is.na(kanagawa_df$city_ward %>% stringr::str_extract('^.+市.+区$')),] %>%
tidyr::separate('city_ward', into=c('city', 'ward'), sep='市') %>%
dplyr::mutate(city=paste0(city, '市'))
## 横浜市、川崎市、相模原市の人口
expected <- c(3737845, 1488031, 718192)
## 列名をべた書きする(シンボル)
actual <- city_df %>% dplyr::select(-ward) %>%
dplyr::group_by(pref, city) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
summary_df <- actual
## selectに文字列リテラルを渡す
actual <- city_df %>% dplyr::select(-'ward') %>%
dplyr::group_by(pref, city) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
## selectに文字列変数を渡す。これができないとハードコーディングだらけで辛い。
name <- 'ward'
actual <- city_df %>% dplyr::select(-(!!name)) %>%
dplyr::group_by(pref, city) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
name_set <- c('pref', 'ward')
actual <- city_df %>% dplyr::select(-(!!(name_set))) %>%
dplyr::group_by(city) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
## group_byに文字列変数を渡す。これができないとハードコーディングだらけで辛い。
name <- c('city')
actual <- city_df %>% dplyr::select(-c('pref', 'ward')) %>%
dplyr::group_by(!!(rlang::sym(name))) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
name_set <- c('pref', 'city')
actual <- city_df %>% dplyr::select(-'ward') %>%
dplyr::group_by(!!!(rlang::syms(name_set))) %>%
dplyr::summarize_each(list(sum)) %>%
ungroup()
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
## mutate先の列名をべた書きする(シンボル)
actual <- summary_df %>% dplyr::mutate(n=population)
assertthat::assert_that(assertthat::are_equal(expected, actual$population))
## Pass a string variable as the mutate target column name; without this, everything would have to be hard-coded.
name <- c('n')
actual <- summary_df %>% dplyr::mutate(!!(name):=population)
assertthat::assert_that(assertthat::are_equal(expected, actual$n))
x_name <- c('population')
y_name <- c('y')
actual <- summary_df %>% dplyr::mutate(!!(y_name):=!!(rlang::sym(x_name)))
assertthat::assert_that(assertthat::are_equal(expected, actual$y))
## Extract specific elements from the given column
pattern <- '横浜市.+区'
n_wards <- 18
## Hard-code the column name
actual <- na.omit(kanagawa_df$city_ward %>% stringr::str_extract(pattern))
assertthat::assert_that(n_wards == NROW(actual))
actual <- na.omit(kanagawa_df[['city_ward']] %>% stringr::str_extract(pattern))
assertthat::assert_that(n_wards == NROW(actual))
name <- 'city_ward'
actual <- na.omit(kanagawa_df[[name]] %>% stringr::str_extract(pattern))
assertthat::assert_that(n_wards == NROW(actual))
|
library(dplyr)
library(shiny)
library(leaflet)
load('app.rdata')
ui <- bootstrapPage(
tags$style(type = "text/css", "html, body {width:100%;height:100%}"),
leafletOutput("map", width = "100%", height = "100%"),
# slider panel
absolutePanel(
id = 'controls',
class = 'panel panel-default',
fixed = T,
height = 'auto',
top=50,
right = 50,
left = 'auto',
bottom = 'auto',
width = 'auto',
draggable = T,
# use a distinct input id ('basemap') so it does not clash with the 'map' leaflet output
selectInput('basemap', "Map Type", choices = c(
"CartoDB.Positron",
"Esri.WorldImagery"),
selected = "CartoDB.Positron"),
sliderInput(
"range",
"Year Range",
min = min(df$year),
max = max(df$year),
value = range(df$year),
step = 1
)
)
)
# Shiny server
server <- function(input, output, session) {
filteredData <- reactive({
df[df$year >= input$range[1] & df$year <= input$range[2],]
})
# Map output
output$map <- renderLeaflet({
leaflet(df) %>%
fitBounds(~min(longitude), ~min(latitude), ~max(longitude), ~max(latitude)) %>%
addProviderTiles(input$basemap)
})
# plot points on the map
observe({
leafletProxy("map", data = filteredData()) %>%
clearShapes() %>% clearPopups() %>% clearMarkers() %>% clearMarkerClusters() %>%
addCircleMarkers(lng = ~longitude, lat = ~latitude, popup = ~paste(paste("<h3>", conflict_name, "</h3>"),
paste("<b>Side A:</b>", side_a, "<br>", "<b>Side B:</b>", side_b, "<br>",
"<b>Date:</b>", paste(date_start, date_end, sep = " - "), '<br>', "<b>Casualties:</b>", best, sep = " ")),
clusterOptions = markerClusterOptions())
})
}
# Run the application
shinyApp(ui = ui, server = server)
| /app.R | no_license | MikeKozelMSU/final | R | false | false | 1,950 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/adonis_pairwise.R
\name{adonis_pairwise}
\alias{adonis_pairwise}
\title{Pairwise comparisons for permutational multivariate analysis of variance using distance matrices}
\usage{
adonis_pairwise(
x,
dd,
group.var,
add_permdisp = TRUE,
permut = 999,
p.adj = "fdr",
all_results = T,
comparison_sep = ".",
permdisp_type = "median",
...
)
}
\arguments{
\item{x}{Sample meta-data (data frame for the independent variables)}
\item{dd}{Dissimilarity matrix between samples}
\item{group.var}{Name of the independent variable to test (RHS in adonis formula)}
\item{add_permdisp}{Logical; if TRUE (default), results of tests for homogeneity of multivariate dispersions will be added to the output (see \code{\link[vegan]{betadisper}})}
\item{permut}{Number of permutations required}
\item{p.adj}{Logical or character; if TRUE, adjust P-values for multiple comparisons with FDR; if character, specify correction method from \code{\link{p.adjust}} ("holm", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr", or "none")}
\item{all_results}{Logical, return results of adonis and data subsets for each pairwise comparison}
\item{comparison_sep}{Character string used to separate the levels of the independent variable in the pairwise comparison names (default, ".")}
\item{permdisp_type}{Use the spatial median (default) or the group centroid in the analysis of homogeneity of multivariate dispersions (see \code{\link[vegan]{betadisper}})}
\item{...}{Additional arguments will be passed to \code{\link[vegan]{adonis}} and \code{\link[vegan]{permutest}}}
}
\value{
List with adonis and betadisper results.
}
\description{
Pairwise comparisons for permutational multivariate analysis of variance using distance matrices
}
\examples{
library(vegan)
data(dune)
data(dune.env)
# Compare all Management levels
adonis2(dune ~ Management, data = dune.env)
# Pairwise comparisons between Management levels
tst <- adonis_pairwise(x = dune.env, dd = vegdist(dune), group.var = "Management")
tst$Adonis.tab
tst$Betadisper.tab
}
\seealso{
\code{\link[vegan]{adonis}}, \code{\link[vegan]{betadisper}}
}
| /man/adonis_pairwise.Rd | permissive | vmikk/metagMisc | R | false | true | 2,183 | rd |
library(shiny)
library(timevis)
library(plotly)
library(RColorBrewer)
library(leaflet)
library(visNetwork)
library(rsconnect)
data<- data.frame(
id = 1:11,
content = c("Ecole Spéciale des Travaux Publics<br>du Bâtiment et de l'Industrie - Paris" ,"Stage et vacation au<br>bureau du Patrimoine Immobilier<br>CNRS","Stage en<br>conduite de travaux<br>Vinci Construction","Stage Correspondante<br>Systèmes d'information<br>Bouygues Construction", "<b>Ingénieur d'études - BONNA SABLA</b>" , "Fondation Louis Vuitton","Chantier Ligne à Grande Vitesse SEA","<b>Responsable bureau d'études - BONNA SABLA</b>","Mise en place<br>du BIM en interne", "Projet de<br>standardisation<br>des produits<br>Bonna Sabla","<b>Certificat<br>Data Scientist<br>CEPE Formation<br>continue<br>ENSAE ENSAI</b>"),
start = c( "2006-09-01", "2007-03-01" , "2008-07-01" ,"2009-06-01", "2009-09-01", "2010-02-01","2012-03-01","2014-01-01", "2015-07-01" ,"2017-03-01" , "2021-03-22"),
end = c( "2009-08-01", NA , NA, NA, "2013-12-31", NA,NA, "2021-05-01", NA,NA, NA),
group = c(3, 2,2,2,2, 1, 1,2,1,1,3),
style=(c("background-color: #A8DDB5;","background-color:#7BCCC4;","background-color:#7BCCC4;","background-color:#7BCCC4;","background-color:#08589E;","","","background-color:#08589E;","","","background-color: #A8DDB5;"))
)
groupes = data.frame(id = 1:3, content = c("Projets", "Experience<br>professionnelle","Formation"))
systeme_expl<-c("Windows","MacOS","Ubuntu")
progr<-c("R","Python","VBA")
packagesRutilisés<-c("shiny","ggplot2","caret","tidymodel","dplyr","officeR")
datascience<-c("Text mining","Graph mining","Webscraping","Visualisation","Machine learning","Deep learning","Application interactive")
fig<-data.frame(labels = c("Systèmes<br>d'exploitation","Langages<br>de programmation","Data science","Compétences<br>générales","Soft<br>skills","Windows","MacOS","Ubuntu","R","Python","VBA","Text mining","Graph mining","Webscraping","Visualisation","Machine<br>learning","Deep<br>learning","Application<br>interactive","Gestion<br>de projets","Management","Communication orale<br>et écrite","Rigueur","Motivation","Organisation","Efficacité"),
parents = c("","","","","","Systèmes<br>d'exploitation","Systèmes<br>d'exploitation","Systèmes<br>d'exploitation","Langages<br>de programmation","Langages<br>de programmation","Langages<br>de programmation","Data science","Data science","Data science","Data science","Data science","Data science","Data science","Compétences<br>générales","Compétences<br>générales","Compétences<br>générales","Soft<br>skills","Soft<br>skills","Soft<br>skills","Soft<br>skills"),
colors = c("#A8DDB5","#7BCCC4","#4EB3D3","#2B8CBE","#08589E"))
nodes <- data.frame(id=c(1:17),
label=c("python","R","panda","numpy","matplotlib","Pandas","scikit-learn","nltk","textBlob","plotly","leaflet","tidymodel","ggplot2","caret","dplyr","shiny","officeR" ),
shape=c("diamond","diamond","star","star","star","star","star","star","star","star","star","star","star","star","star","star","star"),
color = c("#08589E", "#2B8CBE", "#7BCCC4", "#7BCCC4", "#7BCCC4","#7BCCC4", "#7BCCC4", "#7BCCC4","#7BCCC4", "#7BCCC4", "#7BCCC4","#7BCCC4", "#7BCCC4", "#7BCCC4","#7BCCC4", "#7BCCC4", "#7BCCC4"),
shadow = c(TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE,TRUE)
)
edges <- data.frame(from=c(1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2 ,2 ,2, 2),
to=c(3,4, 5 , 6, 7, 8, 9, 10, 11, 12, 13, 11, 10, 14, 15 ,16, 17))
| /global.R | no_license | agnes-me/AppliCV | R | false | false | 3,821 | r |
# NOTE: this fragment assumes its imports and inputs are defined upstream:
# RCurl, data.table (for rbindlist), a JSON parser providing fromJSON(..., simplify =)
# such as RJSONIO, plus start_lat/start_lon/end_lat/end_lon, tfl_app_id, tfl_app_key,
# mode, i, j, results, error_ids, unique_journeys and fill_missing_info().
url <- paste("https://api.tfl.gov.uk/Journey/JourneyResults/", start_lat, ",", start_lon, "/to/", end_lat, ",", end_lon,
"?time=1000&journeyPreference=LeastTime&mode=tube&app_id=", tfl_app_id, "&app_key=", tfl_app_key, sep = "")
json_data <- fromJSON(RCurl::getURL(url), simplify = FALSE)
if (json_data$`$type` == 'Tfl.Api.Presentation.Entities.JourneyPlanner.ItineraryResult, Tfl.Api.Presentation.Entities') {
temp_results_frame <- data.frame(unique_id = numeric(),
lat = numeric(),
lon = numeric(),
mode = character(),
line = character(),
stringsAsFactors = FALSE)
for (r in 1:length(json_data$journeys[[1]]$legs)) {
line <- json_data$journeys[[1]]$legs[[r]]$routeOptions[[1]]$name
linestring <- json_data$journeys[[1]]$legs[[r]]$path$lineString
linestring <- gsub(" ", "", linestring, fixed=TRUE)
linestring <- gsub("[", "", linestring, fixed = TRUE)
linestring <- gsub("]", "", linestring, fixed = TRUE)
linestring <- unlist(strsplit(linestring, split = ","))
per_leg_results <- data.frame(unique_id = numeric(),
lat = numeric(),
lon = numeric(),
mode = character(),
line = character(),
stringsAsFactors = FALSE)
l <- 2
m <- 1
for (k in 1:(length(linestring)/2)){
per_leg_results[k,] <- c(as.numeric(i), as.numeric(linestring[m]), as.numeric(linestring[l]), mode, line)
l <- l+2
m <- m+2
}
temp_results_frame <- rbindlist(list(temp_results_frame, per_leg_results), use.names=TRUE)
rm(per_leg_results)
temp_results_frame$unique_id <- as.numeric(temp_results_frame$unique_id)
temp_results_frame$lat <- as.numeric(temp_results_frame$lat)
temp_results_frame$lon <- as.numeric(temp_results_frame$lon)
}
temp_results_frame[temp_results_frame$line == "","line"] <- NA
temp_results_frame$line <- fill_missing_info(temp_results_frame$line)
results <- rbindlist(list(results, temp_results_frame), use.names=TRUE)
print(paste("At", Sys.time(), "routing unique journey id", i, "was completed using TfL tube mode"))
rm(temp_results_frame, k, l, m, r, line, linestring, url, json_data)
unique_journeys[i,]$status <- 'tfl'
} else
{ print(paste("At", Sys.time(), "routing unique journey id", i, "was NOT completed using TfL tube mode, the error id will be logged"))
error_ids[j,] <- i
j <- j+1
}
rm(tfl_app_id, tfl_app_key) | /routing/tube_routing.R | permissive | JimShady/one-person-simple-lhem | R | false | false | 2,747 | r |
library(rockchalk)
### Name: rbindFill
### Title: Stack together data frames
### Aliases: rbindFill
### ** Examples
set.seed(123123)
N <- 10000
dat <- genCorrelatedData2(N, means = c(10, 20, 5, 5, 6, 7, 9), sds = 3,
stde = 3, rho = .2, beta = c(1, 1, -1, 0.5))
dat1 <- dat
dat1$xcat1 <- factor(sample(c("a", "b", "c", "d"), N, replace=TRUE))
dat1$xcat2 <- factor(sample(c("M", "F"), N, replace=TRUE),
levels = c("M", "F"), labels = c("Male", "Female"))
dat1$y <- dat$y +
as.vector(contrasts(dat1$xcat1)[dat1$xcat1, ] %*% c(0.1, 0.2, 0.3))
dat1$xchar1 <- rep(letters[1:26], length.out = N)
dat2 <- dat
dat1$x3 <- NULL
dat2$x2 <- NULL
dat2$xcat2 <- factor(sample(c("M", "F"), N, replace=TRUE),
levels = c("M", "F"), labels = c("Male", "Female"))
dat2$xcat3 <- factor(sample(c("K1", "K2", "K3", "K4"), N, replace=TRUE))
dat2$xchar1 <- "1"
dat3 <- dat
dat3$x1 <- NULL
dat3$xcat3 <- factor(sample(c("L1", "L2", "L3", "L4"), N, replace=TRUE))
dat.stack <- rbindFill(dat1, dat2, dat3)
str(dat.stack)
## Possible BUG alert about base::rbind and plyr::rbind.fill
## Demonstrate the problem of a same-named variable that is factor in one and
## an ordered variable in the other
dat5 <- data.frame(ds = "5", x1 = rnorm(N),
xcat1 = gl(20, 5, labels = LETTERS[20:1]))
dat6 <- data.frame(ds = "6", x1 = rnorm(N),
xcat1 = gl(20, 5, labels = LETTERS[1:20], ordered = TRUE))
## rbind reduces xcat1 to factor, whether we bind dat5 or dat6 first.
stack1 <- base::rbind(dat5, dat6)
str(stack1)
## note xcat1 levels are ordered T, S, R, Q
stack2 <- base::rbind(dat6, dat5)
str(stack2)
## xcat1 levels are A, B, C, D
## stack3 <- plyr::rbind.fill(dat5, dat6)
## str(stack3)
## xcat1 is a factor with levels T, S, R, Q ...
## stack4 <- plyr::rbind.fill(dat6, dat5)
## str(stack4)
## oops, xcat1 is ordinal with levels A < B < C < D
## stack5 <- rbindFill(dat5, dat6)
| /data/genthat_extracted_code/rockchalk/examples/rbindFill.Rd.R | no_license | surayaaramli/typeRrh | R | false | false | 1,955 | r |
# Extract Data from RBSN Stations:
# Address: http://reports.irimo.ir/jasperserver/login.html
# User Name: user
# Password: user
rm(list = ls())
# Load library:
library(dplyr)
library(ggplot2)
# Set Current Directory to RBSN_Data Folder:
setwd(dir = choose.dir(default = getwd(), caption = "Select RBSN Data Folder (*.csv): "))
# Read Data:
Data <- read.csv(file = "Mashhad_40745.csv", header = TRUE, skip = 2)
# Modified Data:
# remove X column:
Data$X <- NULL
# remove empty rows:
Data <- Data[-which(x = Data$station_i == "" | Data$station_i == "station_i"), ]
# reset row names:
row.names(Data) <- NULL
# Remove "null", "nul" and "n" from Data:
Data[Data == "null"] <- NA
Data[Data == "n"] <- NA
Data[Data == "nul"] <- NA
# Reset Factor Level:
Data <- droplevels.data.frame(Data)
# Rename the date column:
colnames(Data)[5] <- "date"
# Change Class of date Column:
Data$date <- as.POSIXct(x = as.character(x = Data$date), format = "%m/%d/%Y %H:%M")
# Change Class of Columns:
Column.Names <- colnames(Data)[-c(1,5)]
for (ColName in Column.Names) {
eval(expr = parse(text = paste("Data$", ColName, " <- as.numeric(as.character(x = Data$",
ColName, "))", sep = "")))
}
# Group Date Variable into Day/Month/Year:
A <- Data %>%
mutate(month = format(x = date, "%m"), year = format(x = date, "%Y")) %>%
group_by(month, year) %>%
summarise(total = sum(rrr, na.rm = TRUE))
B <- Data %>%
mutate(year = format(x = date, "%Y")) %>%
group_by(year) %>%
summarise(total = sum(rrr, na.rm = TRUE))
# Order Data by the Year:
A <- A[order(A$year),]
A$date <- seq(from = as.Date("1951/01/01"), to = as.Date("2017/12/01"), by = "month")
B <- B[order(B$year),]
B$date <- seq(from = as.Date("1951/01/01"), to = as.Date("2017/12/01"), by = "year")
ggplot(data = B, mapping = aes(x = year, y = total, group = 1)) +
geom_point() +
geom_line(linetype = "dashed", color="red")
| /CODE_RBSN.r | permissive | shirazipooya/CODE_RBSN | R | false | false | 1,934 | r |
library(testthat)
library(NGSexpressionSet)
test_check("RFclust.SGE")
| /tests/testthat.R | permissive | stela2502/RFclust.SGE | R | false | false | 71 | r |
### R code from vignette source 'multivariate-birth2.Rnw'
###################################################
### code chunk number 1: multivariate-birth2.Rnw:13-14 (eval = FALSE)
###################################################
## options(width=80)
###################################################
### code chunk number 2: multivariate-birth2.Rnw:19-22 (eval = FALSE)
###################################################
## library(catdata)
## data(birth)
## attach(birth)
###################################################
### code chunk number 3: multivariate-birth2.Rnw:29-36 (eval = FALSE)
###################################################
## intensive <- rep(0,length(Intensive))
## intensive[Intensive>0] <- 1
## Intensive <- intensive
##
## previous <- Previous
## previous[previous>1] <- 2
## Previous <- previous
###################################################
### code chunk number 4: multivariate-birth2.Rnw:41-42 (eval = FALSE)
###################################################
## library(gee)
###################################################
### code chunk number 5: multivariate-birth2.Rnw:47-54 (eval = FALSE)
###################################################
## library(VGAM)
## Birth <- as.data.frame(na.omit(cbind(Intensive, Cesarean, Sex, Weight, Previous,
## AgeMother)))
## detach(birth)
## bivarlogit <- vglm(cbind(Intensive , Cesarean) ~ Weight + AgeMother +
## as.factor(Sex) + as.factor(Previous), binom2.or(zero=NULL), data=Birth)
## summary(bivarlogit)
###################################################
### code chunk number 6: multivariate-birth2.Rnw:59-82 (eval = FALSE)
###################################################
## n <- dim(Birth)[1]
## ID <- rep(1:n,2)
##
## InterceptInt <- InterceptCes <- rep(1, 2*n)
## InterceptInt[(n+1):(2*n)] <- InterceptCes[1:n] <- 0
##
## AgeMotherInt <- AgeMotherCes <- rep(Birth$AgeMother,2)
## AgeMotherInt[(n+1):(2*n)] <- AgeMotherCes[1:n] <- 0
##
## SexInt <- SexCes <- rep(Birth$Sex,2)
## SexInt[SexInt==1] <- SexCes[SexCes==1] <- 0
## SexInt[SexInt==2] <- SexCes[SexCes==2] <- 1
## SexInt[(n+1):(2*n)] <- SexCes[1:n] <- 0
##
## PrevBase <- rep(Birth$Previous,2)
## PreviousInt1 <- PreviousCes1 <- PreviousInt2 <- PreviousCes2 <- rep(0, 2*n)
## PreviousInt1[PrevBase==1] <- PreviousCes1[PrevBase==1] <- 1
## PreviousInt2[PrevBase>=2] <- PreviousCes2[PrevBase>=2] <- 1
## PreviousInt1[(n+1):(2*n)] <- PreviousInt2[(n+1):(2*n)] <- PreviousCes1[1:n] <-
## PreviousCes2[1:n] <- 0
##
## WeightInt <- WeightCes <- rep(Birth$Weight,2)
## WeightInt[(n+1):(2*n)] <- WeightCes[1:n] <- 0
###################################################
### code chunk number 7: multivariate-birth2.Rnw:87-91 (eval = FALSE)
###################################################
## GeeDat <- as.data.frame(cbind(ID, InterceptInt, InterceptCes, SexInt , SexCes ,
## WeightInt , WeightCes , PreviousInt1 , PreviousInt2, PreviousCes1,
## PreviousCes2, AgeMotherInt , AgeMotherCes, Response=
## c(Birth$Intensive, Birth$Cesarean)))
###################################################
### code chunk number 8: multivariate-birth2.Rnw:96-102 (eval = FALSE)
###################################################
## gee1 <- gee (Response ~ -1 + InterceptInt + InterceptCes + WeightInt + WeightCes
## + AgeMotherInt + AgeMotherCes + SexInt + SexCes +
## PreviousInt1 + PreviousCes1 + PreviousInt2 + PreviousCes2,
## family=binomial(link=logit), id=ID, data=GeeDat)
##
## summary(gee1)
###################################################
### code chunk number 9: multivariate-birth2.Rnw:107-124 (eval = FALSE)
###################################################
## coefficients(bivarlogit)[1:2]
## coefficients(gee1)[1:2]
##
## coefficients(bivarlogit)[4:5]
## coefficients(gee1)[3:4]
##
## coefficients(bivarlogit)[7:8]
## coefficients(gee1)[5:6]
##
## coefficients(bivarlogit)[10:11]
## coefficients(gee1)[7:8]
##
## coefficients(bivarlogit)[13:14]
## coefficients(gee1)[9:10]
##
## coefficients(bivarlogit)[16:17]
## coefficients(gee1)[11:12]
###################################################
### code chunk number 10: multivariate-birth2.Rnw:127-128 (eval = FALSE)
###################################################
## detach(package:VGAM)
| /data/genthat_extracted_code/catdata/vignettes/multivariate-birth2.R | no_license | surayaaramli/typeRrh | R | false | false | 4,377 | r | ### R code from vignette source 'multivariate-birth2.Rnw'
###################################################
### code chunk number 1: multivariate-birth2.Rnw:13-14 (eval = FALSE)
###################################################
## options(width=80)
###################################################
### code chunk number 2: multivariate-birth2.Rnw:19-22 (eval = FALSE)
###################################################
## library(catdata)
## data(birth)
## attach(birth)
###################################################
### code chunk number 3: multivariate-birth2.Rnw:29-36 (eval = FALSE)
###################################################
## intensive <- rep(0,length(Intensive))
## intensive[Intensive>0] <- 1
## Intensive <- intensive
##
## previous <- Previous
## previous[previous>1] <- 2
## Previous <- previous
###################################################
### code chunk number 4: multivariate-birth2.Rnw:41-42 (eval = FALSE)
###################################################
## library(gee)
###################################################
### code chunk number 5: multivariate-birth2.Rnw:47-54 (eval = FALSE)
###################################################
## library(VGAM)
## Birth <- as.data.frame(na.omit(cbind(Intensive, Cesarean, Sex, Weight, Previous,
## AgeMother)))
## detach(birth)
## bivarlogit <- vglm(cbind(Intensive , Cesarean) ~ Weight + AgeMother +
## as.factor(Sex) + as.factor(Previous), binom2.or(zero=NULL), data=Birth)
## summary(bivarlogit)
###################################################
### code chunk number 6: multivariate-birth2.Rnw:59-82 (eval = FALSE)
###################################################
## n <- dim(Birth)[1]
## ID <- rep(1:n,2)
##
## InterceptInt <- InterceptCes <- rep(1, 2*n)
## InterceptInt[(n+1):(2*n)] <- InterceptCes[1:n] <- 0
##
## AgeMotherInt <- AgeMotherCes <- rep(Birth$AgeMother,2)
## AgeMotherInt[(n+1):(2*n)] <- AgeMotherCes[1:n] <- 0
##
## SexInt <- SexCes <- rep(Birth$Sex,2)
## SexInt[SexInt==1] <- SexCes[SexCes==1] <- 0
## SexInt[SexInt==2] <- SexCes[SexCes==2] <- 1
## SexInt[(n+1):(2*n)] <- SexCes[1:n] <- 0
##
## PrevBase <- rep(Birth$Previous,2)
## PreviousInt1 <- PreviousCes1 <- PreviousInt2 <- PreviousCes2 <- rep(0, 2*n)
## PreviousInt1[PrevBase==1] <- PreviousCes1[PrevBase==1] <- 1
## PreviousInt2[PrevBase>=2] <- PreviousCes2[PrevBase>=2] <- 1
## PreviousInt1[(n+1):(2*n)] <- PreviousInt2[(n+1):(2*n)] <- PreviousCes1[1:n] <-
## PreviousCes2[1:n] <- 0
##
## WeightInt <- WeightCes <- rep(Birth$Weight,2)
## WeightInt[(n+1):(2*n)] <- WeightCes[1:n] <- 0
###################################################
### code chunk number 7: multivariate-birth2.Rnw:87-91 (eval = FALSE)
###################################################
## GeeDat <- as.data.frame(cbind(ID, InterceptInt, InterceptCes, SexInt , SexCes ,
## WeightInt , WeightCes , PreviousInt1 , PreviousInt2, PreviousCes1,
## PreviousCes2, AgeMotherInt , AgeMotherCes, Response=
## c(Birth$Intensive, Birth$Cesarean)))
###################################################
### code chunk number 8: multivariate-birth2.Rnw:96-102 (eval = FALSE)
###################################################
## gee1 <- gee (Response ~ -1 + InterceptInt + InterceptCes + WeightInt + WeightCes
## + AgeMotherInt + AgeMotherCes + SexInt + SexCes +
## PreviousInt1 + PreviousCes1 + PreviousInt2 + PreviousCes2,
## family=binomial(link=logit), id=ID, data=GeeDat)
##
## summary(gee1)
###################################################
### code chunk number 9: multivariate-birth2.Rnw:107-124 (eval = FALSE)
###################################################
## coefficients(bivarlogit)[1:2]
## coefficients(gee1)[1:2]
##
## coefficients(bivarlogit)[4:5]
## coefficients(gee1)[3:4]
##
## coefficients(bivarlogit)[7:8]
## coefficients(gee1)[5:6]
##
## coefficients(bivarlogit)[10:11]
## coefficients(gee1)[7:8]
##
## coefficients(bivarlogit)[13:14]
## coefficients(gee1)[9:10]
##
## coefficients(bivarlogit)[16:17]
## coefficients(gee1)[11:12]
###################################################
### code chunk number 10: multivariate-birth2.Rnw:127-128 (eval = FALSE)
###################################################
## detach(package:VGAM)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/sqlite.r
\name{sqodbc_executeqy}
\alias{sqodbc_executeqy}
\title{sqodbc_executeqy wrapper on sqlQuery}
\usage{
sqodbc_executeqy(qy, db, ...)
}
\arguments{
\item{qy}{SQL query string passed on to \code{sqlQuery}}

\item{db}{list of connection parameters: [dsn], srv, usr, pwd and dbn}
}
\description{
sqodbc_executeqy wrapper on sqlQuery
}
| /man/sqodbc_executeqy.Rd | no_license | gyang274/yg | R | false | true | 335 | rd |
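The help page above documents only the interface. As an illustration of what a wrapper of this shape might look like (a sketch under assumptions: the real implementation lives in `R/sqlite.r` of `gyang274/yg`; `odbcConnect`, `sqlQuery` and `odbcClose` are from the RODBC package, and the helper name here is made up):

```r
library(RODBC)

# Hypothetical sketch of a sqlQuery wrapper; `db` carries the connection
# parameters described in the help page above.
execute_query_sketch <- function(qy, db, ...) {
  ch <- odbcConnect(dsn = db$dsn, uid = db$usr, pwd = db$pwd)
  on.exit(odbcClose(ch))   # release the channel even if the query errors
  sqlQuery(ch, qy, ...)    # forward any extra arguments to sqlQuery
}
```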
#load csv
test <- read.csv("~/Desktop/test.csv", header=FALSE)
#load ggplot lib
library(ggplot2)
# create variable names
effortTime=test$V1
dateOfEffort=test$V2
#change dates from factor to character
dateOfEffort=as.character(dateOfEffort)
str(dateOfEffort)
#convert from character to R date object
dateOfEffort=as.Date(dateOfEffort,"%m/%d/%y")
# make plot using ggplot geom_line
ggplot(test, aes(dateOfEffort,effortTime))+geom_line(color='orange')+geom_point(color='dark orange')+ggtitle("Whirlwind Koms over Time")+
labs(x="Date",y="Segment time")
| /komplot.R | no_license | wnowak10/strava | R | false | false | 552 | r |
#' Spatial data of white-tailed deer GPS locations
#'
#' Spatial layer of GPS locations from white-tailed deer (*Odocoileus virginianus*) in North and South Carolina, United States.
#'
#' @docType data
#'
#' @usage data(wtdeer_locations)
#'
#' @format An object of class \code{"SpatialPointsDataFrame"}
#'
#' @keywords datasets
#'
#'
#'
#' @examples
#' library(sp)
#' data(wtdeer_locations)
#' plot(wtdeer_locations)
"wtdeer_locations"
| /R/wtdeer_locations-data.R | no_license | cghaase/GARPTools | R | false | false | 432 | r |
library( knitr )
opts_chunk$set( cache=FALSE,
echo=TRUE,
message=TRUE,
warning=FALSE,
highlight=TRUE,
sanitize=FALSE,
tidy=TRUE,
dev='tikz',
tab.env='table',
fig.env='figure',
fig.lp='fig:',
fig.align='center',
fig.pos='tbp',
out.width='.75\\textwidth'
)
knit( "example_under_construction.rnw" )
quit( save = "no", status = 0, runLast = TRUE )
| /example_under_construction.R | no_license | dhanya1/cyclist_detection_thesis | R | false | false | 552 | r | library( knitr )
opts_chunk$set( cache=FALSE,
echo=TRUE,
message=TRUE,
warning=FALSE,
highlight=TRUE,
sanitize=FALSE,
tidy=TRUE,
dev='tikz',
tab.env='table',
fig.env='figure',
fig.lp='fig:',
fig.align='center',
fig.pos='tbp',
out.width='.75\\textwidth'
)
knit( "example_under_construction.rnw" )
quit( save = "no", status = 0, runLast = TRUE )
|
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/FuncionesAuxiliares.R
\name{ordenar_porconteo}
\alias{ordenar_porconteo}
\title{Order by factor count}
\usage{
ordenar_porconteo(df, col)
}
\arguments{
\item{df}{Data.frame to condense}
\item{col}{Column containing the factors. Supplied unquoted (without parentheses).}
}
\value{
Data.frame
}
\description{
Wrapper to quickly order a data.frame by group counts, from largest to smallest.
}
\examples{
df<-data.frame(factores=c("A","A","B","C","C","D","A","A"),otros=c(1,3,2,4,5,1,2,7))
#Order, from largest to smallest, by factor count
PorConteo<-ordenar_porconteo(df, factores)
}
\author{
Eduardo Flores
}
\seealso{
denue_varios_stats
}
| /man/ordenar_porconteo.Rd | no_license | fdzul/inegiR | R | false | false | 706 | rd |
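For readers without `inegiR` installed, the behaviour the help page describes can be approximated in base R (an illustrative sketch, assuming `ordenar_porconteo` sorts groups by descending count):

```r
df <- data.frame(factores = c("A", "A", "B", "C", "C", "D", "A", "A"),
                 otros    = c(1, 3, 2, 4, 5, 1, 2, 7))
# Count each factor level, then order the groups from largest to smallest.
conteo <- as.data.frame(table(factores = df$factores))
conteo[order(-conteo$Freq), ]   # A (4) first, then C (2), then B and D (1 each)
```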
#' Soybean data from Zobel, Wright and Gauch (1988) and
#' is subset from the \code{"gauch.soy"} dataset included in the \code{"agridat"} package
#' containing 7 genotypes evaluated in 10 environments
#'
#' @name soy
#' @docType data
#'
#' @usage data(soy)
#'
#' @format
#' soy An object of class \code{"data.frame"} containing plot values. The blocks (or reps) are randomly assigned as explained from the documentation of \code{"gauch.soy"} data from \code{"agridat"}
#'
#' @keywords soy
#'
#' @references Zobel, R. W., Wright, M. J., & Gauch, H. G. (1988).
#' Statistical analysis of a yield trial. Agronomy Journal, 80(3), 388-393.
#' (\href{https://dl.sciencesocieties.org/publications/aj/abstracts/80/3/AJ0800030388}{Agronomy Journal})
#'
#' @source \href{https://cran.r-project.org/web/packages/agridat/index.html}{agridat}
#'
#' @examples
#' data(soy)
#' summary(soy)
#' print(soy)
NULL
| /R/soy.R | no_license | nsantantonio/Bilinear | R | false | false | 900 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/main.R
\name{pm_project}
\alias{pm_project}
\title{Define empty PlasmoMAPI project}
\usage{
pm_project()
}
\description{
Define empty PlasmoMAPI project.
}
| /man/pm_project.Rd | permissive | mrc-ide/PlasmoMAPI | R | false | true | 296 | rd |
################################################################################
rm(list = ls())
library(survival)
library(survey)
################################################################################
load("../data/data4Demo.RData")
#Input data: "data4Demo.RData" contains a data frame "dataPheno" and a list "distMatList"
#"dataPheno" is a subject by covariates table, it contains the variable needed in the survival analysis
#str(dataPheno)
#"distMatList" is a list of distance matrix, each component in this list is a n_i by n_i (n_i is the number of tumor samples of the i_th subject) symmetric matrix with all zeroes in diagonal
#These distance matrix can be calculated from any source of data (i.e. SCNA profiles, methylation profiles) by user's own definition
#str(distMatList)
#names of "distMatList" match the rownames of "dataPheno"
#table(rownames(dataPheno) == names(distMatList))
#column "nTumor" (number of tumor samples of that subject) in "dataPheno" matchs the size of distance matrix in "distMatList"
#table(dataPheno$nTumor == unlist(lapply(distMatList, nrow)))
################################################################################
#Calculate the APITH index from distMatList (average distance of each sample pair)
dataPheno$APITH <- unlist(lapply(distMatList, function(distMat)mean(distMat[lower.tri(distMat)])))
#hist(dataPheno$APITH, seq(0, 1, 0.05))
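#Illustrative check of this definition on a toy 3-sample distance matrix
#(assumption: APITH is simply the mean of the lower-triangle distances):
#toyDist <- matrix(c(0, 0.2, 0.4,
#                    0.2, 0, 0.6,
#                    0.4, 0.6, 0), nrow = 3)
#mean(toyDist[lower.tri(toyDist)]) # (0.2 + 0.4 + 0.6) / 3 = 0.4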
################################################################################
#Function to calculate weights for survival analysis using the inverse of the estimated variance of APITH index
#(Please refer to the supplementary notes for details of the algorithm in this function)
funWeight <- function(distMatList){
tempFun <- function(distMat){
k <- nrow(distMat)
if(k <= 1)stop("Something is wrong!")
distVec <- distMat[lower.tri(distMat)]
k2 <- cbind(distVec, distVec)
k3 <- NULL
if(k >= 3){
tempCombn <- combn(k, 3)
for(i in 1:ncol(tempCombn)){
tempIdx <- tempCombn[, i]
tempMat <- cbind(c(distMat[tempIdx[1], tempIdx[2]], distMat[tempIdx[1], tempIdx[2]], distMat[tempIdx[1], tempIdx[3]]), c(distMat[tempIdx[1], tempIdx[3]], distMat[tempIdx[2], tempIdx[3]], distMat[tempIdx[2], tempIdx[3]]))
k3 <- rbind(k3, tempMat)
}
}
k4 <- NULL
if(k >= 4){
tempCombn <- combn(k, 4)
for(i in 1:ncol(tempCombn)){
tempIdx <- tempCombn[, i]
tempMat <- cbind(c(distMat[tempIdx[1], tempIdx[2]], distMat[tempIdx[1], tempIdx[3]], distMat[tempIdx[1], tempIdx[4]]), c(distMat[tempIdx[3], tempIdx[4]], distMat[tempIdx[2], tempIdx[4]], distMat[tempIdx[2], tempIdx[3]]))
k4 <- rbind(k4, tempMat)
}
}
return(list(k2 = k2, k3 = k3, k4 = k4))
}
tempList <- lapply(distMatList, tempFun)
k2 <- k3 <- k4 <- NULL
for(i in 1:length(tempList)){
k2 <- rbind(k2, tempList[[i]]$k2)
k3 <- rbind(k3, tempList[[i]]$k3)
k4 <- rbind(k4, tempList[[i]]$k4)
}
kk <- NULL
for(i in 1:(length(distMatList) - 1)){
for(j in (i + 1):length(distMatList)){
tempDistMat1 <- distMatList[[i]]
tempDistMat2 <- distMatList[[j]]
tempDistVec1 <- tempDistMat1[lower.tri(tempDistMat1)]
tempDistVec2 <- tempDistMat2[lower.tri(tempDistMat2)]
tempMat <- cbind(rep(tempDistVec1, each = length(tempDistVec2)), rep(tempDistVec2, length(tempDistVec1)))
kk <- rbind(kk, tempMat)
}
}
a1 <- mean((k4[, 1] - k4[, 2])^2)
a2 <- mean((k3[, 1] - k3[, 2])^2)
a3 <- mean((kk[, 1] - kk[, 2])^2)
b1 <- a1 / 2
b2 <- (a1 - a2) / 2
b3 <- (a3 - a1) / 2
m <- unlist(lapply(distMatList, nrow))
w <- 1 / (b3 + 2 / m /(m - 1) * b1 + 4 * (m - 2) / m / (m - 1) * b2)
attr(w, "par") <- c(b1, b2, b3)
return(w)
}
#Calculate the weights:
dataPheno$w <- funWeight(distMatList)
#str(dataPheno$w)
################################################################################
#Make "exp(coef)" as "odds ratio per 10% increase" of the APITH from SCNA
dataPheno$x <- dataPheno$APITH * 10
#Cox proportional hazard model without weight:
s1 <- summary(m1 <- coxph(SURVIVAL ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + x, data = dataPheno))
s2 <- summary(m2 <- coxph(METASTASIS ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + x, data = dataPheno))
#Cox proportional hazard model with weight:
tempDesign <- svydesign(ids = ~ 1, data = dataPheno, weights = dataPheno$w)
s3 <- summary(m3 <- svycoxph(SURVIVAL ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + x, design = tempDesign, data = dataPheno))
s4 <- summary(m4 <- svycoxph(METASTASIS ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + x, design = tempDesign, data = dataPheno))
################################################################################
#Survival analysis of subjects with at least 3 tumor samples per subject
subDataPheno <- dataPheno[dataPheno$nTumor > 2, ]
#Cox proportional hazard model without weight:
s5 <- summary(m5 <- coxph(SURVIVAL ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + x, data = subDataPheno))
s6 <- summary(m6 <- coxph(METASTASIS ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + x, data = subDataPheno))
#Cox proportional hazard model with weight:
tempDesign <- svydesign(ids = ~ 1, data = subDataPheno, weights = subDataPheno$w)
s7 <- summary(m7 <- svycoxph(SURVIVAL ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + x, design = tempDesign, data = subDataPheno))
s8 <- summary(m8 <- svycoxph(METASTASIS ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + x, design = tempDesign, data = subDataPheno))
################################################################################
# output tables
tempFun <- function(s)c(s$conf.int["x", c("exp(coef)", "lower .95", "upper .95")], P = s$coefficients["x", "Pr(>|z|)"])
tempWrite <- matrix(unlist(lapply(list(s1, s2, s3, s4, s5, s6, s7, s8), tempFun)), byrow = TRUE, ncol = 4)
dimnames(tempWrite) <- list(
c("Survival_80subjects_no_weight",
"METASTASIS_80subjects_no_weight",
"Survival_80subjects_weight",
"METASTASIS_80subjects_weight",
"Survival_48subjects_no_weight",
"METASTASIS_48subjects_no_weight",
"Survival_48subjects_weight",
"METASTASIS_48subjects_weight"),
  c("OR", "Lower .95", "Upper .95", "P")
)
write.csv(tempWrite, file = "../table/table4Demo.csv", quote = FALSE)
################################################################################
# output figures
subDataPheno$x_cat3 <- cut(subDataPheno$APITH, c(0, 0.1, 0.3, 1))
#table(subDataPheno$x_cat3)
s <- summary(m <- coxph(SURVIVAL ~ STAGE + AGE_FIRST_DIAGNOSIS + SEX + strata(x_cat3), data = subDataPheno))
pdf("../figure/figure4Demo.pdf", width = 10, height = 10)
par(cex.lab = 2, cex.axis = 2, mar = c(5, 6, 2, 2))
plot(survfit(m), col = c("blue", "green", "red"), lwd = 2, xlab = "Survival Weeks", ylab = "Survival Rate")
legend("topright", c("Low ITH (16)", "Medium ITH (24)", "High ITH (8)"), lwd = 2, col = c("blue", "green", "red"), cex = 1.5)
dev.off()
################################################################################
| /code/code4Demo.R | permissive | zhangyupisa/EAGLE_LUAD | R | false | false | 7,028 | r |
suppressMessages({
library(rpart)
#library(rattle)
library(rpart.plot)
library(RColorBrewer)
library(Amelia)
library(prediction)
library(plyr)
library(aod)
library(ROCR)
})
```
### Load and check data
```{r}
train <- read.csv("train_titanic.csv", header=T, na.strings=c(""))
test <- read.csv("test_titanic.csv", header=T, na.strings=c(""))
test$Survived <- 0
data <- rbind(train, test)
```
### missing value
```{r}
missmap(data, main = "NA vs. Observed")
```
```{r}
summary(data)
```
```{r}
summary(data$Age)
```
### Distribution plot
```{r}
z <- data$Age
s <- 2 * z + 4 + rnorm(length(data$Age))
par(mfrow = c(2, 2))
hist(z)
plot(s, z)
qqnorm(z)
qqline(z)
plot(1:length(z),log(z), lty = 1)
```
```{r}
data$Age[is.na(data$Age)] <- mean(data$Age, na.rm = T)
plot(density(data$Age))
```
```{r}
mode <- function(x) {
ux <- unique(x)
ux[which.max(tabulate(match(x, ux)))]
}
train$Embarked[is.na(train$Embarked)] <- mode(train$Embarked)
```
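A quick illustrative check of the `mode` helper defined above (ties resolve to the value seen first):
```{r}
mode(c("S", "C", "S", "Q", "S")) # returns "S"
```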
```{r}
# Grab title from passenger names
data$Title <- gsub('(.*, )|(\\..*)', '', data$Name)
# Show title counts by sex
table(data$Sex, data$Title)
```
```{r}
rare_title <- c('Dona', 'Lady', 'the Countess','Capt', 'Col', 'Don',
'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer')
# Also reassign mlle, ms, and mme accordingly
data$Title[data$Title == 'Mlle'] <- 'Miss'
data$Title[data$Title == 'Ms'] <- 'Miss'
data$Title[data$Title == 'Mme'] <- 'Mrs'
data$Title[data$Title %in% rare_title] <- 'Rare Title'
# Show title counts by sex again
table(data$Sex, data$Title)
```
### Split into training & test sets
```{r}
train <- data[1:891,]
test <- data[892:1309,]
```
```{r}
table(train$Sex, train$Survived)
prop.table(table(train$Sex, train$Survived))
prop.table(table(train$Sex, train$Survived),1)
```
```{r}
test$Survived <- 0
test$Survived[test$Sex == 'female'] <- 1
train$Child <- 0
train$Child[train$Age < 18] <- 1
table(train$Child, train$Survived)
prop.table(table(train$Child, train$Survived),1)
```
```{r}
aggregate(Survived ~ Child + Sex, data=train, FUN=sum)
aggregate(Survived ~ Child + Sex, data=train, FUN=length)
aggregate(Survived ~ Child + Sex, data=train, FUN=function(x) {sum(x)/length(x)})
```
```{r}
summary(train$Pclass)
summary(train$Fare)
```
```{r}
train$Fare2 <- '30+'
train$Fare2[train$Fare < 30 & train$Fare >= 20] <- '20-30'
train$Fare2[train$Fare < 20 & train$Fare >= 10] <- '10-20'
train$Fare2[train$Fare < 10] <- '<10'
```
```{r}
aggregate(Survived ~ Fare2 + Pclass + Sex, data=train, FUN=function(x) {sum(x)/length(x)})
```
```{r}
test$Child <- 0
test$Child[test$Age < 18] <- 1
test$Fare2 <- '30+'
test$Fare2[test$Fare < 30 & test$Fare >= 20] <- '20-30'
test$Fare2[test$Fare < 20 & test$Fare >= 10] <- '10-20'
test$Fare2[test$Fare < 10] <- '<10'
```
### logit
```{r}
model <- glm(Survived ~ Title + Child + Sex + Pclass + Fare2 + Embarked, family=binomial(link='logit'), data=train)
summary(model)
```
```{r}
anova(model, test="Chisq")
```
## Predict
```{r}
fitted.results <- predict(model, newdata = subset(test, select = c('Title','Child', 'Sex', 'Pclass', 'Fare2', 'Embarked')), type = 'response')
fitted.results <- ifelse(fitted.results > 0.5,1,0)
misClasificError <- mean(fitted.results != test$Survived)
print(paste('Accuracy',1-misClasificError))
```
```{r}
p <- predict(model, newdata = subset(test), type="response")
pr <- prediction(fitted.results, test$Survived)
perf <- performance(pr, measure = "tpr", x.measure = "fpr")
plot(perf)
```
```{r}
auc <- performance(pr, measure = "auc")
auc <- auc@y.values[[1]]
auc
```
```{r}
#submit <- data.frame(PassengerId = test$PassengerId, Survived = Prediction)
#write.csv(submit, file = "titanic_logit.csv", row.names = FALSE)
```
| /r/kernels/heysilver-titanic-test/script/titanic-test.R | no_license | helenaK/trustworthy-titanic | R | false | false | 3,749 | r | suppressMessages({
library(rpart)
#library(rattle)
library(rpart.plot)
library(RColorBrewer)
library(Amelia)
library(prediction)
library(plyr)
library(aod)
library(ROCR)
})
```
### Load and check data
```{r}
train <- read.csv("train_titanic.csv", header=T, na.strings=c(""))
test <- read.csv("test_titanic.csv", header=T, na.strings=c(""))
test$Survived <- 0
data <- rbind(train, test, fill = TRUE)
```
### missing value
```{r}
missmap(data, main = "NA vs. Observed")
```
```{r}
summary(data)
```
```{r}
summary(data$Age)
```
### Distribution plot
```{r}
z <- data$Age
s <- 2 * z + 4 + rnorm(length(data$Age))
par(mfrow = c(2, 2))
hist(z)
plot(s, z)
qqnorm(z)
qqline(z)
plot(1:length(z),log(z), lty = 1)
```
```{r}
data$Age[is.na(data$Age)] <- mean(data$Age, na.rm = T)
plot(density(data$Age))
```
```{r}
mode <- function(x) {
ux <- unique(x)
ux[which.max(tabulate(match(x, ux)))]
}
train$Embarked[is.na(train$Embarked)] <- mode(train$Embarked)
```
```{r}
# Grab title from passenger names
data$Title <- gsub('(.*, )|(\\..*)', '', data$Name)
# Show title counts by sex
table(data$Sex, data$Title)
```
```{r}
rare_title <- c('Dona', 'Lady', 'the Countess','Capt', 'Col', 'Don',
'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer')
# Also reassign mlle, ms, and mme accordingly
data$Title[data$Title == 'Mlle'] <- 'Miss'
data$Title[data$Title == 'Ms'] <- 'Miss'
data$Title[data$Title == 'Mme'] <- 'Mrs'
data$Title[data$Title %in% rare_title] <- 'Rare Title'
# Show title counts by sex again
table(data$Sex, data$Title)
```
### Split into training & test sets
```{r}
train <- data[1:891,]
test <- data[892:1309,]
```
```{r}
table(train$Sex, train$Survived)
prop.table(table(train$Sex, train$Survived))
prop.table(table(train$Sex, train$Survived),1)
```
```{r}
test$Survived <- 0
test$Survived[test$Sex == 'female'] <- 1
train$Child <- 0
train$Child[train$Age < 18] <- 1
table(train$Child, train$Survived)
prop.table(table(train$Child, train$Survived),1)
```
```{r}
aggregate(Survived ~ Child + Sex, data=train, FUN=sum)
aggregate(Survived ~ Child + Sex, data=train, FUN=length)
aggregate(Survived ~ Child + Sex, data=train, FUN=function(x) {sum(x)/length(x)})
```
```{r}
summary(train$Pclass)
summary(train$Fare)
```
```{r}
train$Fare2 <- '30+'
train$Fare2[train$Fare < 30 & train$Fare >= 20] <- '20-30'
train$Fare2[train$Fare < 20 & train$Fare >= 10] <- '10-20'
train$Fare2[train$Fare < 10] <- '<10'
```
```{r}
aggregate(Survived ~ Fare2 + Pclass + Sex, data=train, FUN=function(x) {sum(x)/length(x)})
```
```{r}
test$Child <- 0
test$Child[test$Age < 18] <- 1
test$Fare2 <- '30+'
test$Fare2[test$Fare < 30 & test$Fare >= 20] <- '20-30'
test$Fare2[test$Fare < 20 & test$Fare >= 10] <- '10-20'
test$Fare2[test$Fare < 10] <- '<10'
```
### logit
```{r}
model <- glm(Survived ~ Title + Child + Sex + Pclass + Fare2 + Embarked, family=binomial(link='logit'), data=train)
summary(model)
```
```{r}
anova(model, test="Chisq")
```
## Predict
```{r}
fitted.results <- predict(model, newdata = subset(test, select = c('Title','Child', 'Sex', 'Pclass', 'Fare2', 'Embarked')), type = 'response')
fitted.results <- ifelse(fitted.results > 0.5,1,0)
# note: test$Survived here is the gender-based heuristic filled in above, not true labels
misClassificationError <- mean(fitted.results != test$Survived)
print(paste('Accuracy', 1 - misClassificationError))
```
```{r}
p <- predict(model, newdata = test, type = "response")
pr <- prediction(p, test$Survived) # ROC needs the continuous probabilities, not the thresholded 0/1 labels; prediction() is from the ROCR package
perf <- performance(pr, measure = "tpr", x.measure = "fpr")
plot(perf)
```
```{r}
auc <- performance(pr, measure = "auc")
auc <- auc@y.values[[1]]
auc
```
```{r}
#submit <- data.frame(PassengerId = test$PassengerId, Survived = Prediction)
#write.csv(submit, file = "titanic_logit.csv", row.names = FALSE)
```
#' Combine sample means from two samples
#'
#' This function combines the sample mean information from two samples (of
#' the same phenomena) to return the sample mean of the union of the two
#' samples.
#'
#' @param x.mean sample mean from the first sample, 'x'
#' @param y.mean sample mean from the second sample, 'y'
#' @param x.n sample size from the first sample, 'x'
#' @param y.n sample size from the second sample, 'y'
#'
#' @return sample mean of the union of the two samples
#'
#' @examples
#' # pooling c(1,2,3) and c(4,5): (2*3 + 4.5*2) / 5 = 3, the mean of c(1,2,3,4,5)
#' mergeMean(mean(c(1,2,3)), mean(c(4,5)), 3, 2)
#'
mergeMean = function(x.mean, y.mean, x.n, y.n) {
N = x.n + y.n
(x.mean*x.n + y.mean*y.n)/N
}
| /R/mergeMean.R | no_license | jmhewitt/telefit | R | false | false | 540 | r |
basedir <- "/mnt/hdd0/univ/thesis-support-material/data/"
library(edgeR)
library(tidyr)
library(dplyr)
library(readr)
library(stringr)
# GUIDE:
# https://ucdavis-bioinformatics-training.github.io/2018-June-RNA-Seq-Workshop/thursday/DE.html
# importing data
setwd(paste0(basedir, "4-4/featureCounts-output/"))
filename = "counts.txt"
#filename = "filtered-only-lncrnas/counts_only_lncRNA.txt"
counts_full <- read_delim(filename, "\t",
escape_double = FALSE, trim_ws = TRUE, skip = 1)
counts_simple_full <- as.matrix(counts_full[,c(7,8,9,10,11,12)])
# renaming the rows and columns
# note: bc*-1 are the shNT samples
# bc*-2 are the sh1 samples
# bc*-3 are the sh2 samples
colnames(counts_simple_full) <- c("bc1-1", "bc1-2", "bc1-3", "bc2-1", "bc2-2", "bc2-3")
rownames(counts_simple_full) <- counts_full$Geneid
# set adjusted p-value and log-fold-change cutoffs
padj.cutoff <- 0.01
lfc.cutoff <- 0.58 # log2(1.5), i.e. a 1.5-fold change
# ----- shNT vs sh-1 ----
# keeping only shNT and sh1 samples
counts <- counts_simple_full[,c(1,4,2,5)]
condition <- factor(c(rep("control", 2), rep("sh1", 2)))
sampleInfo <- data.frame(row.names = colnames(counts), condition)
group <- sampleInfo$condition
# creating object
d0 <- DGEList(counts)
# compute normalization factors
d0 <- calcNormFactors(d0)
# filtering low expressed genes
# we remove the genes whose cpm normalized count is < 1
# in every sample
cutoff <- 1
drop <- which(apply(cpm(d0), 1, max) < cutoff)
d <- d0[-drop,]
mm <- model.matrix(~0 + group)
y <- voom(d, mm, plot = F)
# model fitting
fit <- lmFit(y, mm)
# setting the comparison between control and sh1
contr <- makeContrasts(groupsh1 - groupcontrol, levels = colnames(coef(fit)))
# Estimate contrast for each gene
tmp <- contrasts.fit(fit, contr)
tmp <- eBayes(tmp)
result.table <- topTable(tmp, sort.by = "P", n = Inf)
DE.genes.sh1.limma <- result.table %>%
filter(adj.P.Val < padj.cutoff &
abs(logFC) > lfc.cutoff ) %>%
arrange(-logFC) %>%
mutate(gene = rownames(.)) %>%
select(gene, adj.P.Val, logFC)
DE.genes.sh1.limma <- as_tibble(DE.genes.sh1.limma)
# -----------------------
rm(counts)
rm(condition)
rm(sampleInfo)
rm(group)
rm(d0)
rm(d)
rm(mm)
rm(drop)
rm(y)
rm(fit)
rm(contr)
rm(tmp)
rm(result.table)
# ----- shNT vs sh2 ----
# keeping only shNT and sh2 samples
counts <- counts_simple_full[,c(1,4,3,6)]
condition <- factor(c(rep("control", 2), rep("sh2", 2)))
sampleInfo <- data.frame(row.names = colnames(counts), condition)
group <- sampleInfo$condition
# creating object
d0 <- DGEList(counts)
# compute normalization factors
d0 <- calcNormFactors(d0)
# filtering low expressed genes
# we remove the genes whose cpm normalized count is < 1
# in every sample
cutoff <- 1
drop <- which(apply(cpm(d0), 1, max) < cutoff)
d <- d0[-drop,]
mm <- model.matrix(~0 + group)
y <- voom(d, mm, plot = F)
# model fitting
fit <- lmFit(y, mm)
# setting the comparison between control and sh2
contr <- makeContrasts(groupsh2 - groupcontrol, levels = colnames(coef(fit)))
# Estimate contrast for each gene
tmp <- contrasts.fit(fit, contr)
tmp <- eBayes(tmp)
result.table <- topTable(tmp, sort.by = "P", n = Inf)
DE.genes.sh2.limma <- result.table %>%
filter(adj.P.Val < padj.cutoff &
abs(logFC) > lfc.cutoff ) %>%
arrange(-logFC) %>%
mutate(gene = rownames(.)) %>%
select(gene, adj.P.Val, logFC)
DE.genes.sh2.limma <- as_tibble(DE.genes.sh2.limma)
# ----------------------------------
# a value of LFC > 0 means that the gene is MORE EXPRESSED in the sh1 (or sh2)
# samples (the gene is upregulated upon tp53-mut silencing);
# e.g. logFC = 1 corresponds to a 2-fold higher expression
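# quick consistency check (a sketch, assuming the two tibbles were built as above):
# genes called significant in both knockdowns
# shared.DE.genes <- intersect(DE.genes.sh1.limma$gene, DE.genes.sh2.limma$gene)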
rm(counts)
rm(condition)
rm(sampleInfo)
rm(group)
rm(d0)
rm(d)
rm(mm)
rm(drop)
rm(y)
rm(fit)
rm(contr)
rm(tmp)
rm(result.table)
rm(counts_full)
rm(counts_simple_full)
rm(cutoff)
rm(lfc.cutoff)
rm(padj.cutoff)
rm(filename)
# -----------------------------------
#DE.genes.sh1.limma.upreg <- DE.genes.sh1.limma %>% filter(logFC > 0)
#DE.genes.sh1.limma.downreg <- DE.genes.sh1.limma %>% filter(logFC < 0)
#DE.genes.sh2.limma.upreg <- DE.genes.sh2.limma %>% filter(logFC > 0)
#DE.genes.sh2.limma.downreg <- DE.genes.sh2.limma %>% filter(logFC < 0)
| /scripts/4-4/DE-analysis/Limma/Limma.r | no_license | lorenzoiuri/thesis-support-material | R | false | false | 4,276 | r |
df2$R_cluster = NA
#df2$cluster2 = NA
AAA_R <- NULL
for(i in 1:nrow(df2)){ # loop over all pairs (i, j)
  for(j in 1:nrow(df2)){
    outputR <- (df2$High[i] - df2$High[j])^2 # squared distance between the two highs
    AAA_R <- rbind(AAA_R, outputR)           # collect distances in AAA_R
    #df2$error[i] = min(out[i])
    #df2$cluster[i] = which(out[i]== min(out[i]))
  }
}
AAA_R <- data.frame(AAA_R)
# reshape the collected column into an n x n distance matrix (assumes nrow(df1) == nrow(df2))
outR <- as.data.frame(matrix(AAA_R$AAA_R[AAA_R$AAA_R != ""], ncol = nrow(df1), byrow = TRUE))
diag(outR) = max(Data_NEW$High, na.rm = TRUE)^2 # mask the diagonal so a bar is never its own nearest neighbour
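# Vectorized sketch of the same pairwise squared-distance matrix (assumes
# df2$High is numeric and nrow(df1) == nrow(df2)); kept commented out so the
# script's behavior is unchanged:
# outR <- as.data.frame(outer(df2$High, df2$High, function(a, b) (a - b)^2))
# diag(outR) <- max(Data_NEW$High, na.rm = TRUE)^2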
####################################################################################
# find the nearest other bar (smallest squared distance) for each row
for(i in 1:nrow(df2)){
  df2$R_cluster[i] = which(outR[i]== min(outR[i])) # row index of the minimum in column i
  #df2$cluster2[i] = order(out[i])[2]
}
####################################################################################
# Column of High prices used to draw the resistance line
df2$R_line = NA
for(i in 1:nrow(df2)){
  for(j in 1:nrow(df2)){
    if(df2$R_cluster[i] == df2$No[j]){ # when cluster[i] matches row number No[j], take that row's High
      df2$R_line[i] = df2$High[j]
    }
  }
}
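# Loop-free sketch of the same lookup, assuming df2$No is simply the row
# number 1:nrow(df2); kept commented out:
# df2$R_line <- df2$High[match(df2$R_cluster, df2$No)]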
####################################################################################
# Column = group by & count
df2 <- df2 %>%
group_by(df2$R_cluster) %>% mutate(R_count = n()) %>% ungroup()
df2$`df2$R_cluster` <- NULL
####################################################################################
df2$R_Confirm <- ifelse(df2$R_count >= 2, "let's go","Nope") # if else
####################################################################################
ResistanceLine = NA
for(i in 1:nrow(df2)){
if(df2$R_Confirm[i] == "let's go" & df2$High[i] >= df2$High[nrow(df2)]){
ResistanceLine[i] = df2$R_line[i]
}
}
plot(addLines(h=c(ResistanceLine),col = "yellow"))
| /ADA_ResistanceLine(link_SL).R | no_license | PersThanachot/Crypto_Support-ResistanceLine | R | false | false | 2,092 | r |
## THIS FILE GEOCODES THE DOC RELEASE RECORDS AND MAPS THEM TO THEIR HOME
## VOTER PRECINCTS AND CENSUS BLOCK GROUPS
released <- fread("./raw_data/doc/INMATE_RELEASE_ROOT_0419.csv")
parole <- fread("./raw_data/doc/OFFENDER_ROOT_0419.csv")
incarcerated <- fread("./raw_data/doc/INMATE_ACTIVE_ROOT_0419.csv")
released <- filter(released, !(DCNumber %in% parole$DCNumber),
!(DCNumber %in% incarcerated$DCNumber))
addresses <- fread("./raw_data/doc/INMATE_RELEASE_RESIDENCE_0419.csv")
released_with_addresses <- inner_join(released, addresses) %>%
filter(releasedateflag_descr == "valid release date")
full_n <- nrow(released_with_addresses)
saveRDS(full_n, "./temp/number_released.rds")
released_with_addresses <- released_with_addresses %>%
filter(grepl("[0-9]", substring(AddressLine1, 1, 1))) %>%
mutate(address = gsub("[#]", "", paste(AddressLine1, City)))
drop_start_chara <- full_n - nrow(released_with_addresses)
n_to_geo <- nrow(released_with_addresses)
saveRDS(nrow(released_with_addresses), "./temp/number_released_goodnumber.rds")
# split rows into four roughly equal batches for geocoding
released_with_addresses$group <- ceiling((c(1:nrow(released_with_addresses)) /
                                            nrow(released_with_addresses))*4)
# these take a long time and are expensive to run so commenting out
# lats_longs1 <- geocode(filter(released_with_addresses, group == 1)$address, override_limit = T, output = "more")
# saveRDS(lats_longs1, "./temp/geocode_output1.rds")
# lats_longs2 <- geocode(filter(released_with_addresses, group == 2)$address, override_limit = T, output = "more")
# saveRDS(lats_longs2, "./temp/geocode_output2.rds")
# lats_longs3 <- geocode(filter(released_with_addresses, group == 3)$address, override_limit = T, output = "more")
# saveRDS(lats_longs3, "./temp/geocode_output3.rds")
# lats_longs4 <- geocode(filter(released_with_addresses, group == 4)$address, override_limit = T, output = "more")
# saveRDS(lats_longs4, "./temp/geocode_output4.rds")
lats_longs <- bind_rows(
readRDS("./temp/geocode_output1.rds"),
readRDS("./temp/geocode_output2.rds"),
readRDS("./temp/geocode_output3.rds"),
readRDS("./temp/geocode_output4.rds")
)
### now keep only those in florida with good addresses, map to precincts
released_with_addresses <- bind_cols(released_with_addresses,
rename(lats_longs, ggl_address = address)) %>%
filter(grepl(", fl ", ggl_address))
drop_geocode <- n_to_geo - nrow(released_with_addresses)
ngood_geo <- nrow(released_with_addresses)
released_with_addresses <- released_with_addresses %>%
filter(!is.na(lat), !is.na(lon),
loctype %in% c("range_interpolated", "rooftop"))
ndrop_out_fl <- ngood_geo - nrow(released_with_addresses)
saveRDS(nrow(released_with_addresses), "./temp/good_geocoded.rds")
precincts <- readOGR("./raw_data/shapefiles/fl_2016_FEST",
"fl_2016")
precincts@data$full_id <- paste0(precincts@data$county, precincts@data$pct)
precincts <- spTransform(precincts, "+init=epsg:4326")
pings <- SpatialPoints(released_with_addresses[,c('lon','lat')], proj4string = precincts@proj4string)
released_with_addresses$precinct <- over(pings, precincts)$full_id
#### now map to census block groups
bgs <- readOGR("./raw_data/shapefiles/tl_2018_12_bg",
"tl_2018_12_bg")
bgs <- spTransform(bgs, "+init=epsg:4326")
pings <- SpatialPoints(released_with_addresses[,c('lon','lat')], proj4string = bgs@proj4string)
released_with_addresses$block_group <- over(pings, bgs)$GEOID
saveRDS(released_with_addresses, "./temp/released_with_addresses.rds")
# after the loctype filter above this equals nrow(released_with_addresses); overwrites the earlier save
saveRDS(sum(released_with_addresses$loctype %in% c("range_interpolated", "rooftop")), "./temp/good_geocoded.rds")
############ COLLAPSE DOWN TO PRECINCTS / BLOCK GROUPS
doc <- readRDS("./temp/released_with_addresses.rds") %>%
mutate(county = substring(precinct, 1, 3),
precinct = str_pad(substring(precinct, 4), width = 10, side = "left", pad = "0"),
cp = paste0(county, precinct),
PrisonReleaseDate = as.Date(substring(PrisonReleaseDate, 1, 10), "%m/%d/%Y"),
years_since = 2018 - year(PrisonReleaseDate)) %>%
filter(!is.na(cp),
PrisonReleaseDate <= "2018-11-06") %>%
group_by(ggl_address) %>%
mutate(big_release = n() >= 5,
years_since = ifelse(big_release, NA, years_since)) %>%
group_by(cp) %>%
summarize(years_since = mean(years_since, na.rm = T),
all_doc = n(),
small_res_doc = sum(1 - big_release),
big_release = sum(big_release))
saveRDS(doc, "temp/full_doc_precinct.rds")
doc_recent <- readRDS("./temp/released_with_addresses.rds") %>%
mutate(county = substring(precinct, 1, 3),
precinct = str_pad(substring(precinct, 4), width = 10, side = "left", pad = "0"),
cp = paste0(county, precinct),
PrisonReleaseDate = as.Date(substring(PrisonReleaseDate, 1, 10), "%m/%d/%Y"),
years_since = 2018 - year(PrisonReleaseDate)) %>%
filter(!is.na(cp),
PrisonReleaseDate >= "2015-01-01",
PrisonReleaseDate <= "2018-11-06") %>%
group_by(ggl_address) %>%
mutate(big_release_recent = n() >= 5) %>%
group_by(cp) %>%
summarize(all_doc_recent = n(),
small_res_doc_recent = sum(1 - big_release_recent),
big_release_recent = sum(big_release_recent))
saveRDS(doc_recent, "temp/recent_doc_precinct.rds")
## block groups
doc_bg <- readRDS("./temp/released_with_addresses.rds") %>%
mutate(county = substring(precinct, 1, 3),
PrisonReleaseDate = as.Date(substring(PrisonReleaseDate, 1, 10), "%m/%d/%Y")) %>%
filter(!is.na(block_group),
PrisonReleaseDate <= "2018-11-06") %>%
group_by(ggl_address) %>%
mutate(big_release = n() >= 5,
years_since = ifelse(big_release, NA, 2018 - year(PrisonReleaseDate))) %>%
group_by(GEOID = block_group) %>%
summarize(years_since = mean(years_since, na.rm = T),
all_doc = n(),
small_res_doc = sum(1 - big_release))
saveRDS(doc_bg, "temp/all_doc_bg.rds")
doc_bg_recent <- readRDS("./temp/released_with_addresses.rds") %>%
mutate(county = substring(precinct, 1, 3),
PrisonReleaseDate = as.Date(substring(PrisonReleaseDate, 1, 10), "%m/%d/%Y")) %>%
filter(!is.na(block_group),
PrisonReleaseDate >= "2015-01-01",
PrisonReleaseDate <= "2018-11-06") %>%
group_by(ggl_address) %>%
mutate(big_release = n() >= 5) %>%
group_by(GEOID = block_group) %>%
summarize(all_doc_recent = n(),
small_res_doc_recent = sum(1 - big_release))
saveRDS(doc_bg_recent, "temp/recent_doc_bg.rds")
| /code/01_geocode_releasees.R | no_license | ktmorris/florida_am4 | R | false | false | 6,628 | r |
% Generated by roxygen2 (4.1.1): do not edit by hand
% Please edit documentation in R/cr_search_free.r
\name{cr_search_free}
\alias{cr_search_free}
\title{Search the CrossRef Metadata for DOIs using free form references.}
\usage{
cr_search_free(query, url = "http://search.crossref.org/links")
}
\arguments{
\item{query}{Reference query; a character vector of length 1 or greater,
comma-separated of course.}
\item{url}{Base url for the Crossref metadata API.}
}
\description{
Search the CrossRef Metadata for DOIs using free form references.
}
\details{
Have to have at least three terms in each search query.
}
\examples{
\dontrun{
# search with title, author, year, and journal
cr_search_free(query = "Piwowar Sharing Detailed Research Data Is Associated with
Increased Citation Rate PLOS one 2007")
cr_search_free(query="Renear 2012") # too few words, need at least 3
cr_search_free(query=c("Renear 2012","Piwowar sharing data PLOS one")) # multiple queries
# Get a DOI and get the citation using cr_search
doi <- cr_search_free(query="Piwowar sharing data PLOS one")$doi
cr_search(doi = doi)
# Queries can be quite complex too
cr_search_free("M. Henrion, D. J. Mortlock, D. J. Hand, and A. Gandy, \\"A Bayesian
approach to star-galaxy classification,\\" Monthly Notices of the Royal Astronomical Society,
vol. 412, no. 4, pp. 2286-2302, Apr. 2011.")
# Lots of queries
queries <- c("Piwowar sharing data PLOS one", "Priem Scientometrics 2.0 social web",
"William Gunn A Crosstalk Between Myeloma Cells",
"karthik ram Metapopulation dynamics override local limits")
cr_search_free(queries)
}
}
\author{
Scott Chamberlain \email{myrmecocystus@gmail.com}
}
\seealso{
\code{\link{cr_search}}, \code{\link{cr_r}}, \code{\link{cr_citation}}
}
| /man/cr_search_free.Rd | permissive | andreywork/rcrossref | R | false | false | 1,752 | rd |
###Gabriel Agbesi Atsyor####
##Masters Thesis R script###
#setting the working directory
getwd()
setwd("C:/Users/GHOOST/Desktop/New Lit/data")
#attaching packages
library(lavaan)
library(foreign)
library(car)
library(psych)
library(ggplot2)
library(summarytools)
library(knitr)
library(gridExtra)
library(kableExtra)
library(stargazer)
library(multcomp)
#attaching the data
data<-read.csv("International Students Survey.csv")
attach(data)
##FACTORS INFLUENCING STUDENTS' DECISION TO STUDY IN RUSSIA
#Data preparation: Demographic information
#Age
table(Age)
summary(as.numeric(Age))
data$age<-recode(as.numeric(Age),"17:21=1; 22:26=2;27:hi=3")
table(data$age)
data$age<-factor(data$age,labels=c("17 to 21 yrs", "22 to 26 yrs", "27 yrs and older"))
#Degree
data$degree<-What.degree.are.you.currently.studying.for.
#Language of instruction
data$language.of.instruction<-What.is.the.language.of.instruction.for.your.program.
data$family.income<-What.was.your.annual.family.income.when.you.were.applying.to.study.abroad..estimate.in.US.dollars.
#country regions
table(Home.country)
data$world.region[data$Home.country == 'Algeria'|
data$Home.country == 'Botswana'| data$Home.country == 'Cameroon'|
data$Home.country == 'Chad'| data$Home.country == 'Congo'|
data$Home.country == 'DR Congo'|data$Home.country == 'Eritrea'|
data$Home.country == 'Ivory Coast'|
data$Home.country == 'Gambia'|data$Home.country == 'Ghana'|
data$Home.country == 'Kenya'|data$Home.country == 'Madagascar'|
data$Home.country == 'Niger'|data$Home.country == 'Nigeria'|
data$Home.country == 'South Africa'|data$Home.country == 'Sudan'|
data$Home.country == 'Uganda'|data$Home.country == 'Zambia'] <- 'Africa'
data$world.region[data$Home.country == 'Bangladesh'|
data$Home.country == 'India'| data$Home.country == 'Nepal'|
data$Home.country == 'Pakistan'|
data$Home.country == 'Sri Lanka'|data$Home.country == 'Indonesia'|
data$Home.country == 'Philippines'|data$Home.country == 'Thailand'|
data$Home.country == 'Vietnam'|data$Home.country == 'China'|
data$Home.country == 'Japan'|data$Home.country == 'Mongolia'|
data$Home.country == 'South Korea'|data$Home.country == 'Hong Kong'|
data$Home.country == 'Taiwan'] <- 'Asia'
data$world.region[data$Home.country == 'Australia'| data$Home.country == 'Austria'|
data$Home.country == 'Bosnia and Herzegovina'|
data$Home.country == 'Bulgaria'| data$Home.country == 'Europe'|
data$Home.country == 'France'| data$Home.country == 'Germany'|
data$Home.country == 'Italy'|data$Home.country == 'Poland'|
data$Home.country == 'Portugal'|data$Home.country == 'Serbia'|
data$Home.country == 'Spain'|data$Home.country == 'Switzerland'|
data$Home.country == 'Republic of North Macedonia'|
data$Home.country == 'USA'] <- 'Europe'
data$world.region[data$Home.country == 'Armenia'|
data$Home.country == 'Azerbaijan'|data$Home.country == 'Belarus'|
data$Home.country == 'Estonia'|data$Home.country == 'Georgia'|
data$Home.country == 'Georgia'|data$Home.country == 'Kazakhstan'|
data$Home.country == 'Kyrgyzstan'|data$Home.country == 'Latvia'|
data$Home.country == 'Moldova'|data$Home.country == 'Tajikistan'|
data$Home.country == 'Turkmenistan'|data$Home.country == 'Ukraine'|
data$Home.country == 'Uzbekistan'] <- 'Commonwealth of Independent States'
data$world.region[data$Home.country == 'Bahrain'|
data$Home.country == 'Egypt'| data$Home.country == 'Iran'|
data$Home.country == 'Israel'| data$Home.country == 'Lebanon'|
data$Home.country == 'Syria'|
data$Home.country == 'Turkey'] <- 'Middle East'
data$world.region[data$Home.country == 'Brazil'|
data$Home.country == 'Colombia'|data$Home.country == 'Ecuador'|
data$Home.country == 'Guatemala'| data$Home.country == 'Haiti'|
data$Home.country == 'Mexico'|data$Home.country == 'Venezuela'|
data$Home.country == 'Nicaragua'] <- 'Southern America'
table(data$world.region, useNA = "ifany")
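#Sanity check (a sketch added here, not part of the original analysis):
#countries not matched by any rule above are left NA by the assignments;
#this lists exactly which home countries fell through the rules
unmatched.countries <- unique(data$Home.country[is.na(data$world.region)])
unmatched.countries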
attach(data)
##Regression Analysis: Factors that influenced the decision of international students to study in Russia
#Data Preparation: Push Factors
is.numeric(Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution)
Competitive.University.admission.process<-Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
is.numeric(Competitive.University.admission.process)
is.numeric(Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market)
Perceived.advantage.of.international.degree<-Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
is.numeric(Perceived.advantage.of.international.degree)
#creating a data frame for push factors
PushFactors <-data.frame(language.of.instruction, degree,
age, Gender, world.region, Home.country,
Unavailability.of.the.desired.study.program,Low.quality.of.education
,Competitive.University.admission.process
,Perceived.advantage.of.international.degree
,Unavailability.of.scholarship.opportunities
,Encouragement.from.my.family.to.study.abroad,Encouragement.from..my.friends.to.study.abroad
,Better.earning.prospects.abroad, The.social.prestige.of.studying.abroad
,To.experience.a.different.culture)
#exploratory factor analysis to allow for indexing.
#principal component analysis
pushpc <- princomp(~Unavailability.of.the.desired.study.program+Low.quality.of.education
+Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
+Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
+Unavailability.of.scholarship.opportunities
+Encouragement.from.my.family.to.study.abroad+Encouragement.from..my.friends.to.study.abroad
+Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad
+To.experience.a.different.culture, data = PushFactors, cor = FALSE, na.action = na.omit)
summary(pushpc)
#factor analysis
push.efa <- factanal(~Unavailability.of.the.desired.study.program+Low.quality.of.education
+Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
+Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
+Unavailability.of.scholarship.opportunities
+Encouragement.from.my.family.to.study.abroad+Encouragement.from..my.friends.to.study.abroad
+Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad
+To.experience.a.different.culture,
factors = 6, data = PushFactors , cor = FALSE, na.action = na.omit)
print(push.efa, digits=2, cutoff=.3, sort=TRUE)
push.efa1 <- factanal(~Unavailability.of.the.desired.study.program+Low.quality.of.education
+Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
+Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
+Unavailability.of.scholarship.opportunities
+Encouragement.from.my.family.to.study.abroad+Encouragement.from..my.friends.to.study.abroad
+Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad
+To.experience.a.different.culture,
factors = 5, data = PushFactors, cor = FALSE, na.action = na.omit)
print(push.efa1, digits=2, cutoff=.3, sort=TRUE)
push.efa2 <- factanal(~Unavailability.of.the.desired.study.program+Low.quality.of.education
+Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
+Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
+Unavailability.of.scholarship.opportunities
+Encouragement.from.my.family.to.study.abroad+Encouragement.from..my.friends.to.study.abroad
+Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad
+To.experience.a.different.culture,
factors = 4, data = PushFactors, cor = FALSE, na.action = na.omit)
print(push.efa2, digits=2, cutoff=.3, sort=TRUE)
#with p-value 0.0415 (borderline at the 5% level), four factors are retained.
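#Optional corroboration (a sketch; assumes columns 7:16 of PushFactors are the
#ten Likert items, per the data frame construction above): parallel analysis
#compares observed eigenvalues against resampled data to suggest a factor
#count, complementing the factanal likelihood-ratio test. psych is attached.
fa.parallel(na.omit(PushFactors[, 7:16]), fa = "fa")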
#indexing the correlated factors
#encouragement from family and friends
cor.test(Encouragement.from..my.friends.to.study.abroad,Encouragement.from.my.family.to.study.abroad)
encouragement.from.family.friends<-(Encouragement.from..my.friends.to.study.abroad+Encouragement.from.my.family.to.study.abroad)/2
table(encouragement.from.family.friends)
PushFactors$encouragement.from.family.friends<-recode(encouragement.from.family.friends, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PushFactors$encouragement.from.family.friends)
#advantages of studying abroad
cor.test(Better.earning.prospects.abroad,The.social.prestige.of.studying.abroad)
advantages.of.studying.abroad<-(Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad)/2
table(advantages.of.studying.abroad)
PushFactors$benefits.of.studying.abroad<-recode(advantages.of.studying.abroad, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PushFactors$benefits.of.studying.abroad)
#access to education
cor.test(Unavailability.of.the.desired.study.program,Low.quality.of.education)
access.to.education<-(Unavailability.of.the.desired.study.program+Low.quality.of.education)/2
table(access.to.education)
PushFactors$access.to.education<-recode(access.to.education, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PushFactors$access.to.education)
attach(PushFactors)
#checking Cronbach's alpha to establish reliability
PushFactorsHC <-data.frame(access.to.education, Competitive.University.admission.process
,Perceived.advantage.of.international.degree
,Unavailability.of.scholarship.opportunities
,encouragement.from.family.friends
,advantages.of.studying.abroad
,To.experience.a.different.culture)
psych::alpha(PushFactorsHC)
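#A sketch: storing the psych::alpha result exposes the "alpha if an item is
#dropped" table, which helps identify items that weaken the scale
push.alpha <- psych::alpha(PushFactorsHC)
push.alpha$alpha.drop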
#Regression analysis
#Push factors in Home country that influenced the decision of international students to study in Russia
#Empty model
model0<-lm(as.numeric(language.of.instruction)~1, data = PushFactors)
summary(model0)
#Full Model
model1<-lm(as.numeric(language.of.instruction)~encouragement.from.family.friends+
benefits.of.studying.abroad+access.to.education+
Competitive.University.admission.process+Perceived.advantage.of.international.degree+
Unavailability.of.scholarship.opportunities+To.experience.a.different.culture, data = PushFactors)
summary(model1)
#Full Model and controls
model2<-lm(as.numeric(language.of.instruction)~encouragement.from.family.friends+
benefits.of.studying.abroad+access.to.education+
Competitive.University.admission.process+Perceived.advantage.of.international.degree+
Unavailability.of.scholarship.opportunities+To.experience.a.different.culture+as.numeric(age)+
as.numeric(Home.country)+as.numeric(Gender), data = PushFactors)
summary(model2)
#Full Model with interaction effect
model3<-lm(as.numeric(language.of.instruction)~benefits.of.studying.abroad+
Unavailability.of.scholarship.opportunities+Competitive.University.admission.process+
(access.to.education+encouragement.from.family.friends+Perceived.advantage.of.international.degree+
To.experience.a.different.culture)*world.region, data = PushFactors)
summary(model3)
#Full Model and control with interaction effects
model4<-lm(as.numeric(language.of.instruction)~benefits.of.studying.abroad+
Unavailability.of.scholarship.opportunities+Competitive.University.admission.process+
(access.to.education+encouragement.from.family.friends+Perceived.advantage.of.international.degree+
To.experience.a.different.culture)*world.region+as.numeric(age)+as.numeric(Gender), data = PushFactors)
summary(model4)
#Regression tables for the models
stargazer(model1, model2, title="Regression Results: Push Factors in Home Country", align=TRUE,
no.space=TRUE, single.row=T,type = "text", intercept.bottom = F,dep.var.labels = "Move to Moscow", out="a.doc",
covariate.labels =c("Constant","Encouragement from family and friends","Benefits of studying abroad",
"Access to education", "Competitive university admission process",
"Perceieved advantage of an international degree",
"Unavailability of scholarship opportunities", "Experience a different culture",
"Age", "Home Country", "Gender"))
stargazer(model3, model4, title="Regression Results: Push Factors in Home Country", align=TRUE,column.sep.width = "-5pt",
no.space=TRUE, single.row=T,type = "text", intercept.bottom = F,dep.var.labels = "Move to Moscow", out="b.doc",
covariate.labels =c("Constant","Benefits of studying abroad","Unavailability of scholarship opportunities",
"Competitive university admission process","Access to education",
"Encouragement from family and friends","Perceieved advantage of an international degree",
"Experience a different culture","Asia", "CIS", "Europe", "Middle East", "South America",
"Age", "Gender", "Access to education*Asia", "Access to education*CIS", "Access to education*Europe",
"Access to education*Middle East", "Access to education*South America","Encouragement from family and friends*Asia",
"Encouragement from family and friends*CIS", "Encouragement from family and friends*Europe",
"Encouragement from family and friends*Middle East","Encouragement from family and friends*South America",
"Perceieved advantage of an international degree*Asia","Perceieved advantage of an international degree*CIS",
"Perceieved advantage of an international degree*Europe","Perceieved advantage of an international degree*Middle East",
"Perceieved advantage of an international degree*South America","Experience a different culture*Asia",
"Experience a different culture*CIS", "Experience a different culture*Europe",
"Encouragement from family and friends*Middle East","Experience a different culture*South America"))
#checking for multicollinearity
vif(model1)
vif(model2)
vif(model3)
vif(model4)
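#A quick sketch for flagging collinearity: the threshold of 5 is a common
#rule of thumb, not a fixed standard (10 is also used). Note vif() returns
#GVIFs for models with factors or interactions, so this simple numeric
#check is applied to model1 only.
vif.vals <- vif(model1)
vif.vals[vif.vals > 5]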
####Pull factors that influenced the decision of international students to study in Russia
Recommendations.from.family.friends<-Personal.recommendations.from.parents..relatives..and.friends
#creating a data frame for pull factors
PullFactors<-data.frame(language.of.instruction, age, Gender, world.region, Home.country,
Availability.of.desired.study.program,Higher.quality.of.education..compared.to.home.country.,
Low.cost.of.living,Low.tuition.fees,Awarded.scholarships.or.tuition.waiver,Attraction.to.Russian.culture..society,
Career.prospects.in.Russia,Recommendations.from.family.friends,cultural.proximity.with.home,
geographical.proximity.with.home,Quality.and.reputation.of.the.University,Recognition.of.the.degree.in.my.home.country,
Quality.of.the.teaching.staff,The.reputation.of.the.alumni,The.reputation.of.the.international.community,
HSE.position.in.international.university.rankings,Cost.of.tuition.for.international.students,Availability.of.scholarships,
Support.services.for.international.students,Graduates.employment.rates,HSE.s.international.strategic.alliances,
Local.employers.preference.of..degrees.awarded.by.HSE)
#exploratory factor analysis to allow for indexing.
#principal component analysis
fit.pc <- princomp(~Availability.of.desired.study.program+Higher.quality.of.education..compared.to.home.country.
+Low.cost.of.living+Low.tuition.fees+Awarded.scholarships.or.tuition.waiver+Attraction.to.Russian.culture..society
+Career.prospects.in.Russia+Personal.recommendations.from.parents..relatives..and.friends+cultural.proximity.with.home
+geographical.proximity.with.home+Quality.and.reputation.of.the.University+Recognition.of.the.degree.in.my.home.country
+Quality.of.the.teaching.staff+The.reputation.of.the.alumni+The.reputation.of.the.international.community
+HSE.position.in.international.university.rankings+Cost.of.tuition.for.international.students
+Availability.of.scholarships+Support.services.for.international.students+Graduates.employment.rates
+HSE.s.international.strategic.alliances+Local.employers.preference.of..degrees.awarded.by.HSE,
data = PullFactors, cor = FALSE, na.action = na.omit)
summary(fit.pc)
fit.efa <- factanal(~Availability.of.desired.study.program+Higher.quality.of.education..compared.to.home.country.
+Low.cost.of.living+Low.tuition.fees+Awarded.scholarships.or.tuition.waiver+Attraction.to.Russian.culture..society
+Career.prospects.in.Russia+Personal.recommendations.from.parents..relatives..and.friends+cultural.proximity.with.home
+geographical.proximity.with.home+Quality.and.reputation.of.the.University+Recognition.of.the.degree.in.my.home.country
+Quality.of.the.teaching.staff+The.reputation.of.the.alumni+The.reputation.of.the.international.community
+HSE.position.in.international.university.rankings+Cost.of.tuition.for.international.students
+Availability.of.scholarships+Support.services.for.international.students+Graduates.employment.rates
+HSE.s.international.strategic.alliances+Local.employers.preference.of..degrees.awarded.by.HSE,
factors = 11, data = PullFactors, cor = FALSE, na.action = na.omit)
print(fit.efa, digits=2, cutoff=.3, sort=TRUE)
fit.efa2 <- factanal(~Availability.of.desired.study.program+Higher.quality.of.education..compared.to.home.country.
+Low.cost.of.living+Low.tuition.fees+Awarded.scholarships.or.tuition.waiver+Attraction.to.Russian.culture..society
+Career.prospects.in.Russia+Personal.recommendations.from.parents..relatives..and.friends+cultural.proximity.with.home
+geographical.proximity.with.home+Quality.and.reputation.of.the.University+Recognition.of.the.degree.in.my.home.country
+Quality.of.the.teaching.staff+The.reputation.of.the.alumni+The.reputation.of.the.international.community
+HSE.position.in.international.university.rankings+Cost.of.tuition.for.international.students
+Availability.of.scholarships+Support.services.for.international.students+Graduates.employment.rates
+HSE.s.international.strategic.alliances+Local.employers.preference.of..degrees.awarded.by.HSE,
factors = 8, data = PullFactors, cor = FALSE, na.action = na.omit)
print(fit.efa2, digits=2, cutoff=.3, sort=TRUE)
#8 factors retained, though the sufficiency test rejects at this count (p-value 6.54e-10).
#indexing the correlated pull factors
#prospect of employment
cor.test(Graduates.employment.rates,Local.employers.preference.of..degrees.awarded.by.HSE)
cor.test(Graduates.employment.rates,Career.prospects.in.Russia)
cor.test(Career.prospects.in.Russia,Local.employers.preference.of..degrees.awarded.by.HSE)
employment.prospect<-(Graduates.employment.rates+Local.employers.preference.of..degrees.awarded.by.HSE+Career.prospects.in.Russia)/3
employment.prospect<-round(employment.prospect,2)
table(employment.prospect)
PullFactors$employment.prospect<-recode(employment.prospect, "1:1.33=1; 1.67:2.33=2; 2.67:3.33=3; 3.67:4.33=4; 4.67:5=5")
table(PullFactors$employment.prospect)
#proximity
cor.test(cultural.proximity.with.home,geographical.proximity.with.home)
proximity<-(cultural.proximity.with.home+geographical.proximity.with.home)/2
table(proximity)
PullFactors$proximity<-recode(proximity, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$proximity)
#cost
cor.test(Low.cost.of.living,Low.tuition.fees)
cor.test(Low.cost.of.living,Cost.of.tuition.for.international.students)
cor.test(Low.tuition.fees,Cost.of.tuition.for.international.students)
cost.of.living<-(Low.cost.of.living+Low.tuition.fees+Cost.of.tuition.for.international.students)/3
cost.of.living<-round(cost.of.living,2)
table(cost.of.living)
PullFactors$cost.of.living<-recode(cost.of.living, "1:1.33=1; 1.67:2.33=2; 2.67:3.33=3; 3.67:4.33=4; 4.67:5=5")
table(PullFactors$cost.of.living)
#scholarships
cor.test(Awarded.scholarships.or.tuition.waiver,Availability.of.scholarships)
scholarship<-(Awarded.scholarships.or.tuition.waiver+Availability.of.scholarships)/2
table(scholarship)
PullFactors$scholarship<-recode(scholarship, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$scholarship)
#HSE Quality
cor.test(Quality.and.reputation.of.the.University,Quality.of.the.teaching.staff)
HSE.quality<-(Quality.and.reputation.of.the.University+Quality.of.the.teaching.staff)/2
table(HSE.quality)
PullFactors$HSE.quality<-recode(HSE.quality, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$HSE.quality)
#HSE Reputation
cor.test(The.reputation.of.the.alumni, The.reputation.of.the.international.community)
HSE.reputation<-(The.reputation.of.the.alumni+The.reputation.of.the.international.community)/2
table(HSE.reputation)
PullFactors$HSE.reputation<-recode(HSE.reputation, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$HSE.reputation)
#Program Choice
cor.test(Higher.quality.of.education..compared.to.home.country., Availability.of.desired.study.program)
program.choice<-(Higher.quality.of.education..compared.to.home.country.+ Availability.of.desired.study.program)/2
table(program.choice)
PullFactors$program.choice<-recode(program.choice, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$program.choice)
attach(PullFactors)
#checking for reliability
PullFactorsRuHSE<-data.frame(program.choice,cost.of.living,proximity, scholarship,HSE.quality,
HSE.reputation, Attraction.to.Russian.culture..society,
Recognition.of.the.degree.in.my.home.country,Recommendations.from.family.friends,
HSE.position.in.international.university.rankings,Support.services.for.international.students,
HSE.s.international.strategic.alliances,employment.prospect)
psych::alpha(PullFactorsRuHSE)
#Full model
model5<-lm(as.numeric(language.of.instruction)~employment.prospect+proximity+
cost.of.living+scholarship+HSE.quality+HSE.reputation+program.choice+
Recommendations.from.family.friends+Attraction.to.Russian.culture..society+
Recognition.of.the.degree.in.my.home.country+HSE.position.in.international.university.rankings+
Support.services.for.international.students+HSE.s.international.strategic.alliances,
data = PullFactors)
summary(model5)
#Full model with controls
model6<-lm(as.numeric(language.of.instruction)~employment.prospect+proximity+
cost.of.living+scholarship+HSE.quality+HSE.reputation+program.choice+
Recommendations.from.family.friends+Attraction.to.Russian.culture..society+
Recognition.of.the.degree.in.my.home.country+HSE.position.in.international.university.rankings+
Support.services.for.international.students+HSE.s.international.strategic.alliances+
as.numeric(age)+ as.numeric(Home.country)+as.numeric(Gender), data = PullFactors)
summary(model6)
#Full model with interaction effects
model7<-lm(as.numeric(language.of.instruction)~employment.prospect+scholarship+
HSE.quality+HSE.reputation+Recognition.of.the.degree.in.my.home.country+Support.services.for.international.students+
Attraction.to.Russian.culture..society+(Recommendations.from.family.friends+proximity+cost.of.living+
program.choice+HSE.position.in.international.university.rankings+
HSE.s.international.strategic.alliances)*world.region, data = PullFactors)
summary(model7)
#Full model and controls with interaction effects
model8<-lm(as.numeric(language.of.instruction)~employment.prospect+scholarship+
HSE.quality+HSE.reputation+Recognition.of.the.degree.in.my.home.country+
Support.services.for.international.students+Attraction.to.Russian.culture..society+
(Recommendations.from.family.friends+proximity+cost.of.living+program.choice+
HSE.position.in.international.university.rankings+HSE.s.international.strategic.alliances)*world.region+
as.numeric(age)+as.numeric(Gender), data = PullFactors)
summary(model8)
#Regression tables for the models
stargazer(model5, model6, title="Regression Results: Pull Factors", align=TRUE,
no.space=TRUE, single.row=T,type = "text", intercept.bottom = F,dep.var.labels = "Move to Moscow",out = "c.doc",
covariate.labels =c("Constant","Employment prospects","Proximity","Cost of living", "Scholarship",
"HSE quality", "HSE reputation", "Program choice","Recommendations from family and friends",
"Attraction to Russian culture", "Recognition of HSE degree in my Home Country",
"HSE position in University rankings","Support services for International Students",
"HSE international strategic alliances","Age", "Home Country", "Gender"))
stargazer(model7, model8, title="Regression Results: Pull Factors", align=TRUE,
no.space=TRUE, single.row=T,type = "text", intercept.bottom = F,dep.var.labels = "Move to Moscow", out = "d.doc",
covariate.labels =c("Constant","Employment prospects","Scholarship","HSE quality","HSE reputation",
"Recognition of HSE degree in my Home Country","Support services for International Students",
"Attraction to Russian culture", "Recommendations from family and friends","Proximity",
"Cost of living", "Choice of program", "HSE position in University rankings",
"HSE international alliances","Asia", "CIS","Europe", "Middle East", "South America",
"Age", "Gender", "Recomendations from family and friends*Asia","Recomendations from family and friends*CIS",
"Recomendations from family and friends*Europe","Recomendations from family and friends*Middle East",
"Recomendations from family and friends*South America","Proximity*Asia", "Proximity*CIS", "Proximity*Europe",
"Proximity*Middle East","Proximity*South America", "Cost of living*Asia","Cost of living*CIS",
"Cost of living*Europe","Cost of living*Middle East","Cost of living*South America","Choice of program*Asia",
"Choice of program*CIS", "Choice of program*Europe","Choice of program*Middle East",
"Choice of program*South America", "HSE position in University rankings*Asia",
"HSE position in University rankings*CIS", "HSE position in University rankings*Europe",
"HSE position in University rankings*Middle East","HSE position in University rankings*South America",
"HSE international alliances*Asia", "HSE international alliances*CIS", "HSE international alliances*Europe",
"HSE international alliances*Middle East","HSE international alliances*South America"))
#checking for multicollinearity
vif(model5)
vif(model6)
vif(model7)
vif(model8)
#####Creating the Crosstab function
crosstab <- function (..., dec.places = NULL,
type = NULL,
style = "wide",
row.vars = NULL,
col.vars = NULL,
percentages = TRUE,
addmargins = TRUE,
subtotals=TRUE)
###################################################################################
# #
# Function created by Dr Paul Williamson, Dept. of Geography and Planning, #
# School of Environmental Sciences, University of Liverpool, UK. #
# #
#          Adapted from the function ctab() in the catspec package.               #
# #
# Version: 12th July 2013 #
# #
# Output best viewed using the companion function print.crosstab() #
# #
###################################################################################
#Declare function used to convert frequency counts into relevant type of proportion or percentage
{
mk.pcnt.tbl <- function(tbl, type) {
a <- length(row.vars)
b <- length(col.vars)
mrgn <- switch(type, column.pct = c(row.vars[-a], col.vars),
row.pct = c(row.vars, col.vars[-b]),
joint.pct = c(row.vars[-a], col.vars[-b]),
total.pct = NULL)
tbl <- prop.table(tbl, mrgn)
if (percentages) {
tbl <- tbl * 100
}
tbl
}
#Find no. of vars (all; row; col) for use in subsequent code
n.row.vars <- length(row.vars)
n.col.vars <- length(col.vars)
n.vars <- n.row.vars + n.col.vars
#Check to make sure all user-supplied arguments have valid values
stopifnot(as.integer(dec.places) == dec.places, dec.places > -1)
#type: see next section of code
stopifnot(is.character(style))
stopifnot(is.logical(percentages))
stopifnot(is.logical(addmargins))
stopifnot(is.logical(subtotals))
stopifnot(n.vars>=1)
#Convert supplied table type(s) into full text string (e.g. "f" becomes "frequency")
#If invalid type supplied, failed match gives user automatic error message
types <- NULL
choices <- c("frequency", "row.pct", "column.pct", "joint.pct", "total.pct")
for (tp in type) types <- c(types, match.arg(tp, choices))
type <- types
#If no type supplied, default to 'frequency + total' for univariate tables and to
#'frequency' for multi-dimensional tables
#For univariate table....
if (n.vars == 1) {
if (is.null(type)) {
# default = freq count + total.pct
type <- c("frequency", "total.pct")
#row.vars <- 1
} else {
#and any requests for row / col / joint.pct must be changed into requests for 'total.pct'
type <- ifelse(type == "frequency", "frequency", "total.pct")
}
#For multivariate tables...
} else if (is.null(type)) {
# default = frequency count
type <- "frequency"
}
#Check for integrity of requested analysis and adjust values of function arguments as required
if ((addmargins==FALSE) & (subtotals==FALSE)) {
warning("WARNING: Request to suppress subtotals (subtotals=FALSE) ignored because no margins requested (addmargins=FALSE)")
subtotals <- TRUE
}
if ((n.vars>1) & (length(type)>1) & (addmargins==TRUE)) {
warning("WARNING: Only row totals added when more than one table type requested")
#Code lower down selecting type of margin implements this...
}
if ((length(type)>1) & (subtotals==FALSE)) {
warning("WARNING: Can only request supply one table type if requesting suppression of subtotals; suppression of subtotals not executed")
subtotals <- TRUE
}
if ((length(type)==1) & (subtotals==FALSE)) {
choices <- c("frequency", "row.pct", "column.pct", "joint.pct", "total.pct")
tp <- match.arg(type, choices)
if (tp %in% c("row.pct","column.pct","joint.pct")) {
warning("WARNING: subtotals can only be suppressed for tables of type 'frequency' or 'total.pct'")
subtotals<- TRUE
}
}
if ((n.vars > 2) & (n.col.vars>1) & (subtotals==FALSE))
warning("WARNING: suppression of subtotals assumes only 1 col var; table flattened accordingly")
if ( (subtotals==FALSE) & (n.vars>2) ) {
#If subtotals not required AND total table vars > 2
#Reassign all but last col.var as row vars
#[because, for simplicity, crosstabs assumes removal of subtotals uses tables with only ONE col var]
#N.B. Subtotals only present in tables with > 2 cross-classified vars...
if (length(col.vars)>1) {
row.vars <- c(row.vars,col.vars[-length(col.vars)])
col.vars <- col.vars[length(col.vars)]
n.row.vars <- length(row.vars)
n.col.vars <- 1
}
}
#If dec.places not set by user, set to 2 unless only one table of type frequency requested,
#in which case set to 0. [Leaves user with possibility of having frequency tables with > 0 dp]
if (is.null(dec.places)) {
if ((length(type)==1) & (type[1]=="frequency")) {
dec.places <- 0
} else {
dec.places <-2
}
}
#Take the original input data, whatever form originally supplied in,
#convert into table format using requested row and col vars, and save as 'tbl'
args <- list(...)
if (length(args) > 1) {
if (!all(sapply(args, is.factor)))
stop("If more than one argument is passed then all must be factors")
tbl <- table(...)
}
else {
if (is.factor(...)) {
tbl <- table(...)
}
else if (is.table(...)) {
tbl <- eval(...)
}
else if (is.data.frame(...)) {
#tbl <- table(...)
if (is.null(row.vars) && is.null(col.vars)) {
tbl <- table(...)
}
else {
var.names <- c(row.vars,col.vars)
A <- (...)
tbl <- table(A[var.names])
if (length(var.names) == 1) names(dimnames(tbl)) <- var.names
#[table() only autocompletes dimnames for multivariate crosstabs of dataframes]
}
}
else if (class(...) == "ftable") {
tbl <- eval(...)
if (is.null(row.vars) && is.null(col.vars)) {
row.vars <- names(attr(tbl, "row.vars"))
col.vars <- names(attr(tbl, "col.vars"))
}
tbl <- as.table(tbl)
}
else if (class(...) == "ctab") {
tbl <- eval(...)
if (is.null(row.vars) && is.null(col.vars)) {
row.vars <- tbl$row.vars
col.vars <- tbl$col.vars
}
for (opt in c("dec.places", "type", "style", "percentages",
"addmargins", "subtotals")) if (is.null(get(opt)))
assign(opt, eval(parse(text = paste("tbl$", opt,
sep = ""))))
tbl <- tbl$table
}
else {
stop("first argument must be either factors or a table object")
}
}
#Convert supplied table style into full text string (e.g. "l" becomes "long")
style <- match.arg(style, c("long", "wide"))
#Extract row and col names to be used in creating 'tbl' from supplied input data
nms <- names(dimnames(tbl))
z <- length(nms)
if (!is.null(row.vars) && !is.numeric(row.vars)) {
row.vars <- order(match(nms, row.vars), na.last = NA)
}
if (!is.null(col.vars) && !is.numeric(col.vars)) {
col.vars <- order(match(nms, col.vars), na.last = NA)
}
if (!is.null(row.vars) && is.null(col.vars)) {
col.vars <- (1:z)[-row.vars]
}
if (!is.null(col.vars) && is.null(row.vars)) {
row.vars <- (1:z)[-col.vars]
}
if (is.null(row.vars) && is.null(col.vars)) {
col.vars <- z
row.vars <- (1:z)[-col.vars]
}
#Take the original input data, converted into table format using supplied row and col vars (tbl)
#and create a second version (crosstab) which stores results as percentages if a percentage table type is requested.
if (type[1] == "frequency")
crosstab <- tbl
else
crosstab <- mk.pcnt.tbl(tbl, type[1])
#If multiple table types requested, create and add these to
if (length(type) > 1) {
tbldat <- as.data.frame.table(crosstab)
z <- length(names(tbldat)) + 1
tbldat[z] <- 1
pcntlab <- type
pcntlab[match("frequency", type)] <- "Count"
pcntlab[match("row.pct", type)] <- "Row %"
pcntlab[match("column.pct", type)] <- "Column %"
pcntlab[match("joint.pct", type)] <- "Joint %"
pcntlab[match("total.pct", type)] <- "Total %"
for (i in 2:length(type)) {
if (type[i] == "frequency")
crosstab <- tbl
else crosstab <- mk.pcnt.tbl(tbl, type[i])
crosstab <- as.data.frame.table(crosstab)
crosstab[z] <- i
tbldat <- rbind(tbldat, crosstab)
}
tbldat[[z]] <- as.factor(tbldat[[z]])
levels(tbldat[[z]]) <- pcntlab
crosstab <- xtabs(Freq ~ ., data = tbldat)
names(dimnames(crosstab))[z - 1] <- ""
}
#Add margins if required, adding only those margins appropriate to user request
if (addmargins==TRUE) {
vars <- c(row.vars,col.vars)
if (length(type)==1) {
if (type=="row.pct")
{ crosstab <- addmargins(crosstab,margin=c(vars[n.vars]))
tbl <- addmargins(tbl,margin=c(vars[n.vars]))
}
else
{ if (type=="column.pct")
{ crosstab <- addmargins(crosstab,margin=c(vars[n.row.vars]))
tbl <- addmargins(tbl,margin=c(vars[n.row.vars]))
}
else
{ if (type=="joint.pct")
{ crosstab <- addmargins(crosstab,margin=c(vars[(n.row.vars)],vars[n.vars]))
tbl <- addmargins(tbl,margin=c(vars[(n.row.vars)],vars[n.vars]))
}
else #must be total.pct OR frequency
{ crosstab <- addmargins(crosstab)
tbl <- addmargins(tbl)
}
}
}
}
#If more than one table type requested, only adding row totals makes any sense...
if (length(type)>1) {
crosstab <- addmargins(crosstab,margin=c(vars[n.vars]))
tbl <- addmargins(tbl,margin=c(vars[n.vars]))
}
}
#If subtotals not required, and total vars > 2, create dataframe version of table, with relevant
#subtotal rows / cols dropped [Subtotals only present in tables with > 2 cross-classified vars]
t1 <- NULL
if ( (subtotals==FALSE) & (n.vars>2) ) {
#Create version of crosstab in ftable format
t1 <- crosstab
t1 <- ftable(t1,row.vars=row.vars,col.vars=col.vars)
#Convert to a dataframe
t1 <- as.data.frame(format(t1),stringsAsFactors=FALSE)
#Remove backslashes from category names AND colnames
t1 <- apply(t1[,],2, function(x) gsub("\"","",x))
#Remove preceding and trailing spaces from category names to enable accurate capture of 'sum' rows/cols
#[Use of grep might extract category labels with 'sum' as part of a longer one or two word string...]
t1 <- apply(t1,2,function(x) gsub("[[:space:]]*$","",gsub("^[[:space:]]*","",x)))
#Reshape dataframe so that variable and category labels display as required
#(a) Move col category names down one row; and move col variable name one column to right
t1[2,(n.row.vars+1):ncol(t1)] <- t1[1,(n.row.vars+1):ncol(t1)]
t1[1,] <- ""
t1[1,(n.row.vars+2)] <- t1[2,(n.row.vars+1)]
#(b) Drop the now redundant column separating the row.var labels from the table data + col.var labels
t1 <- t1[,-(n.row.vars+1)]
#In 'lab', assign category labels for each variable to all rows (to allow identification of sub-totals)
lab <- t1[,1:n.row.vars]
for (c in 1:n.row.vars) {
for (r in 2:nrow(lab)) {
if (lab[r,c]=="") lab[r,c] <- lab[r-1,c]
}
}
lab <- (apply(lab[,1:n.row.vars],2,function(x) x=="Sum"))
lab <- apply(lab,1,sum)
#Filter out rows of dataframe containing subtotals
t1 <- t1[((lab==0) | (lab==n.row.vars)),]
#Move the 'Sum' label associated with last row to the first column; in the process
#setting the final row labels associated with other row variables to ""
t1[nrow(t1),1] <- "Sum"
t1[nrow(t1),(2:n.row.vars)] <- ""
#set row and column names to NULL
rownames(t1) <- NULL
colnames(t1) <- NULL
}
#Create output object 'result' [class: crosstab]
result <- NULL
#(a) record of argument values used to produce tabular output
result$row.vars <- row.vars
result$col.vars <- col.vars
result$dec.places <- dec.places
result$type <- type
result$style <- style
result$percentages <- percentages
result$addmargins <- addmargins
result$subtotals <- subtotals
#(b) tabular output [3 variants]
result$table <- tbl #Stores original cross-tab frequency counts without margins [class: table]
result$crosstab <- crosstab #Stores cross-tab in table format using requested style(frequency/pct) and table margins (on/off)
#[class: table]
result$crosstab.nosub <- t1 #crosstab with subtotals suppressed [class: dataframe; or NULL if no subtotals suppressed]
class(result) <- "crosstab"
#Return 'result' as output of function
result
}
print.crosstab <- function(x,dec.places=x$dec.places,subtotals=x$subtotals,...) {
###################################################################################
# #
# Function created by Dr Paul Williamson, Dept. of Geography and Planning, #
# School of Environmental Sciences, University of Liverpool, UK. #
# #
#   Adapted from the function print.ctab() in the catspec package.                #
# #
# Version: 12th July 2013 #
# #
# Designed to provide optimal viewing of the output from crosstab() #
# #
###################################################################################
row.vars <- x$row.vars
col.vars <- x$col.vars
n.row.vars <- length(row.vars)
n.col.vars <- length(col.vars)
n.vars <- n.row.vars + n.col.vars
if (length(x$type)>1) {
z<-length(names(dimnames(x$crosstab)))
if (x$style=="long") {
row.vars<-c(row.vars,z)
} else {
col.vars<-c(z,col.vars)
}
}
if (n.vars==1) {
if (length(x$type)==1) {
tmp <- data.frame(round(x$crosstab,x$dec.places))
colnames(tmp)[2] <- ifelse(x$type=="frequency","Count","%")
print(tmp,row.names=FALSE)
} else {
print(round(x$crosstab,x$dec.places))
}
}
#If table has only 2 dimensions, or subtotals required for >2 dimensional table,
#print table using ftable() on x$crosstab
if ((n.vars == 2) | ((subtotals==TRUE) & (n.vars>2))) {
tbl <- ftable(x$crosstab,row.vars=row.vars,col.vars=col.vars)
if (!all(as.integer(tbl)==as.numeric(tbl))) tbl <- round(tbl,dec.places)
print(tbl,...)
}
#If subtotals NOT required AND > 2 dimensions, print table using write.table() on x$crosstab.nosub
if ((subtotals==FALSE) & (n.vars>2)) {
t1 <- x$crosstab.nosub
#Convert numbers to required decimal places, right aligned
width <- max( nchar(t1[1,]), nchar(t1[2,]), 7 )
dec.places <- x$dec.places
number.format <- paste("%",width,".",dec.places,"f",sep="")
t1[3:nrow(t1),((n.row.vars+1):ncol(t1))] <- sprintf(number.format,as.numeric(t1[3:nrow(t1),((n.row.vars+1):ncol(t1))]))
#Adjust column variable label to same width as numbers, left aligned, padding with trailing spaces as required
col.var.format <- paste("%-",width,"s",sep="")
t1[1,(n.row.vars+1):ncol(t1)] <- sprintf(col.var.format,t1[1,(n.row.vars+1):ncol(t1)])
#Adjust column category labels to same width as numbers, right aligned, padding with preceding spaces as required
col.cat.format <- paste("%",width,"s",sep="")
t1[2,(n.row.vars+1):ncol(t1)] <- sprintf(col.cat.format,t1[2,(n.row.vars+1):ncol(t1)])
#Adjust row labels so that each column is of fixed width, using trailing spaces as required
for (i in 1:n.row.vars) {
width <- max(nchar(t1[,i])) + 2
row.lab.format <- paste("%-",width,"s",sep="")
t1[,i] <- sprintf(row.lab.format,t1[,i])
}
write.table(t1,quote=FALSE,col.names=FALSE,row.names=FALSE)
}
}
library(sjPlot)
library(sjmisc)
library(sjlabelled)
#Assumed sources of functions called below (not loaded in the original script):
#freq() from summarytools, leveneTest() from car, glht()/mcp() from multcomp
library(summarytools)
library(car)
library(multcomp)
theme_set(theme_bw())
##Descriptive Statistics: Demographic information
#degree
freq(degree, display.type = F,report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(degree, geom.colors = c("red", "yellow"), title = "Degree")
crosstab(data, row.vars = c("degree"),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#language of instruction
freq(language.of.instruction, display.type = F,
report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(language.of.instruction, geom.colors = "#336699", title = "Language of instruction")
crosstab(data, row.vars = c("language.of.instruction"),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#Gender
freq(Gender, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(Gender, geom.colors = "#336699", title = "Gender")
crosstab(data, row.vars = c("Gender"),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#Age
freq(age, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(age, geom.colors = "#336699", title = "Age")
crosstab(data, row.vars = c("age"),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#World Region
freq(world.region, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(world.region, geom.colors = "#336699", title = "World Region")
#Family income
freq(What.was.your.annual.family.income.when.you.were.applying.to.study.abroad..estimate.in.US.dollars.,
display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
crosstab(data, row.vars = "family.income",
col.vars ="world.region",
type = "f", style = "long", addmargins = T, dec.places = 0, subtotals = T)
#Length of stay in Russia
freq(How.long.have.you.been.in.Russia.studying.for.your.current.program.,
display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
#Student finance status
freq(How.are.you.financing.your.participation.in.the.program.,
display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
crosstab(data, row.vars = c("How.are.you.financing.your.participation.in.the.program."),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#stay in Russia
freq(Have.you.ever.been.in.Russia.before.you.enrolled.for.your.current.program,
display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
#Data preparation: Push factors in home country influencing the decision to study in Russia
#unavailable program
unavailable.program <-as.factor(Unavailability.of.the.desired.study.program)
unavailable.program <- factor(unavailable.program,levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(unavailable.program, Unavailability.of.the.desired.study.program)
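#All the push/pull-factor recodes below apply the same five-point "influential"
#labels. As a sketch only (the helper name is mine, not in the original script),
#the pattern can be factored into one function; the explicit per-variable
#recodes are kept unchanged below.
to_influence_scale <- function(x) {
  factor(as.factor(x), levels = c(1,2,3,4,5),
         labels = c("Not at all influential",
                    "Slightly influential",
                    "Somewhat influential",
                    "Very influential",
                    "Extremely influential"))
}
#e.g. unavailable.program <- to_influence_scale(Unavailability.of.the.desired.study.program)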
#quality of education
low.educational.quality<-as.factor(Low.quality.of.education)
low.educational.quality <- factor(low.educational.quality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.educational.quality, Low.quality.of.education)
#competitive University admission in home country
competitive.admission<-as.factor(Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution)
competitive.admission <- factor(competitive.admission,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(competitive.admission, Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution)
#Advantage of international degree
advantage.of.international.degree<-as.factor(Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market)
advantage.of.international.degree <- factor(advantage.of.international.degree,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(advantage.of.international.degree, Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market)
#unavailability of scholarships
unavailability.of.scholarship<-as.factor(Unavailability.of.scholarship.opportunities)
unavailability.of.scholarship <- factor(unavailability.of.scholarship,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(unavailability.of.scholarship, Unavailability.of.scholarship.opportunities)
#encouragement from family
encouragement.from.family<-as.factor(Encouragement.from.my.family.to.study.abroad)
encouragement.from.family <- factor(encouragement.from.family,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(encouragement.from.family, Encouragement.from.my.family.to.study.abroad)
#encouragement from friends
encouragement.from.friends<-as.factor(Encouragement.from..my.friends.to.study.abroad)
encouragement.from.friends <- factor(encouragement.from.friends,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(encouragement.from.friends, Encouragement.from..my.friends.to.study.abroad)
#better earning prospects
better.earning.prospects<-as.factor(Better.earning.prospects.abroad)
better.earning.prospects <- factor(better.earning.prospects,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(better.earning.prospects, Better.earning.prospects.abroad)
#social prestige
social.prestige<-as.factor(The.social.prestige.of.studying.abroad)
social.prestige <- factor(social.prestige,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(social.prestige, The.social.prestige.of.studying.abroad)
#experience different culture
experience.different.culture<-as.factor(To.experience.a.different.culture)
experience.different.culture <- factor(experience.different.culture,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(experience.different.culture, To.experience.a.different.culture)
#Descriptive Statistics: Push Factors influencing the decision to move to Russia
#push factors
pushfactor<-data.frame(unavailable.program,low.educational.quality,competitive.admission,
advantage.of.international.degree,unavailability.of.scholarship,
encouragement.from.family,encouragement.from.friends,better.earning.prospects,
social.prestige,experience.different.culture)
freq(pushfactor, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
HCpushfactor<-data.frame(degree, world.region, unavailable.program,low.educational.quality,competitive.admission,
advantage.of.international.degree,unavailability.of.scholarship,
encouragement.from.family,encouragement.from.friends,better.earning.prospects,
social.prestige,experience.different.culture)
#Crosstab between Push Factors and World Region with degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Unavailability.of.the.desired.study.program", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Low.quality.of.education", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Unavailability.of.scholarship.opportunities",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Encouragement.from.my.family.to.study.abroad",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Encouragement.from..my.friends.to.study.abroad",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.earning.prospects.abroad",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="The.social.prestige.of.studying.abroad",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="To.experience.a.different.culture",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
is.factor(world.region)
#Anova
#Unavailable program
data$region<-as.factor(world.region)
attach(data)
leveneTest(Unavailability.of.the.desired.study.program ~ region, data = data)
res.aov <- aov(Unavailability.of.the.desired.study.program ~ region, data = data)
# Summary of the analysis
summary(res.aov)
TukeyHSD(res.aov)
summary(glht(res.aov, linfct = mcp(region = "Tukey")))
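#The Levene test above checks homogeneity of variance; the other standard
#one-way ANOVA diagnostic is normality of the residuals. A minimal sketch
#using base R (not part of the original analysis):
shapiro.test(residuals(res.aov))  #Shapiro-Wilk test on the model residuals
plot(res.aov, which = 2)          #Q-Q plot of residuals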
#Graphs
#Plots for the push factors in the home countries
#Push factors
plot_stackfrq(pushfactor)
#Unavailability of desired study program
plot_grpfrq(unavailable.program, world.region, geom.colors = "gs", title = "Unavailability of desired study program in Home Country")
plot_grpfrq(unavailable.program, degree, geom.colors = "gs", title = "Unavailability of Program")
#Low quality of education
plot_grpfrq(low.educational.quality, world.region, geom.colors = "gs", title = "Low quality of education in Home Country")
plot_grpfrq(low.educational.quality, degree, geom.colors = "gs", title = "Low quality of education")
#Competitive university admission
plot_grpfrq(competitive.admission, world.region, geom.colors = "gs", title = "Competitive University admission process in Home Country")
plot_grpfrq(competitive.admission, degree, geom.colors = "gs", title = "Competitive university admission")
#Perceived advantage of international degree
plot_grpfrq(advantage.of.international.degree, world.region, geom.colors = "gs",
title = "Perceived advantage of international degree than a local degree")
plot_grpfrq(advantage.of.international.degree, degree, geom.colors = "gs",
title = "Perceived advantage of international degree")
#Unavailability of Scholarship
plot_grpfrq(unavailability.of.scholarship, world.region, geom.colors = "gs", title = "Unavailability of scholarship opportunities in Home Country")
plot_grpfrq(unavailability.of.scholarship, degree, geom.colors = "gs", title = "Unavailability of scholarship")
#Encouragement from family
plot_grpfrq(encouragement.from.family, world.region, geom.colors = "gs", title = "Encouragement from family")
plot_grpfrq(encouragement.from.family, degree, geom.colors = "gs", title = "Encouragement from family")
#Encouragement from friends
plot_grpfrq(encouragement.from.friends, world.region, geom.colors = "gs", title = "Encouragement from friends")
plot_grpfrq(encouragement.from.friends, degree, geom.colors = "gs", title = "Encouragement from friends")
#Better earning prospects
plot_grpfrq(better.earning.prospects, world.region, geom.colors = "gs", title = "Better earning prospects abroad")
plot_grpfrq(better.earning.prospects, degree, geom.colors = "gs", title = "Better earning prospects")
#The social prestige of studying abroad
plot_grpfrq(social.prestige, world.region, geom.colors = "gs", title = "The social prestige of studying abroad")
plot_grpfrq(social.prestige, degree, geom.colors = "gs", title = "The social prestige of studying abroad")
#Experience different culture
plot_grpfrq(experience.different.culture, world.region, geom.colors = "gs", title = "Experience a different culture abroad")
plot_grpfrq(experience.different.culture, degree, geom.colors = "gs", title = "Experience a different culture")
#Data preparation: Pull factors in Russia influencing the decision to study in Russia
#availability of desired program
available.study.program <-as.factor(Availability.of.desired.study.program)
available.study.program <- factor(available.study.program,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(available.study.program, Availability.of.desired.study.program)
#high quality of education
high.educational.quality<-as.factor(Higher.quality.of.education..compared.to.home.country.)
high.educational.quality <- factor(high.educational.quality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high.educational.quality, Higher.quality.of.education..compared.to.home.country.)
#low cost of living
low.cost.living<-as.factor(Low.cost.of.living)
low.cost.living <- factor(low.cost.living,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.cost.living,Low.cost.of.living)
#low tuition fees
low.tuition<-as.factor(Low.tuition.fees)
low.tuition <- factor(low.tuition,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.tuition, Low.tuition.fees)
#Awarded scholarships
scholarship.tuitionwaiver<-as.factor(Awarded.scholarships.or.tuition.waiver)
scholarship.tuitionwaiver <- factor(scholarship.tuitionwaiver,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(scholarship.tuitionwaiver,Awarded.scholarships.or.tuition.waiver)
#Attraction to Russian culture
russian.culture<-as.factor(Attraction.to.Russian.culture..society)
russian.culture <- factor(russian.culture,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(russian.culture, Attraction.to.Russian.culture..society)
#Career prospects in Russia
career.prospects<-as.factor(Career.prospects.in.Russia)
career.prospects <- factor(career.prospects,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(career.prospects, Career.prospects.in.Russia)
#Recommendations from family and friends
family.friends.recommendations<-as.factor(Personal.recommendations.from.parents..relatives..and.friends)
family.friends.recommendations <- factor(family.friends.recommendations,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family.friends.recommendations, Personal.recommendations.from.parents..relatives..and.friends)
#Cultural Proximity
cultural.proximity<-as.factor(cultural.proximity.with.home)
cultural.proximity <- factor(cultural.proximity,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(cultural.proximity, cultural.proximity.with.home)
#Geographical proximity
geographical.proximity<-as.factor(geographical.proximity.with.home)
geographical.proximity <- factor(geographical.proximity,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(geographical.proximity, geographical.proximity.with.home)
#Descriptive Statistics: Pull Factors in Russia influencing the decision to move to Russia
Pullfactor_Russia<-data.frame(available.study.program, high.educational.quality, low.cost.living,
low.tuition, scholarship.tuitionwaiver, russian.culture, career.prospects,
family.friends.recommendations, cultural.proximity, geographical.proximity)
freq(Pullfactor_Russia, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
Pullfactor_Ru<-data.frame(world.region, degree, available.study.program, high.educational.quality, low.cost.living,
low.tuition, scholarship.tuitionwaiver, russian.culture, career.prospects,
family.friends.recommendations, cultural.proximity, geographical.proximity)
#Crosstab between Pull Factors in Russia and World Region with degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Availability.of.desired.study.program", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.quality.of.education..compared.to.home.country.", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Low.cost.of.living", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Low.tuition.fees", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Awarded.scholarships.or.tuition.waiver", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Attraction.to.Russian.culture..society", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Career.prospects.in.Russia", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Personal.recommendations.from.parents..relatives..and.friends",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="cultural.proximity.with.home", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="geographical.proximity.with.home", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
#Plots for the pull factors in Russia
plot_stackfrq(Pullfactor_Russia)
#Availability of desired study program
plot_grpfrq(available.study.program, world.region, geom.colors = "gs", title = "Availability of desired study program in Russia")
plot_grpfrq(available.study.program, degree, geom.colors = "gs", title = "Availability of desired study program")
#High quality of education
plot_grpfrq(high.educational.quality, world.region, geom.colors = "gs", title = "High quality of education in Russia")
plot_grpfrq(high.educational.quality, degree, geom.colors = "gs", title = "High quality of education")
#Low cost of living
plot_grpfrq(low.cost.living, world.region, geom.colors = "gs", title = "Low cost of living in Russia")
plot_grpfrq(low.cost.living, degree, geom.colors = "gs", title = "Low cost of living")
#Low tuition fees
plot_grpfrq(low.tuition, world.region, geom.colors = "gs", title = "Low tuition fees in Russia")
plot_grpfrq(low.tuition, degree, geom.colors = "gs", title = "Low tuition fees")
#Scholarship or tuition waiver
plot_grpfrq(scholarship.tuitionwaiver, world.region, geom.colors = "gs", title = "Scholarship or tuition waiver")
plot_grpfrq(scholarship.tuitionwaiver, degree, geom.colors = "gs", title = "Scholarship or tuition waiver")
#Attraction to Russian culture
plot_grpfrq(russian.culture, world.region, geom.colors = "gs", title = "Attraction to Russian culture")
plot_grpfrq(russian.culture, degree, geom.colors = "gs", title = "Attraction to Russian culture")
#Career prospects in Russia
plot_grpfrq(career.prospects, world.region, geom.colors = "gs", title = "Career prospects in Russia")
plot_grpfrq(career.prospects, degree, geom.colors = "gs", title = "Career prospects in Russia")
#Recommendations from family and friends
plot_grpfrq(family.friends.recommendations, world.region, geom.colors = "gs", title = "Recommendations from family and friends")
plot_grpfrq(family.friends.recommendations, degree, geom.colors = "gs", title = "Recommendations from family and friends")
#Cultural proximity
plot_grpfrq(cultural.proximity, world.region, geom.colors = "gs", title = "Home Country's cultural proximity to Russia")
plot_grpfrq(cultural.proximity, degree, geom.colors = "gs", title = "Cultural proximity")
#Geographical proximity
plot_grpfrq(geographical.proximity, world.region, geom.colors = "gs", title = "Home Country's geographical proximity to Russia")
plot_grpfrq(geographical.proximity, degree, geom.colors = "gs", title = "Geographical proximity")
#Data preparation: Pull factors in HSE
#Quality and Reputation of HSE
HSE.qualityandreputation <-as.factor(Quality.and.reputation.of.the.University)
HSE.qualityandreputation <- factor(HSE.qualityandreputation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(HSE.qualityandreputation, Quality.and.reputation.of.the.University)
#Recognition of HSE degree
recognition.of.HSE.degree<-as.factor(Recognition.of.the.degree.in.my.home.country)
recognition.of.HSE.degree <- factor(recognition.of.HSE.degree,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(recognition.of.HSE.degree, Recognition.of.the.degree.in.my.home.country)
#Quality of teaching staff
quality.teachers<-as.factor(Quality.of.the.teaching.staff)
quality.teachers <- factor(quality.teachers,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(quality.teachers,Quality.of.the.teaching.staff)
#Reputation of the alumni
alumni.reputation<-as.factor(The.reputation.of.the.alumni)
alumni.reputation <- factor(alumni.reputation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(alumni.reputation, The.reputation.of.the.alumni)
#reputation of the international community
internationalcommunity.reputation<-as.factor(The.reputation.of.the.international.community)
internationalcommunity.reputation <- factor(internationalcommunity.reputation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(internationalcommunity.reputation,The.reputation.of.the.international.community)
#HSE rank
HSE.rank<-as.factor(HSE.position.in.international.university.rankings)
HSE.rank <- factor(HSE.rank,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(HSE.rank, HSE.position.in.international.university.rankings)
#Cost of tuition
tuition.cost<-as.factor(Cost.of.tuition.for.international.students)
tuition.cost <- factor(tuition.cost,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(tuition.cost, Cost.of.tuition.for.international.students)
#Availability of scholarship
available.scholarships<-as.factor(Availability.of.scholarships)
available.scholarships <- factor(available.scholarships,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(available.scholarships,Availability.of.scholarships)
#Support Services for international students
international.students.support<-as.factor(Support.services.for.international.students)
international.students.support <- factor(international.students.support,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(international.students.support, Support.services.for.international.students)
#Graduate employment rate
graduate.employment<-as.factor(Graduates.employment.rates)
graduate.employment <- factor(graduate.employment,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(graduate.employment, Graduates.employment.rates)
#HSE international strategic alliances
HSE.alliances<-as.factor(HSE.s.international.strategic.alliances)
HSE.alliances <- factor(HSE.alliances,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(HSE.alliances, HSE.s.international.strategic.alliances)
#Local Employers preference for HSE degrees
employers.preference.for.HSE.degrees<-as.factor(Local.employers.preference.of..degrees.awarded.by.HSE)
employers.preference.for.HSE.degrees <- factor(employers.preference.for.HSE.degrees,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(employers.preference.for.HSE.degrees, Local.employers.preference.of..degrees.awarded.by.HSE)
#Descriptive Statistics: Pull Factors in HSE influencing the decision to move to Russia
Pullfactor_HSE<-data.frame(HSE.qualityandreputation,recognition.of.HSE.degree,quality.teachers,
alumni.reputation,internationalcommunity.reputation,HSE.rank,tuition.cost,
available.scholarships,international.students.support,graduate.employment,
HSE.alliances,employers.preference.for.HSE.degrees)
freq(Pullfactor_HSE, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
HSEPullfactor<-data.frame(world.region, degree, HSE.qualityandreputation,recognition.of.HSE.degree,quality.teachers,
alumni.reputation,internationalcommunity.reputation,HSE.rank,tuition.cost,
available.scholarships,international.students.support,graduate.employment,
HSE.alliances,employers.preference.for.HSE.degrees)
#Crosstab between Pull Factors in HSE and World Region with degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Quality.and.reputation.of.the.University", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Recognition.of.the.degree.in.my.home.country", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Quality.of.the.teaching.staff", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="The.reputation.of.the.alumni", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="The.reputation.of.the.international.community", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="HSE.position.in.international.university.rankings", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Cost.of.tuition.for.international.students", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Availability.of.scholarships", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Support.services.for.international.students", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Graduates.employment.rates", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="HSE.s.international.strategic.alliances", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Local.employers.preference.of..degrees.awarded.by.HSE", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
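#NOTE (added sketch, not part of the original analysis): the crosstab() calls
#above all share the same arguments, so they could be driven by a character
#vector of column names. pull_cols is a hypothetical name and lists only the
#first three items for illustration; print() is assumed necessary inside a loop.
pull_cols <- c("Quality.and.reputation.of.the.University",
               "Recognition.of.the.degree.in.my.home.country",
               "Quality.of.the.teaching.staff")
for (v in pull_cols) {
  print(crosstab(data, row.vars = c("world.region", "degree"),
                 col.vars = v, type = c("f", "t"), style = "long",
                 addmargins = T, dec.places = 2, subtotals = T))
}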
#Plots for the pull factors in HSE
plot_stackfrq(Pullfactor_HSE)
#Quality and reputation of HSE
plot_grpfrq(HSE.qualityandreputation, world.region, geom.colors = "gs", title = "Quality and reputation of HSE")
plot_grpfrq(HSE.qualityandreputation, degree, geom.colors = "gs", title = "Quality and reputation of HSE")
#Recognition of degree
plot_grpfrq(recognition.of.HSE.degree, world.region, geom.colors = "gs", title = "Recognition of HSE degree in my Home Country")
plot_grpfrq(recognition.of.HSE.degree, degree, geom.colors = "gs", title = "Recognition of degree")
#Quality of teachers
plot_grpfrq(quality.teachers, world.region, geom.colors = "gs", title = "Quality of HSE teachers")
plot_grpfrq(quality.teachers, degree, geom.colors = "gs", title = "Quality of teachers")
#Reputation of alumni
plot_grpfrq(alumni.reputation, world.region, geom.colors = "gs", title = "Reputation of HSE alumni")
plot_grpfrq(alumni.reputation, degree, geom.colors = "gs", title = "Reputation of alumni")
#Reputation of International Community
plot_grpfrq(internationalcommunity.reputation, world.region, geom.colors = "gs", title = "Reputation of HSE International Community")
plot_grpfrq(internationalcommunity.reputation, degree, geom.colors = "gs", title = "Reputation of International Community")
#HSE's rank
plot_grpfrq(HSE.rank, world.region, geom.colors = "gs", title = "HSE's position on World Universities ranking")
plot_grpfrq(HSE.rank, degree, geom.colors = "gs", title = "HSE's rank")
#Cost of tuition
plot_grpfrq(tuition.cost, world.region, geom.colors = "gs", title = "Cost of tuition in HSE")
plot_grpfrq(tuition.cost, degree, geom.colors = "gs", title = "Cost of tuition")
#Availability of scholarships
plot_grpfrq(available.scholarships, world.region, geom.colors = "gs", title = "Availability of scholarships in HSE")
plot_grpfrq(available.scholarships, degree, geom.colors = "gs", title = "Availability of scholarships")
#Support services for international students
plot_grpfrq(international.students.support, world.region, geom.colors = "gs", title = "Support services for international students in HSE")
plot_grpfrq(international.students.support, degree, geom.colors = "gs", title = "Support services for international students")
#Graduate employment rate
plot_grpfrq(graduate.employment, world.region, geom.colors = "gs", title = "HSE's Graduate employment rate")
plot_grpfrq(graduate.employment, degree, geom.colors = "gs", title = "Graduate employment rate")
#HSE alliances
plot_grpfrq(HSE.alliances, world.region, geom.colors = "gs", title = "HSE strategic international alliances")
plot_grpfrq(HSE.alliances, degree, geom.colors = "gs", title = "HSE alliances")
#Local employers preference for HSE degrees
plot_grpfrq(employers.preference.for.HSE.degrees, world.region, geom.colors = "gs", title = "Local employers preference for HSE degrees")
plot_grpfrq(employers.preference.for.HSE.degrees, degree, geom.colors = "gs", title = "Local preference for HSE degrees")
###STUDENTS POST GRADUATION PLANS##
#Descriptive Statistics
#Post Graduation migration plans
freq(What.are.your.plans.after.graduation., display.type = F, report.nas = F,
headings = T, cumul = F, style = "grid")
plot_grpfrq(What.are.your.plans.after.graduation., world.region, geom.colors = "gs",
title = "Post graduation plans")
plot_grpfrq(What.are.your.plans.after.graduation., degree, geom.colors = "gs",
title = "Post graduation plans")
#Stay in Russia
freq(What.will.be.your.reason.for.staying.in.Russia.after.graduation., display.type = F, report.nas = F,
headings = T, cumul = F, style = "grid")
plot_grpfrq(What.will.be.your.reason.for.staying.in.Russia.after.graduation., world.region, geom.colors = "gs",
title = "Reasons for staying in Russia after graduation")
plot_grpfrq(What.will.be.your.reason.for.staying.in.Russia.after.graduation., degree, geom.colors = "gs",
            title = "Reasons for staying in Russia after graduation")
#Return home
freq(What.will.be.your.reason.for.returning.home.after.graduation., display.type = F, report.nas = F,
headings = T, cumul = F, style = "grid")
plot_grpfrq(What.will.be.your.reason.for.returning.home.after.graduation., world.region, geom.colors = "gs",
title = "Reasons for returning home after graduation")
plot_grpfrq(What.will.be.your.reason.for.returning.home.after.graduation., degree, geom.colors = "gs",
title = "Reasons for returning home after graduation")
#Move to another country
freq(What.will.be.your.reason.for.moving.to.another.country.after.graduation., display.type = F, report.nas = F,
headings = T, cumul = F, style = "grid")
plot_grpfrq(What.will.be.your.reason.for.moving.to.another.country.after.graduation., world.region, geom.colors = "gs",
title = "Reasons for moving to another country after graduation")
plot_grpfrq(What.will.be.your.reason.for.moving.to.another.country.after.graduation., degree, geom.colors = "gs",
title = "Reasons for moving to another country after graduation", show.na = F)
#Stay in Russia after graduation
#Better job opportunities
job.opportunities <-as.factor(Better.job.opportunities..in.comparison.with.home.country.)
job.opportunities <- factor(job.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(job.opportunities, Better.job.opportunities..in.comparison.with.home.country.)
#Higher quality of life
high.quality.life<-as.factor(Higher.quality.of.life..in.comparison.with.home.country.)
high.quality.life <- factor(high.quality.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high.quality.life, Higher.quality.of.life..in.comparison.with.home.country.)
#Better career opportunities
career.opportunities<-as.factor(Better.career.opportunities.and.advancement.in.chosen.profession)
career.opportunities <- factor(career.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(career.opportunities,Better.career.opportunities.and.advancement.in.chosen.profession)
#high income level
high.income.level<-as.factor(Higher.income.level)
high.income.level <- factor(high.income.level,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high.income.level, Higher.income.level)
#Ties to family and friends
family.friends.ties<-as.factor(Ties.to.family.and.friends)
family.friends.ties <- factor(family.friends.ties,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family.friends.ties,Ties.to.family.and.friends)
#international experience
international.experience<-as.factor(Gain.international.experience)
international.experience <- factor(international.experience,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(international.experience, Gain.international.experience)
#family expectations
familyexpectations<-as.factor(Family.expectations)
familyexpectations <- factor(familyexpectations,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(familyexpectations, Family.expectations)
#Restrictive cultural practices
cultural.practices<-as.factor(Restrictive.cultural.practices..eg..pressure.to.marry.)
cultural.practices <- factor(cultural.practices,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(cultural.practices, Restrictive.cultural.practices..eg..pressure.to.marry.)
#limited jobs
limited.jobs<-as.factor(Limited.job.opportunities.in.home.country)
limited.jobs <- factor(limited.jobs,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(limited.jobs, Limited.job.opportunities.in.home.country)
#lower income levels
lower.income<-as.factor(Lower.income.levels)
lower.income <- factor(lower.income,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(lower.income, Lower.income.levels)
#Lower quality of life
lower.quality.life<-as.factor(Lower.quality.of.life.2)
lower.quality.life <- factor(lower.quality.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(lower.quality.life, Lower.quality.of.life.2)
#Political persecution
politicalpersecution<-as.factor(Political.persecution)
politicalpersecution <- factor(politicalpersecution,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(politicalpersecution, Political.persecution)
#Danger to one's own life
danger.to.ones.life<-as.factor(Danger.or.fear.for.one.s.own.life)
danger.to.ones.life <- factor(danger.to.ones.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(danger.to.ones.life, Danger.or.fear.for.one.s.own.life)
#Factors influencing the decision to stay in Russia
RStayFactors<-data.frame(job.opportunities,
high.quality.life,career.opportunities,
high.income.level, family.friends.ties,
international.experience, familyexpectations,
cultural.practices,limited.jobs,lower.income,lower.quality.life,
politicalpersecution,danger.to.ones.life)
RussiaStay_factors<-data.frame(world.region, degree, Better.job.opportunities..in.comparison.with.home.country.,
                               Higher.quality.of.life..in.comparison.with.home.country.,
                               Better.career.opportunities.and.advancement.in.chosen.profession,
                               Higher.income.level, Ties.to.family.and.friends,
                               Gain.international.experience, Family.expectations,
                               Restrictive.cultural.practices..eg..pressure.to.marry.,
                               Limited.job.opportunities.in.home.country, Lower.income.levels,
                               Lower.quality.of.life.2, Political.persecution, Danger.or.fear.for.one.s.own.life)
#Crosstabulation of Russia Stay factors with Region and degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.job.opportunities..in.comparison.with.home.country.",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.quality.of.life..in.comparison.with.home.country.",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.career.opportunities.and.advancement.in.chosen.profession",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.income.level",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Ties.to.family.and.friends",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Gain.international.experience",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Family.expectations",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Limited.job.opportunities.in.home.country",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.income.levels",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.quality.of.life.2",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Political.persecution",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Danger.or.fear.for.one.s.own.life",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
#Graphs
#Factors influencing the decision to stay
plot_stackfrq(RStayFactors)
#Better job opportunities
plot_grpfrq(job.opportunities, world.region, geom.colors = "gs", title = "Better job opportunities")
plot_grpfrq(job.opportunities, degree, geom.colors = "gs", title = "Better job opportunities")
#Higher quality of life
plot_grpfrq(high.quality.life, world.region, geom.colors = "gs", title = "Higher quality of life")
plot_grpfrq(high.quality.life, degree, geom.colors = "gs", title = "Higher quality of life")
#Better career opportunities
plot_grpfrq(career.opportunities, world.region, geom.colors = "gs", title = "Better career opportunities")
plot_grpfrq(career.opportunities, degree, geom.colors = "gs", title = "Better career opportunities")
#high income level
plot_grpfrq(high.income.level, world.region, geom.colors = "gs", title = "High income level")
plot_grpfrq(high.income.level, degree, geom.colors = "gs", title = "High income level")
#Ties to family and friends
plot_grpfrq(family.friends.ties, world.region, geom.colors = "gs", title = "Ties to family and friends")
plot_grpfrq(family.friends.ties, degree, geom.colors = "gs", title = "Ties to family and friends")
#international experience
plot_grpfrq(international.experience, world.region, geom.colors = "gs", title = "Gain international experience")
plot_grpfrq(international.experience, degree, geom.colors = "gs", title = "Gain international experience")
#family expectations
plot_grpfrq(familyexpectations, world.region, geom.colors = "gs", title = "Family expectations")
plot_grpfrq(familyexpectations, degree, geom.colors = "gs", title = "Family expectations")
#Restrictive cultural practices
plot_grpfrq(cultural.practices, world.region, geom.colors = "gs", title = "Restrictive cultural practices")
plot_grpfrq(cultural.practices, degree, geom.colors = "gs", title = "Restrictive cultural practices")
#limited jobs
plot_grpfrq(limited.jobs, world.region, geom.colors = "gs", title = "Limited job opportunities")
plot_grpfrq(limited.jobs, degree, geom.colors = "gs", title = "Limited job opportunities")
#lower income levels
plot_grpfrq(lower.income, world.region, geom.colors = "gs", title = "Lower income level")
plot_grpfrq(lower.income, degree, geom.colors = "gs", title = "Lower income level")
#Lower quality of life
plot_grpfrq(lower.quality.life, world.region, geom.colors = "gs", title = "Lower quality of life")
plot_grpfrq(lower.quality.life, degree, geom.colors = "gs", title = "Lower quality of life")
#Political persecution
plot_grpfrq(politicalpersecution, world.region, geom.colors = "gs", title = "Political persecution")
plot_grpfrq(politicalpersecution, degree, geom.colors = "gs", title = "Political persecution")
#Danger to one's own life
plot_grpfrq(danger.to.ones.life, world.region, geom.colors = "gs", title = "Danger to one's own life")
plot_grpfrq(danger.to.ones.life, degree, geom.colors = "gs", title = "Danger to one's own life")
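#NOTE (added sketch, not part of the original analysis): each recoded factor
#above is plotted twice with identical settings, so the pairs could be
#generated from a named vector of titles. stay_titles is a hypothetical name
#and covers only the first two variables for illustration.
stay_titles <- c(job.opportunities = "Better job opportunities",
                 high.quality.life = "Higher quality of life")
for (nm in names(stay_titles)) {
  print(plot_grpfrq(get(nm), world.region, geom.colors = "gs", title = stay_titles[[nm]]))
  print(plot_grpfrq(get(nm), degree, geom.colors = "gs", title = stay_titles[[nm]]))
}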
#Data preparation: Returning home - Pull factors in home country
#Better professional opportunities
professional.opportunities <-as.factor(Better.professional.opportunities.in.home.country)
professional.opportunities <- factor(professional.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(professional.opportunities, Better.professional.opportunities.in.home.country)
#Better quality of life
better.quality.life<-as.factor(Better.quality.of.living.in.home.country)
better.quality.life <- factor(better.quality.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(better.quality.life, Better.quality.of.living.in.home.country)
#Feeling more comfortable at Home
home.comfort<-as.factor(Feeling.more.comfortable.at.home)
home.comfort <- factor(home.comfort,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(home.comfort,Feeling.more.comfortable.at.home)
#higher income level
higher.income <-as.factor(Higher.income.levels)
higher.income <- factor(higher.income,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(higher.income, Higher.income.levels)
#Family ties back home
family.ties<-as.factor(Family.ties.back.home)
family.ties <- factor(family.ties,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family.ties, Family.ties.back.home)
#Feelings of alienation
feelings.of.alienation <-as.factor(Feelings.of.alienation.from.the.Russian.culture.and.population)
feelings.of.alienation <- factor(feelings.of.alienation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(feelings.of.alienation, Feelings.of.alienation.from.the.Russian.culture.and.population)
#Difficulties in finding job
job.difficulties<-as.factor(Difficulties.in.finding.a.job)
job.difficulties <- factor(job.difficulties,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(job.difficulties, Difficulties.in.finding.a.job)
#Poor working conditions
poor.work<-as.factor(Poor.working.conditions)
poor.work <- factor(poor.work,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(poor.work, Poor.working.conditions)
#Lower quality of life
low.life.quality <-as.factor(Lower.quality.of.life)
low.life.quality <- factor(low.life.quality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.life.quality, Lower.quality.of.life)
#Perceived or Experienced discrimination
discrimination<-as.factor(Perceived.or.experienced.discrimination)
discrimination <- factor(discrimination,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(discrimination, Perceived.or.experienced.discrimination)
#Crime and low level of safety
crime.safety<-as.factor(Crime.and.low.level.of.safety)
crime.safety <- factor(crime.safety,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(crime.safety, Crime.and.low.level.of.safety)
#Strict migration process
strict.migration<-as.factor(Strict.migration.process.difficulties.in.getting.visas.)
strict.migration <- factor(strict.migration,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(strict.migration, Strict.migration.process.difficulties.in.getting.visas.)
#Factors influencing the decision to return home
HCReturnFactors<-data.frame(professional.opportunities, better.quality.life, home.comfort,
                            higher.income, family.ties, feelings.of.alienation, job.difficulties,
                            poor.work, low.life.quality, discrimination, crime.safety, strict.migration)
HomeReturn_factors<-data.frame(world.region, degree, Better.professional.opportunities.in.home.country,
                               Better.quality.of.living.in.home.country, Feeling.more.comfortable.at.home,
                               Higher.income.levels, Family.ties.back.home,
                               Feelings.of.alienation.from.the.Russian.culture.and.population,
                               Difficulties.in.finding.a.job, Poor.working.conditions,
                               Lower.quality.of.life, Crime.and.low.level.of.safety,
                               Strict.migration.process.difficulties.in.getting.visas.)
#Crosstabulation of Return home factors with Region and degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.professional.opportunities.in.home.country",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.quality.of.living.in.home.country",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.income.levels",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Family.ties.back.home",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Feelings.of.alienation.from.the.Russian.culture.and.population",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Difficulties.in.finding.a.job",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Poor.working.conditions",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.quality.of.life",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Crime.and.low.level.of.safety",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Strict.migration.process.difficulties.in.getting.visas.",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
#Graphs
#Factors influencing the decision to return home
plot_stackfrq(HCReturnFactors)
#Better professional opportunities
plot_grpfrq(professional.opportunities, world.region, geom.colors = "gs", title = "Better professional opportunities")
plot_grpfrq(professional.opportunities, degree, geom.colors = "gs", title = "Better professional opportunities")
#Better quality of life
plot_grpfrq(better.quality.life, world.region, geom.colors = "gs", title = "Better quality of life")
plot_grpfrq(better.quality.life, degree, geom.colors = "gs", title = "Better quality of life")
#Feeling more comfortable at Home
plot_grpfrq(home.comfort, world.region, geom.colors = "gs", title = "Feeling more comfortable at Home")
plot_grpfrq(home.comfort, degree, geom.colors = "gs", title = "Feeling more comfortable at Home")
#Higher income level
plot_grpfrq(higher.income, world.region, geom.colors = "gs", title = "Higher income level")
plot_grpfrq(higher.income, degree, geom.colors = "gs", title = "Higher income level")
#Family ties back home
plot_grpfrq(family.ties, world.region, geom.colors = "gs", title = "Family ties back home")
plot_grpfrq(family.ties, degree, geom.colors = "gs", title = "Family ties back home")
#Feelings of alienation
plot_grpfrq(feelings.of.alienation, world.region, geom.colors = "gs", title = "Feelings of alienation")
plot_grpfrq(feelings.of.alienation, degree, geom.colors = "gs", title = "Feelings of alienation")
#Difficulties in finding job
plot_grpfrq(job.difficulties, world.region, geom.colors = "gs", title = "Difficulties in finding jobs")
plot_grpfrq(job.difficulties, degree, geom.colors = "gs", title = "Difficulties in finding jobs")
#Poor working conditions
plot_grpfrq(poor.work, world.region, geom.colors = "gs", title = "Poor working conditions")
plot_grpfrq(poor.work, degree, geom.colors = "gs", title = "Poor working conditions")
#Lower quality of life
plot_grpfrq(low.life.quality, world.region, geom.colors = "gs", title = "Lower quality of life")
plot_grpfrq(low.life.quality, degree, geom.colors = "gs", title = "Lower quality of life")
#Perceived or Experienced discrimination
plot_grpfrq(discrimination, world.region, geom.colors = "gs", title = "Perceived or Experienced discrimination")
plot_grpfrq(discrimination, degree, geom.colors = "gs", title = "Perceived or Experienced discrimination")
#Crime and low level of safety
plot_grpfrq(crime.safety, world.region, geom.colors = "gs", title = "Crime and low level of safety")
plot_grpfrq(crime.safety, degree, geom.colors = "gs", title = "Crime and low level of safety")
#Strict migration process
plot_grpfrq(strict.migration, world.region, geom.colors = "gs", title = "Strict migration process")
plot_grpfrq(strict.migration, degree, geom.colors = "gs", title = "Strict migration process")
#Data preparation: Moving to another country
#Better job opportunities
better_job.opportunities <-as.factor(Better.job.opportunities)
better_job.opportunities <- factor(better_job.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(better_job.opportunities, Better.job.opportunities)
#higher quality of life
high_quality.life<-as.factor(Higher.quality.of.life)
high_quality.life <- factor(high_quality.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high_quality.life, Higher.quality.of.life)
#Better career opportunities
better.career<-as.factor(Better.career.opportunities.and.advancement.in.chosen.profession.1)
better.career <- factor(better.career,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(better.career, Better.career.opportunities.and.advancement.in.chosen.profession.1)
#higher income level
high.income <-as.factor(Higher.income.levels.1)
high.income <- factor(high.income,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high.income, Higher.income.levels.1)
#Family and Friends ties
family_friends.ties<-as.factor(Ties.to.family.and.friends.1)
family_friends.ties <- factor(family_friends.ties,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family_friends.ties, Ties.to.family.and.friends.1)
#Gain international experience
gain.experience<-as.factor(Gain.international.experience.1)
gain.experience <- factor(gain.experience,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(gain.experience, Gain.international.experience.1)
#Flexible immigration process
flexible.immigration<-as.factor(Flexible.immigration.process)
flexible.immigration <- factor(flexible.immigration,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(flexible.immigration, Flexible.immigration.process)
#Moving to another country: Push factors in Russia
#Feelings of alienation
feeling.alienation <-as.factor(Feelings.of.alienation.from.the.Russian.culture.and.population.1)
feeling.alienation <- factor(feeling.alienation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(feeling.alienation, Feelings.of.alienation.from.the.Russian.culture.and.population.1)
#Difficulties in finding job
finding.job<-as.factor(Difficulties.in.finding.a.job.1)
finding.job <- factor(finding.job,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(finding.job, Difficulties.in.finding.a.job.1)
#Poor working conditions
work.conditions<-as.factor(Poor.working.conditions.1)
work.conditions <- factor(work.conditions,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(work.conditions, Poor.working.conditions.1)
#Lower quality of life
low.quality <-as.factor(Lower.quality.of.life.1)
low.quality <- factor(low.quality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.quality, Lower.quality.of.life.1)
#Perceived or Experienced discrimination
perceived.discrimination<-as.factor(Perceived.or.experienced.discrimination.1)
perceived.discrimination <- factor(perceived.discrimination,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(perceived.discrimination, Perceived.or.experienced.discrimination.1)
#Crime and low level of safety
crime.safety.levels<-as.factor(Crime.and.low.level.of.safety.1)
crime.safety.levels <- factor(crime.safety.levels,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(crime.safety.levels, Crime.and.low.level.of.safety.1)
#Strict migration process
strict.visa<-as.factor(Strict.migration.process.difficulties.in.getting.visas..1)
strict.visa <- factor(strict.visa,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(strict.visa, Strict.migration.process.difficulties.in.getting.visas..1)
#Moving to another country: Push factors in home country
#family expectations
family_expectations<-as.factor(Family.expectations.1)
family_expectations <- factor(family_expectations,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family_expectations, Family.expectations.1)
#Restrictive cultural practices
restrictive.practices<-as.factor(Restrictive.cultural.practices..eg..pressure.to.marry..1)
restrictive.practices <- factor(restrictive.practices,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(restrictive.practices, Restrictive.cultural.practices..eg..pressure.to.marry..1)
#limited jobs
limited.jobs.opportunities<-as.factor(Limited.job.opportunities.in.home.country.1)
limited.jobs.opportunities <- factor(limited.jobs.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(limited.jobs.opportunities, Limited.job.opportunities.in.home.country.1)
#lower income levels
low.income<-as.factor(Lower.income.levels.1)
low.income <- factor(low.income,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.income, Lower.income.levels.1)
#Lower quality of life
low_lifequality<-as.factor(Lower.quality.of.life.3)
low_lifequality <- factor(low_lifequality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low_lifequality, Lower.quality.of.life.3)
#Political persecution
political_persecution<-as.factor(Political.persecution.1)
political_persecution <- factor(political_persecution,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(political_persecution, Political.persecution.1)
#Danger to one's own life
danger.to.life<-as.factor(Danger.or.fear.for.one.s.own.life.1)
danger.to.life <- factor(danger.to.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(danger.to.life, Danger.or.fear.for.one.s.own.life.1)
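#Note: the repeated as.factor()/factor() conversions above can be collapsed
#into one helper. A minimal sketch, assuming every item uses the same
#1-5 "influential" scale (variable names as elsewhere in this script):
to_likert <- function(x) {
  factor(x,
         levels = c(1, 2, 3, 4, 5),
         labels = c("Not at all influential",
                    "Slightly influential",
                    "Somewhat influential",
                    "Very influential",
                    "Extremely influential"))
}
#Example usage (equivalent to the manual conversion above):
#danger.to.life <- to_likert(Danger.or.fear.for.one.s.own.life.1)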
#Factors influencing the decision to move to another country
Move2CountryFactors<-data.frame(better_job.opportunities,high_quality.life,better.career,high.income,
family_friends.ties,gain.experience, flexible.immigration,feeling.alienation,
finding.job,work.conditions,low.quality,perceived.discrimination, crime.safety.levels,
strict.visa, family_expectations, restrictive.practices, limited.jobs.opportunities,
low.income, low_lifequality, political_persecution, danger.to.life)
MoveCountry_factors<-data.frame(world.region, degree, Better.job.opportunities,Higher.quality.of.life,
Better.career.opportunities.and.advancement.in.chosen.profession.1,
Higher.income.levels.1,Ties.to.family.and.friends.1,Gain.international.experience.1,
Flexible.immigration.process,Feelings.of.alienation.from.the.Russian.culture.and.population.1,
Difficulties.in.finding.a.job.1,Poor.working.conditions.1,Lower.quality.of.life.1,
Perceived.or.experienced.discrimination.1, Crime.and.low.level.of.safety.1,
Strict.migration.process.difficulties.in.getting.visas..1,Family.expectations.1,
Restrictive.cultural.practices..eg..pressure.to.marry..1,
Limited.job.opportunities.in.home.country.1, Lower.income.levels.1,
Lower.quality.of.life.3, Political.persecution.1, Danger.or.fear.for.one.s.own.life.1)
#Crosstabulation of move-to-another-country factors with region and degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.job.opportunities",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.quality.of.life",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.career.opportunities.and.advancement.in.chosen.profession.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.income.levels.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Ties.to.family.and.friends.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Gain.international.experience.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Flexible.immigration.process",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Feelings.of.alienation.from.the.Russian.culture.and.population.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
         col.vars ="Difficulties.in.finding.a.job.1",
         type = c("f", "t"), style = "long",
         addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
         col.vars ="Poor.working.conditions.1",
         type = c("f", "t"), style = "long",
         addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.quality.of.life.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Perceived.or.experienced.discrimination.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Crime.and.low.level.of.safety.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Strict.migration.process.difficulties.in.getting.visas..1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Restrictive.cultural.practices..eg..pressure.to.marry..1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Family.expectations.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Limited.job.opportunities.in.home.country.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.income.levels.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.quality.of.life.3",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Political.persecution.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Danger.or.fear.for.one.s.own.life.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
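#The twenty near-identical crosstab() calls above can be generated in a loop.
#A sketch, assuming crosstab() accepts col.vars as a single string exactly as
#in the calls above; move2country.vars is a hypothetical name for the vector
#of column-variable names:
move2country.vars <- c("Better.job.opportunities", "Higher.quality.of.life",
                       "Higher.income.levels.1", "Ties.to.family.and.friends.1")
for (v in move2country.vars) {
  print(crosstab(data, row.vars = c("world.region", "degree"),
                 col.vars = v,
                 type = c("f", "t"), style = "long",
                 addmargins = TRUE, dec.places = 2, subtotals = TRUE))
}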
#Graphs
#Factors influencing the decision to move to another country
plot_stackfrq(Move2CountryFactors)
#Better job opportunities
plot_grpfrq(better_job.opportunities, world.region, geom.colors = "gs", title = "Better job opportunities")
plot_grpfrq(better_job.opportunities, degree, geom.colors = "gs", title = "Better job opportunities")
#Higher quality of life
plot_grpfrq(high_quality.life, world.region, geom.colors = "gs", title = "Higher quality of life")
plot_grpfrq(high_quality.life, degree, geom.colors = "gs", title = "Higher quality of life")
#Better career opportunities
plot_grpfrq(better.career, world.region, geom.colors = "gs", title = "Better career opportunities")
plot_grpfrq(better.career, degree, geom.colors = "gs", title = "Better career opportunities")
#Higher income level
plot_grpfrq(high.income, world.region, geom.colors = "gs", title = "Higher income level")
plot_grpfrq(high.income, degree, geom.colors = "gs", title = "Higher income level")
#Family and Friends ties
plot_grpfrq(family_friends.ties, world.region, geom.colors = "gs", title = "Family and Friends ties")
plot_grpfrq(family_friends.ties, degree, geom.colors = "gs", title = "Family and Friends ties")
#Gain international experience
plot_grpfrq(gain.experience, world.region, geom.colors = "gs", title = "Gain international experience")
plot_grpfrq(gain.experience, degree, geom.colors = "gs", title = "Gain international experience")
#Flexible immigration process
plot_grpfrq(flexible.immigration, world.region, geom.colors = "gs", title = "Flexible immigration process")
plot_grpfrq(flexible.immigration, degree, geom.colors = "gs", title = "Flexible immigration process")
#Feelings of alienation
plot_grpfrq(feeling.alienation, world.region, geom.colors = "gs", title = "Feelings of alienation")
plot_grpfrq(feeling.alienation, degree, geom.colors = "gs", title = "Feelings of alienation")
#Difficulties in finding job
plot_grpfrq(finding.job, world.region, geom.colors = "gs", title = "Difficulties in finding jobs")
plot_grpfrq(finding.job, degree, geom.colors = "gs", title = "Difficulties in finding jobs")
#Poor working conditions
plot_grpfrq(work.conditions, world.region, geom.colors = "gs", title = "Poor working conditions")
plot_grpfrq(work.conditions, degree, geom.colors = "gs", title = "Poor working conditions")
#Lower quality of life
plot_grpfrq(low.quality, world.region, geom.colors = "gs", title = "Lower quality of life")
plot_grpfrq(low.quality, degree, geom.colors = "gs", title = "Lower quality of life")
#Perceived or Experienced discrimination
plot_grpfrq(perceived.discrimination, world.region, geom.colors = "gs", title = "Perceived or Experienced discrimination")
plot_grpfrq(perceived.discrimination, degree, geom.colors = "gs", title = "Perceived or Experienced discrimination")
#Crime and low level of safety
plot_grpfrq(crime.safety.levels, world.region, geom.colors = "gs", title = "Crime and low level of safety")
plot_grpfrq(crime.safety.levels, degree, geom.colors = "gs", title = "Crime and low level of safety")
#Strict migration process
plot_grpfrq(strict.visa, world.region, geom.colors = "gs", title = "Strict migration process")
plot_grpfrq(strict.visa, degree, geom.colors = "gs", title = "Strict migration process")
#Family expectations
plot_grpfrq(family_expectations, world.region, geom.colors = "gs", title = "Family expectations")
plot_grpfrq(family_expectations, degree, geom.colors = "gs", title = "Family expectations")
#Restrictive Cultural practices
plot_grpfrq(restrictive.practices, world.region, geom.colors = "gs", title = "Restrictive Cultural practices")
plot_grpfrq(restrictive.practices, degree, geom.colors = "gs", title = "Restrictive Cultural practices")
#Limited jobs
plot_grpfrq(limited.jobs.opportunities, world.region, geom.colors = "gs", title = "Limited jobs opportunities")
plot_grpfrq(limited.jobs.opportunities, degree, geom.colors = "gs", title = "Limited jobs opportunities")
#Lower income levels
plot_grpfrq(low.income, world.region, geom.colors = "gs", title = "Lower income levels")
plot_grpfrq(low.income, degree, geom.colors = "gs", title = "Lower income levels")
#Lower quality of life
plot_grpfrq(low_lifequality, world.region, geom.colors = "gs", title = "Lower quality of life")
plot_grpfrq(low_lifequality, degree, geom.colors = "gs", title = "Lower quality of life")
#Political persecution
plot_grpfrq(political_persecution, world.region, geom.colors = "gs", title = "Political persecution")
plot_grpfrq(political_persecution, degree, geom.colors = "gs", title = "Political persecution")
#Danger to one's own life
plot_grpfrq(danger.to.life, world.region, geom.colors = "gs", title = "Dangers to ones own life")
plot_grpfrq(danger.to.life, degree, geom.colors = "gs", title = "Dangers to ones own life")
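#The paired plot_grpfrq() calls above repeat one pattern per factor: the same
#variable plotted against world.region and then degree. A sketch of a helper
#that produces both plots (the function name is an assumption, not part of
#the original analysis):
plot_by_region_and_degree <- function(fct, ttl) {
  print(plot_grpfrq(fct, world.region, geom.colors = "gs", title = ttl))
  print(plot_grpfrq(fct, degree, geom.colors = "gs", title = ttl))
}
#Example: plot_by_region_and_degree(danger.to.life, "Danger to one's own life")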
| /MThesis.R | no_license | Gabriel-Ghoost/Student-Analysis | R | false | false | 140,136 | r | ###Gabriel Agbesi Atsyor####
##Masters Thesis R script###
#setting the working directory
getwd()
setwd("C:/Users/GHOOST/Desktop/New Lit/data")
#attaching packages
library(lavaan)
library(foreign)
library(car)
library(psych)
library(ggplot2)
library(summarytools)
library(knitr)
library(gridExtra)
library(kableExtra)
library(stargazer)
library(multcomp)
#attaching the data
data<-read.csv("International Students Survey.csv")
attach(data)
##FACTORS INFLUENCING STUDENTS DECISION TO STUDY IN RUSSIA
#Data preparation: Demographic information
#Age
table(Age)
summary(as.numeric(Age))
data$age<-recode(as.numeric(Age),"17:21=1; 22:26=2;27:hi=3")
table(data$age)
data$age<-factor(data$age,lab=c("17 to 21 yrs", "22 to 26 yrs", " 27 yrs and older"))
#Degree
data$degree<-What.degree.are.you.currently.studying.for.
#Language of instruction
data$language.of.instruction<-What.is.the.language.of.instruction.for.your.program.
data$family.income<-What.was.your.annual.family.income.when.you.were.applying.to.study.abroad..estimate.in.US.dollars.
#country regions
table(Home.country)
data$world.region[data$Home.country == 'Algeria'|
data$Home.country == 'Botswana'| data$Home.country == 'Cameroon'|
data$Home.country == 'Chad'| data$Home.country == 'Congo'|
data$Home.country == 'DR Congo'|data$Home.country == 'Eritrea'|
data$Home.country == 'Ivory Coast'|
data$Home.country == 'Gambia'|data$Home.country == 'Ghana'|
data$Home.country == 'Kenya'|data$Home.country == 'Madagascar'|
data$Home.country == 'Niger'|data$Home.country == 'Nigeria'|
data$Home.country == 'South Africa'|data$Home.country == 'Sudan'|
data$Home.country == 'Uganda'|data$Home.country == 'Zambia'] <- 'Africa'
data$world.region[data$Home.country == 'Bangladesh'|
data$Home.country == 'India'| data$Home.country == 'Nepal'|
data$Home.country == 'Pakistan'|
data$Home.country == 'Sri Lanka'|data$Home.country == 'Indonesia'|
data$Home.country == 'Philippines'|data$Home.country == 'Thailand'|
data$Home.country == 'Vietnam'|data$Home.country == 'China'|
data$Home.country == 'Japan'|data$Home.country == 'Mongolia'|
data$Home.country == 'South Korea'|data$Home.country == 'Hong Kong'|
data$Home.country == 'Taiwan'] <- 'Asia'
data$world.region[data$Home.country == 'Australia'| data$Home.country == 'Austria'|
data$Home.country == 'Bosnia and Herzegovina'|
data$Home.country == 'Bulgaria'| data$Home.country == 'Europe'|
data$Home.country == 'France'| data$Home.country == 'Germany'|
data$Home.country == 'Italy'|data$Home.country == 'Poland'|
data$Home.country == 'Portugal'|data$Home.country == 'Serbia'|
data$Home.country == 'Spain'|data$Home.country == 'Switzerland'|
data$Home.country == 'Republic of North Macedonia'|
data$Home.country == 'USA'] <- 'Europe'
data$world.region[data$Home.country == 'Armenia'|
data$Home.country == 'Azerbaijan'|data$Home.country == 'Belarus'|
data$Home.country == 'Estonia'|data$Home.country == 'Georgia'|
data$Home.country == 'Georgia'|data$Home.country == 'Kazakhstan'|
data$Home.country == 'Kyrgyzstan'|data$Home.country == 'Latvia'|
data$Home.country == 'Moldova'|data$Home.country == 'Tajikistan'|
data$Home.country == 'Turkmenistan'|data$Home.country == 'Ukraine'|
data$Home.country == 'Uzbekistan'] <- 'Commonwealth of Independent States'
data$world.region[data$Home.country == 'Bahrain'|
data$Home.country == 'Egypt'| data$Home.country == 'Iran'|
data$Home.country == 'Israel'| data$Home.country == 'Lebanon'|
data$Home.country == 'Syria'|
data$Home.country == 'Turkey'] <- 'Middle East'
data$world.region[data$Home.country == 'Brazil'|
data$Home.country == 'Colombia'|data$Home.country == 'Ecuador'|
data$Home.country == 'Guatemala'| data$Home.country == 'Haiti'|
data$Home.country == 'Mexico'|data$Home.country == 'Venezuela'|
data$Home.country == 'Nicaragua'] <- 'Southern America'
table(data$world.region, useNA = "ifany")
attach(data)
##Regression Analysis:Factors that influenced the decision of international students to study in Russia
#Data Preparation: Push Factors
is.numeric(Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution)
Competitive.University.admission.process<-Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
is.numeric(Competitive.University.admission.process)
is.numeric(Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market)
Perceived.advantage.of.international.degree<-Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
is.numeric(Perceived.advantage.of.international.degree)
#creating a data frame for push factors
PushFactors <-data.frame(language.of.instruction, degree,
age, Gender, world.region, Home.country,
Unavailability.of.the.desired.study.program,Low.quality.of.education
,Competitive.University.admission.process
,Perceived.advantage.of.international.degree
,Unavailability.of.scholarship.opportunities
,Encouragement.from.my.family.to.study.abroad,Encouragement.from..my.friends.to.study.abroad
,Better.earning.prospects.abroad, The.social.prestige.of.studying.abroad
,To.experience.a.different.culture)
#exploratory factor analysis to allow for indexing.
#principal component analysis
pushpc <- princomp(~Unavailability.of.the.desired.study.program+Low.quality.of.education
+Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
+Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
+Unavailability.of.scholarship.opportunities
+Encouragement.from.my.family.to.study.abroad+Encouragement.from..my.friends.to.study.abroad
+Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad
+To.experience.a.different.culture, data = PushFactors, cor = FALSE, na.action = na.omit)
summary(pushpc)
#factor analysis
push.efa <- factanal(~Unavailability.of.the.desired.study.program+Low.quality.of.education
+Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
+Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
+Unavailability.of.scholarship.opportunities
+Encouragement.from.my.family.to.study.abroad+Encouragement.from..my.friends.to.study.abroad
+Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad
+To.experience.a.different.culture,
factors = 6, data = PushFactors , cor = FALSE, na.action = na.omit)
print(push.efa, digits=2, cutoff=.3, sort=TRUE)
push.efa1 <- factanal(~Unavailability.of.the.desired.study.program+Low.quality.of.education
+Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
+Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
+Unavailability.of.scholarship.opportunities
+Encouragement.from.my.family.to.study.abroad+Encouragement.from..my.friends.to.study.abroad
+Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad
+To.experience.a.different.culture,
factors = 5, data = PushFactors, cor = FALSE, na.action = na.omit)
print(push.efa1, digits=2, cutoff=.3, sort=TRUE)
push.efa2 <- factanal(~Unavailability.of.the.desired.study.program+Low.quality.of.education
+Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution
+Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market
+Unavailability.of.scholarship.opportunities
+Encouragement.from.my.family.to.study.abroad+Encouragement.from..my.friends.to.study.abroad
+Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad
+To.experience.a.different.culture,
factors = 4, data = PushFactors, cor = FALSE, na.action = na.omit)
print(push.efa2, digits=2, cutoff=.3, sort=TRUE)
#with p-value 0.0415, four factors are sufficient.
#indexing the correlated factors
#encouragement from family and friends
cor.test(Encouragement.from..my.friends.to.study.abroad,Encouragement.from.my.family.to.study.abroad)
encouragement.from.family.friends<-(Encouragement.from..my.friends.to.study.abroad+Encouragement.from.my.family.to.study.abroad)/2
table(encouragement.from.family.friends)
PushFactors$encouragement.from.family.friends<-recode(encouragement.from.family.friends, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PushFactors$encouragement.from.family.friends)
#advantages of studying abroad
cor.test(Better.earning.prospects.abroad,The.social.prestige.of.studying.abroad)
advantages.of.studying.abroad<-(Better.earning.prospects.abroad+The.social.prestige.of.studying.abroad)/2
table(advantages.of.studying.abroad)
PushFactors$benefits.of.studying.abroad<-recode(advantages.of.studying.abroad, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PushFactors$benefits.of.studying.abroad)
#access to education
cor.test(Unavailability.of.the.desired.study.program,Low.quality.of.education)
access.to.education<-(Unavailability.of.the.desired.study.program+Low.quality.of.education)/2
table(access.to.education)
PushFactors$access.to.education<-recode(access.to.education, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PushFactors$access.to.education)
attach(PushFactors)
#checking for cronbach's aplha to establish reliability
PushFactorsHC <-data.frame(access.to.education, Competitive.University.admission.process
,Perceived.advantage.of.international.degree
,Unavailability.of.scholarship.opportunities
,encouragement.from.family.friends
,advantages.of.studying.abroad
,To.experience.a.different.culture)
psych::alpha(PushFactorsHC)
#Regression analysis
#Push factors in Home country that influenced the decision of international students to study in Russia
#Empty model
model0<-lm(as.numeric(language.of.instruction)~1, data = PushFactors)
summary(model0)
#Full Model
model1<-lm(as.numeric(language.of.instruction)~encouragement.from.family.friends+
benefits.of.studying.abroad+access.to.education+
Competitive.University.admission.process+Perceived.advantage.of.international.degree+
Unavailability.of.scholarship.opportunities+To.experience.a.different.culture, data = PushFactors)
summary(model1)
#Full Model and controls
model2<-lm(as.numeric(language.of.instruction)~encouragement.from.family.friends+
benefits.of.studying.abroad+access.to.education+
Competitive.University.admission.process+Perceived.advantage.of.international.degree+
Unavailability.of.scholarship.opportunities+To.experience.a.different.culture+as.numeric(age)+
as.numeric(Home.country)+as.numeric(Gender), data = PushFactors)
summary(model2)
#Full Model with interaction effect
model3<-lm(as.numeric(language.of.instruction)~benefits.of.studying.abroad+
Unavailability.of.scholarship.opportunities+Competitive.University.admission.process+
(access.to.education+encouragement.from.family.friends+Perceived.advantage.of.international.degree+
To.experience.a.different.culture)*world.region, data = PushFactors)
summary(model3)
#Full Model and control with interaction effects
model4<-lm(as.numeric(language.of.instruction)~benefits.of.studying.abroad+
Unavailability.of.scholarship.opportunities+Competitive.University.admission.process+
(access.to.education+encouragement.from.family.friends+Perceived.advantage.of.international.degree+
To.experience.a.different.culture)*world.region+as.numeric(age)+as.numeric(Gender), data = PushFactors)
summary(model4)
#Graph for the models
stargazer(model1, model2, title="Regression Results: Push Factors in Home Country", align=TRUE,
no.space=TRUE, single.row=T,type = "text", intercept.bottom = F,dep.var.labels = "Move to Moscow", out="a.doc",
covariate.labels =c("Constant","Encouragement from family and friends","Benefits of studying abroad",
"Access to education", "Competitive university admission process",
"Perceieved advantage of an international degree",
"Unavailability of scholarship opportunities", "Experience a different culture",
"Age", "Home Country", "Gender"))
stargazer(model3, model4, title="Regression Results: Push Factors in Home Country", align=TRUE,column.sep.width = "-5pt",
no.space=TRUE, single.row=T,type = "text", intercept.bottom = F,dep.var.labels = "Move to Moscow", out="b.doc",
covariate.labels =c("Constant","Benefits of studying abroad","Unavailability of scholarship opportunities",
"Competitive university admission process","Access to education",
"Encouragement from family and friends","Perceieved advantage of an international degree",
"Experience a different culture","Asia", "CIS", "Europe", "Middle East", "South America",
"Age", "Gender", "Access to education*Asia", "Access to education*CIS", "Access to education*Europe",
"Access to education*Middle East", "Access to education*South America","Encouragement from family and friends*Asia",
"Encouragement from family and friends*CIS", "Encouragement from family and friends*Europe",
"Encouragement from family and friends*Middle East","Encouragement from family and friends*South America",
"Perceieved advantage of an international degree*Asia","Perceieved advantage of an international degree*CIS",
"Perceieved advantage of an international degree*Europe","Perceieved advantage of an international degree*Middle East",
"Perceieved advantage of an international degree*South America","Experience a different culture*Asia",
"Experience a different culture*CIS", "Experience a different culture*Europe",
"Encouragement from family and friends*Middle East","Experience a different culture*South America"))
#checking multicolinearity
vif(model1)
vif(model2)
vif(model3)
vif(model4)
####Pull factors that influenced the decision of international students to study in Russia
Recommendations.from.family.friends<-Personal.recommendations.from.parents..relatives..and.friends
#creating a data frame for push factors
PullFactors<-data.frame(language.of.instruction, age, Gender, world.region, Home.country,
Availability.of.desired.study.program,Higher.quality.of.education..compared.to.home.country.,
Low.cost.of.living,Low.tuition.fees,Awarded.scholarships.or.tuition.waiver,Attraction.to.Russian.culture..society,
Career.prospects.in.Russia,Recommendations.from.family.friends,cultural.proximity.with.home,
geographical.proximity.with.home,Quality.and.reputation.of.the.University,Recognition.of.the.degree.in.my.home.country,
Quality.of.the.teaching.staff,The.reputation.of.the.alumni,The.reputation.of.the.international.community,
HSE.position.in.international.university.rankings,Cost.of.tuition.for.international.students,Availability.of.scholarships,
Support.services.for.international.students,Graduates.employment.rates,HSE.s.international.strategic.alliances,
Local.employers.preference.of..degrees.awarded.by.HSE)
#exploratory factor analysis to allow for index.
#principal component analysis
fit.pc <- princomp(~Availability.of.desired.study.program+Higher.quality.of.education..compared.to.home.country.
+Low.cost.of.living+Low.tuition.fees+Awarded.scholarships.or.tuition.waiver+Attraction.to.Russian.culture..society
+Career.prospects.in.Russia+Personal.recommendations.from.parents..relatives..and.friends+cultural.proximity.with.home
+geographical.proximity.with.home+Quality.and.reputation.of.the.University+Recognition.of.the.degree.in.my.home.country
+Quality.of.the.teaching.staff+The.reputation.of.the.alumni+The.reputation.of.the.international.community
+HSE.position.in.international.university.rankings+Cost.of.tuition.for.international.students
+Availability.of.scholarships+Support.services.for.international.students+Graduates.employment.rates
+HSE.s.international.strategic.alliances+Local.employers.preference.of..degrees.awarded.by.HSE,
data = PullFactors, cor = FALSE, na.action = na.omit)
summary(fit.pc)
fit.efa <- factanal(~Availability.of.desired.study.program+Higher.quality.of.education..compared.to.home.country.
+Low.cost.of.living+Low.tuition.fees+Awarded.scholarships.or.tuition.waiver+Attraction.to.Russian.culture..society
+Career.prospects.in.Russia+Personal.recommendations.from.parents..relatives..and.friends+cultural.proximity.with.home
+geographical.proximity.with.home+Quality.and.reputation.of.the.University+Recognition.of.the.degree.in.my.home.country
+Quality.of.the.teaching.staff+The.reputation.of.the.alumni+The.reputation.of.the.international.community
+HSE.position.in.international.university.rankings+Cost.of.tuition.for.international.students
+Availability.of.scholarships+Support.services.for.international.students+Graduates.employment.rates
+HSE.s.international.strategic.alliances+Local.employers.preference.of..degrees.awarded.by.HSE,
factors = 11, data = PullFactors, cor = FALSE, na.action = na.omit)
print(fit.efa, digits=2, cutoff=.3, sort=TRUE)
fit.efa2 <- factanal(~Availability.of.desired.study.program+Higher.quality.of.education..compared.to.home.country.
+Low.cost.of.living+Low.tuition.fees+Awarded.scholarships.or.tuition.waiver+Attraction.to.Russian.culture..society
+Career.prospects.in.Russia+Personal.recommendations.from.parents..relatives..and.friends+cultural.proximity.with.home
+geographical.proximity.with.home+Quality.and.reputation.of.the.University+Recognition.of.the.degree.in.my.home.country
+Quality.of.the.teaching.staff+The.reputation.of.the.alumni+The.reputation.of.the.international.community
+HSE.position.in.international.university.rankings+Cost.of.tuition.for.international.students
+Availability.of.scholarships+Support.services.for.international.students+Graduates.employment.rates
+HSE.s.international.strategic.alliances+Local.employers.preference.of..degrees.awarded.by.HSE,
factors = 8, data = PullFactors, cor = FALSE, na.action = na.omit)
print(fit.efa2, digits=2, cutoff=.3, sort=TRUE)
#with p-value 6.54e-10 8 factors are sufficient.
#indexing the correlated pull factors
#prospect of employment
cor.test(Graduates.employment.rates,Local.employers.preference.of..degrees.awarded.by.HSE)
cor.test(Graduates.employment.rates,Career.prospects.in.Russia)
cor.test(Career.prospects.in.Russia,Local.employers.preference.of..degrees.awarded.by.HSE)
employment.prospect<-(Graduates.employment.rates+Local.employers.preference.of..degrees.awarded.by.HSE+Career.prospects.in.Russia)/3
employment.prospect<-round(employment.prospect,2)
table(employment.prospect)
PullFactors$employment.prospect<-recode(employment.prospect, "1:1.33=1; 1.67:2.33=2; 2.67:3.33=3; 3.67:4.33=4; 4.67:5=5")
table(PullFactors$employment.prospect)
#proximity
cor.test(cultural.proximity.with.home,geographical.proximity.with.home)
proximity<-(cultural.proximity.with.home+geographical.proximity.with.home)/2
table(proximity)
PullFactors$proximity<-recode(proximity, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$proximity)
#cost
cor.test(Low.cost.of.living,Low.tuition.fees)
cor.test(Low.cost.of.living,Cost.of.tuition.for.international.students)
cor.test(Low.tuition.fees,Cost.of.tuition.for.international.students)
cost.of.living<-(Low.cost.of.living+Low.tuition.fees+Cost.of.tuition.for.international.students)/3
cost.of.living<-round(cost.of.living,2)
table(cost.of.living)
PullFactors$cost.of.living<-recode(cost.of.living, "1:1.33=1; 1.67:2.33=2; 2.67:3.33=3; 3.67:4.33=4; 4.67:5=5")
table(PullFactors$cost.of.living)
#scholarships
cor.test(Awarded.scholarships.or.tuition.waiver,Availability.of.scholarships)
scholarship<-(Awarded.scholarships.or.tuition.waiver+Availability.of.scholarships)/2
table(scholarship)
PullFactors$scholarship<-recode(scholarship, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$scholarship)
#HSE Quality
cor.test(Quality.and.reputation.of.the.University,Quality.of.the.teaching.staff)
HSE.quality<-(Quality.and.reputation.of.the.University+Quality.of.the.teaching.staff)/2
table(HSE.quality)
PullFactors$HSE.quality<-recode(HSE.quality, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$HSE.quality)
#HSE Reputation
cor.test(The.reputation.of.the.alumni, The.reputation.of.the.international.community)
HSE.reputation<-(The.reputation.of.the.alumni+The.reputation.of.the.international.community)/2
table(HSE.reputation)
PullFactors$HSE.reputation<-recode(HSE.reputation, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$HSE.reputation)
#Program Choice
cor.test(Higher.quality.of.education..compared.to.home.country., Availability.of.desired.study.program)
program.choice<-(Higher.quality.of.education..compared.to.home.country.+ Availability.of.desired.study.program)/2
table(program.choice)
PullFactors$program.choice<-recode(program.choice, "1=1; 1.5:2=2; 2.5:3=3; 3.5:4=4; 4.5:5=5")
table(PullFactors$program.choice)
attach(PullFactors)
#checking for reliability (Cronbach's alpha)
PullFactorsRuHSE<-data.frame(program.choice,cost.of.living,proximity, scholarship,HSE.quality,
HSE.reputation, Attraction.to.Russian.culture..society,
Recognition.of.the.degree.in.my.home.country,Recommendations.from.family.friends,
HSE.position.in.international.university.rankings,Support.services.for.international.students,
HSE.s.international.strategic.alliances,employment.prospect)
psych::alpha(PullFactorsRuHSE)
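`psych::alpha()` reports Cronbach's alpha for the composite scale. As a minimal cross-check on toy data (not the survey), the statistic can also be computed directly from its definition:

```r
# Sketch: Cronbach's alpha from first principles, as a sanity check
# on psych::alpha(). Uses simulated Likert items, not the survey data.
cronbach_alpha <- function(df) {
  df <- na.omit(df)
  k  <- ncol(df)
  item_var  <- sum(apply(df, 2, var))   # sum of item variances
  total_var <- var(rowSums(df))         # variance of the scale total
  (k / (k - 1)) * (1 - item_var / total_var)
}

set.seed(1)
toy <- data.frame(a = sample(1:5, 50, replace = TRUE))
toy$b <- pmin(pmax(toy$a + sample(-1:1, 50, replace = TRUE), 1), 5)
toy$c <- pmin(pmax(toy$a + sample(-1:1, 50, replace = TRUE), 1), 5)
cronbach_alpha(toy)  # correlated items give a clearly positive alpha
```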
#Full model
model5<-lm(as.numeric(language.of.instruction)~employment.prospect+proximity+
cost.of.living+scholarship+HSE.quality+HSE.reputation+program.choice+
Recommendations.from.family.friends+Attraction.to.Russian.culture..society+
Recognition.of.the.degree.in.my.home.country+HSE.position.in.international.university.rankings+
Support.services.for.international.students+HSE.s.international.strategic.alliances,
data = PullFactors)
summary(model5)
#Full model with controls
model6<-lm(as.numeric(language.of.instruction)~employment.prospect+proximity+
cost.of.living+scholarship+HSE.quality+HSE.reputation+program.choice+
Recommendations.from.family.friends+Attraction.to.Russian.culture..society+
Recognition.of.the.degree.in.my.home.country+HSE.position.in.international.university.rankings+
Support.services.for.international.students+HSE.s.international.strategic.alliances+
as.numeric(age)+ as.numeric(Home.country)+as.numeric(Gender), data = PullFactors)
summary(model6)
#Full model with interaction effect
model7<-lm(as.numeric(language.of.instruction)~employment.prospect+scholarship+
HSE.quality+HSE.reputation+Recognition.of.the.degree.in.my.home.country+Support.services.for.international.students+
Attraction.to.Russian.culture..society+(Recommendations.from.family.friends+proximity+cost.of.living+
program.choice+HSE.position.in.international.university.rankings+
HSE.s.international.strategic.alliances)*world.region, data = PullFactors)
summary(model7)
#Full model and controls with interaction effect
model8<-lm(as.numeric(language.of.instruction)~employment.prospect+scholarship+
HSE.quality+HSE.reputation+Recognition.of.the.degree.in.my.home.country+
Support.services.for.international.students+Attraction.to.Russian.culture..society+
(Recommendations.from.family.friends+proximity+cost.of.living+program.choice+
HSE.position.in.international.university.rankings+HSE.s.international.strategic.alliances)*world.region+
as.numeric(age)+as.numeric(Gender), data = PullFactors)
summary(model8)
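Since model5 is nested in model6 (model6 adds the controls) and model7 is nested in model8, incremental F-tests can assess whether the added terms improve fit. This is only a sketch: `anova()` on nested `lm` fits assumes both models were estimated on the same complete-case rows, which may not hold if the controls contain missing values.

```r
# Nested-model comparisons (assumes identical estimation samples):
anova(model5, model6)  # do the demographic controls add explanatory power?
anova(model7, model8)  # same question for the interaction models
```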
#Regression tables for the models
stargazer(model5, model6, title="Regression Results: Pull Factors", align=TRUE,
no.space=TRUE, single.row=T,type = "text", intercept.bottom = F,dep.var.labels = "Move to Moscow",out = "c.doc",
covariate.labels =c("Constant","Employment prospects","Proximity","Cost of living", "Scholarship",
"HSE quality", "HSE reputation", "Program choice","Recommendations from family and friends",
"Attraction to Russian culture", "Recognition of HSE degree in my Home Country",
"HSE position in University rankings","Support services for International Students",
                    "HSE international strategic alliances","Age", "Home Country", "Gender"))
stargazer(model7, model8, title="Regression Results: Pull Factors", align=TRUE,
no.space=TRUE, single.row=T,type = "text", intercept.bottom = F,dep.var.labels = "Move to Moscow", out = "d.doc",
covariate.labels =c("Constant","Employment prospects","Scholarship","HSE quality","HSE reputation",
"Recognition of HSE degree in my Home Country","Support services for International Students",
"Attraction to Russian culture", "Recommendations from family and friends","Proximity",
"Cost of living", "Choice of program", "HSE position in University rankings",
"HSE international alliances","Asia", "CIS","Europe", "Middle East", "South America",
                    "Age", "Gender", "Recommendations from family and friends*Asia","Recommendations from family and friends*CIS",
                    "Recommendations from family and friends*Europe","Recommendations from family and friends*Middle East",
                    "Recommendations from family and friends*South America","Proximity*Asia", "Proximity*CIS", "Proximity*Europe",
"Proximity*Middle East","Proximity*South America", "Cost of living*Asia","Cost of living*CIS",
"Cost of living*Europe","Cost of living*Middle East","Cost of living*South America","Choice of program*Asia",
"Choice of program*CIS", "Choice of program*Europe","Choice of program*Middle East",
"Choice of program*South America", "HSE position in University rankings*Asia",
"HSE position in University rankings*CIS", "HSE position in University rankings*Europe",
"HSE position in University rankings*Middle East","HSE position in University rankings*South America",
"HSE international alliances*Asia", "HSE international alliances*CIS", "HSE international alliances*Europe",
"HSE international alliances*Middle East","HSE international alliances*South America"))
#checking for multicollinearity
vif(model5)
vif(model6)
vif(model7)
vif(model8)
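A common rule of thumb flags variance inflation factors above 5 (some texts use 10). As a sketch for the additive models:

```r
# Flag predictors exceeding a conventional VIF threshold of 5.
# For the interaction models (model7/model8), car::vif() returns GVIFs,
# so the adjusted GVIF^(1/(2*Df)) column should be inspected instead.
v <- vif(model5)
v[v > 5]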
#####Creating the Crosstab function
crosstab <- function (..., dec.places = NULL,
type = NULL,
style = "wide",
row.vars = NULL,
col.vars = NULL,
percentages = TRUE,
addmargins = TRUE,
subtotals=TRUE)
###################################################################################
# #
# Function created by Dr Paul Williamson, Dept. of Geography and Planning, #
# School of Environmental Sciences, University of Liverpool, UK. #
# #
# Adapted from the function ctab() in the catspec package. #
# #
# Version: 12th July 2013 #
# #
# Output best viewed using the companion function print.crosstab() #
# #
###################################################################################
#Declare function used to convert frequency counts into relevant type of proportion or percentage
{
mk.pcnt.tbl <- function(tbl, type) {
a <- length(row.vars)
b <- length(col.vars)
mrgn <- switch(type, column.pct = c(row.vars[-a], col.vars),
row.pct = c(row.vars, col.vars[-b]),
joint.pct = c(row.vars[-a], col.vars[-b]),
total.pct = NULL)
tbl <- prop.table(tbl, mrgn)
if (percentages) {
tbl <- tbl * 100
}
tbl
}
#Find no. of vars (all; row; col) for use in subsequent code
n.row.vars <- length(row.vars)
n.col.vars <- length(col.vars)
n.vars <- n.row.vars + n.col.vars
#Check to make sure all user-supplied arguments have valid values
stopifnot(as.integer(dec.places) == dec.places, dec.places > -1)
#type: see next section of code
stopifnot(is.character(style))
stopifnot(is.logical(percentages))
stopifnot(is.logical(addmargins))
stopifnot(is.logical(subtotals))
stopifnot(n.vars>=1)
#Convert supplied table type(s) into full text string (e.g. "f" becomes "frequency")
#If invalid type supplied, failed match gives user automatic error message
types <- NULL
choices <- c("frequency", "row.pct", "column.pct", "joint.pct", "total.pct")
for (tp in type) types <- c(types, match.arg(tp, choices))
type <- types
#If no type supplied, default to 'frequency + total' for univariate tables and to
#'frequency' for multi-dimensional tables
#For univariate table....
if (n.vars == 1) {
if (is.null(type)) {
# default = freq count + total.pct
type <- c("frequency", "total.pct")
#row.vars <- 1
} else {
#and any requests for row / col / joint.pct must be changed into requests for 'total.pct'
type <- ifelse(type == "frequency", "frequency", "total.pct")
}
#For multivariate tables...
} else if (is.null(type)) {
# default = frequency count
type <- "frequency"
}
#Check for integrity of requested analysis and adjust values of function arguments as required
if ((addmargins==FALSE) & (subtotals==FALSE)) {
warning("WARNING: Request to suppress subtotals (subtotals=FALSE) ignored because no margins requested (addmargins=FALSE)")
subtotals <- TRUE
}
if ((n.vars>1) & (length(type)>1) & (addmargins==TRUE)) {
warning("WARNING: Only row totals added when more than one table type requested")
#Code lower down selecting type of margin implements this...
}
if ((length(type)>1) & (subtotals==FALSE)) {
warning("WARNING: Can only supply one table type when requesting suppression of subtotals; suppression of subtotals not executed")
subtotals <- TRUE
}
if ((length(type)==1) & (subtotals==FALSE)) {
choices <- c("frequency", "row.pct", "column.pct", "joint.pct", "total.pct")
tp <- match.arg(type, choices)
if (tp %in% c("row.pct","column.pct","joint.pct")) {
warning("WARNING: subtotals can only be suppressed for tables of type 'frequency' or 'total.pct'")
subtotals<- TRUE
}
}
if ((n.vars > 2) & (n.col.vars>1) & (subtotals==FALSE))
warning("WARNING: suppression of subtotals assumes only 1 col var; table flattened accordingly")
if ( (subtotals==FALSE) & (n.vars>2) ) {
#If subtotals not required AND total table vars > 2
#Reassign all but last col.var as row vars
#[because, for simplicity, crosstabs assumes removal of subtotals uses tables with only ONE col var]
#N.B. Subtotals only present in tables with > 2 cross-classified vars...
if (length(col.vars)>1) {
row.vars <- c(row.vars,col.vars[-length(col.vars)])
col.vars <- col.vars[length(col.vars)]
n.row.vars <- length(row.vars)
n.col.vars <- 1
}
}
#If dec.places not set by user, set to 2 unless only one table of type frequency requested,
#in which case set to 0. [Leaves user with possibility of having frequency tables with > 0 dp]
if (is.null(dec.places)) {
if ((length(type)==1) & (type[1]=="frequency")) {
dec.places <- 0
} else {
dec.places <-2
}
}
#Take the original input data, whatever form originally supplied in,
#convert into table format using requested row and col vars, and save as 'tbl'
args <- list(...)
if (length(args) > 1) {
if (!all(sapply(args, is.factor)))
stop("If more than one argument is passed then all must be factors")
tbl <- table(...)
}
else {
if (is.factor(...)) {
tbl <- table(...)
}
else if (is.table(...)) {
tbl <- eval(...)
}
else if (is.data.frame(...)) {
#tbl <- table(...)
if (is.null(row.vars) && is.null(col.vars)) {
tbl <- table(...)
}
else {
var.names <- c(row.vars,col.vars)
A <- (...)
tbl <- table(A[var.names])
if (length(var.names) == 1) names(dimnames(tbl)) <- var.names
#[table() only autocompletes dimnames for multivariate crosstabs of dataframes]
}
}
else if (inherits(..., "ftable")) {
tbl <- eval(...)
if (is.null(row.vars) && is.null(col.vars)) {
row.vars <- names(attr(tbl, "row.vars"))
col.vars <- names(attr(tbl, "col.vars"))
}
tbl <- as.table(tbl)
}
else if (inherits(..., "ctab")) {
tbl <- eval(...)
if (is.null(row.vars) && is.null(col.vars)) {
row.vars <- tbl$row.vars
col.vars <- tbl$col.vars
}
for (opt in c("dec.places", "type", "style", "percentages",
"addmargins", "subtotals")) if (is.null(get(opt)))
assign(opt, eval(parse(text = paste("tbl$", opt,
sep = ""))))
tbl <- tbl$table
}
else {
stop("first argument must be either factors or a table object")
}
}
#Convert supplied table style into full text string (e.g. "l" becomes "long")
style <- match.arg(style, c("long", "wide"))
#Extract row and col names to be used in creating 'tbl' from supplied input data
nms <- names(dimnames(tbl))
z <- length(nms)
if (!is.null(row.vars) && !is.numeric(row.vars)) {
row.vars <- order(match(nms, row.vars), na.last = NA)
}
if (!is.null(col.vars) && !is.numeric(col.vars)) {
col.vars <- order(match(nms, col.vars), na.last = NA)
}
if (!is.null(row.vars) && is.null(col.vars)) {
col.vars <- (1:z)[-row.vars]
}
if (!is.null(col.vars) && is.null(row.vars)) {
row.vars <- (1:z)[-col.vars]
}
if (is.null(row.vars) && is.null(col.vars)) {
col.vars <- z
row.vars <- (1:z)[-col.vars]
}
#Take the original input data, converted into table format using supplied row and col vars (tbl)
#and create a second version (crosstab) which stores results as percentages if a percentage table type is requested.
if (type[1] == "frequency")
crosstab <- tbl
else
crosstab <- mk.pcnt.tbl(tbl, type[1])
#If multiple table types requested, create and add these to
if (length(type) > 1) {
tbldat <- as.data.frame.table(crosstab)
z <- length(names(tbldat)) + 1
tbldat[z] <- 1
pcntlab <- type
pcntlab[match("frequency", type)] <- "Count"
pcntlab[match("row.pct", type)] <- "Row %"
pcntlab[match("column.pct", type)] <- "Column %"
pcntlab[match("joint.pct", type)] <- "Joint %"
pcntlab[match("total.pct", type)] <- "Total %"
for (i in 2:length(type)) {
if (type[i] == "frequency")
crosstab <- tbl
else crosstab <- mk.pcnt.tbl(tbl, type[i])
crosstab <- as.data.frame.table(crosstab)
crosstab[z] <- i
tbldat <- rbind(tbldat, crosstab)
}
tbldat[[z]] <- as.factor(tbldat[[z]])
levels(tbldat[[z]]) <- pcntlab
crosstab <- xtabs(Freq ~ ., data = tbldat)
names(dimnames(crosstab))[z - 1] <- ""
}
#Add margins if required, adding only those margins appropriate to user request
if (addmargins==TRUE) {
vars <- c(row.vars,col.vars)
if (length(type)==1) {
if (type=="row.pct")
{ crosstab <- addmargins(crosstab,margin=c(vars[n.vars]))
tbl <- addmargins(tbl,margin=c(vars[n.vars]))
}
else
{ if (type=="column.pct")
{ crosstab <- addmargins(crosstab,margin=c(vars[n.row.vars]))
tbl <- addmargins(tbl,margin=c(vars[n.row.vars]))
}
else
{ if (type=="joint.pct")
{ crosstab <- addmargins(crosstab,margin=c(vars[(n.row.vars)],vars[n.vars]))
tbl <- addmargins(tbl,margin=c(vars[(n.row.vars)],vars[n.vars]))
}
else #must be total.pct OR frequency
{ crosstab <- addmargins(crosstab)
tbl <- addmargins(tbl)
}
}
}
}
#If more than one table type requested, only adding row totals makes any sense...
if (length(type)>1) {
crosstab <- addmargins(crosstab,margin=c(vars[n.vars]))
tbl <- addmargins(tbl,margin=c(vars[n.vars]))
}
}
#If subtotals not required, and total vars > 2, create dataframe version of table, with relevant
#subtotal rows / cols dropped [Subtotals only present in tables with > 2 cross-classified vars]
t1 <- NULL
if ( (subtotals==FALSE) & (n.vars>2) ) {
#Create version of crosstab in ftable format
t1 <- crosstab
t1 <- ftable(t1,row.vars=row.vars,col.vars=col.vars)
#Convert to a dataframe
t1 <- as.data.frame(format(t1),stringsAsFactors=FALSE)
#Remove backslashes from category names AND colnames
t1 <- apply(t1[,],2, function(x) gsub("\"","",x))
#Remove preceding and trailing spaces from category names to enable accurate capture of 'sum' rows/cols
#[Use of grep might extract category labels with 'sum' as part of a longer one or two word string...]
t1 <- apply(t1,2,function(x) gsub("[[:space:]]*$","",gsub("^[[:space:]]*","",x)))
#Reshape dataframe so that variable and category labels display as required
#(a) Move col category names down one row; and move col variable name one column to right
t1[2,(n.row.vars+1):ncol(t1)] <- t1[1,(n.row.vars+1):ncol(t1)]
t1[1,] <- ""
t1[1,(n.row.vars+2)] <- t1[2,(n.row.vars+1)]
#(b) Drop the now redundant column separating the row.var labels from the table data + col.var labels
t1 <- t1[,-(n.row.vars+1)]
#In 'lab', assign category labels for each variable to all rows (to allow identification of sub-totals)
lab <- t1[,1:n.row.vars]
for (c in 1:n.row.vars) {
for (r in 2:nrow(lab)) {
if (lab[r,c]=="") lab[r,c] <- lab[r-1,c]
}
}
lab <- (apply(lab[,1:n.row.vars],2,function(x) x=="Sum"))
lab <- apply(lab,1,sum)
#Filter out rows of dataframe containing subtotals
t1 <- t1[((lab==0) | (lab==n.row.vars)),]
#Move the 'Sum' label associated with last row to the first column; in the process
#setting the final row labels associated with other row variables to ""
t1[nrow(t1),1] <- "Sum"
t1[nrow(t1),(2:n.row.vars)] <- ""
#set row and column names to NULL
rownames(t1) <- NULL
colnames(t1) <- NULL
}
#Create output object 'result' [class: crosstab]
result <- NULL
#(a) record of argument values used to produce tabular output
result$row.vars <- row.vars
result$col.vars <- col.vars
result$dec.places <- dec.places
result$type <- type
result$style <- style
result$percentages <- percentages
result$addmargins <- addmargins
result$subtotals <- subtotals
#(b) tabular output [3 variants]
result$table <- tbl #Stores original cross-tab frequency counts without margins [class: table]
result$crosstab <- crosstab #Stores cross-tab in table format using requested style(frequency/pct) and table margins (on/off)
#[class: table]
result$crosstab.nosub <- t1 #crosstab with subtotals suppressed [class: dataframe; or NULL if no subtotals suppressed]
class(result) <- "crosstab"
#Return 'result' as output of function
result
}
print.crosstab <- function(x,dec.places=x$dec.places,subtotals=x$subtotals,...) {
###################################################################################
# #
# Function created by Dr Paul Williamson, Dept. of Geography and Planning, #
# School of Environmental Sciences, University of Liverpool, UK. #
# #
# Adapted from the function print.ctab() in the catspec package. #
# #
# Version: 12th July 2013 #
# #
# Designed to provide optimal viewing of the output from crosstab() #
# #
###################################################################################
row.vars <- x$row.vars
col.vars <- x$col.vars
n.row.vars <- length(row.vars)
n.col.vars <- length(col.vars)
n.vars <- n.row.vars + n.col.vars
if (length(x$type)>1) {
z<-length(names(dimnames(x$crosstab)))
if (x$style=="long") {
row.vars<-c(row.vars,z)
} else {
col.vars<-c(z,col.vars)
}
}
if (n.vars==1) {
if (length(x$type)==1) {
tmp <- data.frame(round(x$crosstab,x$dec.places))
colnames(tmp)[2] <- ifelse(x$type=="frequency","Count","%")
print(tmp,row.names=FALSE)
} else {
print(round(x$crosstab,x$dec.places))
}
}
#If table has only 2 dimensions, or subtotals required for >2 dimensional table,
#print table using ftable() on x$crosstab
if ((n.vars == 2) | ((subtotals==TRUE) & (n.vars>2))) {
tbl <- ftable(x$crosstab,row.vars=row.vars,col.vars=col.vars)
if (!all(as.integer(tbl)==as.numeric(tbl))) tbl <- round(tbl,dec.places)
print(tbl,...)
}
#If subtotals NOT required AND > 2 dimensions, print table using write.table() on x$crosstab.nosub
if ((subtotals==FALSE) & (n.vars>2)) {
t1 <- x$crosstab.nosub
#Convert numbers to required decimal places, right aligned
width <- max( nchar(t1[1,]), nchar(t1[2,]), 7 )
dec.places <- x$dec.places
number.format <- paste("%",width,".",dec.places,"f",sep="")
t1[3:nrow(t1),((n.row.vars+1):ncol(t1))] <- sprintf(number.format,as.numeric(t1[3:nrow(t1),((n.row.vars+1):ncol(t1))]))
#Adjust column variable label to same width as numbers, left aligned, padding with trailing spaces as required
col.var.format <- paste("%-",width,"s",sep="")
t1[1,(n.row.vars+1):ncol(t1)] <- sprintf(col.var.format,t1[1,(n.row.vars+1):ncol(t1)])
#Adjust column category labels to same width as numbers, right aligned, padding with preceding spaces as required
col.cat.format <- paste("%",width,"s",sep="")
t1[2,(n.row.vars+1):ncol(t1)] <- sprintf(col.cat.format,t1[2,(n.row.vars+1):ncol(t1)])
#Adjust row labels so that each column is of fixed width, using trailing spaces as required
for (i in 1:n.row.vars) {
width <- max(nchar(t1[,i])) + 2
row.lab.format <- paste("%-",width,"s",sep="")
t1[,i] <- sprintf(row.lab.format,t1[,i])
}
write.table(t1,quote=FALSE,col.names=FALSE,row.names=FALSE)
}
}
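With both `crosstab()` and its print method defined, a self-contained toy call on built-in data (independent of the survey) illustrates the interface used throughout the descriptive statistics below:

```r
# Toy usage: row percentages of gear count within cylinder count, mtcars.
toy <- data.frame(cyl = factor(mtcars$cyl), gear = factor(mtcars$gear))
crosstab(toy, row.vars = "cyl", col.vars = "gear",
         type = "row.pct", dec.places = 1)
```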
library(sjPlot)
library(sjmisc)
library(sjlabelled)
theme_set(theme_bw())
##Descriptive Statistics: Demographic information
#degree
freq(degree, display.type = F,report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(degree, geom.colors = c("red", "yellow"), title = "Degree")
crosstab(data, row.vars = c("degree"),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#language of instruction
freq(language.of.instruction, display.type = F,
report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(language.of.instruction, geom.colors = "#336699", title = "Language of instruction")
crosstab(data, row.vars = c("language.of.instruction"),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#Gender
freq(Gender, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(Gender, geom.colors = "#336699", title = "Gender")
crosstab(data, row.vars = c("Gender"),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#Age
freq(age, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(age, geom.colors = "#336699", title = "Age")
crosstab(data, row.vars = c("age"),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#World Region
freq(world.region, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
plot_frq(world.region, geom.colors = "#336699", title = "World Region")
#Family income
freq(What.was.your.annual.family.income.when.you.were.applying.to.study.abroad..estimate.in.US.dollars.,
display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
crosstab(data, row.vars = "family.income",
col.vars ="world.region",
type = "f", style = "long", addmargins = T, dec.places = 0, subtotals = T)
#Length of stay in Russia
freq(How.long.have.you.been.in.Russia.studying.for.your.current.program.,
display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
#Student finance status
freq(How.are.you.financing.your.participation.in.the.program.,
display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
crosstab(data, row.vars = c("How.are.you.financing.your.participation.in.the.program."),
col.vars ="world.region", type = "f", style = "long",
addmargins = T, dec.places = 0, subtotals = T)
#stay in Russia
freq(Have.you.ever.been.in.Russia.before.you.enrolled.for.your.current.program,
display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
#Data preparation: Push factors in home country influencing the decision to study in Russia
#unavailable program
unavailable.program <-as.factor(Unavailability.of.the.desired.study.program)
unavailable.program <- factor(unavailable.program,levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(unavailable.program, Unavailability.of.the.desired.study.program)
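The `as.factor()`/`factor()` relabelling block above is repeated verbatim for every item below. A hypothetical helper (`to_likert` is not in the original script) sketches the same mapping once:

```r
# Hypothetical helper mirroring the repeated factor() calls:
# map a 1-5 numeric item onto the influence labels used throughout.
to_likert <- function(x) {
  factor(x, levels = 1:5,
         labels = c("Not at all influential", "Slightly influential",
                    "Somewhat influential", "Very influential",
                    "Extremely influential"))
}
# e.g. unavailable.program <- to_likert(Unavailability.of.the.desired.study.program)
```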
#quality of education
low.educational.quality<-as.factor(Low.quality.of.education)
low.educational.quality <- factor(low.educational.quality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.educational.quality, Low.quality.of.education)
#competitive University admission in home country
competitive.admission<-as.factor(Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution)
competitive.admission <- factor(competitive.admission,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(competitive.admission, Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution)
#Advantage of international degree
advantage.of.international.degree<-as.factor(Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market)
advantage.of.international.degree <- factor(advantage.of.international.degree,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(advantage.of.international.degree, Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market)
#unavailability of scholarships
unavailability.of.scholarship<-as.factor(Unavailability.of.scholarship.opportunities)
unavailability.of.scholarship <- factor(unavailability.of.scholarship,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(unavailability.of.scholarship, Unavailability.of.scholarship.opportunities)
#encouragement from family
encouragement.from.family<-as.factor(Encouragement.from.my.family.to.study.abroad)
encouragement.from.family <- factor(encouragement.from.family,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(encouragement.from.family, Encouragement.from.my.family.to.study.abroad)
#encouragement from friends
encouragement.from.friends<-as.factor(Encouragement.from..my.friends.to.study.abroad)
encouragement.from.friends <- factor(encouragement.from.friends,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(encouragement.from.friends, Encouragement.from..my.friends.to.study.abroad)
#better earning prospects
better.earning.prospects<-as.factor(Better.earning.prospects.abroad)
better.earning.prospects <- factor(better.earning.prospects,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(better.earning.prospects, Better.earning.prospects.abroad)
#social prestige
social.prestige<-as.factor(The.social.prestige.of.studying.abroad)
social.prestige <- factor(social.prestige,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(social.prestige, The.social.prestige.of.studying.abroad)
#experience different culture
experience.different.culture<-as.factor(To.experience.a.different.culture)
experience.different.culture <- factor(experience.different.culture,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(experience.different.culture, To.experience.a.different.culture)
#Descriptive Statistics: Push Factors influencing the decision to move to Russia
#push factors
pushfactor<-data.frame(unavailable.program,low.educational.quality,competitive.admission,
advantage.of.international.degree,unavailability.of.scholarship,
encouragement.from.family,encouragement.from.friends,better.earning.prospects,
social.prestige,experience.different.culture)
freq(pushfactor, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
HCpushfactor<-data.frame(degree, world.region, unavailable.program,low.educational.quality,competitive.admission,
advantage.of.international.degree,unavailability.of.scholarship,
encouragement.from.family,encouragement.from.friends,better.earning.prospects,
social.prestige,experience.different.culture)
#Crosstab between Push Factors and World Region with degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Unavailability.of.the.desired.study.program", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Low.quality.of.education", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Competitive.university.admission.process..difficult.to.gain.admission.to.a.quality.local.institution",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Perceived.advantage.of.international.degree.over.a.local.one.at.the.local.job.market",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Unavailability.of.scholarship.opportunities",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Encouragement.from.my.family.to.study.abroad",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Encouragement.from..my.friends.to.study.abroad",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.earning.prospects.abroad",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="The.social.prestige.of.studying.abroad",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="To.experience.a.different.culture",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
is.factor(world.region)
#Anova
#Unavailable program
data$region<-as.factor(world.region)
attach(data)
leveneTest(Unavailability.of.the.desired.study.program ~ region, data = data)
res.aov <- aov(Unavailability.of.the.desired.study.program ~ region, data = data)
# Summary of the analysis
summary(res.aov)
TukeyHSD(res.aov)
summary(glht(res.aov, linfct = mcp(region = "Tukey")))
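The same Levene test / one-way ANOVA / Tukey HSD workflow can be sketched end-to-end on simulated data (three groups, one shifted mean), independent of the survey:

```r
# Self-contained sketch of the homogeneity -> omnibus -> post-hoc sequence.
set.seed(42)
toy <- data.frame(
  score = c(rnorm(30, mean = 3), rnorm(30, mean = 3), rnorm(30, mean = 3.8)),
  grp   = factor(rep(c("A", "B", "C"), each = 30)))
car::leveneTest(score ~ grp, data = toy)  # homogeneity of variance
toy.aov <- aov(score ~ grp, data = toy)
summary(toy.aov)                          # omnibus F-test
TukeyHSD(toy.aov)                         # pairwise comparisons
```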
#Graphs
#Plots for the push factors in the home countries
#Push factors
plot_stackfrq(pushfactor)
#Unavailability of desired study program
plot_grpfrq(unavailable.program, world.region, geom.colors = "gs", title = "Unavailability of desired study program in Home Country")
plot_grpfrq(unavailable.program, degree, geom.colors = "gs", title = "Unavailability of Program")
#Low quality of education
plot_grpfrq(low.educational.quality, world.region, geom.colors = "gs", title = "Low quality of education in Home Country")
plot_grpfrq(low.educational.quality, degree, geom.colors = "gs", title = "Low quality of education")
#Competitive university admission
plot_grpfrq(competitive.admission, world.region, geom.colors = "gs", title = "Competitive University admission process in Home Country")
plot_grpfrq(competitive.admission, degree, geom.colors = "gs", title = "Competitive university admission")
#Perceived advantage of international degree
plot_grpfrq(advantage.of.international.degree, world.region, geom.colors = "gs",
            title = "Perceived advantage of international degree over a local degree")
plot_grpfrq(advantage.of.international.degree, degree, geom.colors = "gs",
            title = "Perceived advantage of international degree")
#Unavailability of Scholarship
plot_grpfrq(unavailability.of.scholarship, world.region, geom.colors = "gs", title = "Unavailability of scholarship opportunities in Home Country")
plot_grpfrq(unavailability.of.scholarship, degree, geom.colors = "gs", title = "Unavailability of scholarship")
#Encouragement from family
plot_grpfrq(encouragement.from.family, world.region, geom.colors = "gs", title = "Encouragement from family")
plot_grpfrq(encouragement.from.family, degree, geom.colors = "gs", title = "Encouragement from family")
#Encouragement from friends
plot_grpfrq(encouragement.from.friends, world.region, geom.colors = "gs", title = "Encouragement from friends")
plot_grpfrq(encouragement.from.friends, degree, geom.colors = "gs", title = "Encouragement from friends")
#Better earning prospects
plot_grpfrq(better.earning.prospects, world.region, geom.colors = "gs", title = "Better earning prospects abroad")
plot_grpfrq(better.earning.prospects, degree, geom.colors = "gs", title = "Better earning prospects")
#The social prestige of studying abroad
plot_grpfrq(social.prestige, world.region, geom.colors = "gs", title = "The social prestige of studying abroad")
plot_grpfrq(social.prestige, degree, geom.colors = "gs", title = "The social prestige of studying abroad")
#Experience different culture
plot_grpfrq(experience.different.culture, world.region, geom.colors = "gs", title = "Experience different culture abroad")
plot_grpfrq(experience.different.culture, degree, geom.colors = "gs", title = "Experience different culture")
#Data preparation: Pull factors in Russia influencing the decision to study in Russia
#availability of desired program
available.study.program <-as.factor(Availability.of.desired.study.program)
available.study.program <- factor(available.study.program,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(available.study.program, Availability.of.desired.study.program)
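#Every 1-5 item in this script repeats the same recode-and-label block. As a
#hedged alternative (the helper name to_influence_factor is my own, not from
#the original script), the pattern could be written once and reused:

```r
# Hypothetical helper: convert a 1-5 Likert item to a labelled factor.
# The function name is an assumption; base R only, no extra packages needed.
to_influence_factor <- function(x) {
  factor(x,
         levels = 1:5,
         labels = c("Not at all influential",
                    "Slightly influential",
                    "Somewhat influential",
                    "Very influential",
                    "Extremely influential"))
}

# Usage sketch, e.g.:
# available.study.program <- to_influence_factor(Availability.of.desired.study.program)
```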
#high quality of education
high.educational.quality<-as.factor(Higher.quality.of.education..compared.to.home.country.)
high.educational.quality <- factor(high.educational.quality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high.educational.quality, Higher.quality.of.education..compared.to.home.country.)
#low cost of living
low.cost.living<-as.factor(Low.cost.of.living)
low.cost.living <- factor(low.cost.living,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.cost.living,Low.cost.of.living)
#low tuition fees
low.tuition<-as.factor(Low.tuition.fees)
low.tuition <- factor(low.tuition,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.tuition, Low.tuition.fees)
#Awarded scholarships
scholarship.tuitionwaiver<-as.factor(Awarded.scholarships.or.tuition.waiver)
scholarship.tuitionwaiver <- factor(scholarship.tuitionwaiver,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(scholarship.tuitionwaiver,Awarded.scholarships.or.tuition.waiver)
#Attraction to Russian culture
russian.culture<-as.factor(Attraction.to.Russian.culture..society)
russian.culture <- factor(russian.culture,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(russian.culture, Attraction.to.Russian.culture..society)
#Career prospects in Russia
career.prospects<-as.factor(Career.prospects.in.Russia)
career.prospects <- factor(career.prospects,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(career.prospects, Career.prospects.in.Russia)
#Recommendations from family and friends
family.friends.recommendations<-as.factor(Personal.recommendations.from.parents..relatives..and.friends)
family.friends.recommendations <- factor(family.friends.recommendations,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family.friends.recommendations, Personal.recommendations.from.parents..relatives..and.friends)
#Cultural Proximity
cultural.proximity<-as.factor(cultural.proximity.with.home)
cultural.proximity <- factor(cultural.proximity,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(cultural.proximity, cultural.proximity.with.home)
#Geographical proximity
geographical.proximity<-as.factor(geographical.proximity.with.home)
geographical.proximity <- factor(geographical.proximity,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(geographical.proximity, geographical.proximity.with.home)
#Descriptive Statistics: Pull Factors in Russia influencing the decision to move to Russia
Pullfactor_Russia<-data.frame(available.study.program, high.educational.quality, low.cost.living,
low.tuition, scholarship.tuitionwaiver, russian.culture, career.prospects,
family.friends.recommendations, cultural.proximity, geographical.proximity)
freq(Pullfactor_Russia, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
Pullfactor_Ru<-data.frame(world.region, degree, available.study.program, high.educational.quality, low.cost.living,
low.tuition, scholarship.tuitionwaiver, russian.culture, career.prospects,
family.friends.recommendations, cultural.proximity, geographical.proximity)
#Crosstab between Pull Factors in Russia and World Region with degree
crosstab(data, row.vars = c("world.region","degree"),
         col.vars ="Availability.of.desired.study.program", type = c("f", "t"), style = "long",
         addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.quality.of.education..compared.to.home.country.", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Low.cost.of.living", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
         col.vars ="Awarded.scholarships.or.tuition.waiver", type = c("f", "t"), style = "long",
         addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Attraction.to.Russian.culture..society", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Career.prospects.in.Russia", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Personal.recommendations.from.parents..relatives..and.friends",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="cultural.proximity.with.home", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="geographical.proximity.with.home", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
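#The crosstab calls above differ only in col.vars. A sketch of a wrapper
#(hypothetical name run_pull_crosstabs; crosstab() is the same function used
#throughout this script) that loops over the column names:

```r
# Hypothetical refactor: run the same crosstab once per pull-factor column.
# Assumes `data` and crosstab() are already available, as elsewhere in the script.
run_pull_crosstabs <- function(cols) {
  for (cv in cols) {
    crosstab(data, row.vars = c("world.region", "degree"),
             col.vars = cv, type = c("f", "t"), style = "long",
             addmargins = TRUE, dec.places = 2, subtotals = TRUE)
  }
}

# Usage sketch, e.g.:
# run_pull_crosstabs(c("Low.cost.of.living", "Career.prospects.in.Russia"))
```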
#Plots for the pull factors in Russia
plot_stackfrq(Pullfactor_Russia)
#Availability of desired study program
plot_grpfrq(available.study.program, world.region, geom.colors = "gs", title = "Availability of desired study program in Russia")
plot_grpfrq(available.study.program, degree, geom.colors = "gs", title = "Availability of desired study program")
#High quality of education
plot_grpfrq(high.educational.quality, world.region, geom.colors = "gs", title = "High quality of education in Russia")
plot_grpfrq(high.educational.quality, degree, geom.colors = "gs", title = "High quality of education")
#Low cost of living
plot_grpfrq(low.cost.living, world.region, geom.colors = "gs", title = "Low cost of living in Russia")
plot_grpfrq(low.cost.living, degree, geom.colors = "gs", title = "Low cost of living")
#Low tuition fees
plot_grpfrq(low.tuition, world.region, geom.colors = "gs", title = "Low tuition fees in Russia")
plot_grpfrq(low.tuition, degree, geom.colors = "gs", title = "Low tuition fees")
#Scholarship or tuition waiver
plot_grpfrq(scholarship.tuitionwaiver, world.region, geom.colors = "gs", title = "Scholarship or tuition waiver")
plot_grpfrq(scholarship.tuitionwaiver, degree, geom.colors = "gs", title = "Scholarship or tuition waiver")
#Attraction to Russian culture
plot_grpfrq(russian.culture, world.region, geom.colors = "gs", title = "Attraction to Russian culture")
plot_grpfrq(russian.culture, degree, geom.colors = "gs", title = "Attraction to Russian culture")
#Career prospects in Russia
plot_grpfrq(career.prospects, world.region, geom.colors = "gs", title = "Career prospects in Russia")
plot_grpfrq(career.prospects, degree, geom.colors = "gs", title = "Career prospects in Russia")
#Recommendations from family and friends
plot_grpfrq(family.friends.recommendations, world.region, geom.colors = "gs", title = "Recommendations from family and friends")
plot_grpfrq(family.friends.recommendations, degree, geom.colors = "gs", title = "Recommendations from family and friends")
#Cultural proximity
plot_grpfrq(cultural.proximity, world.region, geom.colors = "gs", title = "Home Country's cultural proximity to Russia")
plot_grpfrq(cultural.proximity, degree, geom.colors = "gs", title = "Cultural proximity")
#Geographical proximity
plot_grpfrq(geographical.proximity, world.region, geom.colors = "gs", title = "Home Country's geographical proximity to Russia")
plot_grpfrq(geographical.proximity, degree, geom.colors = "gs", title = "Geographical proximity")
#Data preparation: Pull factors in HSE
#Quality and Reputation of HSE
HSE.qualityandreputation <-as.factor(Quality.and.reputation.of.the.University)
HSE.qualityandreputation <- factor(HSE.qualityandreputation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(HSE.qualityandreputation, Quality.and.reputation.of.the.University)
#Recognition of HSE degree
recognition.of.HSE.degree<-as.factor(Recognition.of.the.degree.in.my.home.country)
recognition.of.HSE.degree <- factor(recognition.of.HSE.degree,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(recognition.of.HSE.degree, Recognition.of.the.degree.in.my.home.country)
#Quality of teaching staff
quality.teachers<-as.factor(Quality.of.the.teaching.staff)
quality.teachers <- factor(quality.teachers,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(quality.teachers,Quality.of.the.teaching.staff)
#Reputation of the alumni
alumni.reputation<-as.factor(The.reputation.of.the.alumni)
alumni.reputation <- factor(alumni.reputation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(alumni.reputation, The.reputation.of.the.alumni)
#reputation of the international community
internationalcommunity.reputation<-as.factor(The.reputation.of.the.international.community)
internationalcommunity.reputation <- factor(internationalcommunity.reputation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(internationalcommunity.reputation,The.reputation.of.the.international.community)
#HSE rank
HSE.rank<-as.factor(HSE.position.in.international.university.rankings)
HSE.rank <- factor(HSE.rank,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(HSE.rank, HSE.position.in.international.university.rankings)
#Cost of tuition
tuition.cost<-as.factor(Cost.of.tuition.for.international.students)
tuition.cost <- factor(tuition.cost,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(tuition.cost, Cost.of.tuition.for.international.students)
#Availability of scholarship
available.scholarships<-as.factor(Availability.of.scholarships)
available.scholarships <- factor(available.scholarships,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(available.scholarships,Availability.of.scholarships)
#Support Services for international students
international.students.support<-as.factor(Support.services.for.international.students)
international.students.support <- factor(international.students.support,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(international.students.support, Support.services.for.international.students)
#Graduate employment rate
graduate.employment<-as.factor(Graduates.employment.rates)
graduate.employment <- factor(graduate.employment,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(graduate.employment, Graduates.employment.rates)
#HSE international strategic alliances
HSE.alliances<-as.factor(HSE.s.international.strategic.alliances)
HSE.alliances <- factor(HSE.alliances,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(HSE.alliances, HSE.s.international.strategic.alliances)
#Local Employers preference for HSE degrees
employers.preference.for.HSE.degrees<-as.factor(Local.employers.preference.of..degrees.awarded.by.HSE)
employers.preference.for.HSE.degrees <- factor(employers.preference.for.HSE.degrees,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(employers.preference.for.HSE.degrees, Local.employers.preference.of..degrees.awarded.by.HSE)
#Descriptive Statistics: Pull Factors in HSE influencing the decision to move to Russia
Pullfactor_HSE<-data.frame(HSE.qualityandreputation,recognition.of.HSE.degree,quality.teachers,
alumni.reputation,internationalcommunity.reputation,HSE.rank,tuition.cost,
available.scholarships,international.students.support,graduate.employment,
HSE.alliances,employers.preference.for.HSE.degrees)
freq(Pullfactor_HSE, display.type = F, report.nas = F, headings = T, cumul = F, style = "grid")
HSEPullfactor<-data.frame(world.region, degree, HSE.qualityandreputation,recognition.of.HSE.degree,quality.teachers,
alumni.reputation,internationalcommunity.reputation,HSE.rank,tuition.cost,
available.scholarships,international.students.support,graduate.employment,
HSE.alliances,employers.preference.for.HSE.degrees)
#Crosstab between Pull Factors in HSE and World Region with degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Quality.and.reputation.of.the.University", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Recognition.of.the.degree.in.my.home.country", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Quality.of.the.teaching.staff", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="The.reputation.of.the.alumni", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="The.reputation.of.the.international.community", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="HSE.position.in.international.university.rankings", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Cost.of.tuition.for.international.students", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Availability.of.scholarships", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Support.services.for.international.students", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Graduates.employment.rates", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="HSE.s.international.strategic.alliances", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Local.employers.preference.of..degrees.awarded.by.HSE", type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
#Plots for the pull factors in HSE
plot_stackfrq(Pullfactor_HSE)
#Quality and reputation of HSE
plot_grpfrq(HSE.qualityandreputation, world.region, geom.colors = "gs", title = "Quality and reputation of HSE")
plot_grpfrq(HSE.qualityandreputation, degree, geom.colors = "gs", title = "Quality and reputation of HSE")
#Recognition of degree
plot_grpfrq(recognition.of.HSE.degree, world.region, geom.colors = "gs", title = "Recognition of HSE degree in my Home Country")
plot_grpfrq(recognition.of.HSE.degree, degree, geom.colors = "gs", title = "Recognition of degree")
#Quality of teachers
plot_grpfrq(quality.teachers, world.region, geom.colors = "gs", title = "Quality of HSE teachers")
plot_grpfrq(quality.teachers, degree, geom.colors = "gs", title = "Quality of teachers")
#Reputation of alumni
plot_grpfrq(alumni.reputation, world.region, geom.colors = "gs", title = "Reputation of HSE alumni")
plot_grpfrq(alumni.reputation, degree, geom.colors = "gs", title = "Reputation of alumni")
#Reputation of International Community
plot_grpfrq(internationalcommunity.reputation, world.region, geom.colors = "gs", title = "Reputation of HSE International Community")
plot_grpfrq(internationalcommunity.reputation, degree, geom.colors = "gs", title = "Reputation of International Community")
#HSE's rank
plot_grpfrq(HSE.rank, world.region, geom.colors = "gs", title = "HSE's position on World Universities ranking")
plot_grpfrq(HSE.rank, degree, geom.colors = "gs", title = "HSE's rank")
#Cost of tuition
plot_grpfrq(tuition.cost, world.region, geom.colors = "gs", title = "Cost of tuition in HSE")
plot_grpfrq(tuition.cost, degree, geom.colors = "gs", title = "Cost of tuition")
#Availability of scholarships
plot_grpfrq(available.scholarships, world.region, geom.colors = "gs", title = "Availability of scholarships in HSE")
plot_grpfrq(available.scholarships, degree, geom.colors = "gs", title = "Availability of scholarships")
#Support services for international students
plot_grpfrq(international.students.support, world.region, geom.colors = "gs", title = "Support services for international students in HSE")
plot_grpfrq(international.students.support, degree, geom.colors = "gs", title = "Support services for international students")
#Graduate employment rate
plot_grpfrq(graduate.employment, world.region, geom.colors = "gs", title = "HSE's Graduate employment rate")
plot_grpfrq(graduate.employment, degree, geom.colors = "gs", title = "Graduate employment rate")
#HSE alliances
plot_grpfrq(HSE.alliances, world.region, geom.colors = "gs", title = "HSE strategic international alliances")
plot_grpfrq(HSE.alliances, degree, geom.colors = "gs", title = "HSE alliances")
#Local employers' preference for HSE degrees
plot_grpfrq(employers.preference.for.HSE.degrees, world.region, geom.colors = "gs", title = "Local employers preference for HSE degrees")
plot_grpfrq(employers.preference.for.HSE.degrees, degree, geom.colors = "gs", title = "Local preference for HSE degrees")
###STUDENTS' POST-GRADUATION PLANS###
#Descriptive Statistics
#Post Graduation migration plans
freq(What.are.your.plans.after.graduation., display.type = F, report.nas = F,
headings = T, cumul = F, style = "grid")
plot_grpfrq(What.are.your.plans.after.graduation., world.region, geom.colors = "gs",
title = "Post graduation plans")
plot_grpfrq(What.are.your.plans.after.graduation., degree, geom.colors = "gs",
title = "Post graduation plans")
#Stay in Russia
freq(What.will.be.your.reason.for.staying.in.Russia.after.graduation., display.type = F, report.nas = F,
headings = T, cumul = F, style = "grid")
plot_grpfrq(What.will.be.your.reason.for.staying.in.Russia.after.graduation., world.region, geom.colors = "gs",
title = "Reasons for staying in Russia after graduation")
plot_grpfrq(What.will.be.your.reason.for.staying.in.Russia.after.graduation., degree, geom.colors = "gs",
            title = "Reasons for staying in Russia after graduation")
#Return home
freq(What.will.be.your.reason.for.returning.home.after.graduation., display.type = F, report.nas = F,
headings = T, cumul = F, style = "grid")
plot_grpfrq(What.will.be.your.reason.for.returning.home.after.graduation., world.region, geom.colors = "gs",
title = "Reasons for returning home after graduation")
plot_grpfrq(What.will.be.your.reason.for.returning.home.after.graduation., degree, geom.colors = "gs",
title = "Reasons for returning home after graduation")
#move to another country
freq(What.will.be.your.reason.for.moving.to.another.country.after.graduation., display.type = F, report.nas = F,
headings = T, cumul = F, style = "grid")
plot_grpfrq(What.will.be.your.reason.for.moving.to.another.country.after.graduation., world.region, geom.colors = "gs",
title = "Reasons for moving to another country after graduation")
plot_grpfrq(What.will.be.your.reason.for.moving.to.another.country.after.graduation., degree, geom.colors = "gs",
title = "Reasons for moving to another country after graduation", show.na = F)
#Stay in Russia after graduation
#Better job opportunities
job.opportunities <-as.factor(Better.job.opportunities..in.comparison.with.home.country.)
job.opportunities <- factor(job.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(job.opportunities, Better.job.opportunities..in.comparison.with.home.country.)
#Higher quality of life
high.quality.life<-as.factor(Higher.quality.of.life..in.comparison.with.home.country.)
high.quality.life <- factor(high.quality.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high.quality.life, Higher.quality.of.life..in.comparison.with.home.country.)
#Better career opportunities
career.opportunities<-as.factor(Better.career.opportunities.and.advancement.in.chosen.profession)
career.opportunities <- factor(career.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(career.opportunities,Better.career.opportunities.and.advancement.in.chosen.profession)
#high income level
high.income.level<-as.factor(Higher.income.level)
high.income.level <- factor(high.income.level,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high.income.level, Higher.income.level)
#Ties to family and friends
family.friends.ties<-as.factor(Ties.to.family.and.friends)
family.friends.ties <- factor(family.friends.ties,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family.friends.ties,Ties.to.family.and.friends)
#international experience
international.experience<-as.factor(Gain.international.experience)
international.experience <- factor(international.experience,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(international.experience, Gain.international.experience)
#family expectations
familyexpectations<-as.factor(Family.expectations)
familyexpectations <- factor(familyexpectations,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(familyexpectations, Family.expectations)
#Restrictive cultural practices
cultural.practices<-as.factor(Restrictive.cultural.practices..eg..pressure.to.marry.)
cultural.practices <- factor(cultural.practices,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(cultural.practices, Restrictive.cultural.practices..eg..pressure.to.marry.)
#limited jobs
limited.jobs<-as.factor(Limited.job.opportunities.in.home.country)
limited.jobs <- factor(limited.jobs,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(limited.jobs, Limited.job.opportunities.in.home.country)
#lower income levels
lower.income<-as.factor(Lower.income.levels)
lower.income <- factor(lower.income,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(lower.income, Lower.income.levels)
#Lower quality of life
lower.quality.life<-as.factor(Lower.quality.of.life.2)
lower.quality.life <- factor(lower.quality.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(lower.quality.life, Lower.quality.of.life.2)
#Political persecution
politicalpersecution<-as.factor(Political.persecution)
politicalpersecution <- factor(politicalpersecution,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(politicalpersecution, Political.persecution)
#Danger or fear for one's own life
danger.to.ones.life<-as.factor(Danger.or.fear.for.one.s.own.life)
danger.to.ones.life <- factor(danger.to.ones.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(danger.to.ones.life, Danger.or.fear.for.one.s.own.life)
#Factors influencing the decision to stay in Russia
RStayFactors<-data.frame(job.opportunities,
high.quality.life,career.opportunities,
high.income.level, family.friends.ties,
international.experience, familyexpectations,
cultural.practices,limited.jobs,lower.income,lower.quality.life,
politicalpersecution,danger.to.ones.life)
RussiaStay_factors<-data.frame(world.region, degree, Better.job.opportunities..in.comparison.with.home.country.,
                               Higher.quality.of.life..in.comparison.with.home.country.,
                               Better.career.opportunities.and.advancement.in.chosen.profession,
                               Higher.income.level, Ties.to.family.and.friends,
                               Gain.international.experience, Family.expectations,
                               Restrictive.cultural.practices..eg..pressure.to.marry.,
                               Limited.job.opportunities.in.home.country, Lower.income.levels,
                               Lower.quality.of.life.2, Political.persecution, Danger.or.fear.for.one.s.own.life)
#Crosstabulation of Russia Stay factors with Region and degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.job.opportunities..in.comparison.with.home.country.",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.quality.of.life..in.comparison.with.home.country.",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.career.opportunities.and.advancement.in.chosen.profession",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.income.level",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Ties.to.family.and.friends",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Gain.international.experience",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Family.expectations",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Limited.job.opportunities.in.home.country",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.income.levels",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.quality.of.life.2",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Political.persecution",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Danger.or.fear.for.one.s.own.life",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
#Graphs
#Factors influencing the decision to stay
plot_stackfrq(RStayFactors)
#Better job opportunities
plot_grpfrq(job.opportunities, world.region, geom.colors = "gs", title = "Better job opportunities")
plot_grpfrq(job.opportunities, degree, geom.colors = "gs", title = "Better job opportunities")
#Higher quality of life
plot_grpfrq(high.quality.life, world.region, geom.colors = "gs", title = "Higher quality of life")
plot_grpfrq(high.quality.life, degree, geom.colors = "gs", title = "Higher quality of life")
#Better career opportunities
plot_grpfrq(career.opportunities, world.region, geom.colors = "gs", title = "Better career opportunities")
plot_grpfrq(career.opportunities, degree, geom.colors = "gs", title = "Better career opportunities")
#high income level
plot_grpfrq(high.income.level, world.region, geom.colors = "gs", title = "High income level")
plot_grpfrq(high.income.level, degree, geom.colors = "gs", title = "High income level")
#Ties to family and friends
plot_grpfrq(family.friends.ties, world.region, geom.colors = "gs", title = "Ties to family and friends")
plot_grpfrq(family.friends.ties, degree, geom.colors = "gs", title = "Ties to family and friends")
#international experience
plot_grpfrq(international.experience, world.region, geom.colors = "gs", title = "Gain international experience")
plot_grpfrq(international.experience, degree, geom.colors = "gs", title = "Gain international experience")
#family expectations
plot_grpfrq(familyexpectations, world.region, geom.colors = "gs", title = "Family expectations")
plot_grpfrq(familyexpectations, degree, geom.colors = "gs", title = "Family expectations")
#Restrictive cultural practices
plot_grpfrq(cultural.practices, world.region, geom.colors = "gs", title = "Restrictive cultural practices")
plot_grpfrq(cultural.practices, degree, geom.colors = "gs", title = "Restrictive cultural practices")
#limited jobs
plot_grpfrq(limited.jobs, world.region, geom.colors = "gs", title = "Limited job opportunities")
plot_grpfrq(limited.jobs, degree, geom.colors = "gs", title = "Limited job opportunities")
#lower income levels
plot_grpfrq(lower.income, world.region, geom.colors = "gs", title = "Lower income level")
plot_grpfrq(lower.income, degree, geom.colors = "gs", title = "Lower income level")
#Lower quality of life
plot_grpfrq(lower.quality.life, world.region, geom.colors = "gs", title = "Lower quality of life")
plot_grpfrq(lower.quality.life, degree, geom.colors = "gs", title = "Lower quality of life")
#Political persecution
plot_grpfrq(politicalpersecution, world.region, geom.colors = "gs", title = "Political persecution")
plot_grpfrq(politicalpersecution, degree, geom.colors = "gs", title = "Political persecution")
#Danger to one's own life
plot_grpfrq(danger.to.ones.life, world.region, geom.colors = "gs", title = "Danger to one's own life")
plot_grpfrq(danger.to.ones.life, degree, geom.colors = "gs", title = "Danger to one's own life")
#Data preparation Returning home: Pull factors in home country
#Better job opportunities
professional.opportunities <-as.factor(Better.professional.opportunities.in.home.country)
professional.opportunities <- factor(professional.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(professional.opportunities, Better.professional.opportunities.in.home.country)
#Better quality of life
better.quality.life<-as.factor(Better.quality.of.living.in.home.country)
better.quality.life <- factor(better.quality.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(better.quality.life, Better.quality.of.living.in.home.country)
#Feeling more comfortable at Home
home.comfort<-as.factor(Feeling.more.comfortable.at.home)
home.comfort <- factor(home.comfort,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(home.comfort,Feeling.more.comfortable.at.home)
#higher income level
higher.income <-as.factor(Higher.income.levels)
higher.income <- factor(higher.income,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(higher.income, Higher.income.levels)
#Family ties back home
family.ties<-as.factor(Family.ties.back.home)
family.ties <- factor(family.ties,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family.ties, Family.ties.back.home)
#Feelings of alienation
feelings.of.alienation <-as.factor(Feelings.of.alienation.from.the.Russian.culture.and.population)
feelings.of.alienation <- factor(feelings.of.alienation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(feelings.of.alienation, Feelings.of.alienation.from.the.Russian.culture.and.population)
#Difficulties in finding job
job.difficulties<-as.factor(Difficulties.in.finding.a.job)
job.difficulties <- factor(job.difficulties,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(job.difficulties, Difficulties.in.finding.a.job)
#Poor working conditions
poor.work<-as.factor(Poor.working.conditions)
poor.work <- factor(poor.work,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(poor.work, Poor.working.conditions)
#Lower quality of life
low.life.quality <-as.factor(Lower.quality.of.life)
low.life.quality <- factor(low.life.quality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.life.quality, Lower.quality.of.life)
#Perceived or Experienced discrimination
discrimination<-as.factor(Perceived.or.experienced.discrimination)
discrimination <- factor(discrimination,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(discrimination, Perceived.or.experienced.discrimination)
#Crime and low level of safety
crime.safety<-as.factor(Crime.and.low.level.of.safety)
crime.safety <- factor(crime.safety,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(crime.safety, Crime.and.low.level.of.safety)
#Strict migration process
strict.migration<-as.factor(Strict.migration.process.difficulties.in.getting.visas.)
strict.migration <- factor(strict.migration,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(strict.migration, Strict.migration.process.difficulties.in.getting.visas.)
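#Helper sketch (not in the original script): every recode above applies the
#same five-point influence scale, so a single hypothetical helper such as
#as.influence.factor() would replace the repeated as.factor()/factor() pairs,
#e.g. family.ties <- as.influence.factor(Family.ties.back.home)
as.influence.factor <- function(x) {
  factor(x,
         levels = 1:5,
         labels = c("Not at all influential",
                    "Slightly influential",
                    "Somewhat influential",
                    "Very influential",
                    "Extremely influential"))
}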
#Factors influencing the decision to return to the home country
HCReturnFactors<-data.frame(professional.opportunities,better.quality.life,home.comfort,
higher.income, family.ties,feelings.of.alienation, poor.work,
low.life.quality,discrimination,crime.safety,strict.migration)
#commas (not +) keep each survey item as its own column instead of summing them
HomeReturn_factors<-data.frame(world.region, degree, Better.professional.opportunities.in.home.country,
                               Better.quality.of.living.in.home.country, Feeling.more.comfortable.at.home,
                               Higher.income.levels, Family.ties.back.home,
                               Feelings.of.alienation.from.the.Russian.culture.and.population,
                               Difficulties.in.finding.a.job, Poor.working.conditions,
                               Lower.quality.of.life, Crime.and.low.level.of.safety,
                               Strict.migration.process.difficulties.in.getting.visas.)
#Crosstabulation of Return home factors with Region and degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.professional.opportunities.in.home.country",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.quality.of.living.in.home.country",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.income.levels",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Family.ties.back.home",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Feelings.of.alienation.from.the.Russian.culture.and.population",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Difficulties.in.finding.a.job",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Poor.working.conditions",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.quality.of.life",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Crime.and.low.level.of.safety",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Strict.migration.process.difficulties.in.getting.visas.",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
#Graphs
#Factors influencing the decision to return to the home country
plot_stackfrq(HCReturnFactors)
#Better professional opportunities
plot_grpfrq(professional.opportunities, world.region, geom.colors = "gs", title = "Better professional opportunities")
plot_grpfrq(professional.opportunities, degree, geom.colors = "gs", title = "Better professional opportunities")
#Better quality of life
plot_grpfrq(better.quality.life, world.region, geom.colors = "gs", title = "Better quality of life")
plot_grpfrq(better.quality.life, degree, geom.colors = "gs", title = "Better quality of life")
#Feeling more comfortable at Home
plot_grpfrq(home.comfort, world.region, geom.colors = "gs", title = "Feeling more comfortable at Home")
plot_grpfrq(home.comfort, degree, geom.colors = "gs", title = "Feeling more comfortable at Home")
#Higher income level
plot_grpfrq(higher.income, world.region, geom.colors = "gs", title = "Higher income level")
plot_grpfrq(higher.income, degree, geom.colors = "gs", title = "Higher income level")
#Family ties back home
plot_grpfrq(family.ties, world.region, geom.colors = "gs", title = "Family ties back home")
plot_grpfrq(family.ties, degree, geom.colors = "gs", title = "Family ties back home")
#Feelings of alienation
plot_grpfrq(feelings.of.alienation, world.region, geom.colors = "gs", title = "Feelings of alienation")
plot_grpfrq(feelings.of.alienation, degree, geom.colors = "gs", title = "Feelings of alienation")
#Difficulties in finding job
plot_grpfrq(job.difficulties, world.region, geom.colors = "gs", title = "Difficulties in finding jobs")
plot_grpfrq(job.difficulties, degree, geom.colors = "gs", title = "Difficulties in finding jobs")
#Poor working conditions
plot_grpfrq(poor.work, world.region, geom.colors = "gs", title = "Poor working conditions")
plot_grpfrq(poor.work, degree, geom.colors = "gs", title = "Poor working conditions")
#Lower quality of life
plot_grpfrq(low.life.quality, world.region, geom.colors = "gs", title = "Lower quality of life")
plot_grpfrq(low.life.quality, degree, geom.colors = "gs", title = "Lower quality of life")
#Perceived or Experienced discrimination
plot_grpfrq(discrimination, world.region, geom.colors = "gs", title = "Perceived or Experienced discrimination")
plot_grpfrq(discrimination, degree, geom.colors = "gs", title = "Perceived or Experienced discrimination")
#Crime and low level of safety
plot_grpfrq(crime.safety, world.region, geom.colors = "gs", title = "Crime and low level of safety")
plot_grpfrq(crime.safety, degree, geom.colors = "gs", title = "Crime and low level of safety")
#Strict migration process
plot_grpfrq(strict.migration, world.region, geom.colors = "gs", title = "Strict migration process")
plot_grpfrq(strict.migration, degree, geom.colors = "gs", title = "Strict migration process")
#Data preparation: Moving to another country
#Better job opportunities
better_job.opportunities <-as.factor(Better.job.opportunities)
better_job.opportunities <- factor(better_job.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(better_job.opportunities, Better.job.opportunities)
#higher quality of life
high_quality.life<-as.factor(Higher.quality.of.life)
high_quality.life <- factor(high_quality.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high_quality.life, Higher.quality.of.life)
#Better career opportunities
better.career<-as.factor(Better.career.opportunities.and.advancement.in.chosen.profession.1)
better.career <- factor(better.career,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(better.career, Better.career.opportunities.and.advancement.in.chosen.profession.1)
#higher income level
high.income <-as.factor(Higher.income.levels.1)
high.income <- factor(high.income,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(high.income, Higher.income.levels.1)
#Family and Friends ties
family_friends.ties<-as.factor(Ties.to.family.and.friends.1)
family_friends.ties <- factor(family_friends.ties,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family_friends.ties, Ties.to.family.and.friends.1)
#Gain international experience
gain.experience<-as.factor(Gain.international.experience.1)
gain.experience <- factor(gain.experience,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(gain.experience, Gain.international.experience.1)
#Flexible immigration process
flexible.immigration<-as.factor(Flexible.immigration.process)
flexible.immigration <- factor(flexible.immigration,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(flexible.immigration, Flexible.immigration.process)
#Moving to another country: Push factors in Russia
#Feelings of alienation
feeling.alienation <-as.factor(Feelings.of.alienation.from.the.Russian.culture.and.population.1)
feeling.alienation <- factor(feeling.alienation,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(feeling.alienation, Feelings.of.alienation.from.the.Russian.culture.and.population.1)
#Difficulties in finding job
finding.job<-as.factor(Difficulties.in.finding.a.job.1)
finding.job <- factor(finding.job,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(finding.job, Difficulties.in.finding.a.job.1)
#Poor working conditions
work.conditions<-as.factor(Poor.working.conditions.1)
work.conditions <- factor(work.conditions,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(work.conditions, Poor.working.conditions.1)
#Lower quality of life
low.quality <-as.factor(Lower.quality.of.life.1)
low.quality <- factor(low.quality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.quality, Lower.quality.of.life.1)
#Perceived or Experienced discrimination
perceived.discrimination<-as.factor(Perceived.or.experienced.discrimination.1)
perceived.discrimination <- factor(perceived.discrimination,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(perceived.discrimination, Perceived.or.experienced.discrimination.1)
#Crime and low level of safety
crime.safety.levels<-as.factor(Crime.and.low.level.of.safety.1)
crime.safety.levels <- factor(crime.safety.levels,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(crime.safety.levels, Crime.and.low.level.of.safety.1)
#Strict migration process
strict.visa<-as.factor(Strict.migration.process.difficulties.in.getting.visas..1)
strict.visa <- factor(strict.visa,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(strict.visa, Strict.migration.process.difficulties.in.getting.visas..1)
#Moving to another country: Push factors in home country
#family expectations
family_expectations<-as.factor(Family.expectations.1)
family_expectations <- factor(family_expectations,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(family_expectations, Family.expectations.1)
#Restrictive cultural practices
restrictive.practices<-as.factor(Restrictive.cultural.practices..eg..pressure.to.marry..1)
restrictive.practices <- factor(restrictive.practices,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(restrictive.practices, Restrictive.cultural.practices..eg..pressure.to.marry..1)
#limited jobs
limited.jobs.opportunities<-as.factor(Limited.job.opportunities.in.home.country.1)
limited.jobs.opportunities <- factor(limited.jobs.opportunities,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(limited.jobs.opportunities, Limited.job.opportunities.in.home.country.1)
#lower income levels
low.income<-as.factor(Lower.income.levels.1)
low.income <- factor(low.income,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low.income, Lower.income.levels.1)
#Lower quality of life
low_lifequality<-as.factor(Lower.quality.of.life.3)
low_lifequality <- factor(low_lifequality,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(low_lifequality, Lower.quality.of.life.3)
#Political persecution
political_persecution<-as.factor(Political.persecution.1)
political_persecution <- factor(political_persecution,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(political_persecution, Political.persecution.1)
#Danger to one's own life
danger.to.life<-as.factor(Danger.or.fear.for.one.s.own.life.1)
danger.to.life <- factor(danger.to.life,
levels = c(1,2,3,4,5),
labels = c("Not at all influential",
"Slightly influential",
"Somewhat influential",
"Very influential",
"Extremely influential"))
table(danger.to.life, Danger.or.fear.for.one.s.own.life.1)
#Factors influencing the decision to move to another country
Move2CountryFactors<-data.frame(better_job.opportunities,high_quality.life,better.career,high.income,
family_friends.ties,gain.experience, flexible.immigration,feeling.alienation,
finding.job,work.conditions,low.quality,perceived.discrimination, crime.safety.levels,
strict.visa, family_expectations, restrictive.practices, limited.jobs.opportunities,
low.income, low_lifequality, political_persecution, danger.to.life)
MoveCountry_factors<-data.frame(world.region, degree, Better.job.opportunities,Higher.quality.of.life,
Better.career.opportunities.and.advancement.in.chosen.profession.1,
Higher.income.levels.1,Ties.to.family.and.friends.1,Gain.international.experience.1,
Flexible.immigration.process,Feelings.of.alienation.from.the.Russian.culture.and.population.1,
Difficulties.in.finding.a.job.1,Poor.working.conditions.1,Lower.quality.of.life.1,
Perceived.or.experienced.discrimination.1, Crime.and.low.level.of.safety.1,
Strict.migration.process.difficulties.in.getting.visas..1,Family.expectations.1,
Restrictive.cultural.practices..eg..pressure.to.marry..1,
Limited.job.opportunities.in.home.country.1, Lower.income.levels.1,
Lower.quality.of.life.3, Political.persecution.1, Danger.or.fear.for.one.s.own.life.1)
#Crosstabulation of moving-to-another-country factors with Region and degree
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.job.opportunities",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.quality.of.life",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Better.career.opportunities.and.advancement.in.chosen.profession.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Higher.income.levels.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Ties.to.family.and.friends.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Gain.international.experience.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Flexible.immigration.process",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Feelings.of.alienation.from.the.Russian.culture.and.population.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
         col.vars ="Difficulties.in.finding.a.job.1",
         type = c("f", "t"), style = "long",
         addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
         col.vars ="Poor.working.conditions.1",
         type = c("f", "t"), style = "long",
         addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.quality.of.life.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Perceived.or.experienced.discrimination.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Crime.and.low.level.of.safety.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Strict.migration.process.difficulties.in.getting.visas..1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Restrictive.cultural.practices..eg..pressure.to.marry..1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Family.expectations.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Limited.job.opportunities.in.home.country.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.income.levels.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Lower.quality.of.life.3",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Political.persecution.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
crosstab(data, row.vars = c("world.region","degree"),
col.vars ="Danger.or.fear.for.one.s.own.life.1",
type = c("f", "t"), style = "long",
addmargins = T, dec.places = 2, subtotals = T)
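#Sketch (not in the original script): the crosstab() calls above differ only
#in col.vars, so a hypothetical wrapper could generate them from a vector of
#column names, e.g. lapply(c("Better.job.opportunities",
#"Higher.quality.of.life"), crosstab_by_region_degree)
crosstab_by_region_degree <- function(v) {
  crosstab(data, row.vars = c("world.region", "degree"),
           col.vars = v,
           type = c("f", "t"), style = "long",
           addmargins = T, dec.places = 2, subtotals = T)
}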
#Graphs
#Factors influencing the decision to move to another country
plot_stackfrq(Move2CountryFactors)
#Better job opportunities
plot_grpfrq(better_job.opportunities, world.region, geom.colors = "gs", title = "Better job opportunities")
plot_grpfrq(better_job.opportunities, degree, geom.colors = "gs", title = "Better job opportunities")
#Higher quality of life
plot_grpfrq(high_quality.life, world.region, geom.colors = "gs", title = "Higher quality of life")
plot_grpfrq(high_quality.life, degree, geom.colors = "gs", title = "Higher quality of life")
#Better career opportunities
plot_grpfrq(better.career, world.region, geom.colors = "gs", title = "Better career opportunities")
plot_grpfrq(better.career, degree, geom.colors = "gs", title = "Better career opportunities")
#Higher income level
plot_grpfrq(high.income, world.region, geom.colors = "gs", title = "Higher income level")
plot_grpfrq(high.income, degree, geom.colors = "gs", title = "Higher income level")
#Family and Friends ties
plot_grpfrq(family_friends.ties, world.region, geom.colors = "gs", title = "Family and Friends ties")
plot_grpfrq(family_friends.ties, degree, geom.colors = "gs", title = "Family and Friends ties")
#Gain international experience
plot_grpfrq(gain.experience, world.region, geom.colors = "gs", title = "Gain international experience")
plot_grpfrq(gain.experience, degree, geom.colors = "gs", title = "Gain international experience")
#Flexible immigration process
plot_grpfrq(flexible.immigration, world.region, geom.colors = "gs", title = "Flexible immigration process")
plot_grpfrq(flexible.immigration, degree, geom.colors = "gs", title = "Flexible immigration process")
#Feelings of alienation
plot_grpfrq(feeling.alienation, world.region, geom.colors = "gs", title = "Feelings of alienation")
plot_grpfrq(feeling.alienation, degree, geom.colors = "gs", title = "Feelings of alienation")
#Difficulties in finding job
plot_grpfrq(finding.job, world.region, geom.colors = "gs", title = "Difficulties in finding jobs")
plot_grpfrq(finding.job, degree, geom.colors = "gs", title = "Difficulties in finding jobs")
#Poor working conditions
plot_grpfrq(work.conditions, world.region, geom.colors = "gs", title = "Poor working conditions")
plot_grpfrq(work.conditions, degree, geom.colors = "gs", title = "Poor working conditions")
#Lower quality of life
plot_grpfrq(low.quality, world.region, geom.colors = "gs", title = "Lower quality of life")
plot_grpfrq(low.quality, degree, geom.colors = "gs", title = "Lower quality of life")
#Perceived or Experienced discrimination
plot_grpfrq(perceived.discrimination, world.region, geom.colors = "gs", title = "Perceived or Experienced discrimination")
plot_grpfrq(perceived.discrimination, degree, geom.colors = "gs", title = "Perceived or Experienced discrimination")
#Crime and low level of safety
plot_grpfrq(crime.safety.levels, world.region, geom.colors = "gs", title = "Crime and low level of safety")
plot_grpfrq(crime.safety.levels, degree, geom.colors = "gs", title = "Crime and low level of safety")
#Strict migration process
plot_grpfrq(strict.visa, world.region, geom.colors = "gs", title = "Strict migration process")
plot_grpfrq(strict.visa, degree, geom.colors = "gs", title = "Strict migration process")
#Family expectations
plot_grpfrq(family_expectations, world.region, geom.colors = "gs", title = "Family expectations")
plot_grpfrq(family_expectations, degree, geom.colors = "gs", title = "Family expectations")
#Restrictive Cultural practices
plot_grpfrq(restrictive.practices, world.region, geom.colors = "gs", title = "Restrictive Cultural practices")
plot_grpfrq(restrictive.practices, degree, geom.colors = "gs", title = "Restrictive Cultural practices")
#Limited jobs
plot_grpfrq(limited.jobs.opportunities, world.region, geom.colors = "gs", title = "Limited jobs opportunities")
plot_grpfrq(limited.jobs.opportunities, degree, geom.colors = "gs", title = "Limited jobs opportunities")
#Lower income levels
plot_grpfrq(low.income, world.region, geom.colors = "gs", title = "Lower income levels")
plot_grpfrq(low.income, degree, geom.colors = "gs", title = "Lower income levels")
#Lower quality of life
plot_grpfrq(low_lifequality, world.region, geom.colors = "gs", title = "Lower quality of life")
plot_grpfrq(low_lifequality, degree, geom.colors = "gs", title = "Lower quality of life")
#Political persecution
plot_grpfrq(political_persecution, world.region, geom.colors = "gs", title = "Political persecution")
plot_grpfrq(political_persecution, degree, geom.colors = "gs", title = "Political persecution")
#Danger to one's own life
plot_grpfrq(danger.to.life, world.region, geom.colors = "gs", title = "Danger to one's own life")
plot_grpfrq(danger.to.life, degree, geom.colors = "gs", title = "Danger to one's own life")
|
#' evaldyntracer: a dynamic tracer for eval
#'
#' evaldyntracer is an R package for tracing usage of the eval function. It
#' intercepts and analyzes the usage and impact of eval in R code. It also
#' exports the intercepted data for further summarization and analysis.
#'
#' @useDynLib evaldyntracer, .registration = TRUE, .fixes="C_"
"_PACKAGE"
| /R/evaldyntracer.R | permissive | chakshugoyal97/evaldyntracer | R | false | false | 347 | r |
|
#' @export
vc.r <- function(expr) {
mock_network_functions(
eval.parent(substitute(expr))
)
}
network_function_registry <- local({
registry <- list()
list(
empty = function() { length(registry) == 0 },
init = function() {
# base R exports no duplicate(); copy-on-modify semantics make plain assignment safe here
registry$`httr:::perform` <<- httr:::perform
},
get = function(key) {
registry[[key]]
}
)
})
mock_network_functions <- function(expr) {
setup_network_mocks()
eval.parent(substitute({
testthatsomemore::package_stub("httr", "perform", mock_network_function("httr:::perform"),
expr
)}))
}
setup_network_mocks <- function() {
if (network_function_registry$empty()) {
network_function_registry$init()
}
}
mock_network_function <- function(fn) {
# TODO: (RK) Support non-standard evaluation, argument preservation,
# environment preservation, etc.?
eval(bquote(
function(...) {
network_request(.(fn), list(...))
}
))
}
network_request <- local({
# TODO: (RK) Replace with package currently being tested.
network_requests <- NULL
function(fn, args) {
if (is.null(network_requests)) {
network_requests <<- director::registry$new(
file.path(system.file(package = "vcr"), file.path("tests", "net_data"))
)
}
key <- digest::digest(list(fn, args))
result <- network_requests$get(key)
if (is.null(result)) {
result <- do.call(network_function_registry$get(fn), args)
network_requests$set(key, result)
}
network_requests$get(key)
}
})
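#Usage sketch (hypothetical; the URL is illustrative, not from the original
#source): the first run inside vc.r() performs the real httr request and
#caches the response under a digest of the call, and later runs replay the
#cached response instead of hitting the network.
#response <- vc.r({
#  httr::GET("http://example.com")
#})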
| /R/vc.r.R | no_license | robertzk/vc.r | R | false | false | 1,525 | r |
|
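The `network_request` closure above implements a record-and-replay cache keyed on a digest of the function name and its arguments: the first call performs the real request and records the result; later identical calls replay it. A minimal standalone sketch of the same idea, using a plain environment in place of the on-disk `director::registry` store (the helper name `cached_call` is hypothetical):

```r
# Record/replay sketch: first call computes and records, later calls replay.
# A plain environment stands in for the on-disk director::registry store.
replay_cache <- new.env(parent = emptyenv())

cached_call <- function(fn_name, fn, args) {
  key <- digest::digest(list(fn_name, args))  # same keying as network_request
  if (!exists(key, envir = replay_cache)) {
    assign(key, do.call(fn, args), envir = replay_cache)  # record
  }
  get(key, envir = replay_cache)              # replay
}

cached_call("sum", sum, list(1, 2, 3))  # computes and records
cached_call("sum", sum, list(1, 2, 3))  # replayed from the cache
```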
source("/data/safe/bekritsk/simons/scripts/genotyper/emGenotyper/src/mendelScoreFunctions2.R")
source("/data/safe/bekritsk/simons/scripts/genotyper/emGenotyper/src/modelingFunctions3.R")
source("/data/safe/bekritsk/simons/scripts/genotyper/emGenotyper/src/genotyperFunctions2.R")
source("/data/safe/bekritsk/simons/scripts/genotyper/emGenotyper/src/figureFunctions.R")
source("/data/safe/bekritsk/simons/scripts/genotyper/emGenotyper/src/dataLoadingFunctions.R")
source("/data/safe/bekritsk/simons/scripts/genotyper/emGenotyper/src/annotationFunctions.R")
source("/data/safe/bekritsk/simons/scripts/genotyper/emGenotyper/src/helperFunctions.R")
dirname="/data/safe/bekritsk/simons/Exome/workingSet/06182013_completeWiglerSSCQuads/"
locus.info.file=paste(dirname,"allele_matrix_info.txt",sep="")
allele.info.file=paste(dirname,"allele_matrix.txt",sep="")
family.info.file=paste(dirname,"person_info.txt",sep="")
#load families
locus.info<-load.locus.info(locus.info.file)
#Exclude sex chromosomes--X and Y can be corrected for copy number (MT copy number is unknown), but genotyper
#also needs to accommodate single allele genotypes for X and Y, depending on gender. MT could be a single allele genotype,
#but could have many more than 2 genotypes as well
allele.info<-load.alleles(allele.info.file,locus.info,exclude.XY=TRUE,exclude.MT=FALSE)
locus.info<-remove.chromosomes(locus.info,c("chrX","chrY"))
locus.info<-add.locus.idxs.to.locus.info(locus.info,allele.info)
person.info<-load.person.info(family.info.file)
exon.anno.file<-"/mnt/wigclust4/home/bekritsk/genomes/hg19/hg19BedFiles/hg19.ms.ccds.rg.info.exons.merged.bed"
intron.anno.file<-"/mnt/wigclust4/home/bekritsk/genomes/hg19/hg19BedFiles/hg19.ms.ccds.rg.info.introns.merged.bed"
mirna.anno.file<-"/mnt/wigclust4/home/bekritsk/genomes/hg19/hg19BedFiles/hg19.ms.miRNA.merged.bed"
utr.anno.file<-"/mnt/wigclust4/home/bekritsk/genomes/hg19/hg19BedFiles/hg19.ms.ccds.rg.info.utr.merged.bed"
gene.id.file<-"/mnt/wigclust4/home/bekritsk/genomes/hg19/hg19GeneName.ccds.rg.geneID.txt"
gene.ids<-load.gene.ids(gene.id.file)
ptm<-proc.time()
annotations<-get.annotations(exon.file=exon.anno.file,intron.file=intron.anno.file,mirna.file=mirna.anno.file,utr.file=utr.anno.file,
locus.info=locus.info)
print(proc.time() - ptm)
save(annotations,file=sprintf("%sannotations.RData",dirname))
ptm<-proc.time()
coverage.estimator<-linear.coverage.model(allele.info$locus.cov,print.dir=dirname,people=person.info)
print(proc.time()-ptm)
save(coverage.estimator,file=sprintf("%scoverageEstimator.RData",dirname))
#plot coverage per person
total.cov<-pop.total.coverage.hist(allele.info$locus.cov,dirname)
mean.cov<-pop.mean.coverage.hist(allele.info$locus.cov,dirname)
#find individuals with low correlation between expected and observed allele coverage or very low observed coverage and remove them
problem.people<-unique(as.integer(c(which(total.cov < (mean(total.cov) - (2*sd(total.cov)))),which(coverage.estimator$cor < 0.8))))
problem.families<-which(person.info$family.id %in% person.info$family.id[problem.people])
#exclude families from analysis where a member has low correlation between expected and observed allele coverage or has low total coverage
person.info<-exclude.people.from.person.df(person.info,problem.families)
locus.info<-exclude.people.from.locus.df(locus.info,allele.info,problem.families)
allele.info<-exclude.people.from.allele.df(allele.info,problem.families)
coverage.estimator<-exclude.people.from.exp.coverage.matrix(coverage.estimator,problem.families)
pop<-ncol(allele.info$alleles)
n.fams<-pop/4
by.fams<-get.fam.mat(person.info)
mom.and.pop<-which(person.info$relation %in% c("mother","father"))
em.geno<-call.genotypes(allele.info,locus.info,coverage.estimator,pop)
save(em.geno,file=sprintf("%semGenotypes.RData",dirname))
locus.info<-add.num.called(em.geno,locus.info,stop.locus=1000)
locus.info<-add.allele.biases(em.geno,locus.info,stop.locus=1000)
save(locus.info,file=sprintf("%slocusInfo.RData",dirname))
graphBiasInfo(dirname,locus.info)
graphGenotypeOverviewStats(dirname,em.geno)
max.alleles<-6 #number of most common alleles within family to consider when calculating the Mendel score
mendel.ind<-get.mendel.trio.indices(max.alleles)
n.ind<-nrow(mendel.ind) #total number of genotype combinations (of 4^max.alleles) that are Mendelian
fam.genotypes<-get.mendel.scores(allele.info=allele.info,locus.info=locus.info,geno.list=em.geno,
n.fams=n.fams,by.fam=by.fams,pop.size=pop,coverage.model=coverage.estimator,mendel.ind=mendel.ind)
save(fam.genotypes,file=sprintf("%sfamilyGenotypes.RData",dirname))
allele.cov.info<-get.allele.cov.info(allele.info,locus.info,coverage.estimator$exp.cov,gp$genotypes)
low.mendel.threshold<-5e-3
allele.fit.threshold<-1e-5
noise.fit.threshold<-1e-3
max.error.rate<-1e-1
too.biased<-0.5
include.na.noise<-FALSE
include.na<-FALSE
pro.low.mendel<-length(which(fam.genotypes[,,10] <= low.mendel.threshold))
pro.low.mendel.indices<-which(fam.genotypes[,,10] <= low.mendel.threshold, arr.ind=T)
pro.low.mendel.indices<-cbind(pro.low.mendel.indices,1)
sib.low.mendel<-length(which(fam.genotypes[,,11] <= low.mendel.threshold))
sib.low.mendel.indices<-which(fam.genotypes[,,11] <= low.mendel.threshold, arr.ind=T)
sib.low.mendel.indices<-cbind(sib.low.mendel.indices,2)
low.mendel.indices<-rbind(sib.low.mendel.indices,pro.low.mendel.indices)
low.mendel.indices<-low.mendel.indices[order(low.mendel.indices[,1],low.mendel.indices[,2],low.mendel.indices[,3]),]
tot.low.mendel<-vector()
denovo.loci<-as.integer(names(table(low.mendel.indices[,1])))
for(i in denovo.loci)
{
loci.indices<-which(low.mendel.indices[,1] == i)
denovo.fams.at.loci<-as.integer(names(table(low.mendel.indices[loci.indices,2])))
for(j in denovo.fams.at.loci)
{
dn.fam.at.locus<-which(low.mendel.indices[,1] == i & low.mendel.indices[,2] == j)
if(length(dn.fam.at.locus) == 1)
{
tot.low.mendel<-rbind(tot.low.mendel,low.mendel.indices[dn.fam.at.locus,])
} else
{
tot.low.mendel<-rbind(tot.low.mendel,c(low.mendel.indices[dn.fam.at.locus[1],1:2],3))
}
}
}
denovo.info<-array(NA,dim=c(nrow(low.mendel.indices),2))
for(i in 1:nrow(tot.low.mendel))
{
locus<-tot.low.mendel[i,1]
family<-tot.low.mendel[i,2]
who.dn<-tot.low.mendel[i,3]
num.na<-sum(fam.genotypes[locus,family,1:8] == -1)
if(num.na == 0)
{
dn.allele.bias<-get.allele.bias(fam.genotypes[locus,family,],who.dn,allele.cov.info,locus.info,locus)
if(is.na(dn.allele.bias)){dn.allele.bias<-1}
if(dn.allele.bias > too.biased & (1/dn.allele.bias) > too.biased)
{
fam.ind<-by.fams[family,]
num.good.allele.fit<-sum(gp$genotypes[locus,fam.ind,4] > allele.fit.threshold)
num.good.noise.fit<-length(which(gp$genotypes[locus,fam.ind,5] > noise.fit.threshold))
num.low.error<-sum(np.noise.estimator[locus,fam.ind] < max.error.rate)
if(num.good.noise.fit == 4 & num.good.allele.fit == 4 & num.low.error == 4)
{
if(who.dn == 1)
{
denovo.info[i,1]<-fam.genotypes[locus,family,10]
} else if (who.dn == 2)
{
denovo.info[i,1]<-fam.genotypes[locus,family,11]
} else if (who.dn == 3)
{
denovo.info[i,1]<-mean(fam.genotypes[locus,family,10:11])
}
denovo.info[i,2]<-i
}
}
}
}
denovo.info<-na.omit(denovo.info)
denovo.info<-denovo.info[order(denovo.info[,1]),]
quartz();plot(-10*log10(denovo.info[,1]),main=sprintf("Top %d de novo candidate scores",nrow(denovo.info)),pch=16,ylab="-10 log10 de novo score")
denovo.dir<-"~/Desktop/prelimFams2/denovos6/"
fig.dirs<-create.dn.figure.dirs(denovo.dir)
total.denovo<-0
for(i in 1:nrow(denovo.info))
{
total.denovo<-total.denovo+1
denovo.ind<-tot.low.mendel[denovo.info[i,2],]
fam.ind<-by.fams[denovo.ind[2],]
dn.locus<-denovo.ind[1]
dn.family<-denovo.ind[2]
dn.person<-denovo.ind[3]
locus.info.index<-allele.info$locus.row.indices[dn.locus]
dn.type<-NULL
ifelse(dn.person == 1, dn.indices<-5:6,
ifelse(dn.person == 2, dn.indices<-7:8,
dn.indices<-5:8))
dn.anno<-get.locus.annotations(dn.locus,annotations,gene.ids)
context<-annotation.precedence(dn.anno)
dn.alleles<-unique(fam.genotypes[dn.locus,dn.family,dn.indices])
graph.dir<-NULL
if(sum(dn.alleles %in% fam.genotypes[dn.locus,dn.family,1:4]) == length(dn.alleles)) #all alleles in Mendel violation child are in parents, omission violation
{
ifelse(dn.person == 1, graph.dir<-fig.dirs[["omissions"]][["proband"]][[context]],
ifelse(dn.person == 2, graph.dir<-fig.dirs[["omissions"]][["sibling"]][[context]],
graph.dir<-fig.dirs[["omissions"]][["both"]][[context]]))
} else #new allele in Mendel violation child, commission violation
{
ifelse(dn.person == 1, graph.dir<-fig.dirs[["commissions"]][["proband"]][[context]],
ifelse(dn.person == 2, graph.dir<-fig.dirs[["commissions"]][["sibling"]][[context]],
graph.dir<-fig.dirs[["commissions"]][["both"]][[context]]))
}
plot.family(locus.info,allele.info,gp$genotypes,dn.locus,person.info,family.ind=by.fams[dn.family,],fam.genotypes=fam.genotypes[dn.locus,dn.family,],
who.dn=dn.person,print.dir=graph.dir,anno=dn.anno)
plot.pop.image(locus.info,gp$genotypes,denovo.ind[1],print.dir=graph.dir,fam.genotypes=fam.genotypes[dn.locus,dn.family,],who.dn=denovo.ind[3],anno=dn.anno)
plot.dn.eo.scatter(allele.info,locus.info,dn.locus,dn.family,by.fams,gp$genotypes,coverage.estimator,who.dn=dn.person,fam.genotypes=fam.genotypes[dn.locus,dn.family,],
print.dir=graph.dir,anno=dn.anno)
plot.dn.cov.scatter(allele.info,locus.info,dn.locus,dn.family,by.fams,gp$genotypes,who.dn=dn.person,fam.genotypes=fam.genotypes[dn.locus,dn.family,],
print.dir=graph.dir,anno=dn.anno)
}
unique.mendel<-as.integer(names(which(table(tot.low.mendel[,1]) == 1)))
for(i in unique.mendel)
{
denovo.locus.info<-tot.low.mendel[which(tot.low.mendel == i),]
locus<-denovo.locus.info[1]
family<-denovo.locus.info[2]
who.dn<-denovo.locus.info[3]
plot.family(locus.info,allele.info,gp$genotypes,locus,family.id=as.character(person.info$family.id[by.fams[family,1]]),
fam.genotypes=fam.genotypes[locus,family,],who.dn=who.dn,print.dir="~/Desktop/prelimFams2/uniqueMendel2/")
plot.pop.heatmap(locus.info,allele.info,pop,gp$genotypes,i,print.dir="~/Desktop/prelimFams2/uniqueMendel2/")
}
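The nested loops over `denovo.loci` above collapse the per-child low-Mendel hits into one row per (locus, family), coding the violating child as 1 (proband), 2 (sibling), or 3 (both). A minimal standalone sketch of that collation step (the function name and toy matrix are hypothetical):

```r
# idx: matrix with columns locus, family, child (1 = proband, 2 = sibling).
# Returns one row per (locus, family), with child recoded to 3 when both hit.
collapse.violations <- function(idx) {
  out <- NULL
  for (key in unique(paste(idx[, 1], idx[, 2]))) {
    rows <- idx[paste(idx[, 1], idx[, 2]) == key, , drop = FALSE]
    who <- if (nrow(rows) == 1) rows[1, 3] else 3
    out <- rbind(out, c(rows[1, 1], rows[1, 2], who))
  }
  out
}

idx <- rbind(c(10, 2, 1), c(10, 2, 2), c(37, 5, 1))
collapse.violations(idx)  # rows: (10, 2, 3) and (37, 5, 1)
```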
| /emGenotyper/popGenotyper7_fullSet.R | no_license | MBekritsky/uSeq | R | false | false | 10,564 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/wr.data.R
\name{wr.data}
\alias{wr.data}
\title{Reading WhatsApp conversation data}
\usage{
wr.data(file, OS, CheckForErrors = FALSE)
}
\arguments{
\item{file}{the directory of the WhatsApp .txt file}
\item{OS}{indicating if the operating system is "android" or "iOS"}
\item{CheckForErrors}{will check for errors in reading of input data, mostly due to quoted text}
}
\value{
Returns a data frame with formatted WhatsApp data
}
\description{
Reading WhatsApp conversation data
}
\author{
Finlay Campbell <f.campbell15@imperial.ac.uk>
}
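A hypothetical call matching the signature documented above (the exported chat file name is an assumption, not part of the package):

```r
# Parse an exported Android chat log; CheckForErrors flags quoting problems
# in the input before the data frame is returned.
chat <- wr.data("WhatsApp Chat.txt", OS = "android", CheckForErrors = TRUE)
str(chat)
```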
| /man/wr.data.Rd | permissive | finlaycampbell/WhatsAppR | R | false | true | 616 | rd |
#' Multivariate Adaptive Regression Splines Model
#'
#' Build a regression model using the techniques in Friedman's papers
#' "Multivariate Adaptive Regression Splines" and "Fast MARS".
#'
#' @param pmethod pruning method.
#' @param trace level of execution information to display.
#' @param degree maximum degree of interaction.
#' @param nprune maximum number of terms (including intercept) in the pruned
#' model.
#' @param nfold number of cross-validation folds.
#' @param ncross number of cross-validations if \code{nfold > 1}.
#' @param stratify logical indicating whether to stratify cross-validation
#' samples by the response levels.
#'
#' @details
#' \describe{
#' \item{Response Types:}{\code{factor}, \code{numeric}}
#' \item{\link[=TunedModel]{Automatic Tuning} of Grid Parameters:}{
#' \code{nprune}, \code{degree}*
#' }
#' }
#' * included only in randomly sampled grid points
#'
#' Default values for the \code{NULL} arguments and further model details can be
#' found in the source link below.
#'
#' In calls to \code{\link{varimp}} for \code{EarthModel}, argument
#' \code{metric} may be specified as \code{"gcv"} (default) for the generalized
#' cross-validation decrease over all subsets that include each predictor, as
#' \code{"rss"} for the residual sums of squares decrease, or as
#' \code{"nsubsets"} for the number of model subsets that include each
#' predictor. Variable importance is automatically scaled to range from 0 to
#' 100. To obtain unscaled importance values, set \code{scale = FALSE}. See
#' example below.
#'
#' @return \code{MLModel} class object.
#'
#' @seealso \code{\link[earth]{earth}}, \code{\link{fit}},
#' \code{\link{resample}}
#'
#' @examples
#' model_fit <- fit(Species ~ ., data = iris, model = EarthModel)
#' varimp(model_fit, metric = "nsubsets", scale = FALSE)
#'
EarthModel <- function(pmethod = c("backward", "none", "exhaustive", "forward",
"seqrep", "cv"),
trace = 0, degree = 1, nprune = NULL,
nfold = 0, ncross = 1, stratify = TRUE) {
pmethod <- match.arg(pmethod)
MLModel(
name = "EarthModel",
label = "Multivariate Adaptive Regression Splines",
packages = "earth",
response_types = c("factor", "numeric"),
predictor_encoding = "model.matrix",
params = params(environment()),
grid = function(x, length, random, ...) {
modelfit <- fit(x, model = EarthModel(pmethod = "none"))
max_terms <- min(2 + 0.75 * nrow(modelfit$dirs), 200)
params <- list(
nprune = round(seq(2, max_terms, length = length))
)
if (random) params$degree <- head(1:2, length)
params
},
fit = function(formula, data, weights, ...) {
attach_objects(list(
contr.earth.response = earth::contr.earth.response
), name = "earth_exports")
glm <- list(family = switch_class(response(data),
factor = "binomial",
numeric = "gaussian"))
eval_fit(data,
formula = earth::earth(formula, data = as.data.frame(data),
weights = weights, glm = glm, ...),
matrix = earth::earth(x, y, weights = weights, glm = glm, ...))
},
predict = function(object, newdata, ...) {
newdata <- as.data.frame(newdata)
predict(object, newdata = newdata, type = "response")
},
varimp = function(object, metric = c("gcv", "rss", "nsubsets"), ...) {
earth::evimp(object)[, match.arg(metric), drop = FALSE]
}
)
}
MLModelFunction(EarthModel) <- NULL
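The `grid` function above spaces `nprune` candidates evenly between 2 and a cap derived from the size of an unpruned fit (at most 200 terms). A standalone sketch of that construction (the term count of 50 is an assumed value, not taken from a real `earth` fit):

```r
# Candidate nprune values: `length` points from 2 up to
# min(2 + 0.75 * unpruned terms, 200), rounded to integers.
grid_nprune <- function(n_terms_unpruned, length) {
  max_terms <- min(2 + 0.75 * n_terms_unpruned, 200)
  round(seq(2, max_terms, length.out = length))
}

grid_nprune(50, 5)  # 5 roughly even candidates between 2 and 39.5
```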
| /R/ML_EarthModel.R | no_license | chen061218/MachineShop | R | false | false | 3,644 | r |
#' @title RR exact CI, TOSST method.
#' @description Estimates confidence interval for the risk ratio or prevented fraction; exact method based on the
#' score statistic (inverts two one-sided tests).
#' @details Estimates confidence intervals based on the score statistic that are 'exact' in the sense of accounting for discreteness.
#' Inverts two one-sided score tests. The score statistic is used to select tail area tables, and the binomial probability is estimated
#' over the tail area by taking the maximum over the nuisance parameter. Algorithm is a simple step search.
#' \cr \cr The data may also be a matrix. In that case \code{y} would be entered as \code{matrix(c(y1, n1-y1, y2, n2-y2), 2, 2, byrow = TRUE)}.
#' @param y Data vector c(y1, n1, y2, n2) where y are the positives, n are the total, and group 1 is compared to group 2.
#' @param alpha Complement of the confidence level.
#' @param pf Estimate \emph{RR} or its complement \emph{PF}?
#' @param trace.it Verbose tracking of the iterations?
#' @param iter.max Maximum number of iterations
#' @param converge Convergence criterion
#' @param rnd Number of digits for rounding. Affects display only, not estimates.
#' @param stepstart starting interval for step search
#' @param nuisance.points number of points over which to evaluate nuisance parameter
#' @param gamma parameter for Berger-Boos correction (restricts range of nuisance parameter evaluation)
#' @return A \code{\link{rr1}} object with the following fields.
#' \item{estimate}{vector with point and interval estimate}
#' \item{estimator}{either \code{"PF"} or \code{"RR"}}
#' \item{y}{data vector}
#' \item{rnd}{how many digits to round the display}
#' \item{alpha}{complement of confidence level}
#' @export
#' @references Koopman PAR, 1984. Confidence intervals for the ratio of two binomial proportions. \emph{Biometrics} 40:513-517.
#' \cr Agresti A, Min Y, 2001. On small-sample confidence intervals for parameters in discrete distributions. \emph{Biometrics} 57: 963-971.
#' \cr Berger RL, Boos DD, 1994. P values maximized over a confidence set for the nuisance parameter. \emph{Journal of the American Statistical Association} 89:214-220.
#' @author David Siev \email{david.siev@@aphis.usda.gov}
#' @note Level tested: Moderate.
#' @seealso \code{\link{RRotsst}, \link{rr1}}
#'
#' @examples
#' \dontrun{RRtosst(c(4, 24, 12, 28))
#'
#' # PF
#' # 95% interval estimates
#'
#' # PF LL UL
#' # 0.611 0.012 0.902 }
##--------------------------------------------------------------------
## RRtosst function
##--------------------------------------------------------------------
RRtosst <- function(y, alpha = 0.05, pf = TRUE, stepstart=.1, iter.max = 36, converge = 1e-6, rnd = 3, trace.it=FALSE, nuisance.points=120, gamma=1e-6){
# Estimates exact confidence interval by the TOSST method
# Score statistic used to select tail area tables
# Binomial probability estimated over the tail area
# by taking the maximum over the nuisance parameter
# Written 9/17/07 by Siev
# Functions called by rrcix():
# .rr.score.asymp - gets asymptotic interval for starting value of upper bound
# found in this file below
# (if want to eliminate calling this function would have to search down from r.max)
# binci - gets Clopper-Pearson intervals for Berger-Boos method
# included here now, but may be moved to another package
binci <- function(y,n,alpha=.05,show.warnings=F){
w <- 1*show.warnings - 1
options(warn=w)
p <- y/n
cpl <- ifelse(y>0,qbeta(alpha/2,y,n-y+1),0)
cpu <- ifelse(y<n,qbeta(1-alpha/2,y+1,n-y),1)
out <- cbind(y,n,p,cpl,cpu)
dimnames(out) <- list(names(y), c('y','n','p.hat','cp low','cp high'))
options(warn=0)
return(out)
}
# Data entry y=c(x2,n2,x1,n1) Vaccinates First (order same but subscripts reversed)
# data vector
if(is.matrix(y))
y <- c(t(cbind(y[,1],apply(y,1,sum))))
# NOTE: the subscripts are reversed compared to the other functions
x2 <- y[1]
n2 <- y[2]
x1 <- y[3]
n1 <- y[4]
p1 <- x1/n1
p2 <- x2/n2
rho.mle <- p2/p1
# itemize all possible tables in omega (17.26)
Y <- data.frame(y1=rep(0:n1,(n2+1)),y2=rep(0:n2,rep(n1+1,n2+1)))
observed <- (1:nrow(Y))[Y[,1]==x1 & Y[,2]==x2]
Y$C <- choose(n1,Y$y1)*choose(n2,Y$y2)
# score statistic - with pi.tilde by quadratic formula
scst <- function(rho,y1,n1,y2,n2){
pih1 <- y1/n1 # unrestricted MLE of current data
pih2 <- y2/n2
if(y1==0 & y2==0) sc <- 0
else if(y2==n2) sc <- 0
else{
A <- rho*(n1+n2)
B <- -(rho*(y1+n2) + y2 + n1)
C <- y1+y2
pit1 <- (-B-sqrt(B^2-4*A*C))/(2*A)
pit2 <- rho*pit1
sc <- (pih2-rho*pih1)/sqrt(rho^2*pit1*(1-pit1)/n1 + pit2*(1-pit2)/n2)
}
return(sc)
}
# get Clopper-Pearson intervals for Berger-Boos method
cp <- binci(c(x1,x2),c(n1,n2),alpha=gamma)[,c('cp low','cp high')]
L1 <- cp[1,1]
U1 <- cp[1,2]
L2 <- cp[2,1]
U2 <- cp[2,2]
r.min <- L2/U1
r.max <- U2/L1
if(rho.mle==0)
low <- 0
else{
# search for lower endpoint
iter <- 0
step <- stepstart
low <- max(0.0001,r.min) # start above 0 (for quadratic formula)
repeat{
iter <- iter + 1
if(iter > iter.max)
break
if(iter>1){
old.low <- low
low <- low+step
}
scst.y <- rep(NA,nrow(Y))
for(i in 1:length(scst.y))
scst.y[i] <- scst(low,Y$y1[i],n1,Y$y2[i],n2)
q.set <- Y[scst.y>=scst.y[observed],]
q.set$n1y1 <- n1-q.set$y1
q.set$n2y2 <- n2-q.set$y2
if(gamma > 0) pn <- seq(max(L1,L2/low),min(U1,U2/low),length=nuisance.points) # Berger-Boos method 17.164
else pn <- seq(0,min(1/low,1),length=nuisance.points) # simple method 17.138
if(sum(pn>1)>0){
cat('\nIteration', iter, 'nuisance parameter outside parameter space\n')
next
}
fy <- rep(NA,nuisance.points)
for(i in 1:nuisance.points){
pni <- pn[i]
fy[i] <- sum(q.set$C * pni^q.set$y1 * (1-pni)^q.set$n1y1 * (low*pni)^q.set$y2 * (1-low*pni)^q.set$n2y2)
}
max.fy <- max(fy)
if(trace.it) cat('\nIteration',iter,'rho.low',low,'tail',max.fy,'\n')
if(abs(max.fy-(alpha/2 - gamma/2)) < converge)
break
if(max.fy > (alpha/2 - gamma/2)){
step <- step/2
low <- low-step*2
}
} # end repeat
} # end else
# search for upper endpoint upward from just below asymptotic
# rather than downward from r.max
# get asymptotic interval for starting
ci.asymp <- .rr.score.asymp(c(x2,n2,x1,n1)) # koopman version (slightly narrower interval than mn)
high <- ci.asymp[3]*.9
iter <- 0
step <- stepstart
repeat{
iter <- iter + 1
if(iter > iter.max)
break
if(iter>1){
old.high <- high
high <- high+step
}
scst.y <- rep(NA,nrow(Y))
for(i in 1:length(scst.y))
scst.y[i] <- scst(high,Y$y1[i],n1,Y$y2[i],n2)
p.set <- Y[scst.y<=scst.y[observed],]
p.set$n1y1 <- n1-p.set$y1
p.set$n2y2 <- n2-p.set$y2
if(gamma > 0) pn <- seq(max(L1,L2/high),min(U1,U2/high),length=nuisance.points) # Berger-Boos method 17.164
else pn <- seq(0,min(1/high,1),length=nuisance.points) # simple method 17.138
if(sum(pn>1)>0){
cat('\nIteration', iter, 'nuisance parameter outside parameter space\n')
next
}
fy <- rep(NA,nuisance.points)
for(i in 1:nuisance.points){
pni <- pn[i]
fy[i] <- sum(p.set$C * pni^p.set$y1 * (1-pni)^p.set$n1y1 * (high*pni)^p.set$y2 * (1-high*pni)^p.set$n2y2)
}
max.fy <- max(fy)
if(trace.it) cat('\nIteration',iter,'rho.high',high,'tail',max.fy,'\n')
if(abs(max.fy-(alpha/2 - gamma/2)) < converge)
break
if(max.fy < (alpha/2 - gamma/2)){
step <- step/2
high <- high-step*2
}
} # end repeat
int <- c(rho.hat=rho.mle,low=low,high=high)
if(!pf)
names(int) <- c("RR", "LL", "UL")
else{
int <- 1 - int[c(1,3,2)]
names(int) <- c("PF", "LL", "UL")
}
return(rr1$new(estimate = int, estimator = ifelse(pf, 'PF', 'RR'), y = as.matrix(y), rnd = rnd, alpha = alpha))
# out <- list(estimate = int, estimator = ifelse(pf, 'PF', 'RR'), y = y, rnd = rnd, alpha = alpha)
# class(out) <- 'rr1'
# return(out)
}
#-------------------------------------------------------
# Asymptotic score interval
#-------------------------------------------------------
##
#' Internal function.
#'
#' @usage .rr.score.asymp(y)
#' @param y data
#' @export
#' @examples
#' # none
.rr.score.asymp <- function(y, alpha = 0.05, iter.max = 18., converge = 0.0001, mn=F){
# asymptotic score interval
# code taken from RRsc()
# choice of either Koopman (mn=F)
# or Miettinen-Nurminen (mn=T)
# Data entry y=c(x2,n2,x1,n1) Vaccinates First
u.p <- function(p1, p2, n1, n2)
(1. - p1)/(n1 * p1) + (1. - p2)/(n2 * p2)
z.phi <- function(phi, x1, x2, n1, n2, u.p, root, za, MN = F) {
if(MN)
mn <- sqrt((n1 + n2 - 1.)/(n1 + n2))
else mn <- 1.
p2 <- root(x1, x2, n1, n2, phi)
p1 <- p2 * phi
u <- u.p(p1, p2, n1, n2)
z <- ((x1 - n1 * p1)/(1. - p1)) * sqrt(u) * mn
return(z)
}
root <- function(x1, x2, n1, n2, phi){
a <- phi * (n1 + n2)
b <- - (phi * (x2 + n1) + x1 + n2)
cc <- x1 + x2
det <- sqrt(b^2. - 4. * a * cc)
rt <- ( - b - det)/(2. * a)
return(rt)
}
al2 <- alpha/2.
z.al2 <- qnorm(al2)
z.ah2 <- qnorm(1. - al2)
zv <- c(z.al2, z.ah2)
x1 <- y[1.]
n1 <- y[2.]
x2 <- y[3.]
n2 <- y[4.]
p1 <- x1/n1
p2 <- x2/n2
phi.mle <- p1/p2
# 0.5 log method
p1 <- (x1 + 0.5)/(n1 + 0.5)
p2 <- (x2 + 0.5)/(n2 + 0.5)
phi <- p1/p2
v <- sqrt(u.p(p1, p2, (n1 + 0.5), (n2 + 0.5)))
starting <- exp(v * zv + logb(phi))
# Score method
score <- rep(0., length(zv))
for(k in 1.:length(zv)) {
if(k == 1. & x1 == 0.)
score[k] <- 0.
else
if(k == 2. & x2 == 0.) score[k] <- Inf else {
phi <- c(starting[k], 0.9 * starting[k])
za <- -zv[k]
zz <- c(z.phi(phi[1.], x1, x2, n1, n2, u.p, root, za, mn), z.phi(phi[2.], x1, x2, n1, n2, u.p, root, za, mn))
if(abs(za - zz[1.]) > abs(za - zz[2.]))
phi <- rev(phi)
phi.new <- phi[1.]
phi.old <- phi[2.]
# cat("\n\n")
iter <- 0.
repeat {
iter <- iter + 1.
if(iter > iter.max)
break
z.new <- z.phi(phi.new, x1, x2, n1, n2, u.p, root, za, mn)
# cat("iteration", iter, " z", z.new, "phi", phi.new, "\n")
if(abs(za - z.new) < converge)
break
z.old <- z.phi(phi.old, x1, x2, n1, n2, u.p, root, za, mn)
phi <- exp(logb(phi.old) + logb(phi.new/phi.old) * ((za - z.old)/(z.new - z.old)))
phi.old <- phi.new
phi.new <- phi
}
score[k] <- phi.new
}
}
int <- c(phi.mle,score)
# cat("\n\n")
names(int) <- c('point','LL','UL')
return(int)
}
#------------------------------------------------------
# End asymptotic score method
#------------------------------------------------------
# .rr.score.asymp(c(0,18,16,19),mn=F)
# .rr.score.asymp(c(0,18,16,19),mn=T)
| /R/RRtosst.r | no_license | tmrealphd/PF | R | false | false | 10,694 | r | #' @title RR exact CI, TOSST method.
#' @description Estimates confidence interval for the risk ratio or prevented fraction; exact method based on the
#' score statistic (inverts two one-sided tests).
#' @details Estimates confidence intervals based on the score statistic that are 'exact' in the sense of accounting for discreteness.
#' Inverts two one-sided score tests. The score statistic is used to select tail area tables, and the binomial probability is estimated
#' over the tail area by taking the maximum over the nuisance parameter. Algorithm is a simple step search.
#' \cr \cr The data may also be a matrix. In that case \code{y} would be entered as \code{matrix(c(y1, n1-y1, y2, n2-y2), 2, 2, byrow = TRUE)}.
#' @param y Data vector c(y1, n1, y2, n2) where y are the positives, n are the total, and group 1 is compared to group 2.
#' @param alpha Complement of the confidence level.
#' @param pf Estimate \emph{RR} or its complement \emph{PF}?
#' @param trace.it Verbose tracking of the iterations?
#' @param iter.max Maximum number of iterations
#' @param converge Convergence criterion
#' @param rnd Number of digits for rounding. Affects display only, not estimates.
#' @param stepstart starting interval for step search
#' @param nuisance.points number of points over which to evaluate nuisance parameter
#' @param gamma parameter for Berger-Boos correction (restricts range of nuisance parameter evaluation)
#' @return A \code{\link{rr1}} object with the following fields.
#' \item{estimate}{vector with point and interval estimate}
#' \item{estimator}{either \code{"PF"} or \code{"RR"}}
#' \item{y}{data vector}
#' \item{rnd}{how many digits to round the display}
#' \item{alpha}{complement of confidence level}
#' @export
#' @references Koopman PAR, 1984. Confidence intervals for the ratio of two binomial proportions. \emph{Biometrics} 40:513-517.
#' \cr Agresti A, Min Y, 2001. On small-sample confidence intervals for parameters in discrete distribution. \emph{Biometrics} 57: 963-971.
#' \cr Berger RL, Boos DD, 1994. P values maximized over a confidence set for the nuisance parameter. \emph{Journal of the American Statistical Association} 89:214-220.
#' @author David Siev \email{david.siev@@aphis.usda.gov}
#' @note Level tested: Moderate.
#' @seealso \code{\link{RRotsst}, \link{rr1}}
#'
#' @examples
#' \dontrun{RRtosst(c(4, 24, 12, 28))
#'
#' # PF
#' # 95% interval estimates
#'
#' # PF LL UL
#' # 0.611 0.012 0.902 }
##--------------------------------------------------------------------
## RRtosst function
##--------------------------------------------------------------------
RRtosst <- function(y, alpha = 0.05, pf = TRUE, stepstart=.1, iter.max = 36, converge = 1e-6, rnd = 3, trace.it=FALSE, nuisance.points=120, gamma=1e-6){
# Estimates exact confidence interval by the TOSST method
# Score statistic used to select tail area tables
# Binomial probability estimated over the tail area
# by taking the maximum over the nuisance parameter
# Written 9/17/07 by Siev
# Functions called by RRtosst():
# .rr.score.asymp - gets asymptotic interval for starting value of upper bound
# found in this file below
# (if want to eliminate calling this function would have to search down from r.max)
# binci - gets Clopper-Pearson intervals for Berger-Boos method
# included here now, but may be moved to another package
binci <- function(y,n,alpha=.05,show.warnings=F){
w <- 1*show.warnings - 1
options(warn=w)
p <- y/n
cpl <- ifelse(y>0,qbeta(alpha/2,y,n-y+1),0)
cpu <- ifelse(y<n,qbeta(1-alpha/2,y+1,n-y),1)
out <- cbind(y,n,p,cpl,cpu)
dimnames(out) <- list(names(y), c('y','n','p.hat','cp low','cp high'))
options(warn=0)
return(out)
}
# Data entry y=c(x2,n2,x1,n1) Vaccinates First (order same but subscripts reversed)
# data vector
if(is.matrix(y))
y <- c(t(cbind(y[,1],apply(y,1,sum))))
# NOTE: the subscripts are reversed compared to the other functions
x2 <- y[1]
n2 <- y[2]
x1 <- y[3]
n1 <- y[4]
p1 <- x1/n1
p2 <- x2/n2
rho.mle <- p2/p1
# itemize all possible tables in omega (17.26)
Y <- data.frame(y1=rep(0:n1,(n2+1)),y2=rep(0:n2,rep(n1+1,n2+1)))
observed <- (1:nrow(Y))[Y[,1]==x1 & Y[,2]==x2]
Y$C <- choose(n1,Y$y1)*choose(n2,Y$y2)
# score statistic - with pi.tilde by quadratic formula
scst <- function(rho,y1,n1,y2,n2){
pih1 <- y1/n1 # unrestricted MLE of current data
pih2 <- y2/n2
if(y1==0 & y2==0) sc <- 0
else if(y2==n2) sc <- 0
else{
A <- rho*(n1+n2)
B <- -(rho*(y1+n2) + y2 + n1)
C <- y1+y2
pit1 <- (-B-sqrt(B^2-4*A*C))/(2*A)
pit2 <- rho*pit1
sc <- (pih2-rho*pih1)/sqrt(rho^2*pit1*(1-pit1)/n1 + pit2*(1-pit2)/n2)
}
return(sc)
}
# get Clopper-Pearson intervals for Berger-Boos method
cp <- binci(c(x1,x2),c(n1,n2),alpha=gamma)[,c('cp low','cp high')]
L1 <- cp[1,1]
U1 <- cp[1,2]
L2 <- cp[2,1]
U2 <- cp[2,2]
r.min <- L2/U1
r.max <- U2/L1
if(rho.mle==0)
low <- 0
else{
# search for lower endpoint
iter <- 0
step <- stepstart
low <- max(0.0001,r.min) # start above 0 (for quadratic formula)
repeat{
iter <- iter + 1
if(iter > iter.max)
break
if(iter>1){
old.low <- low
low <- low+step
}
scst.y <- rep(NA,nrow(Y))
for(i in 1:length(scst.y))
scst.y[i] <- scst(low,Y$y1[i],n1,Y$y2[i],n2)
q.set <- Y[scst.y>=scst.y[observed],]
q.set$n1y1 <- n1-q.set$y1
q.set$n2y2 <- n2-q.set$y2
if(gamma > 0) pn <- seq(max(L1,L2/low),min(U1,U2/low),length=nuisance.points) # Berger-Boos method 17.164
else pn <- seq(0,min(1/low,1),length=nuisance.points) # simple method 17.138
if(sum(pn>1)>0){
cat('\nIteration', iter, 'nuisance parameter outside parameter space\n')
next
}
fy <- rep(NA,nuisance.points)
for(i in 1:nuisance.points){
pni <- pn[i]
fy[i] <- sum(q.set$C * pni^q.set$y1 * (1-pni)^q.set$n1y1 * (low*pni)^q.set$y2 * (1-low*pni)^q.set$n2y2)
}
max.fy <- max(fy)
if(trace.it) cat('\nIteration',iter,'rho.low',low,'tail',max.fy,'\n')
if(abs(max.fy-(alpha/2 - gamma/2)) < converge)
break
if(max.fy > (alpha/2 - gamma/2)){
step <- step/2
low <- low-step*2
}
} # end repeat
} # end else
# search for upper endpoint upward from just below asymptotic
# rather than downward from r.max
# get asymptotic interval for starting
ci.asymp <- .rr.score.asymp(c(x2,n2,x1,n1)) # koopman version (slightly narrower interval than mn)
high <- ci.asymp[3]*.9
iter <- 0
step <- stepstart
repeat{
iter <- iter + 1
if(iter > iter.max)
break
if(iter>1){
old.high <- high
high <- high+step
}
scst.y <- rep(NA,nrow(Y))
for(i in 1:length(scst.y))
scst.y[i] <- scst(high,Y$y1[i],n1,Y$y2[i],n2)
p.set <- Y[scst.y<=scst.y[observed],]
p.set$n1y1 <- n1-p.set$y1
p.set$n2y2 <- n2-p.set$y2
if(gamma > 0) pn <- seq(max(L1,L2/high),min(U1,U2/high),length=nuisance.points) # Berger-Boos method 17.164
else pn <- seq(0,min(1/high,1),length=nuisance.points) # simple method 17.138
if(sum(pn>1)>0){
cat('\nIteration', iter, 'nuisance parameter outside parameter space\n')
next
}
fy <- rep(NA,nuisance.points)
for(i in 1:nuisance.points){
pni <- pn[i]
fy[i] <- sum(p.set$C * pni^p.set$y1 * (1-pni)^p.set$n1y1 * (high*pni)^p.set$y2 * (1-high*pni)^p.set$n2y2)
}
max.fy <- max(fy)
if(trace.it) cat('\nIteration',iter,'rho.high',high,'tail',max.fy,'\n')
if(abs(max.fy-(alpha/2 - gamma/2)) < converge)
break
if(max.fy < (alpha/2 - gamma/2)){
step <- step/2
high <- high-step*2
}
} # end repeat
int <- c(rho.hat=rho.mle,low=low,high=high)
if(!pf)
names(int) <- c("RR", "LL", "UL")
else{
int <- 1 - int[c(1,3,2)]
names(int) <- c("PF", "LL", "UL")
}
return(rr1$new(estimate = int, estimator = ifelse(pf, 'PF', 'RR'), y = as.matrix(y), rnd = rnd, alpha = alpha))
# out <- list(estimate = int, estimator = ifelse(pf, 'PF', 'RR'), y = y, rnd = rnd, alpha = alpha)
# class(out) <- 'rr1'
# return(out)
}
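The restricted MLE `pi.tilde` inside `scst()` comes from solving a quadratic in the success probability under H0: p2 = rho * p1. A standalone sketch of that step in base R, using hypothetical counts (this is illustration, not package code):

```r
# Restricted MLE of p1 under H0: p2 = rho * p1, using the same quadratic
# coefficients as scst() above. The counts below are hypothetical.
rho <- 0.5; y1 <- 12; n1 <- 28; y2 <- 4; n2 <- 24
A <- rho * (n1 + n2)
B <- -(rho * (y1 + n2) + y2 + n1)
C <- y1 + y2
pit1 <- (-B - sqrt(B^2 - 4 * A * C)) / (2 * A)  # restricted MLE of p1
pit2 <- rho * pit1                              # implied restricted p2
resid <- A * pit1^2 + B * pit1 + C              # ~0 at the root
```

The smaller quadratic root is taken so that `pit1` lands inside (0, 1); the larger root falls outside the parameter space.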
#-------------------------------------------------------
# Asymptotic score interval
#-------------------------------------------------------
##
#' Internal function.
#'
#' @usage .rr.score.asymp(y)
#' @param y data
#' @export
#' @examples
#' # none
.rr.score.asymp <- function(y, alpha = 0.05, iter.max = 18., converge = 0.0001, mn=F){
# asymptotic score interval
# code taken from RRsc()
# choice of either Koopman (mn=F)
# or Miettinen-Nurminen (mn=T)
# Data entry y=c(x2,n2,x1,n1) Vaccinates First
u.p <- function(p1, p2, n1, n2)
(1. - p1)/(n1 * p1) + (1. - p2)/(n2 * p2)
z.phi <- function(phi, x1, x2, n1, n2, u.p, root, za, MN = F) {
if(MN)
mn <- sqrt((n1 + n2 - 1.)/(n1 + n2))
else mn <- 1.
p2 <- root(x1, x2, n1, n2, phi)
p1 <- p2 * phi
u <- u.p(p1, p2, n1, n2)
z <- ((x1 - n1 * p1)/(1. - p1)) * sqrt(u) * mn
return(z)
}
root <- function(x1, x2, n1, n2, phi){
a <- phi * (n1 + n2)
b <- - (phi * (x2 + n1) + x1 + n2)
cc <- x1 + x2
det <- sqrt(b^2. - 4. * a * cc)
rt <- ( - b - det)/(2. * a)
return(rt)
}
al2 <- alpha/2.
z.al2 <- qnorm(al2)
z.ah2 <- qnorm(1. - al2)
zv <- c(z.al2, z.ah2)
x1 <- y[1.]
n1 <- y[2.]
x2 <- y[3.]
n2 <- y[4.]
p1 <- x1/n1
p2 <- x2/n2
phi.mle <- p1/p2
# 0.5 log method
p1 <- (x1 + 0.5)/(n1 + 0.5)
p2 <- (x2 + 0.5)/(n2 + 0.5)
phi <- p1/p2
v <- sqrt(u.p(p1, p2, (n1 + 0.5), (n2 + 0.5)))
starting <- exp(v * zv + logb(phi))
# Score method
score <- rep(0., length(zv))
for(k in 1.:length(zv)) {
if(k == 1. & x1 == 0.)
score[k] <- 0.
else
if(k == 2. & x2 == 0.) score[k] <- Inf else {
phi <- c(starting[k], 0.9 * starting[k])
za <- -zv[k]
zz <- c(z.phi(phi[1.], x1, x2, n1, n2, u.p, root, za, mn), z.phi(phi[2.], x1, x2, n1, n2, u.p, root, za, mn))
if(abs(za - zz[1.]) > abs(za - zz[2.]))
phi <- rev(phi)
phi.new <- phi[1.]
phi.old <- phi[2.]
# cat("\n\n")
iter <- 0.
repeat {
iter <- iter + 1.
if(iter > iter.max)
break
z.new <- z.phi(phi.new, x1, x2, n1, n2, u.p, root, za, mn)
# cat("iteration", iter, " z", z.new, "phi", phi.new, "\n")
if(abs(za - z.new) < converge)
break
z.old <- z.phi(phi.old, x1, x2, n1, n2, u.p, root, za, mn)
phi <- exp(logb(phi.old) + logb(phi.new/phi.old) * ((za - z.old)/(z.new - z.old)))
phi.old <- phi.new
phi.new <- phi
}
score[k] <- phi.new
}
}
int <- c(phi.mle,score)
# cat("\n\n")
names(int) <- c('point','LL','UL')
return(int)
}
#------------------------------------------------------
# End asymptotic score method
#------------------------------------------------------
# .rr.score.asymp(c(0,18,16,19),mn=F)
# .rr.score.asymp(c(0,18,16,19),mn=T)
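The internal `binci()` helper used by `RRtosst()` obtains Clopper-Pearson limits from beta quantiles; a minimal standalone version of that calculation, with hypothetical counts:

```r
# Clopper-Pearson (exact) binomial interval via qbeta(), mirroring the
# binci() helper above; y and n here are hypothetical counts.
y <- 4; n <- 28; alpha <- 0.05
p.hat <- y / n
lower <- if (y > 0) qbeta(alpha / 2, y, n - y + 1) else 0
upper <- if (y < n) qbeta(1 - alpha / 2, y + 1, n - y) else 1
```

For 0 < y < n the resulting interval strictly contains the point estimate y/n.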
|
###############################################################
# 2015 csv file for january to november
###############################################################
file <- read_csv("inst/extdata/jan-nov2015.csv", skip = 5,
col_names = c("Date", "Time",
"PM2.5_Delhi", "PM2.5_Chennai",
"PM2.5_Kolkata", "PM2.5_Mumbai",
"PM2.5_Hyderabad"),
col_types = "ccnnnnn",
na = c("", "NA", "NoData", "-999",
"---", "InVld", "PwrFail"))
which24 <- which(grepl("24:00 AM", file$Date))
file <- file %>%
mutate(Time = ifelse(grepl("24:00 AM", Date),
"12:00 AM", Time)) %>%
mutate(Date = dmy(gsub(" 24:00 AM", "", Date)))
file$Date[which24] <- file$Date[which24] + days(1)
file <- file %>%
filter(!is.na(Date)) %>%
mutate(datetime = paste(as.character(Date), Time)) %>%
mutate(datetime = parse_date_time(datetime,
"%Y-%m-%d I:M p",
tz = "Asia/Kolkata")) %>%
select(datetime, everything()) %>%
select(- Date, - Time)
data_us <- bind_rows(data_us, file)
###############################################################
# 2015 file for december is a pdf, yikes
###############################################################
# f <- "inst/extdata/jan-dec_2015.pdf"
# out1 <- extract_tables(f)
# save(out1, file = "inst/extdata/us_data.RData")
load("inst/extdata/us_data.RData")
# apply said functions
all_pm <- bind_rows(lapply(out1, transform_tableau))
names(all_pm) <- "name"
# now separate
all_pm <- all_pm %>% separate(col = name,
sep = ",",
into = c("PM2.5_Chennai",
"PM2.5_Kolkata",
"PM2.5_Hyderabad",
"PM2.5_Mumbai",
"PM2.5_Delhi"))
all_pm <- all_pm %>% mutate(PM2.5_Chennai = as.numeric(PM2.5_Chennai),
PM2.5_Kolkata = as.numeric(PM2.5_Kolkata),
PM2.5_Hyderabad = as.numeric(PM2.5_Hyderabad),
PM2.5_Mumbai = as.numeric(PM2.5_Mumbai),
PM2.5_Delhi = as.numeric(PM2.5_Delhi))
# now find the corresponding times
# not that easy since part of the pdf is unreadable
times_us <- bind_rows(lapply(out1, horaire_tableau))
names(times_us) <- "datetime"
times_us <- times_us %>%
mutate(datetime = as.POSIXct(datetime, origin = "1970-01-01", tz = "Asia/Kolkata"))
all_pm <- cbind(all_pm, times_us)
names(all_pm)[6] <- "datetime"
first_interesting <- max(which(month(all_pm$datetime) == 11))+2
all_pm <- all_pm[first_interesting:nrow(all_pm),]
all_pm$datetime <- seq(from = ymd_hms("2015-12-01 01:00:00", tz = "Asia/Kolkata"),
to = ymd_hms("2016-01-01 00:00:00", tz = "Asia/Kolkata"),
by = "1 hour")
data_us <- bind_rows(data_us, all_pm)
rm(out1)
rm(all_pm)
| /inst/pm25_consulate_2015.R | no_license | dshen1/usaqmindia | R | false | false | 3,101 | r | ###############################################################
# 2015 csv file for january to november
###############################################################
file <- read_csv("inst/extdata/jan-nov2015.csv", skip = 5,
col_names = c("Date", "Time",
"PM2.5_Delhi", "PM2.5_Chennai",
"PM2.5_Kolkata", "PM2.5_Mumbai",
"PM2.5_Hyderabad"),
col_types = "ccnnnnn",
na = c("", "NA", "NoData", "-999",
"---", "InVld", "PwrFail"))
which24 <- which(grepl("24:00 AM", file$Date))
file <- file %>%
mutate(Time = ifelse(grepl("24:00 AM", Date),
"12:00 AM", Time)) %>%
mutate(Date = dmy(gsub(" 24:00 AM", "", Date)))
file$Date[which24] <- file$Date[which24] + days(1)
file <- file %>%
filter(!is.na(Date)) %>%
mutate(datetime = paste(as.character(Date), Time)) %>%
mutate(datetime = parse_date_time(datetime,
"%Y-%m-%d I:M p",
tz = "Asia/Kolkata")) %>%
select(datetime, everything()) %>%
select(- Date, - Time)
data_us <- bind_rows(data_us, file)
###############################################################
# 2015 file for december is a pdf, yikes
###############################################################
# f <- "inst/extdata/jan-dec_2015.pdf"
# out1 <- extract_tables(f)
# save(out1, file = "inst/extdata/us_data.RData")
load("inst/extdata/us_data.RData")
# apply said functions
all_pm <- bind_rows(lapply(out1, transform_tableau))
names(all_pm) <- "name"
# now separate
all_pm <- all_pm %>% separate(col = name,
sep = ",",
into = c("PM2.5_Chennai",
"PM2.5_Kolkata",
"PM2.5_Hyderabad",
"PM2.5_Mumbai",
"PM2.5_Delhi"))
all_pm <- all_pm %>% mutate(PM2.5_Chennai = as.numeric(PM2.5_Chennai),
PM2.5_Kolkata = as.numeric(PM2.5_Kolkata),
PM2.5_Hyderabad = as.numeric(PM2.5_Hyderabad),
PM2.5_Mumbai = as.numeric(PM2.5_Mumbai),
PM2.5_Delhi = as.numeric(PM2.5_Delhi))
# now find the corresponding times
# not that easy since part of the pdf is unreadable
times_us <- bind_rows(lapply(out1, horaire_tableau))
names(times_us) <- "datetime"
times_us <- times_us %>%
mutate(datetime = as.POSIXct(datetime, origin = "1970-01-01", tz = "Asia/Kolkata"))
all_pm <- cbind(all_pm, times_us)
names(all_pm)[6] <- "datetime"
first_interesting <- max(which(month(all_pm$datetime) == 11))+2
all_pm <- all_pm[first_interesting:nrow(all_pm),]
all_pm$datetime <- seq(from = ymd_hms("2015-12-01 01:00:00", tz = "Asia/Kolkata"),
to = ymd_hms("2016-01-01 00:00:00", tz = "Asia/Kolkata"),
by = "1 hour")
data_us <- bind_rows(data_us, all_pm)
rm(out1)
rm(all_pm)
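The `24:00 AM` block above relies on dplyr/lubridate; the same rollover logic can be sketched in base R, on hypothetical rows with the day-month-year layout assumed by `dmy()`:

```r
# A "24:00 AM" timestamp means midnight of the *following* day, so the
# time string is rewritten and one day is added to the date.
raw  <- c("01-02-2015 24:00 AM", "01-02-2015 01:00 AM")  # hypothetical rows
is24 <- grepl("24:00 AM", raw)
time <- ifelse(is24, "12:00 AM", sub("^\\S+ ", "", raw))
date <- as.Date(sub(" .*$", "", raw), format = "%d-%m-%Y")
date[is24] <- date[is24] + 1   # roll midnight over to the next day
```

Here the first row parses to 2015-02-02 00:00, matching what the pipeline above does with `days(1)`.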
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/play_by_play.R
\name{win_probability}
\alias{win_probability}
\title{NBA games win probabilities}
\usage{
win_probability(
game_ids = c(21700002, 21700003),
nest_data = FALSE,
filter_non_plays = FALSE,
return_message = TRUE
)
}
\arguments{
\item{game_ids}{vector of game ids}
\item{nest_data}{if \code{TRUE} nests data}
\item{filter_non_plays}{if \code{TRUE} filters out non plays}
\item{return_message}{if \code{TRUE}, returns a message}
}
\value{
a \code{tibble}
}
\description{
Gets nba in-game win probabilities
for specified game ids
}
\examples{
win_probability(game_ids = c(21700002, 21700005), filter_non_plays = T,
nest_data = FALSE,
return_message = TRUE)
}
\seealso{
Other game:
\code{\link{box_scores}()},
\code{\link{fanduel_summary}()},
\code{\link{game_logs}()}
Other season:
\code{\link{fanduel_summary}()}
}
\concept{game}
\concept{season}
| /man/win_probability.Rd | no_license | abresler/nbastatR | R | false | true | 955 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/play_by_play.R
\name{win_probability}
\alias{win_probability}
\title{NBA games win probabilities}
\usage{
win_probability(
game_ids = c(21700002, 21700003),
nest_data = FALSE,
filter_non_plays = FALSE,
return_message = TRUE
)
}
\arguments{
\item{game_ids}{vector of game ids}
\item{nest_data}{if \code{TRUE} nests data}
\item{filter_non_plays}{if \code{TRUE} filters out non plays}
\item{return_message}{if \code{TRUE}, returns a message}
}
\value{
a \code{tibble}
}
\description{
Gets nba in-game win probabilities
for specified game ids
}
\examples{
win_probability(game_ids = c(21700002, 21700005), filter_non_plays = T,
nest_data = FALSE,
return_message = TRUE)
}
\seealso{
Other game:
\code{\link{box_scores}()},
\code{\link{fanduel_summary}()},
\code{\link{game_logs}()}
Other season:
\code{\link{fanduel_summary}()}
}
\concept{game}
\concept{season}
|
args = commandArgs(trailingOnly=T)
#args = c("/well/ukbiobank/expt/V2_QCed.SNP-QC/src/V2_QCed.snpqc-tests.R","/well/ukbiobank/expt/V2_QCed.SNP-QC/src/V2_QCed.bin2clusterplots.R","/well/ukbiobank/qcoutput.V2_QCed.sample-QC-testing/QC-Scripts/R/scripts/readPSperformance.R","/well/ukbiobank/qcoutput.V2_QCed.sample-QC-testing/QC-Scripts/R/scripts/auxFunctions.R",
#"-in","b1__b11-b001__b095-autosome-sampleqc-fastpca-highquality-init1","-threshold","0.003")
print(args)
h = args[-c(which(args%in%c("-in","-threshold")),1+which(args%in%c("-in","-threshold")))]
for(helperScript in h){
source(helperScript)
}
OutputFile = args[which(args=="-in")+1]
threshold = as.numeric(args[which(args=="-threshold")+1])
SnpLoads = paste0(baseSampleQCDir,'/data/PCA/',OutputFile,'.snpload.map')
loads = dplyr::tbl_df(read.table(SnpLoads,header=F,stringsAsFactors=F))
# look at the first 5 PCs
# standard-deviations are very similar, 0.00298768
# what about the distributions?
system(paste0('mkdir plots'))
for(pc in 1:5){
png(paste0('plots/',OutputFile,'-PC',pc,'-loads-hist-%02d.png'),width=1500,height=1500,res=150)
hist(loads[[pc + 7]],breaks=1000,xlab=paste0('PC ',pc),main=paste0("SNP-loads PC ", pc,"\nVertical lines shown at ",-threshold," and ",threshold))
# abline(v=mean(loads[[pc + 7]]),col="red")
abline(v=c(-threshold,threshold),col="red")
hist(abs(loads[[pc + 7]]),breaks=1000,xlab=paste0('PC ',pc),main=paste0("Absolute value SNP-loads PC ", pc))
# abline(v=mean(loads[[pc + 7]]),col="red")
abline(v=c(threshold),col="red")
dev.off()
}
# select a set of SNPs
for(threshold in c(seq(0.003,0.004,by=0.0005),seq(0.005,0.01,by=0.001)) ){
print(threshold)
pc1 = abs(loads[[8]]) > threshold
pc2 = abs(loads[[9]]) > threshold
pc3 = abs(loads[[10]]) > threshold
pc4 = abs(loads[[11]]) > threshold
hi = pc1 + pc2 + pc3
exc = sum(hi>0) # extreme in at least one
inc = sum(hi==0)
print(paste0(exc," SNPs will be in the exclusion list. That leaves ",inc," remaining, out of ",nrow(loads)))
# 1-4: 34,364 SNPs left.
# 1-3: 50,959 SNPs left. ~ 40 %
# 1-2: 63,835 SNPs left.
# 1: 81,370 SNPs left.
## NOTE: all are left with around 80,000 SNPs using 0.003 threshold
# exclude remaining SNPs, plus SNPs in LD
excludeSNPs = loads$V2[hi>0]
keepSNPs = loads$V2[hi==0]
# save list of SNPs
write.table(excludeSNPs,file=paste0(OutputFile,"-snpsToExcludePCs-",threshold,".txt"),quote=FALSE,row.names=FALSE,col.names=FALSE)
write.table(keepSNPs,file=paste0(OutputFile,"-snpsToKeepPCs-",threshold,".txt"),quote=FALSE,row.names=FALSE,col.names=FALSE)
}
write.table(seq(0.003,0.01,by=0.001),paste0(OutputFile,"-snpsToKeepPCs-thresholds.txt"),quote=FALSE,row.names=FALSE,col.names=FALSE)
| /QC-Scripts/PCA/pca-UKBio/filter-snps-for-king.R | no_license | cgbycroft/UK_biobank | R | false | false | 3,016 | r |
args = commandArgs(trailingOnly=T)
#args = c("/well/ukbiobank/expt/V2_QCed.SNP-QC/src/V2_QCed.snpqc-tests.R","/well/ukbiobank/expt/V2_QCed.SNP-QC/src/V2_QCed.bin2clusterplots.R","/well/ukbiobank/qcoutput.V2_QCed.sample-QC-testing/QC-Scripts/R/scripts/readPSperformance.R","/well/ukbiobank/qcoutput.V2_QCed.sample-QC-testing/QC-Scripts/R/scripts/auxFunctions.R",
#"-in","b1__b11-b001__b095-autosome-sampleqc-fastpca-highquality-init1","-threshold","0.003")
print(args)
h = args[-c(which(args%in%c("-in","-threshold")),1+which(args%in%c("-in","-threshold")))]
for(helperScript in h){
source(helperScript)
}
OutputFile = args[which(args=="-in")+1]
threshold = as.numeric(args[which(args=="-threshold")+1])
SnpLoads = paste0(baseSampleQCDir,'/data/PCA/',OutputFile,'.snpload.map')
loads = dplyr::tbl_df(read.table(SnpLoads,header=F,stringsAsFactors=F))
# look at the first 5 PCs
# standard-deviations are very similar, 0.00298768
# what about the distributions?
system(paste0('mkdir plots'))
for(pc in 1:5){
png(paste0('plots/',OutputFile,'-PC',pc,'-loads-hist-%02d.png'),width=1500,height=1500,res=150)
hist(loads[[pc + 7]],breaks=1000,xlab=paste0('PC ',pc),main=paste0("SNP-loads PC ", pc,"\nVertical lines shown at ",-threshold," and ",threshold))
# abline(v=mean(loads[[pc + 7]]),col="red")
abline(v=c(-threshold,threshold),col="red")
hist(abs(loads[[pc + 7]]),breaks=1000,xlab=paste0('PC ',pc),main=paste0("Absolute value SNP-loads PC ", pc))
# abline(v=mean(loads[[pc + 7]]),col="red")
abline(v=c(threshold),col="red")
dev.off()
}
# select a set of SNPs
for(threshold in c(seq(0.003,0.004,by=0.0005),seq(0.005,0.01,by=0.001)) ){
print(threshold)
pc1 = abs(loads[[8]]) > threshold
pc2 = abs(loads[[9]]) > threshold
pc3 = abs(loads[[10]]) > threshold
pc4 = abs(loads[[11]]) > threshold
hi = pc1 + pc2 + pc3
exc = sum(hi>0) # extreme in at least one
inc = sum(hi==0)
print(paste0(exc," SNPs will be in the exclusion list. That leaves ",inc," remaining, out of ",nrow(loads)))
# 1-4: 34,364 SNPs left.
# 1-3: 50,959 SNPs left. ~ 40 %
# 1-2: 63,835 SNPs left.
# 1: 81,370 SNPs left.
## NOTE: all are left with around 80,000 SNPs using 0.003 threshold
# exclude remaining SNPs, plus SNPs in LD
excludeSNPs = loads$V2[hi>0]
keepSNPs = loads$V2[hi==0]
# save list of SNPs
write.table(excludeSNPs,file=paste0(OutputFile,"-snpsToExcludePCs-",threshold,".txt"),quote=FALSE,row.names=FALSE,col.names=FALSE)
write.table(keepSNPs,file=paste0(OutputFile,"-snpsToKeepPCs-",threshold,".txt"),quote=FALSE,row.names=FALSE,col.names=FALSE)
}
write.table(seq(0.003,0.01,by=0.001),paste0(OutputFile,"-snpsToKeepPCs-thresholds.txt"),quote=FALSE,row.names=FALSE,col.names=FALSE)
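The exclusion rule above (flag a SNP whose absolute loading exceeds the threshold on any of PCs 1-3) can be sketched on simulated loadings; the values below are simulated for illustration, not real PC loadings:

```r
# A SNP is excluded if |loading| > threshold on at least one of the
# first three PCs; loadings here are simulated, not real data.
set.seed(1)
loads <- matrix(rnorm(300, sd = 0.003), ncol = 3,
                dimnames = list(paste0("snp", 1:100), paste0("PC", 1:3)))
threshold <- 0.003
extreme <- rowSums(abs(loads) > threshold) > 0   # extreme on >= 1 PC
excludeSNPs <- rownames(loads)[extreme]
keepSNPs    <- rownames(loads)[!extreme]
```

The two lists partition the SNPs, matching the `hi > 0` / `hi == 0` split in the script above.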
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/dims.R
\name{dims}
\alias{dims}
\title{Dimensions}
\usage{
dims(x, ...)
}
\arguments{
\item{x}{An object.}
\item{...}{Other arguments passed to methods.}
}
\value{
An integer vector of the dimensions.
}
\description{
Gets the dimensions of an object.
}
\details{
Unlike \code{base::dim()}, \code{dims()} returns the length of an atomic vector.
}
\seealso{
\code{\link[base:dim]{base::dim()}}
Other {dimensions}:
\code{\link{ndims}()},
\code{\link{npdims}()},
\code{\link{pdims}()}
}
\concept{{dimensions}}
| /man/dims.Rd | permissive | krlmlr/universals | R | false | true | 578 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/dims.R
\name{dims}
\alias{dims}
\title{Dimensions}
\usage{
dims(x, ...)
}
\arguments{
\item{x}{An object.}
\item{...}{Other arguments passed to methods.}
}
\value{
An integer vector of the dimensions.
}
\description{
Gets the dimensions of an object.
}
\details{
Unlike \code{base::dim()}, \code{dims()} returns the length of an atomic vector.
}
\seealso{
\code{\link[base:dim]{base::dim()}}
Other {dimensions}:
\code{\link{ndims}()},
\code{\link{npdims}()},
\code{\link{pdims}()}
}
\concept{{dimensions}}
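The behaviour described in Details, falling back to the vector's length where `base::dim()` would return `NULL`, can be sketched as follows (a hypothetical re-implementation, not the package source):

```r
# Hypothetical sketch of dims(): like base::dim(), but an atomic vector
# reports its length instead of NULL.
dims_sketch <- function(x) {
  d <- dim(x)
  if (is.null(d) && is.atomic(x)) length(x) else d
}
```

So `dims_sketch(1:5)` gives `5L` where `dim(1:5)` gives `NULL`, while matrices and arrays behave as with `base::dim()`.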
|
library(shiny)
library(tableHTML)
shinyApp(
ui = fluidPage(
#leave some spacing
br(),
sidebarLayout(
sidebarPanel(),
mainPanel(
sliderInput("nrow", label = "Select rows",
min = 1, max = 32, value = c(1, 5)),
tableHTML_output("mytable"))
)
),
server = function(input, output) {
output$mytable <- render_tableHTML(
tableHTML(mtcars[input$nrow[1]:input$nrow[2], ], theme = 'rshiny-blue')
)
}
)
| /shiny_example.R | no_license | clemens-zauchner/tableHTML_Vienna_R | R | false | false | 468 | r |
library(shiny)
library(tableHTML)
shinyApp(
ui = fluidPage(
#leave some spacing
br(),
sidebarLayout(
sidebarPanel(),
mainPanel(
sliderInput("nrow", label = "Select rows",
min = 1, max = 32, value = c(1, 5)),
tableHTML_output("mytable"))
)
),
server = function(input, output) {
output$mytable <- render_tableHTML(
tableHTML(mtcars[input$nrow[1]:input$nrow[2], ], theme = 'rshiny-blue')
)
}
)
|
\name{DiffMat_forward}
\alias{DiffMat_forward}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Diffusion matrix building
}
\description{
Internal function that builds the discretized diffusion matrix of the FPK process going forward in time (for simulations)
}
\usage{
DiffMat_forward(V)
}
\arguments{
\item{V}{
A vector giving the values of the evolutionary potential (V) at each point in the gridded trait interval.
}
}
\author{
F.C. Boucher
}
| /man/DiffMat_forward.Rd | no_license | cran/BBMV | R | false | false | 471 | rd | \name{DiffMat_forward}
\alias{DiffMat_forward}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
Diffusion matrix building
}
\description{
Internal function that builds the discretized diffusion matrix of the FPK process going forward in time (for simulations)
}
\usage{
DiffMat_forward(V)
}
\arguments{
\item{V}{
A vector giving the values of the evolutionary potential (V) at each point in the gridded trait interval.
}
}
\author{
F.C. Boucher
}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/EM_W_multi.R
\docType{package}
\name{Probabilistic-PLS}
\alias{Probabilistic-PLS}
\alias{Probabilistic-PLS-package}
\title{PPLS: Probabilistic Partial Least Squares}
\description{
This package implements the Partial Least Squares method in a probabilistic framework.
}
\section{Usage}{
Just do PPLS(Datamatrix_1,Datamatrix_2,nr_of_components) to have a quick and dirty fit, modify the default arguments if necessary.
}
\author{
Said el Bouhaddani (\email{s.el_bouhaddani@lumc.nl}),
Jeanine Houwing-Duistermaat (\email{J.J.Houwing@lumc.nl}),
Geurt Jongbloed (\email{G.Jongbloed@tudelft.nl}),
Szymon Kielbasa (\email{S.M.Kielbasa@lumc.nl}),
Hae-Won Uh (\email{H.Uh@lumc.nl}).
Maintainer: Said el Bouhaddani (\email{s.el_bouhaddani@lumc.nl}).
}
\keyword{Probabilistic-PLS}
| /Package/PPLS/man/Probabilistic-PLS.Rd | no_license | selbouhaddani/PPLS | R | false | true | 851 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/EM_W_multi.R
\docType{package}
\name{Probabilistic-PLS}
\alias{Probabilistic-PLS}
\alias{Probabilistic-PLS-package}
\title{PPLS: Probabilistic Partial Least Squares}
\description{
This package implements the Partial Least Squares method in a probabilistic framework.
}
\section{Usage}{
Just do PPLS(Datamatrix_1,Datamatrix_2,nr_of_components) to have a quick and dirty fit, modify the default arguments if necessary.
}
\author{
Said el Bouhaddani (\email{s.el_bouhaddani@lumc.nl}),
Jeanine Houwing-Duistermaat (\email{J.J.Houwing@lumc.nl}),
Geurt Jongbloed (\email{G.Jongbloed@tudelft.nl}),
Szymon Kielbasa (\email{S.M.Kielbasa@lumc.nl}),
Hae-Won Uh (\email{H.Uh@lumc.nl}).
Maintainer: Said el Bouhaddani (\email{s.el_bouhaddani@lumc.nl}).
}
\keyword{Probabilistic-PLS}
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/PlotsSpatial.R
\name{linearRatePlot}
\alias{linearRatePlot}
\title{Plot a linear rate histogram from a SpatialProperties1D object}
\usage{
linearRatePlot(sp1, n = 1, outma = c(1, 1, 1, 0), margin = c(1.5, 2, 0.8,
0.3), axis.x.pos = 0, axis.y.pos = 0, axis.y.las = 2, mgp.x = c(0.5,
0.05, 0), mgp.y = c(0.8, 0.3, 0.2), xlab = "Position (cm)",
ylab = "Rate (Hz)", xaxis.at = seq(0, 80, 20), yaxis.at = seq(0, 50,
10), ...)
}
\arguments{
\item{sp1}{SpatialProperties1d object}
\item{n}{Index of the histogram in the sp1 object that you want to plot}
\item{outma}{Outer margins of the figure}
\item{margin}{Inner margins of the figure}
\item{axis.x.pos}{position of x axis}
\item{axis.y.pos}{position of y axis}
\item{axis.y.las}{las of y axis}
\item{mgp.x}{mgp for x axis}
\item{mgp.y}{mgp for y axis}
\item{xlab}{Name to display under the x axis}
\item{ylab}{Name to display at the left of the y axis}
\item{xaxis.at}{where to put the tick marks}
\item{yaxis.at}{where to put the tick marks}
\item{...}{passed to the plot function}
}
\description{
In development
}
| /man/linearRatePlot.Rd | no_license | kevin-allen/relectro | R | false | true | 1,150 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/PlotsSpatial.R
\name{linearRatePlot}
\alias{linearRatePlot}
\title{Plot a linear rate histogram from a SpatialProperties1D object}
\usage{
linearRatePlot(sp1, n = 1, outma = c(1, 1, 1, 0), margin = c(1.5, 2, 0.8,
0.3), axis.x.pos = 0, axis.y.pos = 0, axis.y.las = 2, mgp.x = c(0.5,
0.05, 0), mgp.y = c(0.8, 0.3, 0.2), xlab = "Position (cm)",
ylab = "Rate (Hz)", xaxis.at = seq(0, 80, 20), yaxis.at = seq(0, 50,
10), ...)
}
\arguments{
\item{sp1}{SpatialProperties1d object}
\item{n}{Index of the histogram in the sp1 object that you want to plot}
\item{outma}{Outer margins of the figure}
\item{margin}{Inner margins of the figure}
\item{axis.x.pos}{position of x axis}
\item{axis.y.pos}{position of y axis}
\item{axis.y.las}{las of y axis}
\item{mgp.x}{mgp for x axis}
\item{mgp.y}{mgp for y axis}
\item{xlab}{Name to display under the x axis}
\item{ylab}{Name to display at the left of the y axis}
\item{xaxis.at}{where to put the tick marks}
\item{yaxis.at}{where to put the tick marks}
\item{...}{passed to the plot function}
}
\description{
In development
}
|
#setwd(paste0(getwd(),"/rcourse_lesson5"))
## LOAD PACKAGES ####
library(dplyr)
library(purrr)
## READ IN DATA ####
# Full data on election results
data_election_results = list.files(path = "data/elections", full.names = T) %>%
# Run read.table call on all files
map(read.table, header = T, sep = "\t") %>%
# Combine all data frames into a single data frame by row
reduce(rbind)
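# An equivalent base-R sketch of the same map/reduce pattern (illustrative
# only; assumes the same tab-delimited files as above):
# data_election_results_base <- do.call(rbind,
#   lapply(list.files(path = "data/elections", full.names = TRUE),
#          read.table, header = TRUE, sep = "\t"))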
# Read in extra data about specific elections
data_elections = read.table("data/rcourse_lesson5_data_elections.txt", header=T, sep="\t")
# Read in extra data about specific states
data_states = read.table("data/rcourse_lesson5_data_states.txt", header=T, sep="\t")
# See how many states in union versus confederacy
xtabs(~civil_war, data_states)
## CLEAN DATA ####
# Make data set balanced for Union and Confederacy states
data_states_clean = data_states %>%
# Drop any data from states that were not in the US during the Civil War
filter(!is.na(civil_war)) %>%
# Drop any data besides the first 11 states in the Union or Confederacy based on date of US entry
group_by(civil_war) %>%
arrange(order_enter) %>%
filter(row_number() <= 11) %>%
ungroup()
# Double check balanced for 'civil_war' variable
xtabs(~civil_war, data_states_clean)
# Combine three data frames
data_clean = data_election_results %>%
# Combine with election specific data
inner_join(data_elections) %>%
# Combine with state specific data
inner_join(data_states_clean) %>%
# Drop unused states
mutate(state = factor(state))
# Double check independent variables are balanced
xtabs(~incumbent_party+civil_war, data_clean)
| /rcourse_lesson5/scripts/rcourse_lesson5_cleaning.R | no_license | pcava/R4Publication | R | false | false | 1,613 | r | #setwd(paste0(getwd(),"/rcourse_lesson5"))
## LOAD PACKAGES ####
library(dplyr)
library(purrr)
## READ IN DATA ####
# Full data on election results
data_election_results = list.files(path = "data/elections", full.names = T) %>%
# Run read.table call on all files
map(read.table, header = T, sep = "\t") %>%
# Combine all data frames into a single data frame by row
reduce(rbind)
# Read in extra data about specific elections
data_elections = read.table("data/rcourse_lesson5_data_elections.txt", header=T, sep="\t")
# Read in extra data about specific states
data_states = read.table("data/rcourse_lesson5_data_states.txt", header=T, sep="\t")
# See how many states in union versus confederacy
xtabs(~civil_war, data_states)
## CLEAN DATA ####
# Make data set balanced for Union and Confederacy states
data_states_clean = data_states %>%
# Drop any data from states that were not in the US during the Civil War
filter(!is.na(civil_war)) %>%
# Drop any data besides the first 11 states in the Union or Confederacy based on date of US entry
group_by(civil_war) %>%
arrange(order_enter) %>%
filter(row_number() <= 11) %>%
ungroup()
# Double check balanced for 'civil_war' variable
xtabs(~civil_war, data_states_clean)
# Combine three data frames
data_clean = data_election_results %>%
# Combine with election specific data
inner_join(data_elections) %>%
# Combine with state specific data
inner_join(data_states_clean) %>%
# Drop unused states
mutate(state = factor(state))
# Double check independent variables are balanced
xtabs(~incumbent_party+civil_war, data_clean)
|
pgfhypergeometric <-
function(s,params) {
require(hypergeo)
k<-s[abs(s)>1]
if (length(k)>0)
warning("At least one element of the vector s is out of the interval [-1,1]")
if (length(params)<3) stop("At least one value in params is missing")
if (length(params)>3) stop("The length of params must be 3")
m<-params[1]
n<-params[2]
p<-params[3]
if (m<0)
stop("Parameter m must be positive")
if(!(abs(m-round(m))<.Machine$double.eps^0.5))
stop("Parameter m must be positive integer")
if (n<0)
stop("Parameter n must be positive")
if(!(abs(n-round(n))<.Machine$double.eps^0.5))
stop("Parameter n must be positive integer")
if ((p>=1)|(p<=0))
stop ("Parameter p must belong to the interval (0,1)")
if (m<n)
stop ("Parameter m must be greater than or equal to n")
Re(hypergeo(-n,-m*p,-m,1-s))
}
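# Usage sketch (editor's note; the values below are hypothetical, chosen so
# that m >= n and p is in (0,1)):
# pgfhypergeometric(0.5, params = c(10, 4, 0.3))
# evaluates the probability generating function at s = 0.5 via Gauss 2F1.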
| /Compounding/R/pgfhypergeometric.R | no_license | ingted/R-Examples | R | false | false | 844 | r | pgfhypergeometric <-
function(s,params) {
require(hypergeo)
k<-s[abs(s)>1]
if (length(k)>0)
warning("At least one element of the vector s is out of the interval [-1,1]")
if (length(params)<3) stop("At least one value in params is missing")
if (length(params)>3) stop("The length of params must be 3")
m<-params[1]
n<-params[2]
p<-params[3]
if (m<0)
stop("Parameter m must be positive")
if(!(abs(m-round(m))<.Machine$double.eps^0.5))
stop("Parameter m must be positive integer")
if (n<0)
stop("Parameter n must be positive")
if(!(abs(n-round(n))<.Machine$double.eps^0.5))
stop("Parameter n must be positive integer")
if ((p>=1)|(p<=0))
stop ("Parameter p must belong to the interval (0,1)")
if (m<n)
stop ("Parameter m must be greater than or equal to n")
Re(hypergeo(-n,-m*p,-m,1-s))
}
|
# Exercise 3: writing and executing functions
# Define a function `add_three` that takes a single argument and
# returns a value 3 greater than the input
add_three <- function(your_num){
return(your_num + 3)
}
add_three(3)
# Create a variable `ten` that is the result of passing 7 to your `add_three`
# function
ten <- add_three(7)
# Define a function `imperial_to_metric` that takes in two arguments: a number
# of feet and a number of inches
# The function should return the equivalent length in meters
imperial_to_metric <- function(feet, inches){
return(paste(feet*0.3048 + inches*0.0254, "meters"))
}
imperial_to_metric(5, 6)
# Create a variable `height_in_meters` by passing your height in imperial to the
# `imperial_to_metric` function
height_in_meters <- imperial_to_metric(5, 6)
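# Quick sanity checks for the two functions above (expected values worked out
# by hand): add_three(7) should give 10, and imperial_to_metric(5, 6) should
# give "1.6764 meters" (5 * 0.3048 + 6 * 0.0254 = 1.6764).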
| /chapter-06-exercises/exercise-3/exercise.R | no_license | cpz-baron/INFO201-Week-2 | R | false | false | 806 | r | # Exercise 3: writing and executing functions
# Define a function `add_three` that takes a single argument and
# returns a value 3 greater than the input
add_three <- function(your_num){
return(your_num + 3)
}
add_three(3)
# Create a variable `ten` that is the result of passing 7 to your `add_three`
# function
ten <- add_three(7)
# Define a function `imperial_to_metric` that takes in two arguments: a number
# of feet and a number of inches
# The function should return the equivalent length in meters
imperial_to_metric <- function(feet, inches){
return(paste(feet*0.3048 + inches*0.0254, "meters"))
}
imperial_to_metric(5, 6)
# Create a variable `height_in_meters` by passing your height in imperial to the
# `imperial_to_metric` function
height_in_meters <- imperial_to_metric(5, 6)
|
# Compare model implementation in manuscript with alternate model implementations
library("plyr")
library("magrittr")
library("tidyr")
library("xtable")
library("ggplot2")
library("dplyr")
library("RColorBrewer")
library(cowplot)
# Models to compare ------------------------------------------------------------
models <- c("MK_F_RNminus","MK_P_RNminus","MK_U_RNminus","MK_FM_RNminus",
"MK_F_RNplus","MK_P_RNplus","MK_U_RNplus","MK_FM_RNplus",
"MK_P2_RNminus","MK_U2_RNminus","MK_FM2_RNminus",
"MK_P2_RNplus","MK_U2_RNplus","MK_FM2_RNplus")
# Load Fits --------------------------------------------------------------------
FittedFull <- readRDS("Fits/FitFull.rds") %>% filter(model %in% models)
# Identify best fit ------------------------------------------------------------
BestFull <- FittedFull %>%
group_by(model,cvid) %>%
filter(Dev == min(Dev, na.rm = TRUE)) %>% #finds best fit of 20 runs
arrange(cvid) %>%
distinct() %>%
group_by(cvid) %>%
mutate(delLL = LL - min(LL),
delDev = Dev - min(Dev),
delAIC = AIC - min(AIC)) %>%
group_by(model)
## globally determines order of bars in graphs
modelsinorder <- BestFull %>%
group_by(model) %>%
summarize(means = mean(AIC)) %>%
arrange(-means) %>%
.$model
BestFull$model <- factor(BestFull$model,levels = modelsinorder)
# Manuscript-style graph -------------------------------------------------------
## Fitted Full
AICmeans <- BestFull %>%
group_by(model) %>%
summarize(means = mean(AIC)) %>%
arrange(means)
#sum all aics
AICtot <- BestFull %>%
group_by(cvid,model) %>%
mutate(deltaAIC = AIC - AICmeans %>% slice(1) %>% .$means) %>%
group_by(model) %>%
summarize(mdeltaAIC = mean(deltaAIC)) %>%
mutate(mdeltaAICadj = ifelse(mdeltaAIC > 20,20,mdeltaAIC)) %>%
arrange(model)
allAIC <- ggplot(AICtot,aes(y = mdeltaAICadj,x = model,fill = mdeltaAIC)) +
geom_rect(aes(xmin=0, xmax=14.5, ymin=0, ymax=1), fill = "grey") +
geom_bar(stat = "identity") +
coord_flip(ylim = c(0,20)) +
scale_fill_gradientn(name = expression(paste(Delta," AIC")),
colors = rev(brewer.pal(n = 9, name = "YlOrRd")),
limits=c(0,20))+
scale_y_continuous(name = expression(paste(Delta," AIC")),
breaks = seq(0,20,2),
labels = c(seq(0,18,2),"...")) +
scale_x_discrete(name = "Model", labels = c(
expression(paste("VP(",kappa,")U2-")),
expression(paste("VP(",kappa,")U-")),
expression(paste("VP(",kappa,")P-")),
expression(paste("VP(",kappa,")P2-")),
expression(paste("VP(",kappa,")F3+")),
expression(paste("VP(",kappa,")F2+")),
expression(paste("VP(",kappa,")F+")),
expression(paste("VP(",kappa,")F2-")),
expression(paste("VP(",kappa,")F-")),
expression(paste("VP(",kappa,")F3-")),
expression(paste("VP(",kappa,")U2+")),
expression(paste("VP(",kappa,")P+")),
expression(paste("VP(",kappa,")U+")),
expression(paste("VP(",kappa,")P2+"))
)) +
theme(axis.text.x = element_text(size=12),
axis.text.y = element_text(size=12),
axis.title = element_text(size=12),
legend.text = element_text(size = 12),
axis.ticks.y = element_blank(),
panel.background = element_rect(fill = "white"),
legend.position = c(0.7,0.7),
panel.border = element_rect(colour = "black", fill = "transparent"),
strip.background = element_rect(fill = "white"),
strip.text.x = element_text(size=12),
strip.placement = "outside",
strip.text = element_text(size=12))
# Split by individual differences ----------------------------------------------
Refcompare <- BestFull %>%
mutate(modeltype =
case_when(model %in% c("MK_FM_RNminus", "MK_F_RNminus") ~ "F-",
model %in% c("MK_FM_RNplus","MK_F_RNplus") ~ "F+",
model %in% c("MK_FM2_RNplus") ~ "FM+",
model %in% c("MK_FM2_RNminus") ~ "FM-",
model %in% c("MK_P_RNplus","MK_P2_RNplus") ~ "P+",
model %in% c("MK_U_RNplus","MK_U2_RNplus") ~ "U+",
model %in% c("MK_P_RNminus","MK_P2_RNminus") ~ "P-",
model %in% c("MK_U_RNminus","MK_U2_RNminus") ~ "U-")) %>%
mutate(reference =
case_when(model %in% c("MK_FM_RNminus","MK_P_RNminus",
"MK_U_RNminus","MK_FM_RNplus",
"MK_P_RNplus","MK_U_RNplus") ~ "aRef",
TRUE ~ "bAlt"))
RefFp <- Refcompare %>% filter(model %in% c("MK_FM_RNplus")) %>%
mutate(modeltype = "FM+")
RefFm <- Refcompare %>% filter(model %in% c("MK_FM_RNminus")) %>%
mutate(modeltype = "FM-")
Refcompare <- bind_rows(Refcompare,RefFp,RefFm) %>%
mutate(modeltype = factor(modeltype, levels = c("F-","F+","FM-","FM+",
"P-","P+","U-","U+"),
labels = c("F2- - F-",
"F2+ - F+",
"F3- - F-",
"F3+ - F+",
"P2- - P-",
"P2+ - P+",
"U2- - U-",
"U2+ - U+")))
Allmodels <- Refcompare %>% group_by(modeltype,exp,cvid) %>%
arrange(modeltype,reference) %>% summarize(diffs = diff(AIC))
Refcomp <- ggplot(data = Allmodels,
aes(x = cvid, y = diffs, colour = exp)) +
scale_y_continuous(limits=c(-50,50),
name=expression(paste(Delta, "AIC (alternative model - reference model)"))) +
geom_point() +
facet_wrap(.~modeltype, nrow=4) +
theme(axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
axis.text.y = element_text(size=12),
axis.title.y = element_text(size=12),
axis.title.x = element_blank(),
legend.text = element_text(size = 12),
axis.ticks.y = element_blank(),
panel.background = element_rect(fill = "white"),
legend.key = element_rect(fill="transparent"),
legend.title = element_blank(),
panel.border = element_rect(colour = "black", fill = "transparent"),
strip.background = element_rect(fill = "white"),
strip.text.x = element_text(size=12),
strip.placement = "outside",
strip.text = element_text(size=12))
plot_grid(allAIC,Refcomp,nrow=1,rel_widths=c(1,2))
#ggsave("VP_modelvariants.png", units="cm", width=35, height=20, dpi=600)
| /Compare_modelvariants.R | no_license | nklange/InferenceModelComparison | R | false | false | 6,599 | r | # Compare model implementation in manuscript with alternate model implementations
library("plyr")
library("magrittr")
library("tidyr")
library("xtable")
library("ggplot2")
library("dplyr")
library("RColorBrewer")
library(cowplot)
# Models to compare ------------------------------------------------------------
models <- c("MK_F_RNminus","MK_P_RNminus","MK_U_RNminus","MK_FM_RNminus",
"MK_F_RNplus","MK_P_RNplus","MK_U_RNplus","MK_FM_RNplus",
"MK_P2_RNminus","MK_U2_RNminus","MK_FM2_RNminus",
"MK_P2_RNplus","MK_U2_RNplus","MK_FM2_RNplus")
# Load Fits --------------------------------------------------------------------
FittedFull <- readRDS("Fits/FitFull.rds") %>% filter(model %in% models)
# Identify best fit ------------------------------------------------------------
BestFull <- FittedFull %>%
group_by(model,cvid) %>%
filter(Dev == min(Dev, na.rm = TRUE)) %>% #finds best fit of 20 runs
arrange(cvid) %>%
distinct() %>%
group_by(cvid) %>%
mutate(delLL = LL - min(LL),
delDev = Dev - min(Dev),
delAIC = AIC - min(AIC)) %>%
group_by(model)
## globally determines order of bars in graphs
modelsinorder <- BestFull %>%
group_by(model) %>%
summarize(means = mean(AIC)) %>%
arrange(-means) %>%
.$model
BestFull$model <- factor(BestFull$model,levels = modelsinorder)
# Manuscript-style graph -------------------------------------------------------
## Fitted Full
AICmeans <- BestFull %>%
group_by(model) %>%
summarize(means = mean(AIC)) %>%
arrange(means)
#sum all aics
AICtot <- BestFull %>%
group_by(cvid,model) %>%
mutate(deltaAIC = AIC - AICmeans %>% slice(1) %>% .$means) %>%
group_by(model) %>%
summarize(mdeltaAIC = mean(deltaAIC)) %>%
mutate(mdeltaAICadj = ifelse(mdeltaAIC > 20,20,mdeltaAIC)) %>%
arrange(model)
allAIC <- ggplot(AICtot,aes(y = mdeltaAICadj,x = model,fill = mdeltaAIC)) +
geom_rect(aes(xmin=0, xmax=14.5, ymin=0, ymax=1), fill = "grey") +
geom_bar(stat = "identity") +
coord_flip(ylim = c(0,20)) +
scale_fill_gradientn(name = expression(paste(Delta," AIC")),
colors = rev(brewer.pal(n = 9, name = "YlOrRd")),
limits=c(0,20))+
scale_y_continuous(name = expression(paste(Delta," AIC")),
breaks = seq(0,20,2),
labels = c(seq(0,18,2),"...")) +
scale_x_discrete(name = "Model", labels = c(
expression(paste("VP(",kappa,")U2-")),
expression(paste("VP(",kappa,")U-")),
expression(paste("VP(",kappa,")P-")),
expression(paste("VP(",kappa,")P2-")),
expression(paste("VP(",kappa,")F3+")),
expression(paste("VP(",kappa,")F2+")),
expression(paste("VP(",kappa,")F+")),
expression(paste("VP(",kappa,")F2-")),
expression(paste("VP(",kappa,")F-")),
expression(paste("VP(",kappa,")F3-")),
expression(paste("VP(",kappa,")U2+")),
expression(paste("VP(",kappa,")P+")),
expression(paste("VP(",kappa,")U+")),
expression(paste("VP(",kappa,")P2+"))
)) +
theme(axis.text.x = element_text(size=12),
axis.text.y = element_text(size=12),
axis.title = element_text(size=12),
legend.text = element_text(size = 12),
axis.ticks.y = element_blank(),
panel.background = element_rect(fill = "white"),
legend.position = c(0.7,0.7),
panel.border = element_rect(colour = "black", fill = "transparent"),
strip.background = element_rect(fill = "white"),
strip.text.x = element_text(size=12),
strip.placement = "outside",
strip.text = element_text(size=12))
# Split by individual differences ----------------------------------------------
Refcompare <- BestFull %>%
mutate(modeltype =
case_when(model %in% c("MK_FM_RNminus", "MK_F_RNminus") ~ "F-",
model %in% c("MK_FM_RNplus","MK_F_RNplus") ~ "F+",
model %in% c("MK_FM2_RNplus") ~ "FM+",
model %in% c("MK_FM2_RNminus") ~ "FM-",
model %in% c("MK_P_RNplus","MK_P2_RNplus") ~ "P+",
model %in% c("MK_U_RNplus","MK_U2_RNplus") ~ "U+",
model %in% c("MK_P_RNminus","MK_P2_RNminus") ~ "P-",
model %in% c("MK_U_RNminus","MK_U2_RNminus") ~ "U-")) %>%
mutate(reference =
case_when(model %in% c("MK_FM_RNminus","MK_P_RNminus",
"MK_U_RNminus","MK_FM_RNplus",
"MK_P_RNplus","MK_U_RNplus") ~ "aRef",
TRUE ~ "bAlt"))
RefFp <- Refcompare %>% filter(model %in% c("MK_FM_RNplus")) %>%
mutate(modeltype = "FM+")
RefFm <- Refcompare %>% filter(model %in% c("MK_FM_RNminus")) %>%
mutate(modeltype = "FM-")
Refcompare <- bind_rows(Refcompare,RefFp,RefFm) %>%
mutate(modeltype = factor(modeltype, levels = c("F-","F+","FM-","FM+",
"P-","P+","U-","U+"),
labels = c("F2- - F-",
"F2+ - F+",
"F3- - F-",
"F3+ - F+",
"P2- - P-",
"P2+ - P+",
"U2- - U-",
"U2+ - U+")))
Allmodels <- Refcompare %>% group_by(modeltype,exp,cvid) %>%
arrange(modeltype,reference) %>% summarize(diffs = diff(AIC))
Refcomp <- ggplot(data = Allmodels,
aes(x = cvid, y = diffs, colour = exp)) +
scale_y_continuous(limits=c(-50,50),
name=expression(paste(Delta, "AIC (alternative model - reference model)"))) +
geom_point() +
facet_wrap(.~modeltype, nrow=4) +
theme(axis.text.x = element_blank(),
axis.ticks.x = element_blank(),
axis.text.y = element_text(size=12),
axis.title.y = element_text(size=12),
axis.title.x = element_blank(),
legend.text = element_text(size = 12),
axis.ticks.y = element_blank(),
panel.background = element_rect(fill = "white"),
legend.key = element_rect(fill="transparent"),
legend.title = element_blank(),
panel.border = element_rect(colour = "black", fill = "transparent"),
strip.background = element_rect(fill = "white"),
strip.text.x = element_text(size=12),
strip.placement = "outside",
strip.text = element_text(size=12))
plot_grid(allAIC,Refcomp,nrow=1,rel_widths=c(1,2))
#ggsave("VP_modelvariants.png", units="cm", width=35, height=20, dpi=600)
|
#' View table and attributes
#'
#' Displays the table and attributes of the object.
#'
#' @param pt A `pivot_table` object.
#'
#' @return A `pivot_table` object.
#'
#' @family pivot table definition functions
#'
#' @export
view_table_attr <- function(pt) {
UseMethod("view_table_attr")
}
#' @rdname view_table_attr
#' @export
view_table_attr.pivot_table <- function(pt) {
if (ncol(pt) > 0) {
utils::View(pt)
} else {
utils::View("")
}
if (length(attr(pt, "page")) > 0) {
df <- data.frame(page = attr(pt, "page"), n_col_labels = "", n_row_labels = "", stringsAsFactors = FALSE)
df[1, "n_col_labels"] <- attr(pt, "n_col_labels")
df[1, "n_row_labels"] <- attr(pt, "n_row_labels")
utils::View(df)
} else {
utils::View("")
}
invisible(pt)
}
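# Usage sketch (editor's note; assumes a `pivot_table` object built elsewhere
# in the package, e.g. from a data frame `df` read from a spreadsheet):
# pt <- pivot_table(df)
# pt <- view_table_attr(pt) # opens two View() tabs and returns pt invisibly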
| /R/pivot_table_view_table_attr.R | no_license | cran/flattabler | R | false | false | 827 | r | #' View table and attributes
#'
#' Displays the table and attributes of the object.
#'
#' @param pt A `pivot_table` object.
#'
#' @return A `pivot_table` object.
#'
#' @family pivot table definition functions
#'
#' @export
view_table_attr <- function(pt) {
UseMethod("view_table_attr")
}
#' @rdname view_table_attr
#' @export
view_table_attr.pivot_table <- function(pt) {
if (ncol(pt) > 0) {
utils::View(pt)
} else {
utils::View("")
}
if (length(attr(pt, "page")) > 0) {
df <- data.frame(page = attr(pt, "page"), n_col_labels = "", n_row_labels = "", stringsAsFactors = FALSE)
df[1, "n_col_labels"] <- attr(pt, "n_col_labels")
df[1, "n_row_labels"] <- attr(pt, "n_row_labels")
utils::View(df)
} else {
utils::View("")
}
invisible(pt)
}
|
vertex.centrality.hits <- function(id='(all networks)',direction='out',arc_weight='(empty)',alpha='0',maximum_iterations='100',tolerance='0.001',allow_loops='false',filter='(empty)',cluster='(empty)',arc_set='(empty)',save_as='hits',save_to='None',debug='false'){
l <- as.list(match.call())
l2 <- list()
for (i in names(l[-1])) {
l2 <- c(l2, eval(dplyr::sym(i)))
}
names(l2) <- names(l[-1])
l3 <- c(l[1], l2)
FNA::exec_command(FNA::check(l3))
} | /R/vertex.centrality.hits.R | no_license | lubospernis/FNA_package | R | false | false | 470 | r | vertex.centrality.hits <- function(id='(all networks)',direction='out',arc_weight='(empty)',alpha='0',maximum_iterations='100',tolerance='0.001',allow_loops='false',filter='(empty)',cluster='(empty)',arc_set='(empty)',save_as='hits',save_to='None',debug='false'){
l <- as.list(match.call())
l2 <- list()
for (i in names(l[-1])) {
l2 <- c(l2, eval(dplyr::sym(i)))
}
names(l2) <- names(l[-1])
l3 <- c(l[1], l2)
FNA::exec_command(FNA::check(l3))
} |
integrated_vars <- c(
server_1 = Sys.getenv("TEST_SERVER_1"),
key_1 = Sys.getenv("TEST_KEY_1"),
server_2 = Sys.getenv("TEST_SERVER_2"),
key_2 = Sys.getenv("TEST_KEY_2")
)
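# These environment variables are expected to be set before running, e.g. in
# .Renviron (the values below are placeholders, not real endpoints or keys):
# TEST_SERVER_1=http://localhost:3939
# TEST_KEY_1=<api key for server 1>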
health_checks <- list(
server_1 = tryCatch(httr::content(httr::GET(paste0(integrated_vars[["server_1"]], "/__ping__"))), error = print),
server_2 = tryCatch(httr::content(httr::GET(paste0(integrated_vars[["server_2"]], "/__ping__"))), error = print)
)
# decide if integrated tests can run
if (
all(nchar(integrated_vars) > 0) &&
all(as.logical(lapply(health_checks, function(x){length(x) == 0})))
) {
test_dir("integrated-tests")
} else {
context("integrated tests")
test_that("all", {
skip("test environment not set up properly")
})
}
| /tests/testthat/test-integrated.R | no_license | tbradley1013/connectapi | R | false | false | 740 | r | integrated_vars <- c(
server_1 = Sys.getenv("TEST_SERVER_1"),
key_1 = Sys.getenv("TEST_KEY_1"),
server_2 = Sys.getenv("TEST_SERVER_2"),
key_2 = Sys.getenv("TEST_KEY_2")
)
health_checks <- list(
server_1 = tryCatch(httr::content(httr::GET(paste0(integrated_vars[["server_1"]], "/__ping__"))), error = print),
server_2 = tryCatch(httr::content(httr::GET(paste0(integrated_vars[["server_2"]], "/__ping__"))), error = print)
)
# decide if integrated tests can run
if (
all(nchar(integrated_vars) > 0) &&
all(as.logical(lapply(health_checks, function(x){length(x) == 0})))
) {
test_dir("integrated-tests")
} else {
context("integrated tests")
test_that("all", {
skip("test environment not set up properly")
})
}
|
mtcars
write.csv(mtcars, "mtcars.csv")
library(margins)
library(lme4)
library(tidyverse)
# 1: SAME
x <- lm(mpg ~ cyl * hp + wt, data = mtcars)
(m <- margins(x))
summary(m)
plot(m)
# 2: SAME
x <- lm(mpg ~ cyl + hp * wt, data = mtcars)
margins(x, at = list(cyl = mean(mtcars$cyl, na.rm = T)))
# 3: SAME
# stata: reg mpg cyl hp wt
# margins, dydx(*)
x <- lm(mpg ~ cyl + hp + carb, data = mtcars)
margins(x)
# 4:
# stata: reg mpg cyl hp i.carb
# margins, dydx(*)
x <- lm(mpg ~ cyl + hp + as.factor(carb), data = mtcars)
margins(x)
# 5:
# Stata: xtmixed mpg hp i.carb || cyl:
# margins, dydx(*)
mtcars$carb_factor <- as.factor(mtcars$carb)
mtcars
x <- lmer(mpg ~ hp + carb_factor + (1 | cyl), data = mtcars, REML = FALSE)
mtcars_sub <- head(mtcars, 4)
mtcars_sub <- mtcars_sub %>%
select(mpg, hp, carb)
x1 <- lm(mpg ~ hp + carb, data = mtcars_sub)
summary(x1)
m1 <- margins(x, type = "link")
summary(m1)
# 6:
# Stata: margins, at(hp=(66 243.5))
m <- margins(x1, at = list(hp = c(66, 243.5)))
print(m, digits = 10)
coef(m)[1]
summary(m, digits = 6)
plot(m)
plot(m1)
(66 - mean(mtcars_sub$hp)) * (-.0824)
margins_hp = -0.08235294118
low = 66
high = 243.5
mtcars_sub$predict_low = mtcars_sub$mpg + (low - mtcars_sub$hp) * (margins_hp)
mtcars_sub$predict_high = mtcars_sub$mpg + (high - mtcars_sub$hp) * (margins_hp)
mean(mtcars_sub$predict_low)
mean(mtcars_sub$predict_high)
# the Stata command gives a point prediction of the dependent variable whereas
# the R command gives the average marginal effect, so to get the point
# prediction you have to multiply it by the change in the variable of interest
# and add that to the original point prediction!!!!!!!
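# In other words (using the names defined above): Stata's `margins, at(hp=66)`
# reports mean(mtcars_sub$mpg + (66 - mtcars_sub$hp) * margins_hp), which is
# exactly what `predict_low` computes below.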
# FINAL SUB MODEL
# Stata: xtmixed mpg wt i.carb || cyl:
# R: t4 <- lmer(mpg ~ wt + as.factor(carb) + (1 | cyl), data = mtcars, REML = FALSE)
# FINAL MODEL:
# STATA: margins, at((mean) _all H_citytract_multi_i=(.23 .54))
################################## regressions in R v. stata ###############################
## model 1: exactly the same
# stata : reg mpg cyl hp wt
t1 <- lm(mpg ~ cyl + hp + wt, data = mtcars)
summary(t1)
## model 2: as.factor is the same thing as doing i. in stata!!!!
# stata: reg mpg i.cyl hp wt
t2 <- lm(mpg ~ as.factor(cyl) + hp + wt, data = mtcars)
summary(t2)
## model 3: THIS IS THE SAME
# stata: xtmixed mpg wt || geo_id2:
t3 <- lmer(mpg ~ wt + (1 | cyl), data = mtcars, REML = FALSE)
summary(t3)
## model 4: THIS IS THE SAME
# stata: xtmixed mpg wt || cyl:
t3 <- lmer(mpg ~ wt + (1 | cyl), data = mtcars, REML = FALSE)
summary(t3)
## model 4: THIS IS THE SAME
# stata xtmixed mpg wt i.carb || cyl:
t4 <- lmer(mpg ~ wt + as.factor(carb) + (1 | cyl), data = mtcars, REML = FALSE)
summary(t4)
# FINAL MODEL:
# xtmixed biggestsplit H_citytract_multi_i diversityinterp pctasianpopinterp
# pctblkpopinterp pctlatinopopinterp medincinterp pctrentersinterp
# pctcollegegradinterp biracial nonpartisan primary logpop i. year south midwest
# west if winner==1||geo_id2:
m1 <- lmer(biggestsplit ~ H_citytract_multi_i + diversityinterp +
pctasianpopinterp + pctblkpopinterp + pctlatinopopinterp +
medincinterp + pctrentersinterp + pctcollegegradinterp +
biracial + nonpartisan + primary + logpop + year.f +
south + midwest + west + (1 | geo_id2),
data = rp_1, REML=FALSE)
| /margins practice.R | no_license | mburzillo/old_version_final_project_2 | R | false | false | 3,408 | r | mtcars
write.csv(mtcars, "mtcars.csv")
library(margins)
library(lme4)
library(tidyverse)
# 1: SAME
x <- lm(mpg ~ cyl * hp + wt, data = mtcars)
(m <- margins(x))
summary(m)
plot(m)
# 2: SAME
x <- lm(mpg ~ cyl + hp * wt, data = mtcars)
margins(x, at = list(cyl = mean(mtcars$cyl, na.rm = T)))
# 3: SAME
# stata: reg mpg cyl hp wt
# margins, dydx(*)
x <- lm(mpg ~ cyl + hp + carb, data = mtcars)
margins(x)
# 4:
# stata: reg mpg cyl hp i.carb
# margins, dydx(*)
x <- lm(mpg ~ cyl + hp + as.factor(carb), data = mtcars)
margins(x)
# 5:
# Stata: xtmixed mpg hp i.carb || cyl:
# margins, dydx(*)
mtcars$carb_factor <- as.factor(mtcars$carb)
mtcars
x <- lmer(mpg ~ hp + carb_factor + (1 | cyl), data = mtcars, REML = FALSE)
mtcars_sub <- head(mtcars, 4)
mtcars_sub <- mtcars_sub %>%
select(mpg, hp, carb)
x1 <- lm(mpg ~ hp + carb, data = mtcars_sub)
summary(x1)
m1 <- margins(x, type = "link")
summary(m1)
# 6:
# Stata: margins, at(hp=(66 243.5))
m <- margins(x1, at = list(hp = c(66, 243.5)))
print(m, digits = 10)
coef(m)[1]
summary(m, digits = 6)
plot(m)
plot(m1)
(66 - mean(mtcars_sub$hp)) * (-.0824)
margins_hp = -0.08235294118
low = 66
high = 243.5
mtcars_sub$predict_low = mtcars_sub$mpg + (low - mtcars_sub$hp) * (margins_hp)
mtcars_sub$predict_high = mtcars_sub$mpg + (high - mtcars_sub$hp) * (margins_hp)
mean(mtcars_sub$predict_low)
mean(mtcars_sub$predict_high)
# the Stata command gives a point prediction of the dependent variable whereas
# the R command gives the average marginal effect, so to get the point
# prediction you have to multiply it by the change in the variable of interest
# and add that to the original point prediction!!!!!!!
# FINAL SUB MODEL
# Stata: xtmixed mpg wt i.carb || cyl:
# R: t4 <- lmer(mpg ~ wt + as.factor(carb) + (1 | cyl), data = mtcars, REML = FALSE)
# FINAL MODEL:
# STATA: margins, at((mean) _all H_citytract_multi_i=(.23 .54))
################################## regressions in R v. stata ###############################
## model 1: exactly the same
# stata : reg mpg cyl hp wt
t1 <- lm(mpg ~ cyl + hp + wt, data = mtcars)
summary(t1)
## model 2: as.factor is the same thing as doing i. in stata!!!!
# stata: reg mpg i.cyl hp wt
t2 <- lm(mpg ~ as.factor(cyl) + hp + wt, data = mtcars)
summary(t2)
## model 3: THIS IS THE SAME
# stata: xtmixed mpg wt || geo_id2:
t3 <- lmer(mpg ~ wt + (1 | cyl), data = mtcars, REML = FALSE)
summary(t3)
## model 4: THIS IS THE SAME
# stata: xtmixed mpg wt || cyl:
t3 <- lmer(mpg ~ wt + (1 | cyl), data = mtcars, REML = FALSE)
summary(t3)
## model 4: THIS IS THE SAME
# stata xtmixed mpg wt i.carb || cyl:
t4 <- lmer(mpg ~ wt + as.factor(carb) + (1 | cyl), data = mtcars, REML = FALSE)
summary(t4)
# FINAL MODEL:
# xtmixed biggestsplit H_citytract_multi_i diversityinterp pctasianpopinterp
# pctblkpopinterp pctlatinopopinterp medincinterp pctrentersinterp
# pctcollegegradinterp biracial nonpartisan primary logpop i. year south midwest
# west if winner==1||geo_id2:
m1 <- lmer(biggestsplit ~ H_citytract_multi_i + diversityinterp +
pctasianpopinterp + pctblkpopinterp + pctlatinopopinterp +
medincinterp + pctrentersinterp + pctcollegegradinterp +
biracial + nonpartisan + primary + logpop + year.f +
south + midwest + west + (1 | geo_id2),
data = rp_1, REML=FALSE)
|
LoadSellerList <- function(venture){
require(XLConnect)
sellerWB <- loadWorkbook(paste0("../../1_Input/SellerList/",venture,".xlsx"))
sellerList <- readWorksheet(sellerWB, sheet = 1)
sellerList
} | /3_Script/1_Code/fn_LoadSellerList.R | no_license | datvuong/Crossborder_SellerCharges | R | false | false | 209 | r |
# Fit a neutral (REV) substitution model to fourfold-degenerate sites pooled
# across chromosomes 1-10, writing the fitted tree to tree_output.
library("rphast")
args <- commandArgs(trailingOnly = TRUE)
ref_root <- args[1]
msa_root <- args[2]
feat_file <- args[3]
tree <- args[4]
tree_output <- args[5]
chr_vector <- paste("chr", c(1:10), sep = "")
chr <- chr_vector[1]
ref_file <- paste(ref_root, chr, ".fa", sep = "")
msa <- paste(msa_root, chr, ".fa", sep="")
align4d <- read.msa(msa, refseq = ref_file, format = "FASTA", do.4d = TRUE, features= read.feat(feat_file))
for (chr in chr_vector[2:length(chr_vector)]){
ref_file <- paste(ref_root, chr, ".fa", sep = "")
msa <- paste(msa_root, chr, ".fa", sep="")
align4d2 <- read.msa(msa, refseq = ref_file, format = "FASTA", do.4d = TRUE, features= read.feat(feat_file))
align4d <- concat.msa(list(align4d, align4d2))
}
neutralMod <- phyloFit(align4d, tree=tree, subst.mod="REV")
sink(tree_output)
neutralMod$tree
sink()
 | /gerp/scripts-gerp/estimate_neutral_tree.R | permissive | HuffordLab/NAM-genomes | R | false | false | 841 | r |
### Stock market prediction using Numerical and Text analysis ###
#installing package quantmod
install.packages("quantmod")
library(quantmod)
# stock price analysis of SENS
getSymbols("SENS", src="yahoo")
chartSeries(SENS)
addMACD() # adds moving average convergence divergence signals
addBBands() # Adds Bollinger bands to the stock
addCCI() # Add Commodity channel index.
addADX() #Add Directional Movement Indicator
addCMF() #Add Money Flow chart
Returns_by_year_SENS<-yearlyReturn(SENS)
head(Returns_by_year_SENS)
Returns_by_month_SENS=monthlyReturn(SENS)
head(Returns_by_month_SENS)
getSymbols("^BSESN", src="yahoo")
chartSeries(BSESN)
addMACD() # adds moving average convergence divergence signals
addBBands() # Adds Bollinger bands to the stock price.
addCCI() # Add Commodity channel index.
addADX() #Add Directional Movement Indicator
addCMF() #Add Money Flow chart
### Text analysis ###
df=read.csv(file.choose())
head(df)
str(df)
View(df)
#build corpus
library(tm)
Corpus=iconv(df$headline_text)
Corpus=Corpus(VectorSource(Corpus))
inspect(Corpus[1:5])
#cleaning the text
Corpus=tm_map(Corpus,tolower)
inspect(Corpus[1:5])
Corpus=tm_map(Corpus,removeNumbers)
Corpus=tm_map(Corpus,removePunctuation)
inspect(Corpus[1:15])
stopwords('english')
clean.set=tm_map(Corpus,removeWords,stopwords("english"))
inspect(clean.set[1:15])
# strip whitespace from the stopword-cleaned corpus, not the original Corpus,
# otherwise the removeWords step is silently discarded
clean.set=tm_map(clean.set,stripWhitespace)
inspect(clean.set[1:15])
#term doc matrix
tdm=TermDocumentMatrix(clean.set)
tdm
tdm=as.matrix(tdm)
tdm[1:10,1:20]
w=rowSums(tdm)
w
w=subset(w,w>10)
barplot(w,las=2,col = rainbow(20))
#wordcloud
library(wordcloud)
w=sort(rowSums(tdm),decreasing = T)
set.seed(222)
wordcloud(words = names(w),
freq = w,
max.words = 300,
random.order = F,
min.freq = 5)
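The numeric and text pieces above are computed but never joined. One hypothetical bridge (it assumes df carries a publish_date column, which may not exist in the actual headline file) aligns monthly headline volume with the monthly SENSEX returns computed earlier:

```r
# Hypothetical join of the two analyses above; df$publish_date is an
# assumed column name, adjust to the actual headline dataset.
returns_m <- monthlyReturn(BSESN)
month_key <- format(index(returns_m), "%Y-%m")
headline_counts <- table(format(as.Date(df$publish_date), "%Y-%m"))
merged <- data.frame(ret = as.numeric(returns_m),
                     n_headlines = as.numeric(headline_counts[month_key]))
# months without headlines come through as NA, so drop them pairwise
cor(merged$ret, merged$n_headlines, use = "complete.obs")
```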
 | /stock market analysis.R | no_license | ZishanSayyed/Stock-analysis-and-Text-analysis | R | false | false | 1,886 | r |
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/mean_survival.R
\name{survmean}
\alias{survmean}
\title{Compute Mean Survival Times Using Extrapolation}
\usage{
survmean(formula, data, adjust = NULL, weights = NULL, breaks = NULL,
pophaz = NULL, e1.breaks = NULL, e1.pophaz = pophaz, r = "auto",
surv.method = "hazard", subset = NULL, verbose = FALSE)
}
\arguments{
\item{formula}{a \code{formula}, e.g. \code{FUT ~ V1} or
\code{Surv(FUT, lex.Xst) ~ V1}.
Supplied in the same way as to \code{\link{survtab}}, see that help
for more info.}
\item{data}{a \code{Lexis} data set; see \code{\link[Epi]{Lexis}}.}
\item{adjust}{variables to adjust estimates by, e.g. \code{adjust = "agegr"}.
\link[=flexible_argument]{Flexible input}.}
\item{weights}{weights to use to adjust mean survival times. See the
\link[=direct_standardization]{dedicated help page} for more details on
weighting. \code{survmean}
computes curves separately by all variables to adjust by, computes mean
survival times, and computes weighted means of the mean survival times.
See Examples.}
\item{breaks}{a list of breaks defining the time window to compute
observed survival in, and the intervals used in estimation. E.g.
\code{list(FUT = 0:10)} when \code{FUT} is the follow-up time scale in your
data.}
\item{pophaz}{a data set of population hazards passed to
\code{\link{survtab}} (see the
\link[=pophaz]{dedicated help page} and the help page of
\code{survtab} for more information). Defines the
population hazard in the time window where observed survival is estimated.}
\item{e1.breaks}{\code{NULL} or a list of breaks defining the time
window to compute
\strong{expected} survival in, and the intervals used in estimation. E.g.
\code{list(FUT = 0:100)} when \code{FUT} is the follow-up time scale in your
data to extrapolate up to 100 years from where the observed survival
curve ends. \strong{NOTE:} the breaks on the survival time scale
MUST include the breaks supplied to argument \code{breaks}; see Examples.
If \code{NULL}, uses decent defaults (maximum follow-up time of 50 years).}
\item{e1.pophaz}{Same as \code{pophaz}, except this defines the
population hazard in the time window where \strong{expected}
survival is estimated. By default uses the same data as
argument \code{pophaz}.}
\item{r}{either a numeric multiplier such as \code{0.995}, \code{"auto"}, or
\code{"autoX"} where \code{X} is an integer;
used to determine the relative survival ratio (RSR) persisting after where
the estimated observed survival curve ends. See Details.}
\item{surv.method}{passed to \code{survtab}; see that help for more info.}
\item{subset}{a logical condition; e.g. \code{subset = sex == 1};
subsets the data before computations}
\item{verbose}{\code{logical}; if \code{TRUE}, the function returns
some messages and results during the run, which may be useful in debugging}
}
\value{
Returns a \code{data.frame} or \code{data.table} (depending on
\code{getOptions("popEpi.datatable")}; see \code{?popEpi}) containing the
following columns:
\itemize{
\item{est}{: The estimated mean survival time}
\item{exp}{: The computed expected survival time}
\item{obs}{: Counts of subjects in data}
\item{YPLL}{: Years of Potential Life Lost, computed as
(\code{(exp-est)*obs}) - though your time data may be in e.g. days,
this column will have the same name regardless.}
}
The returned data also has columns named according to the variables
supplied to the right-hand-side of the formula.
}
\description{
Computes mean survival times based on survival estimation up to
a point in follow-up time (e.g. 10 years),
after which survival is extrapolated
using an appropriate hazard data file (\code{pophaz}) to yield the "full"
survival curve. The area under the full survival curve is the mean survival.
}
\details{
\strong{Basics}
\code{survmean} computes mean survival times. For median survival times
(i.e. where 50 % of subjects have died or met some other event)
use \code{\link{survtab}}.
The mean survival time is simply the area under the survival curve.
However, since full follow-up rarely happens, the observed survival curves
are extrapolated using expected survival: E.g. one might compute observed
survival till up to 10 years and extrapolate beyond that
(till e.g. 50 years) to yield an educated guess on the full observed survival
curve.
The area is computed by trapezoidal integration of the area under the curve.
This function also computes the "full" expected survival curve from
T = 0 till e.g. T = 50 depending on supplied arguments. The
expected mean survival time is the area under the
mean expected survival curve.
This function returns the mean expected survival time to be compared with
the mean survival time and for computing years of potential life lost (YPLL).
Results can be formed by strata and adjusted for e.g. age by using
the \code{formula} argument as in \code{survtab}. See also Examples.
\strong{Extrapolation tweaks}
Argument \code{r} controls the relative survival ratio (RSR) assumed to
persist beyond the time window where observed survival is computed
(defined by argument \code{breaks}; e.g. up to \code{FUT = 10}).
The RSR is simply \code{RSR_i = p_oi / p_ei} for a time interval \code{i},
i.e. the observed divided by the expected
(conditional, not cumulative) probability of surviving from the beginning of
a time interval till its end. The cumulative product of \code{RSR_i}
over time is the (cumulative) relative survival curve.
If \code{r} is numeric, e.g. \code{r = 0.995}, that RSR level is assumed
to persist beyond the observed survival curve.
Numeric \code{r} should be \code{> 0} and expressed at the annual level
when using fractional years as the scale of the time variables.
E.g. if RSR is known to be \code{0.95} at the month level, then the
annualized RSR is \code{0.95^12}. This enables correct usage of the RSR
with survival intervals of varying lengths. When using day-level time
variables (such as \code{Dates}; see \code{as.Date}), numeric \code{r}
should be expressed at the day level, etc.
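A quick numeric sketch of the unit conversions described above (illustration only, not part of the package API):

```r
# annualize a month-level conditional RSR of 0.95
r_month <- 0.95
r_annual <- r_month^12        # about 0.54 at the annual level
# day-level equivalent of an annual RSR of 0.995, for Date-based time scales
r_day <- 0.995^(1 / 365.25)
```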
If \code{r = "auto"} or \code{r = "auto1"}, this function computes
RSR estimates internally and automatically uses the \code{RSR_i}
in the last survival interval in each stratum (and adjusting group)
and assumes that to persist beyond the observed survival curve.
Automatic determination of \code{r} is a good starting point,
but in situations where the RSR estimate is uncertain it may produce poor
results. Using \code{"autoX"} such as \code{"auto6"} causes \code{survmean}
to use the mean of the estimated RSRs in the last X survival intervals,
which may be more stable.
Automatic determination will not use values \code{>1} but set them to 1.
Visual inspection of the produced curves is always recommended: see
Examples.
One may also tweak the accuracy and length of extrapolation and
expected survival curve computation by using
\code{e1.breaks}. By default this is whatever was supplied to \code{breaks}
for the survival time scale, to which
\code{c(seq(1/12, 1, 1/12), seq(1.2, 1.8, 0.2), 2:19, seq(20, 50, 5))}
is added after the maximum value, e.g. with \code{breaks = list(FUT = 0:10)}
we have
\code{..., 10+1/12, ..., 11, 11.2, ..., 2, 3, ..., 19, 20, 25, ... 50}
as the \code{e1.breaks}. Supplying \code{e1.breaks} manually requires
the breaks over the survival time scale supplied to argument \code{breaks}
to be reiterated in \code{e1.breaks}; see Examples. \strong{NOTE}: the
default extrapolation breaks assume the time scales in the data to be
expressed as fractional years, meaning this will work extremely poorly
when using e.g. day-level time scales (such as \code{Date} variables).
Set the extrapolation breaks manually in such cases.
}
\examples{
library(survival)
library(Epi)
## take 500 subjects randomly for demonstration
data(sire)
sire <- sire[sire$dg_date < sire$ex_date, ]
set.seed(1L)
sire <- sire[sample(x = nrow(sire), size = 500),]
## NOTE: recommended to use factor status variable
x <- Lexis(entry = list(FUT = 0, AGE = dg_age, CAL = get.yrs(dg_date)),
exit = list(CAL = get.yrs(ex_date)),
data = sire,
exit.status = factor(status, levels = 0:2,
labels = c("alive", "canD", "othD")),
merge = TRUE)
## phony variable
set.seed(1L)
x$group <- rbinom(nrow(x), 1, 0.5)
## age group
x$agegr <- cut(x$dg_age, c(0,45,60,Inf), right=FALSE)
## population hazards data set
pm <- data.frame(popEpi::popmort)
names(pm) <- c("sex", "CAL", "AGE", "haz")
## breaks to define observed survival estimation
BL <- list(FUT = seq(0, 10, 1/12))
## crude mean survival
sm1 <- survmean(Surv(FUT, lex.Xst != "alive") ~ 1,
pophaz = pm, data = x, weights = NULL,
breaks = BL)
sm1 <- survmean(FUT ~ 1,
pophaz = pm, data = x, weights = NULL,
breaks = BL)
\dontrun{
## mean survival by group
sm2 <- survmean(FUT ~ group,
pophaz = pm, data = x, weights = NULL,
breaks = BL)
## ... and adjusted for age using internal weights (counts of subjects)
## note: need also longer extrapolation here so that all curves
## converge to zero in the end.
eBL <- list(FUT = c(BL$FUT, 11:75))
sm3 <- survmean(FUT ~ group + adjust(agegr),
pophaz = pm, data = x, weights = "internal",
breaks = BL, e1.breaks = eBL)
}
## visual inspection of how realistic extrapolation is for each stratum;
## solid lines are observed + extrapolated survivals;
## dashed lines are expected survivals
plot(sm1)
\dontrun{
## plotting object with both stratification and standardization
## plots curves for each strata-std.group combination
plot(sm3)
## for finer control of plotting these curves, you may extract
## from the survmean object using e.g.
attributes(sm3)$survmean.meta$curves
#### using Dates
x <- Lexis(entry = list(FUT = 0L, AGE = dg_date-bi_date, CAL = dg_date),
exit = list(CAL = ex_date),
data = sire[sire$dg_date < sire$ex_date, ],
exit.status = factor(status, levels = 0:2,
labels = c("alive", "canD", "othD")),
merge = TRUE)
## phony group variable
set.seed(1L)
x$group <- rbinom(nrow(x), 1, 0.5)
## NOTE: population hazard should be reported at the same scale
## as time variables in your Lexis data.
data(popmort, package = "popEpi")
pm <- data.frame(popmort)
names(pm) <- c("sex", "CAL", "AGE", "haz")
## from year to day level
pm$haz <- pm$haz/365.25
pm$CAL <- as.Date(paste0(pm$CAL, "-01-01"))
pm$AGE <- pm$AGE*365.25
BL <- list(FUT = seq(0, 8, 1/12)*365.25)
eBL <- list(FUT = c(BL$FUT, c(8.25,8.5,9:60)*365.25))
smd <- survmean(FUT ~ group, data = x,
pophaz = pm, verbose = TRUE, r = "auto5",
breaks = BL, e1.breaks = eBL)
plot(smd)
}
}
\seealso{
Other survmean functions: \code{\link{lines.survmean}},
\code{\link{plot.survmean}}
Other main functions: \code{\link{rate}},
\code{\link{relpois_ag}}, \code{\link{relpois}},
\code{\link{sirspline}}, \code{\link{sir}},
\code{\link{survtab_ag}}, \code{\link{survtab}}
}
\author{
Joonas Miettinen
}
\concept{main functions}
\concept{survmean functions}
 | /man/survmean.Rd | no_license | m-allik/popEpi | R | false | true | 11,353 | rd |
library(xlsx)
SofiaLaugh=data.frame()
ACA2.11=read.xlsx("Angels Care Sofia A 2-11-14.xlsx",1)
ACA2.11t=t(ACA2.11)
LaughACA2.11=data.frame(ACA2.11t[c(9,12,15,18,21,24),])
LaughACA2.11=data.matrix(LaughACA2.11)
LaughACA2.11=LaughACA2.11-1
LaughACA2.11=colSums(LaughACA2.11,na.rm=T)
LaughACA2.11
ACB2.11=read.xlsx("Angels Care Sofia B 2-11-14.xlsx",1)
ACB2.11t=t(ACB2.11)
LaughACB2.11=data.frame(ACB2.11t[c(9,12,15,18,21,24),])
LaughACB2.11=data.matrix(LaughACB2.11)
LaughACB2.11=LaughACB2.11-1
LaughACB2.11=colSums(LaughACB2.11,na.rm=T)
LaughACB2.11
ACA2.17=read.xlsx("AngelsCare_Sofia1.2.17.14_BL.xlsx",1)
ACA2.17t=t(ACA2.17)
LaughACA2.17=data.frame(ACA2.17t[c(9,12,15,18,21,24),])
LaughACA2.17=data.matrix(LaughACA2.17)
LaughACA2.17=LaughACA2.17-1
LaughACA2.17=colSums(LaughACA2.17,na.rm=T)
LaughACA2.17
ACB2.17=read.xlsx("AngelsCare_Sofia2.2.17.14_BL (Video Label is Sesame).xlsx",1)
ACB2.17t=t(ACB2.17)
LaughACB2.17=data.frame(ACB2.17t[c(9,12,15,18,21,24),])
LaughACB2.17=data.matrix(LaughACB2.17)
LaughACB2.17=LaughACB2.17-1
LaughACB2.17=colSums(LaughACB2.17,na.rm=T)
LaughACB2.17
ASCA2.13=read.xlsx("Austin S. Christian Academy Sofia A 2-13-14.xlsx",1)
ASCA2.13t=t(ASCA2.13)
LaughASCA2.13=data.frame(ASCA2.13t[c(9,12,15,18,21,24),])
LaughASCA2.13=data.matrix(LaughASCA2.13)
LaughASCA2.13=LaughASCA2.13-1
LaughASCA2.13=colSums(LaughASCA2.13,na.rm=T)
LaughASCA2.13
CWA2.3=read.xlsx("CreativeWorld_Sofia1.2.3.14_BL.xlsx",1)
CWA2.3t=t(CWA2.3)
LaughCWA2.3=data.frame(CWA2.3t[c(9,12,15,18,21,24),])
LaughCWA2.3=data.matrix(LaughCWA2.3)
LaughCWA2.3=LaughCWA2.3-1
LaughCWA2.3=colSums(LaughCWA2.3,na.rm=T)
LaughCWA2.3
CWB2.3=read.xlsx("CreativeWorld_Sofia2.2.3.14_BL (Video Label is SST).xlsx",1)
CWB2.3t=t(CWB2.3)
LaughCWB2.3=data.frame(CWB2.3t[c(9,12,15,18,21,24),])
LaughCWB2.3=data.matrix(LaughCWB2.3)
LaughCWB2.3=LaughCWB2.3-1
LaughCWB2.3=colSums(LaughCWB2.3,na.rm=T)
LaughCWB2.3
PNA2.10=read.xlsx("Papa and Nanas Sofia A 2-10-14.xlsx",1)
PNA2.10t=t(PNA2.10)
LaughPNA2.10=data.frame(PNA2.10t[c(9,12,15,18,21,24),])
LaughPNA2.10=data.matrix(LaughPNA2.10)
LaughPNA2.10=LaughPNA2.10-1
LaughPNA2.10=colSums(LaughPNA2.10,na.rm=T)
LaughPNA2.10
PNB2.10=read.xlsx("Papa and Nanas Sofia B 2-10-14.xlsx",1)
PNB2.10t=t(PNB2.10)
LaughPNB2.10=data.frame(PNB2.10t[c(9,12,15,18,21,24),])
LaughPNB2.10=data.matrix(LaughPNB2.10)
LaughPNB2.10=LaughPNB2.10-1
LaughPNB2.10=colSums(LaughPNB2.10,na.rm=T)
LaughPNB2.10
RDA2.12=read.xlsx("QRosiesDaycare_Sofia1.2.12.14_BL.xlsx",1)
RDA2.12t=t(RDA2.12)
LaughRDA2.12=data.frame(RDA2.12t[c(9,12,15,18,21,24),])
LaughRDA2.12=data.matrix(LaughRDA2.12)
LaughRDA2.12=LaughRDA2.12-1
LaughRDA2.12=colSums(LaughRDA2.12,na.rm=T)
RDB2.12=read.xlsx("QRosiesDaycare_Sofia1.2.12.14_BL.xlsx",1)
RDB2.12t=t(RDB2.12)
LaughRDB2.12=data.frame(RDB2.12t[c(9,12,15,18,21,24),])
LaughRDB2.12=data.matrix(LaughRDB2.12)
LaughRDB2.12=LaughRDB2.12-1
LaughRDB2.12=colSums(LaughRDB2.12,na.rm=T)
SofiaLaugh=rbind(LaughACA2.11,LaughACB2.11,LaughACA2.17,LaughACB2.17,LaughASCA2.13,LaughCWA2.3,LaughCWB2.3,LaughPNA2.10,LaughPNB2.10,LaughRDA2.12,LaughRDB2.12)
SofiaLaugh=t(SofiaLaugh)
edit(SofiaLaugh)
write.xlsx(SofiaLaugh,"/Users/bradlide/Desktop/Data/2Sofia/SofiaCompiledLaugh.xlsx",sheetName="Laugh")
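The eleven near-identical blocks above differ only in the workbook file name. A sketch of a helper that factors out the repetition (it assumes every workbook shares the layout used above, with laugh rows at 9, 12, 15, 18, 21, 24):

```r
# Sketch of a helper collapsing the repeated per-workbook blocks above.
CountLaughs <- function(file) {
  m <- t(read.xlsx(file, 1))
  laugh <- data.matrix(data.frame(m[c(9, 12, 15, 18, 21, 24), ]))
  colSums(laugh - 1, na.rm = TRUE)
}
# e.g. the first block above becomes:
LaughACA2.11 <- CountLaughs("Angels Care Sofia A 2-11-14.xlsx")
```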
 | /Unsaved R Document 5.R | no_license | bradlide/SesameFinal | R | false | false | 3,217 | r |
SofiaLaugh=data.frame()
ACA2.11=read.xlsx("Angels Care Sofia A 2-11-14.xlsx",1)
ACA2.11t=t(ACA2.11)
LaughACA2.11=data.frame(ACA2.11t[c(9,12,15,18,21,24),])
LaughACA2.11=data.matrix(LaughACA2.11)
LaughACA2.11=LaughACA2.11-1
LaughACA2.11=colSums(LaughACA2.11,na.rm=T)
LaughACA2.11
ACB2.11=read.xlsx("Angels Care Sofia B 2-11-14.xlsx",1)
ACB2.11t=t(ACB2.11)
LaughACB2.11=data.frame(ACB2.11t[c(9,12,15,18,21,24),])
LaughACB2.11=data.matrix(LaughACB2.11)
LaughACB2.11=LaughACB2.11-1
LaughACB2.11=colSums(LaughACB2.11,na.rm=T)
LaughACB2.11
ACA2.17=read.xlsx("AngelsCare_Sofia1.2.17.14_BL.xlsx",1)
ACA2.17t=t(ACA2.17)
LaughACA2.17=data.frame(ACA2.17t[c(9,12,15,18,21,24),])
LaughACA2.17=data.matrix(LaughACA2.17)
LaughACA2.17=LaughACA2.17-1
LaughACA2.17=colSums(LaughACA2.17,na.rm=T)
LaughACA2.17
ACB2.17=read.xlsx("AngelsCare_Sofia2.2.17.14_BL (Video Label is Sesame).xlsx",1)
ACB2.17t=t(ACB2.17)
LaughACB2.17=data.frame(ACB2.17t[c(9,12,15,18,21,24),])
LaughACB2.17=data.matrix(LaughACB2.17)
LaughACB2.17=LaughACB2.17-1
LaughACB2.17=colSums(LaughACB2.17,na.rm=T)
LaughACB2.17
ASCA2.13=read.xlsx("Austin S. Christian Academy Sofia A 2-13-14.xlsx",1)
ASCA2.13t=t(ASCA2.13)
LaughASCA2.13=data.frame(ASCA2.13t[c(9,12,15,18,21,24),])
LaughASCA2.13=data.matrix(LaughASCA2.13)
LaughASCA2.13=LaughASCA2.13-1
LaughASCA2.13=colSums(LaughASCA2.13,na.rm=T)
LaughASCA2.13
CWA2.3=read.xlsx("CreativeWorld_Sofia1.2.3.14_BL.xlsx",1)
CWA2.3t=t(CWA2.3)
LaughCWA2.3=data.frame(CWA2.3t[c(9,12,15,18,21,24),])
LaughCWA2.3=data.matrix(LaughCWA2.3)
LaughCWA2.3=LaughCWA2.3-1
LaughCWA2.3=colSums(LaughCWA2.3,na.rm=T)
LaughCWA2.3
CWB2.3=read.xlsx("CreativeWorld_Sofia2.2.3.14_BL (Video Label is SST).xlsx",1)
CWB2.3t=t(CWB2.3)
LaughCWB2.3=data.frame(CWB2.3t[c(9,12,15,18,21,24),])
LaughCWB2.3=data.matrix(LaughCWB2.3)
LaughCWB2.3=LaughCWB2.3-1
LaughCWB2.3=colSums(LaughCWB2.3,na.rm=T)
LaughCWB2.3
PNA2.10=read.xlsx("Papa and Nanas Sofia A 2-10-14.xlsx",1)
PNA2.10t=t(PNA2.10)
LaughPNA2.10=data.frame(PNA2.10t[c(9,12,15,18,21,24),])
LaughPNA2.10=data.matrix(LaughPNA2.10)
LaughPNA2.10=LaughPNA2.10-1
LaughPNA2.10=colSums(LaughPNA2.10,na.rm=T)
LaughPNA2.10
PNB2.10=read.xlsx("Papa and Nanas Sofia B 2-10-14.xlsx",1)
PNB2.10t=t(PNB2.10)
LaughPNB2.10=data.frame(PNB2.10t[c(9,12,15,18,21,24),])
LaughPNB2.10=data.matrix(LaughPNB2.10)
LaughPNB2.10=LaughPNB2.10-1
LaughPNB2.10=colSums(LaughPNB2.10,na.rm=T)
LaughPNB2.10
RDA2.12=read.xlsx("QRosiesDaycare_Sofia1.2.12.14_BL.xlsx",1)
RDA2.12t=t(RDA2.12)
LaughRDA2.12=data.frame(RDA2.12t[c(9,12,15,18,21,24),])
LaughRDA2.12=data.matrix(LaughRDA2.12)
LaughRDA2.12=LaughRDA2.12-1
LaughRDA2.12=colSums(LaughRDA2.12,na.rm=T)
RDB2.12=read.xlsx("QRosiesDaycare_Sofia1.2.12.14_BL.xlsx",1)
RDB2.12t=t(RDB2.12)
LaughRDB2.12=data.frame(RDB2.12t[c(9,12,15,18,21,24),])
LaughRDB2.12=data.matrix(LaughRDB2.12)
LaughRDB2.12=LaughRDB2.12-1
LaughRDB2.12=colSums(LaughRDB2.12,na.rm=T)
SofiaLaugh=rbind(LaughACA2.11,LaughACB2.11,LaughACA2.17,LaughACB2.17,LaughASCA2.13,LaughCWA2.3,LaughCWB2.3,LaughPNA2.10,LaughPNB2.10,LaughRDA2.12,LaughRDB2.12)
SofiaLaugh=t(SofiaLaugh)
edit(SofiaLaugh)
write.xlsx(SofiaLaugh,"/Users/bradlide/Desktop/Data/2Sofia/SofiaCompiledLaugh.xlsx",sheetName="Laugh")
|
# Return full path to problem data file.
dataFileFullPath <- function(fname) {
paste("data", fname, sep="/")
}
# Returns solution for a given problem.
computeSolutionFor <- function(fname) {
# This is for the standard, python solver
sol <- system(paste('python solver.py', dataFileFullPath(fname)), intern=T)
#sol <- system(paste('./facility', dataFileFullPath(fname)), intern=T)
cost <- as.numeric(substr(sol[1], 1, nchar(sol[1])-1))
solution <- as.numeric(unlist(strsplit(sol[2], " ")))
list(cost=cost, solution=solution)
}
# Create a color mapping function between the given vector (which may contain duplicate values)
# and a color for each unique value in the vector.
makeColorMapper <- function(vals) {
uniq <- sort(unique(vals))
colors <- rainbow(length(uniq))
function(val) {
colors[which(uniq == val)]
}
}
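# Illustration (editor's sketch, not part of the original pipeline): the closure
# returned above maps each distinct value to a fixed rainbow() color, so a given
# facility index always gets the same segment color, e.g.:
#   mapper <- makeColorMapper(c(5, 2, 5, 9))
#   identical(mapper(5), mapper(5))  # TRUE - stable color per value
#   identical(mapper(5), mapper(2))  # FALSE - distinct values, distinct colors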
# Loads one data file and returns a list with facilities (fx) and customers (cx).
loadDataFile <- function(fname) {
rawData <- read.csv(dataFileFullPath(fname), header=F, sep=' ')
facCount <- rawData$V1[1]; custCount <-rawData$V2[1]
facilities <- rawData[2:(1+facCount), 1:4]
colnames(facilities) <- c('setupCost', 'capacity', 'x', 'y')
customers <- rawData[(2+facCount):(1+facCount+custCount), 1:3]
colnames(customers) <- c('demand', 'x', 'y')
list(fx = facilities, cx = customers)
}
# Plots one instance of the Facility Location problem defined in fname, including
# the solution (obtained via python solver.py fname).
plotInstance <- function(fname, plotLegend=F, plotSol=T) {
z <- loadDataFile(fname)
facilities <- z$fx
customers <- z$cx
plot(customers$y ~ customers$x, col="red", main=paste("Facility Location",fname), xlab="x", ylab="y",
xlim=c(min(c(min(customers$x), min(facilities$x))),max(c(max(customers$x), max(facilities$x)))),
ylim=c(min(c(min(customers$y), min(facilities$y))),max(c(max(customers$y), max(facilities$y)))))
points(facilities$y ~ facilities$x, col="blue")
if (plotLegend) {
colors <- c("red", "blue", "blue")
legend("topleft", legend=c("customer", "facility", "facility open"), col=colors,
text.col=colors, y.intersp=0.85, pch=c(1,1,16))
}
if (plotSol) {
plotSolution(fname, customers, facilities)
}
}
# Plots solution for one instance of the problem, defined in fname.
plotSolution <- function(fname, customers=NULL, facilities=NULL) {
# Lazy load customers & facilities
if (is.null(customers) || is.null(facilities)) {
z <- loadDataFile(fname)
facilities <- z$fx
customers <- z$cx
}
z <- computeSolutionFor(fname)
# converting from 0 based arrays of the output to 1 based arrays of R
z$solution <- z$solution + rep(1,length(z$solution))
facSel <- facilities[sort(unique(z$solution)),]
colors <- makeColorMapper(z$solution)
mtext(formatC(z$cost, format="d", big.mark=","), line=0.2, outer=F, col="#CC2211")
points(facSel$y ~ facSel$x, col="blue", pch=16)
ci <- 0
for (fi in z$solution) {
ci <- ci + 1
f <- facilities[fi,]
c <- customers[ci, ]
segments(f$x, f$y, c$x, c$y, col=colors(fi))
}
}
# Plots multiple data instances (all found in _metadata by default).
# If only specific instances need to be plotted, then limit can be passed a vector
# of their IDs (e.g. c(1,3,5) would only plot problems #1, #3 and #5).
plotMulti <- function(limit=NULL, plotSol=T, fname = './_metadata') {
z <- read.table(fname, header=F, sep=' ', skip=3)
names <- lapply(as.character(z$V2), function (a) substr(a,8,nchar(a)-1))
if (!is.null(limit)) {
names <- names[limit]
}
if (length(names) > 1) {
par(mfrow=c(2,length(names)/2))
} else {
par(mfrow=c(1,1))
}
lapply(names, function (a) plotInstance(a, a == names[[1]], plotSol))
}
#png(width=1600, height=900, res=96, antialias="subpixel", filename="out.png")
#plotMulti()
#dev.off()
# Manually plotting selected problems:
#plotInstance('fl_25_2')
| /facility/plot.R | no_license | alexvorobiev/dopt | R | false | false | 3,921 | r | # Return full path to problem data file.
dataFileFullPath <- function(fname) {
paste("data", fname, sep="/")
}
# Returns solution for a given problem.
computeSolutionFor <- function(fname) {
# This is for the standard, python solver
sol <- system(paste('python solver.py', dataFileFullPath(fname)), intern=T)
#sol <- system(paste('./facility', dataFileFullPath(fname)), intern=T)
cost <- as.numeric(substr(sol[1], 1, nchar(sol[1])-1))
solution <- as.numeric(unlist(strsplit(sol[2], " ")))
list(cost=cost, solution=solution)
}
# Create a color mapping function between the given vector (which may contain duplicate values)
# and a color for each unique value in the vector.
makeColorMapper <- function(vals) {
uniq <- sort(unique(vals))
colors <- rainbow(length(uniq))
function(val) {
colors[which(uniq == val)]
}
}
# Loads one data file and returns a list with facilities (fx) and customers (cx).
loadDataFile <- function(fname) {
rawData <- read.csv(dataFileFullPath(fname), header=F, sep=' ')
facCount <- rawData$V1[1]; custCount <-rawData$V2[1]
facilities <- rawData[2:(1+facCount), 1:4]
colnames(facilities) <- c('setupCost', 'capacity', 'x', 'y')
customers <- rawData[(2+facCount):(1+facCount+custCount), 1:3]
colnames(customers) <- c('demand', 'x', 'y')
list(fx = facilities, cx = customers)
}
# Plots one instance of the Facility Location problem defined in fname, including
# the solution (obtained via python solver.py fname).
plotInstance <- function(fname, plotLegend=F, plotSol=T) {
z <- loadDataFile(fname)
facilities <- z$fx
customers <- z$cx
plot(customers$y ~ customers$x, col="red", main=paste("Facility Location",fname), xlab="x", ylab="y",
xlim=c(min(c(min(customers$x), min(facilities$x))),max(c(max(customers$x), max(facilities$x)))),
ylim=c(min(c(min(customers$y), min(facilities$y))),max(c(max(customers$y), max(facilities$y)))))
points(facilities$y ~ facilities$x, col="blue")
if (plotLegend) {
colors <- c("red", "blue", "blue")
legend("topleft", legend=c("customer", "facility", "facility open"), col=colors,
text.col=colors, y.intersp=0.85, pch=c(1,1,16))
}
if (plotSol) {
plotSolution(fname, customers, facilities)
}
}
# Plots solution for one instance of the problem, defined in fname.
plotSolution <- function(fname, customers=NULL, facilities=NULL) {
# Lazy load customers & facilities
if (is.null(customers) || is.null(facilities)) {
z <- loadDataFile(fname)
facilities <- z$fx
customers <- z$cx
}
z <- computeSolutionFor(fname)
# converting from 0 based arrays of the output to 1 based arrays of R
z$solution <- z$solution + rep(1,length(z$solution))
facSel <- facilities[sort(unique(z$solution)),]
colors <- makeColorMapper(z$solution)
mtext(formatC(z$cost, format="d", big.mark=","), line=0.2, outer=F, col="#CC2211")
points(facSel$y ~ facSel$x, col="blue", pch=16)
ci <- 0
for (fi in z$solution) {
ci <- ci + 1
f <- facilities[fi,]
c <- customers[ci, ]
segments(f$x, f$y, c$x, c$y, col=colors(fi))
}
}
# Plots multiple data instances (all found in _metadata by default).
# If only specific instances need to be plotted, then limit can be passed a vector
# of their IDs (e.g. c(1,3,5) would only plot problems #1, #3 and #5).
plotMulti <- function(limit=NULL, plotSol=T, fname = './_metadata') {
z <- read.table(fname, header=F, sep=' ', skip=3)
names <- lapply(as.character(z$V2), function (a) substr(a,8,nchar(a)-1))
if (!is.null(limit)) {
names <- names[limit]
}
if (length(names) > 1) {
par(mfrow=c(2,length(names)/2))
} else {
par(mfrow=c(1,1))
}
lapply(names, function (a) plotInstance(a, a == names[[1]], plotSol))
}
#png(width=1600, height=900, res=96, antialias="subpixel", filename="out.png")
#plotMulti()
#dev.off()
# Manually plotting selected problems:
#plotInstance('fl_25_2')
|
with(ac471f882a8104baa9de21680922ff92e, {ROOT <- 'D:/SEMOSS_v4.0.0_x64/SEMOSS_v4.0.0_x64/semosshome/db/Atadata2__3b3e4a3b-d382-4e98-9950-9b4e8b308c1c/version/1c4fa71c-191c-4da9-8102-b247ffddc5d3';rm(list=ls())}); | /1c4fa71c-191c-4da9-8102-b247ffddc5d3/R/Temp/aVgKSrNdeDvD2.R | no_license | ayanmanna8/test | R | false | false | 212 | r | with(ac471f882a8104baa9de21680922ff92e, {ROOT <- 'D:/SEMOSS_v4.0.0_x64/SEMOSS_v4.0.0_x64/semosshome/db/Atadata2__3b3e4a3b-d382-4e98-9950-9b4e8b308c1c/version/1c4fa71c-191c-4da9-8102-b247ffddc5d3';rm(list=ls())}); |
File <- setRefClass(
'File',
fields = c('root','path_info','path'),
methods = list(
initialize = function(root,...){
root <<- root
callSuper(...)
},
call = function(env){
path_info <<- Utils$unescape(env[["PATH_INFO"]])
if (length(grep('..',path_info,fixed=TRUE))){
return(forbidden())
}
if (grepl('#',path_info))
path_info <- strsplit(path_info,'#')[[1]]
if (grepl('\\?',path_info))
      path_info <- strsplit(path_info,'\\?')[[1]]
path <<- normalizePath(file.path(root,path_info))
if (file_test('-d',path)){
if(!grepl(".*/$", path_info)){
return(redirect(paste(env[["SCRIPT_NAME"]], env[["PATH_INFO"]], "/", sep=""), status=301))
}
newpath <- file.path(path, "index.html")
if(file.exists(newpath)){
path <<- normalizePath(newpath)
serving()
} else {
return(indexdir())
}
} else if (file.exists(path)){
serving()
} else {
not_found()
}
},
forbidden = function(){
body = 'Forbidden\n'
list(
status=403L,
headers = list(
'Content-type' = 'text/plain',
'Content-Length' = as.character(nchar(body)),
'X-Cascade' = 'pass'
),
body = body
)
},
indexdir = function(){
body <- paste(list.files(path), collapse="\n")
list(
status=200L,
headers = list(
'Content-type' = 'text/plain',
'Content-Length' = as.character(nchar(body))
),
body = body
)
},
    redirect = function(location, status=302){
      res <- Response$new()
      res$redirect(location, status=status)
res$finish()
},
serving = function(){
fi <- file.info(path)
if (fi$size > 0) {
body = readBin(path,'raw',fi$size)
} else {
body <- path
names(body) <- 'file'
}
list (
status=200L,
headers = list(
'Last-Modified' = Utils$rfc2822(fi$mtime),
'Content-Type' = Mime$mime_type(Mime$file_extname(basename(path))),
'Content-Length' = as.character(fi$size)
),
body=body
)
},
not_found = function(){
body <- paste("File not found:",path_info,"\n")
list(
status=404L,
headers = list(
"Content-Type" = "text/plain",
"Content-Length" = as.character(nchar(body)),
"X-Cascade" = "pass"
),
body = body
)
}
)
)
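# Illustrative use (editor's sketch; assumes Rook's Rhttpd server class):
#   s <- Rhttpd$new()
#   s$add(app = File$new("/var/www"), name = "static")
#   s$start()   # files are then served under the app's mount point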
| /R/File.R | no_license | qlycool/Rook | R | false | false | 2,273 | r | File <- setRefClass(
'File',
fields = c('root','path_info','path'),
methods = list(
initialize = function(root,...){
root <<- root
callSuper(...)
},
call = function(env){
path_info <<- Utils$unescape(env[["PATH_INFO"]])
if (length(grep('..',path_info,fixed=TRUE))){
return(forbidden())
}
if (grepl('#',path_info))
path_info <- strsplit(path_info,'#')[[1]]
if (grepl('\\?',path_info))
      path_info <- strsplit(path_info,'\\?')[[1]]
path <<- normalizePath(file.path(root,path_info))
if (file_test('-d',path)){
if(!grepl(".*/$", path_info)){
return(redirect(paste(env[["SCRIPT_NAME"]], env[["PATH_INFO"]], "/", sep=""), status=301))
}
newpath <- file.path(path, "index.html")
if(file.exists(newpath)){
path <<- normalizePath(newpath)
serving()
} else {
return(indexdir())
}
} else if (file.exists(path)){
serving()
} else {
not_found()
}
},
forbidden = function(){
body = 'Forbidden\n'
list(
status=403L,
headers = list(
'Content-type' = 'text/plain',
'Content-Length' = as.character(nchar(body)),
'X-Cascade' = 'pass'
),
body = body
)
},
indexdir = function(){
body <- paste(list.files(path), collapse="\n")
list(
status=200L,
headers = list(
'Content-type' = 'text/plain',
'Content-Length' = as.character(nchar(body))
),
body = body
)
},
    redirect = function(location, status=302){
      res <- Response$new()
      res$redirect(location, status=status)
res$finish()
},
serving = function(){
fi <- file.info(path)
if (fi$size > 0) {
body = readBin(path,'raw',fi$size)
} else {
body <- path
names(body) <- 'file'
}
list (
status=200L,
headers = list(
'Last-Modified' = Utils$rfc2822(fi$mtime),
'Content-Type' = Mime$mime_type(Mime$file_extname(basename(path))),
'Content-Length' = as.character(fi$size)
),
body=body
)
},
not_found = function(){
body <- paste("File not found:",path_info,"\n")
list(
status=404L,
headers = list(
"Content-Type" = "text/plain",
"Content-Length" = as.character(nchar(body)),
"X-Cascade" = "pass"
),
body = body
)
}
)
)
|
# clear variables and close windows
rm(list = ls(all = TRUE))
graphics.off()
# load data
x = read.table("implvola.dat")
# rescale
x = x/100
# number of rows
n = nrow(x)
# compute first differences
z = apply(x,2,diff)
# calculate covariance
s = cov(z) * 1e+05
# determine eigenvectors
e = eigen(s)
e = e$vectors
f1 = e[, 1]
f2 = e[, 2]
# Adjust second Eigenvector in R not necessary - the computation differs from R to Matlab
# Plot
plot(f1, col = "blue3", ylim = c(-0.6, 0.8), lwd = 2, type = "l", xlab = "Time",
ylab = "Percentage [%]", main = "Factor loadings")
points(f1)
lines(f2, col = "red3", lwd = 2, lty = "dotdash")
points(f2)
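# editor's note (equivalence sketch, not in the original): f1 and f2 are the
# loadings of the first two principal components of the differenced series;
# up to sign they match pr <- prcomp(z); pr$rotation[, 1:2], since scaling the
# covariance by 1e+05 leaves its eigenvectors unchanged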
| /QID-3401-SFEPCA/SFEPCA.R | no_license | QuantLet/SFE | R | false | false | 651 | r |
# clear variables and close windows
rm(list = ls(all = TRUE))
graphics.off()
# load data
x = read.table("implvola.dat")
# rescale
x = x/100
# number of rows
n = nrow(x)
# compute first differences
z = apply(x,2,diff)
# calculate covariance
s = cov(z) * 1e+05
# determine eigenvectors
e = eigen(s)
e = e$vectors
f1 = e[, 1]
f2 = e[, 2]
# Adjust second Eigenvector in R not necessary - the computation differs from R to Matlab
# Plot
plot(f1, col = "blue3", ylim = c(-0.6, 0.8), lwd = 2, type = "l", xlab = "Time",
ylab = "Percentage [%]", main = "Factor loadings")
points(f1)
lines(f2, col = "red3", lwd = 2, lty = "dotdash")
points(f2)
|
#***require packages for the analysis
pkg=c("plyr", "ggplot2", "phyloseq", "data.table", "reshape2", "ape", "GUniFrac", "ape", "phytools", "metagenomeSeq", "ggtern","PMCMR")
lapply(pkg, library, character.only = TRUE)
rm(list = ls()[grep("*.*", ls())])
alphaInd = c("Shannon", "Observed")
color_palette<-c("#000000","#806600","#803300","#666666")
# ***Upload and prepare phyloseq objects***
mat=read.table( "otu_table.txt", sep="\t", row.names=1, header=T)
mat=as.matrix(mat)
OTU=otu_table(mat, taxa_are_rows=T)
tax=read.table("taxa_table.txt", sep="\t", row.names=1, header=1)
tax=as.matrix(tax)
TAXA=tax_table(tax)
sd=read.table("sample_data.txt", sep="\t", row.names=1, header=1)
SD=sample_data(sd)
TREE=read.tree("16s_trim_v5v6_KG_DepExp_Derep.tree")
physeq= phyloseq(OTU, TAXA, SD,TREE)
#Remove samples with less than 1000 reads
physeq1 = prune_samples(sample_sums(physeq) > 1000, physeq)
#Select for full and hs depleted samples
cond=c("input","afull","hs")
physeq2=subset_samples(physeq1, config2 %in% cond)
comm=c("Back","Comp")
physeq2=subset_taxa(physeq2, tag %in% comm)
#Rarefication to even depth based on smallest sample size
rf=min(sample_sums(physeq2))
physeq.rrf=rarefy_even_depth(physeq2, rf, replace=TRUE)
p_nat=plot_richness(physeq.rrf,"config3","config2" ,measures=alphaInd)
p_nat$layers <- p_nat$layers[-1]
#neworder<-c("input","matrixpure","matrix3per","rootpure","root3per")
#p_nat$data$config3 <- ordered(p_nat$data$config3, levels= neworder)
p1=p_nat+geom_boxplot(data=p_nat$data, aes(x=config3, y=value, color=config2), width=1)+
geom_point(size=4)+theme_bw()+
theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())+
scale_colour_manual(values=color_palette)
#ggtitle(paste0("Alpha diversity rarefication to ",rf, " sequencing depth"))
subp2 = ggplot(data=p_nat$data[p_nat$data$variable=="Observed",], aes(x=config3, y=value, color=config2))+ geom_boxplot(width=1)+
geom_point(size=2)+theme_bw()+
theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),legend.position="none")+
scale_colour_manual(values=color_palette) + facet_wrap(~variable) + scale_y_continuous(limits=c(0, 130), breaks=c(0,25,50,75,100,125))
kruskal.test(data=subp2$data, value ~ config3)
posthoc.kruskal.conover.test(data=subp2$data, value ~ config3, method="BH")
subp1 = ggplot(data=p_nat$data[p_nat$data$variable=="Shannon",], aes(x=config3, y=value, color=config2))+ geom_boxplot(width=1)+
geom_point(size=2)+theme_bw()+
theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),legend.position="none")+
scale_colour_manual(values=color_palette) + facet_wrap(~variable) + scale_y_continuous(limits=c(0, 6), breaks=c(0,0.5,1.5,2.5,3.5,4.5,5.5))
kruskal.test(data=subp1$data, value ~ config3)
posthoc.kruskal.conover.test(data=subp1$data, value ~ config3)
gridExtra::grid.arrange(subp2, subp1, ncol=2)
 | /community_analysis/FlowPots_system/alpha-hs.R | no_license | hmamine/ciiwarm17 | R | false | false | 2,923 | r | #***require packages for the analysis
pkg=c("plyr", "ggplot2", "phyloseq", "data.table", "reshape2", "ape", "GUniFrac", "ape", "phytools", "metagenomeSeq", "ggtern","PMCMR")
lapply(pkg, library, character.only = TRUE)
rm(list = ls()[grep("*.*", ls())])
alphaInd = c("Shannon", "Observed")
color_palette<-c("#000000","#806600","#803300","#666666")
# ***Upload and prepare phyloseq objects***
mat=read.table( "otu_table.txt", sep="\t", row.names=1, header=T)
mat=as.matrix(mat)
OTU=otu_table(mat, taxa_are_rows=T)
tax=read.table("taxa_table.txt", sep="\t", row.names=1, header=1)
tax=as.matrix(tax)
TAXA=tax_table(tax)
sd=read.table("sample_data.txt", sep="\t", row.names=1, header=1)
SD=sample_data(sd)
TREE=read.tree("16s_trim_v5v6_KG_DepExp_Derep.tree")
physeq= phyloseq(OTU, TAXA, SD,TREE)
#Remove samples with less than 1000 reads
physeq1 = prune_samples(sample_sums(physeq) > 1000, physeq)
#Select for full and hs depleted samples
cond=c("input","afull","hs")
physeq2=subset_samples(physeq1, config2 %in% cond)
comm=c("Back","Comp")
physeq2=subset_taxa(physeq2, tag %in% comm)
#Rarefication to even depth based on smallest sample size
rf=min(sample_sums(physeq2))
physeq.rrf=rarefy_even_depth(physeq2, rf, replace=TRUE)
p_nat=plot_richness(physeq.rrf,"config3","config2" ,measures=alphaInd)
p_nat$layers <- p_nat$layers[-1]
#neworder<-c("input","matrixpure","matrix3per","rootpure","root3per")
#p_nat$data$config3 <- ordered(p_nat$data$config3, levels= neworder)
p1=p_nat+geom_boxplot(data=p_nat$data, aes(x=config3, y=value, color=config2), width=1)+
geom_point(size=4)+theme_bw()+
theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank())+
scale_colour_manual(values=color_palette)
#ggtitle(paste0("Alpha diversity rarefication to ",rf, " sequencing depth"))
subp2 = ggplot(data=p_nat$data[p_nat$data$variable=="Observed",], aes(x=config3, y=value, color=config2))+ geom_boxplot(width=1)+
geom_point(size=2)+theme_bw()+
theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),legend.position="none")+
scale_colour_manual(values=color_palette) + facet_wrap(~variable) + scale_y_continuous(limits=c(0, 130), breaks=c(0,25,50,75,100,125))
kruskal.test(data=subp2$data, value ~ config3)
posthoc.kruskal.conover.test(data=subp2$data, value ~ config3, method="BH")
subp1 = ggplot(data=p_nat$data[p_nat$data$variable=="Shannon",], aes(x=config3, y=value, color=config2))+ geom_boxplot(width=1)+
geom_point(size=2)+theme_bw()+
theme(panel.grid.major = element_blank(), panel.grid.minor = element_blank(),legend.position="none")+
scale_colour_manual(values=color_palette) + facet_wrap(~variable) + scale_y_continuous(limits=c(0, 6), breaks=c(0,0.5,1.5,2.5,3.5,4.5,5.5))
kruskal.test(data=subp1$data, value ~ config3)
posthoc.kruskal.conover.test(data=subp1$data, value ~ config3)
gridExtra::grid.arrange(subp2, subp1, ncol=2)
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/bundle_linking.R
\name{remove_duplicate_bundle_links}
\alias{remove_duplicate_bundle_links}
\title{remove_duplicate_bundle_links}
\usage{
remove_duplicate_bundle_links(linked_bundles)
}
\arguments{
\item{linked_bundles}{output of link_bundles}
}
\value{
linked_bundles with duplicate links removed
}
\description{
remove_duplicate_bundle_links
}
| /man/remove_duplicate_bundle_links.Rd | no_license | gabrielburcea/clahrcnwlhf | R | false | true | 425 | rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/bundle_linking.R
\name{remove_duplicate_bundle_links}
\alias{remove_duplicate_bundle_links}
\title{remove_duplicate_bundle_links}
\usage{
remove_duplicate_bundle_links(linked_bundles)
}
\arguments{
\item{linked_bundles}{output of link_bundles}
}
\value{
linked_bundles with duplicate links removed
}
\description{
remove_duplicate_bundle_links
}
|
library(plyr)
library(dplyr)
library(data.table)
library(Stack)
test = read.csv('/Train-Test Splits/Context/test.csv', header = TRUE)
#Order LEs by 'user_id'
setDT(test)[,freq := .N, by = "user_id"]
test = test[order(freq, decreasing = T),]
#Get the unique user_ids and their frequencies
unique_user_id = with(test,aggregate(freq ~ user_id,FUN=function(x){unique(x)}))
frequen = unique_user_id$freq
frequen = sort(frequen, decreasing = TRUE)
user = unique(test$user_id)
#Positive LEs are given a rating 1
test$rating=1
test$freq = NULL
#Creating a test set with one temporary positive LE to start with
temp = test[1,]
temp$lang = as.character(temp$lang)
temp$hashtag = as.character(temp$hashtag)
temp$tweet_lang = as.character(temp$tweet_lang)
for (i in 1:length(user))
{
#Get LEs of the particular user
lis = filter( test, test$user_id ==user[i])
#Creating 9 negative samples for each positive sample of the 'user_id'
notlis = do.call("rbind", replicate(9, lis, simplify = FALSE))
#Get vector of the languages that the user hasn't used
notlang = setdiff(test$lang, lis$lang)
notlang = rep(notlang,length.out=nrow(notlis))
#Get vector of the hashtags that the user hasn't used
nothash = setdiff(test$hashtag, lis$hashtag)
nothash = rep(nothash,length.out=nrow(notlis))
notlis$lang = notlang
notlis$hashtag = nothash
notlis$tweet_lang = notlang
#Negative LEs are given a rating 0
notlis$rating = 0
#Stacking the negative samples for each user
temp = Stack(temp, notlis)
print(i)
}
#Discarding the temporary LE that was used at the beginning of creating the test set
temp = temp[2:nrow(temp),]
#Merging the positive and negative LEs to create the final test set
test_all = Stack(test, temp)
#Writing the final test set (to be input to FM) to file
write.table(test_all, 'test_final_POP_USER.txt', quote = FALSE, col.names= FALSE, row.names = FALSE, sep = '\t')
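#Editor's sketch (not part of the original pipeline): each user ends up with a
#fixed 1:9 positive-to-negative ratio, since the user's positives are
#replicated 9 times and relabelled with unseen language/hashtag values, e.g.:
#lis = data.frame(user_id = 1, lang = "en", hashtag = "np", rating = 1)
#notlis = do.call("rbind", replicate(9, lis, simplify = FALSE))
#nrow(notlis) / nrow(lis)  #9 negatives per positive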
| /Context_POP_USER/test_POP_USER.r | no_license | asmitapoddar/nowplaying-RS-Music-Reco-FM | R | false | false | 1,956 | r | library(plyr)
library(dplyr)
library(data.table)
library(Stack)
test = read.csv('/Train-Test Splits/Context/test.csv', header = TRUE)
#Order LEs by 'user_id'
setDT(test)[,freq := .N, by = "user_id"]
test = test[order(freq, decreasing = T),]
#Get the unique user_ids and their frequencies
unique_user_id = with(test,aggregate(freq ~ user_id,FUN=function(x){unique(x)}))
frequen = unique_user_id$freq
frequen = sort(frequen, decreasing = TRUE)
user = unique(test$user_id)
#Positive LEs are given a rating 1
test$rating=1
test$freq = NULL
#Creating a test set with one temporary positive LE to start with
temp = test[1,]
temp$lang = as.character(temp$lang)
temp$hashtag = as.character(temp$hashtag)
temp$tweet_lang = as.character(temp$tweet_lang)
for (i in 1:length(user))
{
#Get LEs of the particular user
lis = filter( test, test$user_id ==user[i])
#Creating 9 negative samples for each positive sample of the 'user_id'
notlis = do.call("rbind", replicate(9, lis, simplify = FALSE))
#Get vector of the languages that the user hasn't used
notlang = setdiff(test$lang, lis$lang)
notlang = rep(notlang,length.out=nrow(notlis))
#Get vector of the hashtags that the user hasn't used
nothash = setdiff(test$hashtag, lis$hashtag)
nothash = rep(nothash,length.out=nrow(notlis))
notlis$lang = notlang
notlis$hashtag = nothash
notlis$tweet_lang = notlang
#Negative LEs are given a rating 0
notlis$rating = 0
#Stacking the negative samples for each user
temp = Stack(temp, notlis)
print(i)
}
#Discarding the temporary LE that was used at the beginning of creating the test set
temp = temp[2:nrow(temp),]
#Merging the positive and negative LEs to create the final test set
test_all = Stack(test, temp)
#Writing the final test set (to be input to FM) to file
write.table(test_all, 'test_final_POP_USER.txt', quote = FALSE, col.names= FALSE, row.names = FALSE, sep = '\t')
|
## This is the code for the programming assignment of week 3.
## Holds variables in memory for:
## - The input matrix
## - It's inverse (when required)
makeCacheMatrix <- function(x = matrix()) {
i <- NULL
set <- function(y) {
x <<- y
i <<- NULL
}
get <- function() x
setInverse <- function(inverse) i <<- inverse
getInverse <- function() i
list(set=set, get=get, getInverse=getInverse, setInverse=setInverse)
}
## Calculates the inverse of the matrix in `x` if there's no cached result
## already. Otherwise, return the cached version.
cacheSolve <- function(x, ...) {
inverse <- x$getInverse()
if (is.null(inverse)) {
inverse <- solve(x$get(), ...)
x$setInverse(inverse)
}
inverse
}
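## Example usage (illustrative sketch, not part of the assignment spec):
##   cm <- makeCacheMatrix(matrix(c(2, 0, 0, 2), 2, 2))
##   cacheSolve(cm)   # computes solve() and caches the inverse
##   cacheSolve(cm)   # returns the cached inverse without recomputing
##   cm$set(diag(3))  # set() resets the cache, forcing recomputation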
| /cachematrix.R | no_license | jverce/ProgrammingAssignment2 | R | false | false | 792 | r | ## This is the code for the programming assignment of week 3.
## Holds variables in memory for:
## - The input matrix
## - It's inverse (when required)
makeCacheMatrix <- function(x = matrix()) {
i <- NULL
set <- function(y) {
x <<- y
i <<- NULL
}
get <- function() x
setInverse <- function(inverse) i <<- inverse
getInverse <- function() i
list(set=set, get=get, getInverse=getInverse, setInverse=setInverse)
}
## Calculates the inverse of the matrix in `x` if there's no cached result
## already. Otherwise, return the cached version.
cacheSolve <- function(x, ...) {
inverse <- x$getInverse()
if (is.null(inverse)) {
inverse <- solve(x$get(), ...)
x$setInverse(inverse)
}
inverse
}
|
#If the data files don't already exist, download and unzip them
if(!file.exists("./summarySCC_PM25.rds")) {
download.file("https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2FNEI_data.zip", "./temp_data.zip")
unzip("temp_data.zip")
file.remove("./temp_data.zip")
}
# read the source files (if pm25 object does not already exist)
if(!exists("pm25")) {
pm25 <- readRDS("summarySCC_PM25.rds")
}
if(!exists("SCC")) {
SCC <- readRDS("Source_Classification_Code.rds")
}
# Subset the SCC codes that belong to vehicle emissions by regex against the EI.Sector column
# then filter pm25 by SCC code and Baltimore fips value, and aggregate and sum resulting Emissions by year
vehicleSCC <- SCC[grep("vehicle", SCC$EI.Sector, ignore.case = TRUE), ]
veh_emissions <- pm25[pm25$fips == "24510" & pm25$SCC %in% vehicleSCC$SCC, ]
veh_emissions <- aggregate(Emissions ~ year, veh_emissions, sum)
# create a new png device, plot the data, then close the device
png("plot5.png")
barplot(veh_emissions$Emissions, names.arg = veh_emissions$year, xlab = "Year", ylab = "PM2.5 emission (tons)", main = "Total PM2.5 emission from vehicle sources in Baltimore, MD")
dev.off() | /Assignment 2 - Week 4/plot5.R | no_license | myer0154/Coursera--Exploratory-Data-Analysis | R | false | false | 1,164 | r | #If the data files don't already exist, download and unzip them
if(!file.exists("./summarySCC_PM25.rds")) {
download.file("https://d396qusza40orc.cloudfront.net/exdata%2Fdata%2FNEI_data.zip", "./temp_data.zip")
unzip("temp_data.zip")
file.remove("./temp_data.zip")
}
# read the source files (if pm25 object does not already exist)
if(!exists("pm25")) {
pm25 <- readRDS("summarySCC_PM25.rds")
}
if(!exists("SCC")) {
SCC <- readRDS("Source_Classification_Code.rds")
}
# Subset the SCC codes that belong to vehicle emissions by regex against the EI.Sector column
# then filter pm25 by SCC code and Baltimore fips value, and aggregate and sum resulting Emissions by year
vehicleSCC <- SCC[grep("vehicle", SCC$EI.Sector, ignore.case = TRUE), ]
veh_emissions <- pm25[pm25$fips == "24510" & pm25$SCC %in% vehicleSCC$SCC, ]
veh_emissions <- aggregate(Emissions ~ year, veh_emissions, sum)
# create a new png device, plot the data, then close the device
png("plot5.png")
barplot(veh_emissions$Emissions, names.arg = veh_emissions$year, xlab = "Year", ylab = "PM2.5 emission (tons)", main = "Total PM2.5 emission from vehicle sources in Baltimore, MD")
dev.off() |
### =========================================================================
### RangedData objects
### -------------------------------------------------------------------------
## For keeping data with your ranges
## There are two design aims:
## 1) Efficiency when data is large (i.e. apply by chromosome)
## 2) Convenience when data is not so large (i.e. unrolling the data)
## The ranges are stored in a IntegerRangesList, while the data is stored
## in a SplitDataFrameList. The IntegerRangesList is uncompressed, because
## users will likely want to apply over each IntegerRanges separately,
## as they are usually in separate spaces. Also, it is difficult to
## compress RangesLists, as lists containing Views or NCLists
## are uncompressible. The SplitDataFrameList should be compressed,
## because it's cheap to create from a split factor and, more
## importantly, cheap to get and set columns along the entire dataset,
## which is common. Usually the data columns are atomic vectors and
## thus trivially compressed. It does, however, incur a slight
## performance penalty when applying over the RangedData.
setClass("RangedData", contains = c("DataTable", "List"),
representation(ranges = "IntegerRangesList",
values = "SplitDataFrameList"),
prototype = prototype(ranges = new("SimpleIRangesList"),
values = new("CompressedSplitDataFrameList")))
wmsg2 <- function(...)
paste0(" ",
paste0(strwrap(paste0(c(...), collapse="")), collapse="\n "))
RangedData_is_deprecated_msg <-
c("RangedData objects are deprecated. ",
"Please migrate your code to use GRanges or GRangesList objects instead. ",
"See IMPORTANT NOTE in ?RangedData")
RangedData_method_is_defunct_msg <- function(what)
c("RangedData objects are deprecated and the ", what, " is now defunct. ",
"Please migrate your code to use GRanges or GRangesList objects instead. ",
"See IMPORTANT NOTE in ?RangedData")
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Accessor methods.
###
setMethod("values", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
x@values
})
setReplaceMethod("values", "RangedData",
function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (extends(class(value), "SplitDataFrameList")) {
if (!identical(elementNROWS(values(x)),
elementNROWS(value)))
stop("'value' must have same elementNROWS ",
"as current 'values'")
} else if (extends(class(value), "DataFrame")) {
value <- split(value, space(x))
} else {
stop("'value' must extend class SplitDataFrameList or DataFrame")
}
if (is.null(rownames(value)) && !is.null(rownames(x)))
rownames(value) <- rownames(x)
else if (!identical(rownames(value), rownames(values(x))))
stop("rownames of 'value', if non-NULL, must match the ",
"rownames of 'x'")
x@values <- value
x
})
setMethod("ranges", "RangedData",
function(x, use.names=TRUE, use.mcols=FALSE) x@ranges
)
setReplaceMethod("ranges", "RangedData",
function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (extends(class(value), "IntegerRangesList")) {
if (!identical(lapply(ranges(x), names), lapply(value, names)))
stop("'value' must have same length and names as current 'ranges'")
} else if (extends(class(value), "IRanges")) {
value <- split(value, space(x))
} else {
stop("'value' must extend class IntegerRangesList or IRanges")
}
x@ranges <- value
x
})
## range delegates
setMethod("start", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
start(unlist(ranges(x), use.names=FALSE))
})
setMethod("end", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
end(unlist(ranges(x), use.names=FALSE))
})
setMethod("width", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
width(unlist(ranges(x), use.names=FALSE))
})
setReplaceMethod("start", "RangedData",
function(x, ..., value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
start(ranges(x), ...) <- value
x
})
setReplaceMethod("end", "RangedData",
function(x, ..., value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
end(ranges(x), ...) <- value
x
})
setReplaceMethod("width", "RangedData",
function(x, ..., value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
width(ranges(x), ...) <- value
x
})
setMethod("length", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
length(ranges(x))
})
setMethod("names", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
names(ranges(x))
})
setReplaceMethod("names", "RangedData",
function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (!is.null(value) && !is.character(value))
stop("'value' must be NULL or a character vector")
names(x@ranges) <- value
names(x@values) <- value
x
})
setMethod("elementNROWS", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
elementNROWS(ranges(x))
})
setMethod("space", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
space(ranges(x))
})
setGeneric("universe", function(x) standardGeneric("universe"))
setMethod("universe", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
universe(ranges(x))
})
setGeneric("universe<-", function(x, value) standardGeneric("universe<-"))
setReplaceMethod("universe", "RangedData",
function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
universe(x@ranges) <- value
x
})
setMethod("score", "RangedData",
function(x) {
what <- "score() getter for RangedData objects"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
score <- x[["score"]]
## if (is.null(score) && ncol(x) > 0 && is.numeric(x[[1L]]))
## score <- x[[1L]]
score
})
setReplaceMethod("score", "RangedData",
function(x, value) {
what <- "score() setter for RangedData objects"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
if (!is.numeric(value))
stop("score must be numeric")
if (length(value) != nrow(x))
stop("number of scores must equal the number of rows")
x[["score"]] <- value
x
})
## values delegates
setMethod("nrow", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
sum(nrow(values(x)))
})
setMethod("ncol", "RangedData",
function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
ncol(values(x))[[1L]]
})
setMethod("rownames", "RangedData",
function(x, do.NULL = TRUE, prefix = "row") {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
rn <-
unlist(rownames(values(x), do.NULL = do.NULL, prefix = prefix),
use.names=FALSE)
if (length(rn) == 0)
rn <- NULL
rn
})
setMethod("colnames", "RangedData",
function(x, do.NULL = TRUE, prefix = "col") {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (length(x) == 0)
character()
else
colnames(values(x), do.NULL = do.NULL, prefix = prefix)[[1L]]
})
setReplaceMethod("rownames", "RangedData",
function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (!is.null(value)) {
if (length(value) != nrow(x)) {
stop("invalid 'row.names' length")
} else {
if (!is.character(value))
value <- as.character(value)
ends <- cumsum(elementNROWS(x))
value <-
new("CompressedCharacterList",
unlistData = value,
partitioning = PartitioningByEnd(ends))
}
}
ranges <- ranges(x)
for(i in seq_len(length(ranges))) {
names(ranges[[i]]) <- value[[i]]
}
x@ranges <- ranges
rownames(x@values) <- value
x
})
setReplaceMethod("colnames", "RangedData",
function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
colnames(x@values) <- value
x
})
setMethod("columnMetadata", "RangedData", function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
columnMetadata(values(x))
})
setReplaceMethod("columnMetadata", "RangedData", function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
columnMetadata(values(x)) <- value
x
})
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Validity.
###
.valid.RangedData.ranges <- function(x)
{
if (!identical(lapply(ranges(x), length), lapply(values(x), nrow)))
"'ranges' and 'values' must be of the same length and have the same names"
else if (!identical(unlist(lapply(ranges(x), names), use.names=FALSE),
rownames(x)))
"the names of the ranges must equal the rownames"
else NULL
}
.valid.RangedData.names <- function(x) {
nms <- names(x)
if (length(nms) != length(x))
"length(names(x)) must equal length(x)"
else if (!is.character(nms) || S4Vectors:::anyMissing(nms) || anyDuplicated(nms))
"names(x) must be a character vector without any NA's or duplicates"
else NULL
}
.valid.RangedData <- function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
c(.valid.RangedData.ranges(x), .valid.RangedData.names(x))
}
setValidity2("RangedData", .valid.RangedData)
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Constructor.
###
## creates a single-element RangedData (unless splitter (space) is specified)
RangedData <- function(ranges = IRanges(), ..., space = NULL)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
hasDots <- (((nargs() - !missing(space))) > 1)
if (is(ranges, "IntegerRangesList") && !is(ranges, "IntegerRanges")) {
if (!is.null(space))
warning("since 'class(ranges)' extends IntegerRangesList, 'space' argument is ignored")
if (is.null(names(ranges)))
names(ranges) <- as.character(seq_len(length(ranges)))
space <-
Rle(factor(names(ranges), levels = names(ranges)),
elementNROWS(ranges))
N <- sum(elementNROWS(ranges))
NAMES <- unlist(lapply(ranges, names), use.names=FALSE)
} else {
if (!is(ranges, "IntegerRanges")) {
coerced <- try(as(ranges, "RangedData"), silent=TRUE)
if (is(coerced, "RangedData"))
return(coerced)
stop("'ranges' must be an IntegerRanges or directly coercible to RangedData")
}
N <- length(ranges)
NAMES <- names(ranges)
if (is.null(space)) {
if (N == 0)
space <- Rle(factor())
else
space <- Rle(factor("1"))
} else if (!is(space, "Rle")) {
space <- Rle(space)
}
if (!is.factor(runValue(space)))
runValue(space) <- factor(runValue(space))
if (length(space) != N) {
if (length(space) == 0L)
stop("'space' is a 0-length vector but length of 'ranges' is > 0")
## We make an exception to the "length(space) must be <= N" rule when
## N == 0L so we can support the direct creation of RangedData objects
## with 0 rows across 1 or more user-specified spaces like in:
## RangedData(ranges=IRanges(), space=letters)
if (N != 0L && length(space) > N)
stop("length of 'space' greater than length of 'ranges'")
if (N %% length(space) != 0)
stop("length of 'ranges' not a multiple of 'space' length")
space <- rep(space, length.out = N)
}
if (!is(ranges, "IRanges"))
ranges <- as(ranges, "IRanges")
ranges <- split(ranges, space)
}
if (hasDots) {
args <- list(...)
if (length(args) == 1L && is(args[[1L]], "SplitDataFrameList")) {
values <- unlist(args[[1L]], use.names=FALSE)
} else {
values <- DataFrame(...)
}
}
else
values <- S4Vectors:::make_zero_col_DataFrame(N)
if (N != nrow(values)) {
if (nrow(values) > N)
stop("length of value(s) in '...' greater than length of 'ranges'")
if (nrow(values) == 0 || N %% nrow(values) != 0)
stop("length of 'ranges' not a multiple of length of value(s) in '...'")
rind <- S4Vectors:::recycleVector(seq_len(nrow(values)), N)
values <- values[rind,,drop=FALSE]
}
rownames(values) <- NAMES ## ensure these are identical
values <- split(values, space)
new2("RangedData", ranges = ranges, values = values, check=FALSE)
}
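## Recycling sketch for the constructor above (not run; 'rd' is a
## hypothetical name): data columns shorter than 'ranges' are recycled,
## provided their length evenly divides the number of ranges:
##   rd <- RangedData(IRanges(1:4, width=5), grp=c("a", "b"))
##   rd[["grp"]]  # 'grp' recycled to the 4 ranges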
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Subsetting.
###
## The extraction operator delegates to the values (extracts columns)
setMethod("[[", "RangedData",
function(x, i, j, ...)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
dotArgs <- list(...)
if (length(dotArgs) > 0)
dotArgs <- dotArgs[names(dotArgs) != "exact"]
if (!missing(j) || length(dotArgs) > 0)
stop("invalid subsetting")
if (missing(i))
stop("subscript is missing")
if (!is.character(i) && !is.numeric(i))
stop("invalid subscript type")
if (length(i) < 1L)
stop("attempt to select less than one element")
if (length(i) > 1L)
stop("attempt to select more than one element")
if (is.numeric(i) && !is.na(i) && (i < 1L || i > ncol(x)))
stop("subscript out of bounds")
if (is.na(i) || (is.character(i) &&
!(i %in% c("space", "ranges", colnames(x)))))
NULL
else if (i == "space")
space(x)
else if (i == "ranges")
unlist(ranges(x), use.names=FALSE)
else
unlist(values(x), use.names=FALSE)[[i]]
})
setReplaceMethod("[[", "RangedData",
function(x, i, j,..., value)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (!missing(j) || length(list(...)) > 0)
stop("invalid subsetting")
if (missing(i))
stop("subscript is missing")
if (!is.character(i) && !is.numeric(i))
stop("invalid subscript type")
if (length(i) < 1L)
stop("attempt to select less than one element")
if (length(i) > 1L)
stop("attempt to select more than one element")
if (is.numeric(i) && (i < 1L || i > ncol(x) + 1L))
stop("subscript out of bounds")
if (i == "space")
stop("cannot replace \"space\" information")
if (i == "ranges") {
ranges(x) <- value
} else {
nrx <- nrow(x)
lv <- length(value)
if (!is.null(value) && (nrx != lv)) {
if ((nrx == 0) || (nrx %% lv != 0))
stop(paste(lv, "elements in value to replace",
nrx, "elements"))
else
value <- rep(value, length.out = nrx)
}
nrows <- elementNROWS(values(x))
inds <- seq_len(length(x))
spaces <- factor(rep.int(inds, nrows), inds)
values <- unlist(values(x), use.names=FALSE)
values[[i]] <- value
x@values <- split(values, spaces)
names(x@values) <- names(x)
}
x
})
### Supported index types: numeric, logical, character, NULL and missing.
## Two index modes:
## - list style ([i]): subsets by range space (e.g. chromosome)
## - matrix style ([i,j]): subsets the data frame
setMethod("[", "RangedData",
function(x, i, j, ..., drop=FALSE)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (length(list(...)) > 0)
stop("parameters in '...' not supported")
if (missing(i) && missing(j))
return(x)
checkIndex <- function(i, lx, nms) {
if (!is.atomic(i))
return("invalid subscript type")
if (is.numeric(i)) {
if (!is.integer(i))
i <- as.integer(i)
if (S4Vectors:::anyMissingOrOutside(i, upper = lx))
return("subscript contains NAs or out of bounds indices")
if (S4Vectors:::anyMissingOrOutside(i, 0L, lx) &&
S4Vectors:::anyMissingOrOutside(i, upper = 0L))
return("negative and positive indices cannot be mixed")
} else if (is.logical(i)) {
if (S4Vectors:::anyMissing(i))
return("subscript contains NAs")
if (length(i) > lx)
return("subscript out of bounds")
} else if ((is.character(i) || is.factor(i))) {
if (S4Vectors:::anyMissing(i))
return("subscript contains NAs")
if (S4Vectors:::anyMissing(match(i, nms)))
return("mismatching names")
} else if (!is.null(i)) {
return("invalid subscript type")
}
NULL
}
mstyle <- nargs() > 2
if (mstyle) {
ranges <- ranges(x)
values <- values(x)
if (!missing(j)) {
prob <- checkIndex(j, ncol(x), colnames(x))
if (!is.null(prob))
stop("selecting cols: ", prob)
values <- values[, j, drop=FALSE]
}
if (!missing(i)) {
if (is(i, "IntegerRangesList"))
stop("subsetting a RangedData object ",
"by an IntegerRangesList subscript is not supported")
if (is(i, "LogicalList")) {
x_eltNROWS <- elementNROWS(ranges(x))
whichRep <- which(x_eltNROWS != elementNROWS(i))
for (k in whichRep)
i[[k]] <- rep(i[[k]], length.out = x_eltNROWS[k])
i <- unlist(i, use.names=FALSE)
} else if (is(i, "IntegerList")) {
itemp <-
LogicalList(lapply(elementNROWS(ranges(x)), rep,
x = FALSE))
for (k in seq_len(length(itemp)))
itemp[[k]][i[[k]]] <- TRUE
i <- unlist(itemp, use.names=FALSE)
}
prob <- checkIndex(i, nrow(x), rownames(x))
if (!is.null(prob))
stop("selecting rows: ", prob)
if (is.numeric(i) && any(i < 0))
i <- setdiff(seq(nrow(x)), -i)
if (is.logical(i)) {
igroup <-
factor(rep.int(seq_len(length(x)), elementNROWS(x)),
levels = seq_len(length(x)))
if (length(i) < nrow(x))
i <- rep(i, length.out = nrow(x))
} else {
if (is.null(i))
i <- integer(0)
if (is.factor(i))
i <- as.character(i)
if (is.character(i)) {
dummy <- seq_len(nrow(x))
names(dummy) <- rownames(x)
i <- dummy[i]
if (S4Vectors:::anyMissing(i)) ## cannot subset by NAs yet
stop("invalid rownames specified")
}
starts <- cumsum(c(1L, head(elementNROWS(x), -1)))
igroup <-
factor(findInterval(i, starts), levels = seq_len(length(x)))
if (anyDuplicated(runValue(Rle(igroup))))
stop("cannot mix row indices from different spaces")
i <- i - (starts - 1L)[as.integer(igroup)]
}
isplit <- split(i, igroup)
names(isplit) <- names(x)
ranges <- S4Vectors:::subset_List_by_List(ranges, isplit)
values <- S4Vectors:::subset_List_by_List(values, isplit)
if (drop) {
ok <- (elementNROWS(ranges) > 0)
ranges <- ranges[ok]
values <- values[ok]
}
}
} else {
if (!missing(i)) {
prob <- checkIndex(i, length(x), names(x))
if (!is.null(prob))
stop("selecting spaces: ", prob)
ranges <- ranges(x)[i]
values <- values(x)[i]
}
}
x@ranges <- ranges
x@values <- values
x
})
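## Subsetting sketch (not run; 'rd' is hypothetical), showing the two index
## modes described above:
##   rd["chr1"]        # list style: keep only the "chr1" space
##   rd[1:2, "score"]  # matrix style: rows 1-2, 'score' column only
##                     # (row indices may not span multiple spaces)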
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Combining and splitting.
###
setMethod("c", "RangedData", function(x, ..., recursive = FALSE) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (!identical(recursive, FALSE))
stop("\"c\" method for RangedData objects ",
"does not support the 'recursive' argument")
if (missing(x))
rds <- unname(list(...))
else
rds <- unname(list(x, ...))
rd <- rds[[1L]]
if (!all(sapply(rds, is, "RangedData")))
stop("all arguments in '...' must be RangedData objects")
nms <- lapply(rds, ## figure out names like 'c' on an ordinary vector
function(rd) structure(logical(length(rd)), names = names(rd)))
nms <- names(do.call(c, nms))
names(rds) <- NULL # critical for dispatch to work
ranges <- do.call(c, lapply(rds, ranges))
values <- do.call(c, lapply(rds, values))
names(ranges) <- nms
rd@ranges <- ranges
names(values) <- nms
rd@values <- values
rd
})
setMethod("rbind", "RangedData", function(..., deparse.level=1) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
args <- unname(list(...))
rls <- lapply(args, ranges)
nms <- unique(unlist(lapply(args, names), use.names=FALSE))
rls <- lapply(rls, function(x) {y <- as.list(x)[nms];names(y) <- nms;y})
dfs <-
lapply(args, function(x) {y <- as.list(values(x))[nms];names(y) <- nms;y})
safe.c <- function(...) {
x <- list(...)
do.call(c, x[!sapply(x, is.null)])
}
rls <- IRangesList(do.call(Map, c(list(safe.c), rls)))
safe.rbind <- function(...) {
x <- list(...)
do.call(rbind, x[!sapply(x, is.null)])
}
dfs <- SplitDataFrameList(do.call(Map, c(list(safe.rbind), dfs)))
for (i in seq_len(length(rls)))
names(rls[[i]]) <- rownames(dfs[[i]])
initialize(args[[1L]], ranges = rls, values = dfs)
})
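## Combining sketch (not run; 'rd1' and 'rd2' are hypothetical): c()
## concatenates the spaces of its arguments, while rbind() matches spaces
## by name and appends rows within each shared space:
##   c(rd1, rd2)      # spaces of rd2 appended after those of rd1
##   rbind(rd1, rd2)  # rows combined per identically named space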
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Coercion
###
### The 2 helpers, .as.data.frame.IntegerRangesList() and
### .as.data.frame.DataFrameList(), are needed for .as.data.frame.RangedData().
###
### A new as.data.frame,List method was implemented in BioC 2.15 and
### is now used by all List classes. Because the RangedData class is being
### phased out, we want to retain the old behavior. In order to do that
### we have to keep these 2 helpers because as.data.frame.RangedData()
### uses old methods from both IntegerRangesList and DataFrameList.
###
### These helpers are not exported.
.as.data.frame.IntegerRangesList <- function(x, row.names=NULL, optional=FALSE,
...)
{
if (!(is.null(row.names) || is.character(row.names)))
stop("'row.names' must be NULL or a character vector")
x <- as(x, "CompressedIRangesList")
spaceLevels <- seq_len(length(x))
if (length(names(x)) > 0) {
spaceLabels <- names(x)
} else {
spaceLabels <- as.character(spaceLevels)
}
data.frame(space =
factor(rep.int(seq_len(length(x)), elementNROWS(x)),
levels = spaceLevels,
labels = spaceLabels),
as.data.frame(unlist(x, use.names = FALSE)),
row.names = row.names,
stringsAsFactors = FALSE)
}
.as.data.frame.DataFrameList <- function(x, row.names=NULL,
optional=FALSE, ...)
{
if (!(is.null(row.names) || is.character(row.names)))
stop("'row.names' must be NULL or a character vector")
if (!missing(optional) || length(list(...)))
warning("'optional' and arguments in '...' ignored")
stacked <- stack(x)
if (is.null(row.names))
row.names <- rownames(stacked)
as.data.frame(stacked, row.names = row.names, optional = optional)
}
.as.data.frame.RangedData <- function(x, row.names=NULL, optional=FALSE, ...)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (!(is.null(row.names) || is.character(row.names)))
stop("'row.names' must be NULL or a character vector")
if (!missing(optional) || length(list(...)))
warning("'optional' and arguments in '...' ignored")
data.frame(.as.data.frame.IntegerRangesList(ranges(x)),
.as.data.frame.DataFrameList(values(x))[-1L],
row.names = row.names,
stringsAsFactors = FALSE)
}
setMethod("as.data.frame", "RangedData", .as.data.frame.RangedData)
setAs("RangedData", "DataFrame",
function(from)
{
DataFrame(as.data.frame(ranges(from)),
unlist(values(from), use.names=FALSE))
})
setAs("Rle", "RangedData",
function(from)
{
what <- "coercion method from Rle to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
new2("RangedData",
ranges = IRangesList("1" = successiveIRanges(runLength(from))),
values =
SplitDataFrameList("1" = DataFrame(score = runValue(from))),
metadata = metadata(from),
check = FALSE)
})
setAs("RleList", "RangedData",
function(from)
{
what <- "coercion method from RleList to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
ranges <-
IRangesList(lapply(from, function(x)
successiveIRanges(runLength(x))))
values <-
SplitDataFrameList(lapply(from, function(x)
DataFrame(score = runValue(x))))
if (is.null(names(from))) {
nms <- as.character(seq_len(length(from)))
names(ranges) <- nms
names(values) <- nms
}
new2("RangedData",
ranges = ranges, values = values,
metadata = metadata(from),
elementMetadata = elementMetadata(from, use.names=FALSE),
check = FALSE)
})
setAs("RleViewsList", "RangedData", function(from) {
what <- "coercion method from RleViewsList to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
subject <- subject(from)
from_ranges <- restrict(ranges(from), 1L, elementNROWS(subject),
keep.all.ranges = TRUE)
### FIXME: do we want to insert NAs for out of bounds views?
score <- subject[from_ranges]
score_part <- as(lapply(width(from_ranges), PartitioningByWidth),
"IntegerRangesList")
score_ranges <- ranges(score)
ol <- findOverlaps(score_ranges, score_part)
offind <- as(lapply(ol, subjectHits), "IntegerList")
offset <- (start(from_ranges) - start(score_part))[offind]
ranges <- shift(ranges(ol, score_ranges, score_part), offset)
viewNames <- lapply(from_ranges, function(x) {
if (is.null(names(x)))
seq_len(length(x))
else names(x)
})
RangedData(ranges,
score = unlist(runValue(score), use.names = FALSE)[queryHits(ol)],
view = unlist(viewNames, use.names = FALSE)[subjectHits(ol)])
})
setAs("IntegerRanges", "RangedData",
function(from)
{
what <- "coercion method from IntegerRanges to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
RangedData(from)
})
setAs("IntegerRangesList", "RangedData",
function(from)
{
what <- "coercion method from IntegerRangesList to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
from_names <- names(from)
if (is.null(from_names) || anyDuplicated(from_names))
stop("cannot coerce an IntegerRangesList object with no names ",
"or duplicated names to a RangedData object")
unlisted_from <- unlist(from, use.names=FALSE)
unlisted_values <- mcols(unlisted_from, use.names=FALSE)
mcols(unlisted_from) <- NULL
ans_ranges <- relist(unlisted_from, skeleton=from)
metadata(ans_ranges) <- metadata(from)
if (!is(unlisted_values, "DataFrame")) {
if (!is.null(unlisted_values))
warning("could not propagate the inner metadata columns of ",
"'from' (accessed with 'mcols(unlist(from))') ",
"to the data columns (aka values) of the returned ",
"RangedData object")
unlisted_values <-
S4Vectors:::make_zero_col_DataFrame(length(unlisted_from))
}
ans_values <- newCompressedList0("CompressedSplitDataFrameList",
unlisted_values,
PartitioningByEnd(ans_ranges))
new2("RangedData",
ranges=ans_ranges,
values=ans_values,
#metadata=metadata(from),
elementMetadata=elementMetadata(from, use.names=FALSE),
check=FALSE)
}
)
setAs("list", "RangedData",
function(from) {
what <- "coercion method from list to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
do.call(c, unname(from))
}
)
.fromRangedDataToCompressedIRangesList <- function(from)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
ans <- ranges(from)
## Propagate 'values(from)'.
ans_unlisted_values <- unlist(values(from), use.names=FALSE)
mcols(ans@unlistData) <- ans_unlisted_values
ans
}
setAs("RangedData", "CompressedIRangesList",
.fromRangedDataToCompressedIRangesList
)
setAs("RangedData", "IRangesList", .fromRangedDataToCompressedIRangesList)
setMethod("as.env", "RangedData", function(x, enclos = parent.frame(2)) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
env <- S4Vectors:::makeEnvForNames(x, colnames(x), enclos)
makeAccessorBinding <- function(fun, name = deparse(substitute(fun))) {
makeActiveBinding(name, function() {
val <- fun(x)
rm(list=name, envir=env)
assign(name, val, env) ## cache for further use
val
}, env)
}
makeAccessorBinding(ranges)
makeAccessorBinding(space)
makeAccessorBinding(start)
makeAccessorBinding(width)
makeAccessorBinding(end)
env
})
.RangedData_fromDataFrame <- function(from) {
what <- "coercion method from data.frame or DataTable to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
required <- c("start", "end")
if (!all(required %in% colnames(from)))
stop("'from' must at least include a 'start' and 'end' column")
datacols <- setdiff(colnames(from), c(required, "space", "width"))
RangedData(IRanges(from$start, from$end), from[,datacols,drop=FALSE],
space = from$space)
}
setAs("data.frame", "RangedData", .RangedData_fromDataFrame)
setAs("DataTable", "RangedData", .RangedData_fromDataFrame)
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Show
###
.show_RangedData <- function(object) {
nr <- nrow(object)
nc <- ncol(object)
lo <- length(object)
cat(class(object), " with ",
nr, ifelse(nr == 1, " row and ", " rows and "),
nc, ifelse(nc == 1, " value column across ", " value columns across "),
lo, ifelse(lo == 1, " space\n", " spaces\n"), sep = "")
if (nr > 0) {
nms <- rownames(object)
if (nr < 20) {
ranges <- unlist(ranges(object), use.names=FALSE)
values <- unlist(values(object), use.names=FALSE)
out <-
cbind(space = as.character(space(object)), ranges = showAsCell(ranges),
"|" = rep.int("|", nr))
if (nc > 0)
out <-
cbind(out,
as.matrix(format(do.call(data.frame,
lapply(values, showAsCell)))))
if (is.null(nms))
rownames(out) <- as.character(seq_len(nr))
else
rownames(out) <- nms
classinfo <-
matrix(c("<factor>", "<IRanges>", "|",
unlist(lapply(values, function(x) {
paste("<", classNameForDisplay(x), ">", sep = "")
}), use.names = FALSE)), nrow = 1,
dimnames = list("", colnames(out)))
} else {
top <- object[1:9, ]
topRanges <- unlist(ranges(top), use.names=FALSE)
topValues <- unlist(values(top), use.names=FALSE)
bottom <- object[(nr-8L):nr, ]
bottomRanges <- unlist(ranges(bottom), use.names=FALSE)
bottomValues <- unlist(values(bottom), use.names=FALSE)
out <-
rbind(cbind(space = as.character(space(top)),
ranges = showAsCell(topRanges),
"|" = rep.int("|", 9)),
rbind(rep.int("...", 3)),
cbind(space = as.character(space(bottom)),
ranges = showAsCell(bottomRanges),
"|" = rep.int("|", 9)))
if (nc > 0)
out <-
cbind(out,
rbind(as.matrix(format(do.call(data.frame,
lapply(topValues,
showAsCell)))),
rbind(rep.int("...", nc)),
rbind(as.matrix(format(do.call(data.frame,
lapply(bottomValues,
showAsCell)))))))
if (is.null(nms)) {
rownames(out) <- c(as.character(1:9), "...", as.character((nr-8L):nr))
} else {
rownames(out) <- c(head(nms, 9), "...", tail(nms, 9))
}
classinfo <-
matrix(c("<factor>", "<IRanges>", "|",
unlist(lapply(topValues, function(x) {
paste("<", classNameForDisplay(x), ">", sep = "")
}), use.names = FALSE)), nrow = 1,
dimnames = list("", colnames(out)))
}
out <- rbind(classinfo, out)
print(out, quote = FALSE, right = TRUE)
}
}
setMethod("show", "RangedData",
function(object) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
suppressWarnings(.show_RangedData(object))
})
| /R/RangedData-class.R | no_license | rajaldebnath/IRanges | R | false | false | 37,081 | r |
}
x@ranges <- ranges
rownames(x@values) <- value
x
})
setReplaceMethod("colnames", "RangedData",
function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
colnames(x@values) <- value
x
})
setMethod("columnMetadata", "RangedData", function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
columnMetadata(values(x))
})
setReplaceMethod("columnMetadata", "RangedData", function(x, value) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
columnMetadata(values(x)) <- value
x
})
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Validity.
###
.valid.RangedData.ranges <- function(x)
{
if (!identical(lapply(ranges(x), length), lapply(values(x), nrow)))
"'ranges' and 'values' must be of the same length and have the same names"
else if (!identical(unlist(lapply(ranges(x), names), use.names=FALSE),
rownames(x)))
"the names of the ranges must equal the rownames"
else NULL
}
.valid.RangedData.names <- function(x) {
nms <- names(x)
if (length(nms) != length(x))
"length(names(x)) must equal length(x)"
else if (!is.character(nms) || S4Vectors:::anyMissing(nms) || anyDuplicated(nms))
"names(x) must be a character vector without any NA's or duplicates"
else NULL
}
.valid.RangedData <- function(x) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
c(.valid.RangedData.ranges(x), .valid.RangedData.names(x))
}
setValidity2("RangedData", .valid.RangedData)
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Constructor.
###
## Creates a RangedData with a single space, unless a splitting factor ('space') is specified
RangedData <- function(ranges = IRanges(), ..., space = NULL)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
hasDots <- (nargs() - !missing(space)) > 1
if (is(ranges, "IntegerRangesList") && !is(ranges, "IntegerRanges")) {
if (!is.null(space))
warning("since 'class(ranges)' extends IntegerRangesList, 'space' argument is ignored")
if (is.null(names(ranges)))
names(ranges) <- as.character(seq_len(length(ranges)))
space <-
Rle(factor(names(ranges), levels = names(ranges)),
elementNROWS(ranges))
N <- sum(elementNROWS(ranges))
NAMES <- unlist(lapply(ranges, names), use.names=FALSE)
} else {
if (!is(ranges, "IntegerRanges")) {
coerced <- try(as(ranges, "RangedData"), silent=TRUE)
if (is(coerced, "RangedData"))
return(coerced)
stop("'ranges' must be an IntegerRanges or directly coercible to RangedData")
}
N <- length(ranges)
NAMES <- names(ranges)
if (is.null(space)) {
if (N == 0)
space <- Rle(factor())
else
space <- Rle(factor("1"))
} else if (!is(space, "Rle")) {
space <- Rle(space)
}
if (!is.factor(runValue(space)))
runValue(space) <- factor(runValue(space))
if (length(space) != N) {
if (length(space) == 0L)
stop("'space' is a 0-length vector but length of 'ranges' is > 0")
## We make an exception to the "length(space) must be <= N" rule when
## N != 0L so we can support the direct creation of RangedData objects
## with 0 rows across 1 or more user-specified spaces like in:
## RangedData(ranges=IRanges(), space=letters)
if (N != 0L && length(space) > N)
stop("length of 'space' greater than length of 'ranges'")
if (N %% length(space) != 0)
stop("length of 'ranges' not a multiple of 'space' length")
space <- rep(space, length.out = N)
}
if (!is(ranges, "IRanges"))
ranges <- as(ranges, "IRanges")
ranges <- split(ranges, space)
}
if (hasDots) {
args <- list(...)
if (length(args) == 1L && is(args[[1L]], "SplitDataFrameList")) {
values <- unlist(args[[1L]], use.names=FALSE)
} else {
values <- DataFrame(...)
}
}
else
values <- S4Vectors:::make_zero_col_DataFrame(N)
if (N != nrow(values)) {
if (nrow(values) > N)
stop("length of value(s) in '...' greater than length of 'ranges'")
if (nrow(values) == 0 || N %% nrow(values) != 0)
stop("length of 'ranges' not a multiple of length of value(s) in '...'")
rind <- S4Vectors:::recycleVector(seq_len(nrow(values)), N)
values <- values[rind,,drop=FALSE]
}
rownames(values) <- NAMES ## ensure these are identical
values <- split(values, space)
new2("RangedData", ranges = ranges, values = values, check=FALSE)
}
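## Example (illustrative sketch only, commented out so it is not evaluated at
## package load; assumes the IRanges package is attached): constructing a
## RangedData with two spaces. The data columns in '...' are split by 'space'
## alongside the ranges, and recycled when shorter than the ranges.
## rd <- RangedData(IRanges(start=c(1L, 10L), end=c(5L, 15L)),
##                  score=c(0.5, 0.8),
##                  space=c("chr1", "chr2"))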
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Subsetting.
###
## The extraction operator delegates to the values (extracts columns)
setMethod("[[", "RangedData",
function(x, i, j, ...)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
dotArgs <- list(...)
if (length(dotArgs) > 0)
dotArgs <- dotArgs[names(dotArgs) != "exact"]
if (!missing(j) || length(dotArgs) > 0)
stop("invalid subsetting")
if (missing(i))
stop("subscript is missing")
if (!is.character(i) && !is.numeric(i))
stop("invalid subscript type")
if (length(i) < 1L)
stop("attempt to select less than one element")
if (length(i) > 1L)
stop("attempt to select more than one element")
if (is.numeric(i) && !is.na(i) && (i < 1L || i > ncol(x)))
stop("subscript out of bounds")
if (is.na(i) || (is.character(i) &&
!(i %in% c("space", "ranges", colnames(x)))))
NULL
else if (i == "space")
space(x)
else if (i == "ranges")
unlist(ranges(x), use.names=FALSE)
else
unlist(values(x), use.names=FALSE)[[i]]
})
setReplaceMethod("[[", "RangedData",
function(x, i, j,..., value)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (!missing(j) || length(list(...)) > 0)
stop("invalid subsetting")
if (missing(i))
stop("subscript is missing")
if (!is.character(i) && !is.numeric(i))
stop("invalid subscript type")
if (length(i) < 1L)
stop("attempt to select less than one element")
if (length(i) > 1L)
stop("attempt to select more than one element")
if (is.numeric(i) && (i < 1L || i > ncol(x) + 1L))
stop("subscript out of bounds")
if (i == "space")
stop("cannot replace \"space\" information")
if (i == "ranges") {
ranges(x) <- value
} else {
nrx <- nrow(x)
lv <- length(value)
if (!is.null(value) && (nrx != lv)) {
if ((nrx == 0) || (nrx %% lv != 0))
stop(paste(lv, "elements in value to replace",
nrx, "elements"))
else
value <- rep(value, length.out = nrx)
}
nrows <- elementNROWS(values(x))
inds <- seq_len(length(x))
spaces <- factor(rep.int(inds, nrows), inds)
values <- unlist(values(x), use.names=FALSE)
values[[i]] <- value
x@values <- split(values, spaces)
names(x@values) <- names(x)
}
x
})
### Supported index types: numeric, logical, character, NULL and missing.
## Two index modes:
## - list style ([i]): subsets by range space (e.g. chromosome)
## - matrix style ([i,j]): subsets the data frame
setMethod("[", "RangedData",
function(x, i, j, ..., drop=FALSE)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (length(list(...)) > 0)
stop("parameters in '...' not supported")
if (missing(i) && missing(j))
return(x)
checkIndex <- function(i, lx, nms) {
if (!is.atomic(i))
return("invalid subscript type")
if (is.numeric(i)) {
if (!is.integer(i))
i <- as.integer(i)
if (S4Vectors:::anyMissingOrOutside(i, upper = lx))
return("subscript contains NAs or out of bounds indices")
if (S4Vectors:::anyMissingOrOutside(i, 0L, lx) &&
S4Vectors:::anyMissingOrOutside(i, upper = 0L))
return("negative and positive indices cannot be mixed")
} else if (is.logical(i)) {
if (S4Vectors:::anyMissing(i))
return("subscript contains NAs")
if (length(i) > lx)
return("subscript out of bounds")
} else if ((is.character(i) || is.factor(i))) {
if (S4Vectors:::anyMissing(i))
return("subscript contains NAs")
if (S4Vectors:::anyMissing(match(i, nms)))
return("mismatching names")
} else if (!is.null(i)) {
return("invalid subscript type")
}
NULL
}
mstyle <- nargs() > 2
if (mstyle) {
ranges <- ranges(x)
values <- values(x)
if (!missing(j)) {
prob <- checkIndex(j, ncol(x), colnames(x))
if (!is.null(prob))
stop("selecting cols: ", prob)
values <- values[, j, drop=FALSE]
}
if (!missing(i)) {
if (is(i, "IntegerRangesList"))
stop("subsetting a RangedData object ",
"by an IntegerRangesList subscript is not supported")
if (is(i, "LogicalList")) {
x_eltNROWS <- elementNROWS(ranges(x))
whichRep <- which(x_eltNROWS != elementNROWS(i))
for (k in whichRep)
i[[k]] <- rep(i[[k]], length.out = x_eltNROWS[k])
i <- unlist(i, use.names=FALSE)
} else if (is(i, "IntegerList")) {
itemp <-
LogicalList(lapply(elementNROWS(ranges(x)), rep,
x = FALSE))
for (k in seq_len(length(itemp)))
itemp[[k]][i[[k]]] <- TRUE
i <- unlist(itemp, use.names=FALSE)
}
prob <- checkIndex(i, nrow(x), rownames(x))
if (!is.null(prob))
stop("selecting rows: ", prob)
if (is.numeric(i) && any(i < 0))
i <- setdiff(seq(nrow(x)), -i)
if (is.logical(i)) {
igroup <-
factor(rep.int(seq_len(length(x)), elementNROWS(x)),
levels = seq_len(length(x)))
if (length(i) < nrow(x))
i <- rep(i, length.out = nrow(x))
} else {
if (is.null(i))
i <- integer(0)
if (is.factor(i))
i <- as.character(i)
if (is.character(i)) {
dummy <- seq_len(nrow(x))
names(dummy) <- rownames(x)
i <- dummy[i]
if (S4Vectors:::anyMissing(i)) ## cannot subset by NAs yet
stop("invalid rownames specified")
}
starts <- cumsum(c(1L, head(elementNROWS(x), -1)))
igroup <-
factor(findInterval(i, starts), levels = seq_len(length(x)))
if (anyDuplicated(runValue(Rle(igroup))))
stop("cannot mix row indices from different spaces")
i <- i - (starts - 1L)[as.integer(igroup)]
}
isplit <- split(i, igroup)
names(isplit) <- names(x)
ranges <- S4Vectors:::subset_List_by_List(ranges, isplit)
values <- S4Vectors:::subset_List_by_List(values, isplit)
if (drop) {
ok <- (elementNROWS(ranges) > 0)
ranges <- ranges[ok]
values <- values[ok]
}
}
} else {
if (!missing(i)) {
prob <- checkIndex(i, length(x), names(x))
if (!is.null(prob))
stop("selecting spaces: ", prob)
ranges <- ranges(x)[i]
values <- values(x)[i]
}
}
x@ranges <- ranges
x@values <- values
x
})
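## Example (illustrative, commented out; assumes 'rd' is a RangedData object):
## the two index modes described above.
## rd[1]             # list style: keeps only the first space
## rd[1:2, "score"]  # matrix style: rows 1-2, 'score' column only
## rd[["score"]]     # [[ extracts the 'score' column across all spaces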
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Combining and splitting.
###
setMethod("c", "RangedData", function(x, ..., recursive = FALSE) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (!identical(recursive, FALSE))
stop("\"c\" method for RangedData objects ",
"does not support the 'recursive' argument")
if (missing(x))
rds <- unname(list(...))
else
rds <- unname(list(x, ...))
rd <- rds[[1L]]
if (!all(sapply(rds, is, "RangedData")))
stop("all arguments in '...' must be RangedData objects")
nms <- lapply(rds, ## figure out names like 'c' on an ordinary vector
function(rd) structure(logical(length(rd)), names = names(rd)))
nms <- names(do.call(c, nms))
names(rds) <- NULL # critical for dispatch to work
ranges <- do.call(c, lapply(rds, ranges))
values <- do.call(c, lapply(rds, values))
names(ranges) <- nms
rd@ranges <- ranges
names(values) <- nms
rd@values <- values
rd
})
setMethod("rbind", "RangedData", function(..., deparse.level=1) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
args <- unname(list(...))
rls <- lapply(args, ranges)
nms <- unique(unlist(lapply(args, names), use.names=FALSE))
rls <- lapply(rls, function(x) {y <- as.list(x)[nms];names(y) <- nms;y})
dfs <-
lapply(args, function(x) {y <- as.list(values(x))[nms];names(y) <- nms;y})
safe.c <- function(...) {
x <- list(...)
do.call(c, x[!sapply(x, is.null)])
}
rls <- IRangesList(do.call(Map, c(list(safe.c), rls)))
safe.rbind <- function(...) {
x <- list(...)
do.call(rbind, x[!sapply(x, is.null)])
}
dfs <- SplitDataFrameList(do.call(Map, c(list(safe.rbind), dfs)))
for (i in seq_len(length(rls)))
names(rls[[i]]) <- rownames(dfs[[i]])
initialize(args[[1L]], ranges = rls, values = dfs)
})
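## Example (illustrative, commented out; assumes 'rd1' and 'rd2' are
## RangedData objects): the two ways of combining defined above.
## c(rd1, rd2)      # concatenates the spaces, preserving their names
## rbind(rd1, rd2)  # appends rows within spaces of the same name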
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Coercion
###
### The 2 functions, as.data.frame.IntegerRangesList() and
### as.data.frame.DataFrameList() are needed for as.data.frame.RangedData().
###
### A new as.data.frame,List method was implemented in BioC 2.15 and
### is now used by all List classes. Because the RangedData class is being
### phased out, we want to retain the old behavior. In order to do that
### we have to keep these 2 helpers because as.data.frame.RangedData()
### uses old methods from both IntegerRangesList and DataFrameList.
###
### These helpers are not exported.
.as.data.frame.IntegerRangesList <- function(x, row.names=NULL, optional=FALSE,
...)
{
if (!(is.null(row.names) || is.character(row.names)))
stop("'row.names' must be NULL or a character vector")
x <- as(x, "CompressedIRangesList")
spaceLevels <- seq_len(length(x))
if (length(names(x)) > 0) {
spaceLabels <- names(x)
} else {
spaceLabels <- as.character(spaceLevels)
}
data.frame(space =
factor(rep.int(seq_len(length(x)), elementNROWS(x)),
levels = spaceLevels,
labels = spaceLabels),
as.data.frame(unlist(x, use.names = FALSE)),
row.names = row.names,
stringsAsFactors = FALSE)
}
.as.data.frame.DataFrameList <- function(x, row.names=NULL,
optional=FALSE, ...)
{
if (!(is.null(row.names) || is.character(row.names)))
stop("'row.names' must be NULL or a character vector")
if (!missing(optional) || length(list(...)))
warning("'optional' and arguments in '...' ignored")
stacked <- stack(x)
if (is.null(row.names))
row.names <- rownames(stacked)
as.data.frame(stacked, row.names = row.names, optional = optional)
}
.as.data.frame.RangedData <- function(x, row.names=NULL, optional=FALSE, ...)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
if (!(is.null(row.names) || is.character(row.names)))
stop("'row.names' must be NULL or a character vector")
if (!missing(optional) || length(list(...)))
warning("'optional' and arguments in '...' ignored")
data.frame(.as.data.frame.IntegerRangesList(ranges(x)),
.as.data.frame.DataFrameList(values(x))[-1L],
row.names = row.names,
stringsAsFactors = FALSE)
}
setMethod("as.data.frame", "RangedData", .as.data.frame.RangedData)
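## Example (illustrative, commented out; assumes 'rd' is a RangedData object):
## flattening to a data.frame whose leading columns come from the ranges
## ('space', 'start', 'end', 'width'), followed by the value columns.
## df <- as.data.frame(rd)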
setAs("RangedData", "DataFrame",
function(from)
{
DataFrame(as.data.frame(ranges(from)),
unlist(values(from), use.names=FALSE))
})
setAs("Rle", "RangedData",
function(from)
{
what <- "coercion method from Rle to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
new2("RangedData",
ranges = IRangesList("1" = successiveIRanges(runLength(from))),
values =
SplitDataFrameList("1" = DataFrame(score = runValue(from))),
metadata = metadata(from),
check = FALSE)
})
setAs("RleList", "RangedData",
function(from)
{
what <- "coercion method from RleList to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
ranges <-
IRangesList(lapply(from, function(x)
successiveIRanges(runLength(x))))
values <-
SplitDataFrameList(lapply(from, function(x)
DataFrame(score = runValue(x))))
if (is.null(names(from))) {
nms <- as.character(seq_len(length(from)))
names(ranges) <- nms
names(values) <- nms
}
new2("RangedData",
ranges = ranges, values = values,
metadata = metadata(from),
elementMetadata = elementMetadata(from, use.names=FALSE),
check = FALSE)
})
setAs("RleViewsList", "RangedData", function(from) {
what <- "coercion method from RleViewsList to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
subject <- subject(from)
from_ranges <- restrict(ranges(from), 1L, elementNROWS(subject),
keep.all.ranges = TRUE)
### FIXME: do we want to insert NAs for out of bounds views?
score <- subject[from_ranges]
score_part <- as(lapply(width(from_ranges), PartitioningByWidth),
"IntegerRangesList")
score_ranges <- ranges(score)
ol <- findOverlaps(score_ranges, score_part)
offind <- as(lapply(ol, subjectHits), "IntegerList")
offset <- (start(from_ranges) - start(score_part))[offind]
ranges <- shift(ranges(ol, score_ranges, score_part), offset)
viewNames <- lapply(from_ranges, function(x) {
if (is.null(names(x)))
seq_len(length(x))
else names(x)
})
RangedData(ranges,
score = unlist(runValue(score), use.names = FALSE)[queryHits(ol)],
view = unlist(viewNames, use.names = FALSE)[subjectHits(ol)])
})
setAs("IntegerRanges", "RangedData",
function(from)
{
what <- "coercion method from IntegerRanges to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
RangedData(from)
})
setAs("IntegerRangesList", "RangedData",
function(from)
{
what <- "coercion method from IntegerRangesList to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
from_names <- names(from)
if (is.null(from_names) || anyDuplicated(from_names))
stop("cannot coerce an IntegerRangesList object with no names ",
"or duplicated names to a RangedData object")
unlisted_from <- unlist(from, use.names=FALSE)
unlisted_values <- mcols(unlisted_from, use.names=FALSE)
mcols(unlisted_from) <- NULL
ans_ranges <- relist(unlisted_from, skeleton=from)
metadata(ans_ranges) <- metadata(from)
if (!is(unlisted_values, "DataFrame")) {
if (!is.null(unlisted_values))
warning("could not propagate the inner metadata columns of ",
"'from' (accessed with 'mcols(unlist(from))') ",
"to the data columns (aka values) of the returned ",
"RangedData object")
unlisted_values <-
S4Vectors:::make_zero_col_DataFrame(length(unlisted_from))
}
ans_values <- newCompressedList0("CompressedSplitDataFrameList",
unlisted_values,
PartitioningByEnd(ans_ranges))
new2("RangedData",
ranges=ans_ranges,
values=ans_values,
#metadata=metadata(from),
elementMetadata=elementMetadata(from, use.names=FALSE),
check=FALSE)
}
)
setAs("list", "RangedData",
function(from) {
what <- "coercion method from list to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
do.call(c, unname(from))
}
)
.fromRangedDataToCompressedIRangesList <- function(from)
{
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
ans <- ranges(from)
## Propagate 'values(from)'.
ans_unlisted_values <- unlist(values(from), use.names=FALSE)
mcols(ans@unlistData) <- ans_unlisted_values
ans
}
setAs("RangedData", "CompressedIRangesList",
.fromRangedDataToCompressedIRangesList
)
setAs("RangedData", "IRangesList", .fromRangedDataToCompressedIRangesList)
setMethod("as.env", "RangedData", function(x, enclos = parent.frame(2)) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
env <- S4Vectors:::makeEnvForNames(x, colnames(x), enclos)
makeAccessorBinding <- function(fun, name = deparse(substitute(fun))) {
makeActiveBinding(name, function() {
val <- fun(x)
rm(list=name, envir=env)
assign(name, val, env) ## cache for further use
val
}, env)
}
makeAccessorBinding(ranges)
makeAccessorBinding(space)
makeAccessorBinding(start)
makeAccessorBinding(width)
makeAccessorBinding(end)
env
})
.RangedData_fromDataFrame <- function(from) {
what <- "coercion method from data.frame or DataTable to RangedData"
.Defunct(msg=wmsg(RangedData_method_is_defunct_msg(what)))
required <- c("start", "end")
if (!all(required %in% colnames(from)))
stop("'from' must at least include a 'start' and 'end' column")
datacols <- setdiff(colnames(from), c(required, "space", "width"))
RangedData(IRanges(from$start, from$end), from[,datacols,drop=FALSE],
space = from$space)
}
setAs("data.frame", "RangedData", .RangedData_fromDataFrame)
setAs("DataTable", "RangedData", .RangedData_fromDataFrame)
### - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
### Show
###
.show_RangedData <- function(object) {
nr <- nrow(object)
nc <- ncol(object)
lo <- length(object)
cat(class(object), " with ",
nr, ifelse(nr == 1, " row and ", " rows and "),
nc, ifelse(nc == 1, " value column across ", " value columns across "),
lo, ifelse(lo == 1, " space\n", " spaces\n"), sep = "")
if (nr > 0) {
nms <- rownames(object)
if (nr < 20) {
ranges <- unlist(ranges(object), use.names=FALSE)
values <- unlist(values(object), use.names=FALSE)
out <-
cbind(space = as.character(space(object)), ranges = showAsCell(ranges),
"|" = rep.int("|", nr))
if (nc > 0)
out <-
cbind(out,
as.matrix(format(do.call(data.frame,
lapply(values, showAsCell)))))
if (is.null(nms))
rownames(out) <- as.character(seq_len(nr))
else
rownames(out) <- nms
classinfo <-
matrix(c("<factor>", "<IRanges>", "|",
unlist(lapply(values, function(x) {
paste("<", classNameForDisplay(x), ">", sep = "")
}), use.names = FALSE)), nrow = 1,
dimnames = list("", colnames(out)))
} else {
top <- object[1:9, ]
topRanges <- unlist(ranges(top), use.names=FALSE)
topValues <- unlist(values(top), use.names=FALSE)
bottom <- object[(nr-8L):nr, ]
bottomRanges <- unlist(ranges(bottom), use.names=FALSE)
bottomValues <- unlist(values(bottom), use.names=FALSE)
out <-
rbind(cbind(space = as.character(space(top)),
ranges = showAsCell(topRanges),
"|" = rep.int("|", 9)),
rbind(rep.int("...", 3)),
cbind(space = as.character(space(bottom)),
ranges = showAsCell(bottomRanges),
"|" = rep.int("|", 9)))
if (nc > 0)
out <-
cbind(out,
rbind(as.matrix(format(do.call(data.frame,
lapply(topValues,
showAsCell)))),
rbind(rep.int("...", nc)),
rbind(as.matrix(format(do.call(data.frame,
lapply(bottomValues,
showAsCell)))))))
if (is.null(nms)) {
rownames(out) <- c(as.character(1:9), "...", as.character((nr-8L):nr))
} else {
rownames(out) <- c(head(nms, 9), "...", tail(nms, 9))
}
classinfo <-
matrix(c("<factor>", "<IRanges>", "|",
unlist(lapply(topValues, function(x) {
paste("<", classNameForDisplay(x), ">", sep = "")
}), use.names = FALSE)), nrow = 1,
dimnames = list("", colnames(out)))
}
out <- rbind(classinfo, out)
print(out, quote = FALSE, right = TRUE)
}
}
setMethod("show", "RangedData",
function(object) {
.Deprecated(msg=wmsg2(RangedData_is_deprecated_msg))
suppressWarnings(.show_RangedData(object))
})