---- file: /RHotStuff/man/getStokes.from.expData.Rd (AlreadyTakenJonas/bachelorThesisSummary, MIT) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/getStokes_from_experiment.R
\name{getStokes.from.expData}
\alias{getStokes.from.expData}
\title{Preprocess Data From Stokes Measurements Of Optical Fibers}
\usage{
getStokes.from.expData(data.elab)
}
\arguments{
\item{data.elab}{The experimental data of a Stokes measurement experiment}
}
\value{
A list containing two data.frames: the Stokes vectors before and after
interfering with the fiber, for different initial orientations of the laser's
plane of polarisation. The data.frames also contain the total measured laser
power before and after the fiber. The returned data has the same structure as
the return value of getStokes.from.metaData.
}
\description{
This function takes the output of RHotStuff::GET.elabftw.bycaption
or RHotStuff::parseTable.elabftw with the parameter outputHTTP=FALSE
(see the man pages of those functions for details) and computes the Stokes
vectors for the experimental data. The function extracts the power
measurements from the input table, normalises them with data from
the input table, and calculates the Stokes vectors before and after the
optical fiber for different initial orientations of the laser's plane of
polarisation.
}
\details{
This function expects its input in a specific format. If the elabFTW
template "Bestimmung von Stokesvektoren an einer optischen Faser" is used
for logging the measurements, the data meets these expectations. If the right
template is used, the data can be downloaded as shown in the examples.
}
\examples{
# Read data from elabFTW
input.data <- GET.elabftw.bycaption(EXPID, caption="Messdaten", header=T)
# Read data from elabFTW and read the attached .csv files
input.data <- GET.elabftw.bycaption(EXPID, caption="Messdaten", header=T, outputHTTP=T) \%>\%
  parseTable.elabftw(., func=function(x) qmean(x[,4],
0.8,
na.rm=T,
inf.rm=T),
header=T, skip=14, sep=";")
# Convert the measurements into stokes vectors
getStokes.from.expData(input.data)
}
---- file: /man/change_door.Rd (JasonSills/montyhall2, no license) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/monty-hall-problem2.R
\name{change_door}
\alias{change_door}
\title{Change door strategy.}
\usage{
change_door(stay = T, opened.door, a.pick)
}
\arguments{
\item{stay}{Logical; if TRUE the contestant keeps the initial pick. Use
stay = FALSE for the change-door strategy.}
\item{opened.door}{The door opened by the host to reveal a goat.}
\item{a.pick}{The contestant's initial door pick.}
}
\value{
The function returns a numeric value representing one door:
neither the door chosen in select_door nor the door opened in opened_goat_door.
}
\description{
\code{change_door()} simulates a strategy of switching from the initial pick
to the remaining door.
}
\details{
This function simulates the change-the-door strategy. In this strategy
the contestant switches away from the initial pick chosen
in the select_door function. The final pick is neither the door selected
in select_door nor the door opened in opened_goat_door. The if statement
identifies this door as final.pick, which is returned when the change-door
strategy is chosen.
}
\examples{
# illustrative values: initial pick was door 1, host opened door 3
change_door(stay = FALSE, opened.door = 3, a.pick = 1)
}
---- file: /fuzzedpackages/PandemicLP/R/pandemicPredicted_Class.R (akhikolla/testpackages, no license) ----
#' pandemicPredicted objects: Predictions made from a fitted PandemicLP model
#'
#' The \pkg{PandemicLP} prediction function returns an object of S3 class
#' \code{pandemicPredicted}, which is a list containing the components described below. \cr
#' \cr
#'
#' @name pandemicPredicted-objects
#'
#' @section Elements for \code{pandemicPredicted} objects:
#' \describe{
#' \item{\code{predictive_Long}}{
#' The full sample of the predictive distribution for the long-term prediction.
#' The prediction is for daily new cases.
#' }
#' \item{\code{predictive_Short}}{
#' The full sample of the predictive distribution for the short-term prediction.
#' The prediction is for daily cumulative cases.
#' }
#' \item{\code{data}}{
#' The data passed on from the \code{\link{pandemicEstimated-objects}} under the element \code{Y$data}.
#' }
#' \item{\code{location}}{
#' A string with the name of the location.
#' }
#' \item{\code{cases_type}}{
#' A string with either "confirmed" or "deaths" to represent the type of data that has been fitted and predicted.
#' }
#' \item{\code{pastMu}}{
#' The fitted means of the data for the observed data points.
#' }
#' \item{\code{futMu}}{
#' The predicted means of the data for the predicted data points.
#' }
#' }
#'
NULL
---- file: /googleadexchangebuyerv14.auto/man/Creative.nativeAd.Rd (GVersteeg/autoGoogleAPI, MIT) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/adexchangebuyer_objects.R
\name{Creative.nativeAd}
\alias{Creative.nativeAd}
\title{Creative.nativeAd Object}
\usage{
Creative.nativeAd(Creative.nativeAd.appIcon = NULL,
Creative.nativeAd.image = NULL, Creative.nativeAd.logo = NULL,
advertiser = NULL, appIcon = NULL, body = NULL, callToAction = NULL,
clickLinkUrl = NULL, clickTrackingUrl = NULL, headline = NULL,
image = NULL, impressionTrackingUrl = NULL, logo = NULL, price = NULL,
starRating = NULL, store = NULL, videoURL = NULL)
}
\arguments{
\item{Creative.nativeAd.appIcon}{The \link{Creative.nativeAd.appIcon} object or list of objects}
\item{Creative.nativeAd.image}{The \link{Creative.nativeAd.image} object or list of objects}
\item{Creative.nativeAd.logo}{The \link{Creative.nativeAd.logo} object or list of objects}
\item{advertiser}{No description}
\item{appIcon}{The app icon, for app download ads}
\item{body}{A long description of the ad}
\item{callToAction}{A label for the button that the user is supposed to click}
\item{clickLinkUrl}{The URL that the browser/SDK will load when the user clicks the ad}
\item{clickTrackingUrl}{The URL to use for click tracking}
\item{headline}{A short title for the ad}
\item{image}{A large image}
\item{impressionTrackingUrl}{The URLs that are called when the impression is rendered}
\item{logo}{A smaller image, for the advertiser logo}
\item{price}{The price of the promoted app including the currency info}
\item{starRating}{The app rating in the app store}
\item{store}{The URL to the app store to purchase/download the promoted app}
\item{videoURL}{The URL of the XML VAST for a native ad}
}
\value{
Creative.nativeAd object
}
\description{
Creative.nativeAd Object
}
\details{
Autogenerated via \code{\link[googleAuthR]{gar_create_api_objects}}
If nativeAd is set, HTMLSnippet and the videoURL outside of nativeAd should not be set. (The videoURL inside nativeAd can be set.)
}
\seealso{
Other Creative functions: \code{\link{Creative.corrections.contexts}},
\code{\link{Creative.corrections}},
\code{\link{Creative.filteringReasons.reasons}},
\code{\link{Creative.filteringReasons}},
\code{\link{Creative.nativeAd.appIcon}},
\code{\link{Creative.nativeAd.image}},
\code{\link{Creative.nativeAd.logo}},
\code{\link{Creative.servingRestrictions.contexts}},
\code{\link{Creative.servingRestrictions.disapprovalReasons}},
\code{\link{Creative.servingRestrictions}},
\code{\link{Creative}}, \code{\link{creatives.insert}}
}
---- file: /demo/model/data.R (albertgoncalves/hoquei, no license) ----
#!/usr/bin/env Rscript
source("../utils.R")
teams_to_indices = function(index, teams) {
return(as.vector(sapply(teams, name_to_index(index))))
}
# Subtract the overtime winner's extra goal so that *_goals_no_ot holds
# regulation-time goals only.
adjust_ot = function(data) {
lambda = function(data, rows, team) {
column = sprintf("%s_goals", team)
values = data[, column]
values[rows] = data[rows, column] - 1
return(values)
}
data$ot = ifelse(data$ot == "", 0, 1)
ot = data$ot == 1
home_ot_wins = which(ot & (data$home_goals > data$away_goals))
away_ot_wins = which(ot & (data$home_goals < data$away_goals))
data$home_goals_no_ot = lambda(data, home_ot_wins, "home")
data$away_goals_no_ot = lambda(data, away_ot_wins, "away")
return(data)
}
export_stan_data = function(data, datafile, teamsfile) {
teams_list = invert_list(index_names(c(data$away, data$home)))
n_teams = length(teams_list)
n_games = NROW(data)
n_train = sum(data$played)
# n_train = as.integer(n_games * 0.33)
home = teams_to_indices(teams_list, data$home)
away = teams_to_indices(teams_list, data$away)
home_goals = data$home_goals
away_goals = data$away_goals
home_goals_no_ot = data$home_goals_no_ot
away_goals_no_ot = data$away_goals_no_ot
ot_input = data$ot
sigma_offense_lambda = 0.05
sigma_defense_lambda = 0.05
sigma_adv_lambda = 0.001
items = c( "n_teams"
, "n_games"
, "n_train"
, "home"
, "away"
, "home_goals"
, "away_goals"
, "home_goals_no_ot"
, "away_goals_no_ot"
, "ot_input"
, "sigma_offense_lambda"
, "sigma_defense_lambda"
, "sigma_adv_lambda"
)
dump(items, datafile)
dump("teams_list", teamsfile)
}
if (sys.nframe() == 0) {
season = "regular_2019"
csvfile = sprintf("../data/%s.csv", season)
datafile = "input.data.R"
data = adjust_ot(read_data(csvfile))
export_stan_data(data, datafile, teamsfile())
}
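# Toy illustration of adjust_ot (made-up rows, not from the season CSV):
# the OT winner has one goal subtracted in the *_goals_no_ot columns.
toy = data.frame(home = "A", away = "B", home_goals = c(3, 2),
                 away_goals = c(2, 2), ot = c("OT", ""))
adjust_ot(toy) # row 1: home_goals_no_ot becomes 2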
---- file: /pair.mglmm/man/llik.fim.Rd (rceratti/Dissertacao, no license) ----
\name{llik.fim}
\alias{llik.fim}
%- Also NEED an '\alias' for EACH other topic documented here.
\title{
llik.fim
}
\description{
Monte Carlo approximation to the log-likelihood. For internal use.
}
\usage{
llik.fim(mod, formula, beta, S, phi, p, B = 10000, cl = NULL)
}
%- maybe also 'usage' for other objects documented here.
\arguments{
\item{mod}{
'mer' object.
}
\item{formula}{
Formula.
}
\item{beta}{
Estimated fixed effects vector.
}
\item{S}{
Estimated variance components matrix.
}
\item{phi}{
Estimated dispersion parameter.
}
\item{p}{
Estimated compound Poisson index.
}
\item{B}{
Number of simulated samples from the multivariate normal distribution.
}
\item{cl}{
Cluster to be used.
}
}
---- file: /osha_inspections.R (BenCasselman/osha, no license) ----
# Tracking inspections activity
library(tidyr)
library(dplyr)
library(ggplot2)
library(lubridate)
library(readr) # for read_csv() used below
# First update data. This function is from "labor_updater" file
daily_data()
load("inspections.RData")
load("violations.RData")
# Overall inspections by month
inspections %>%
mutate(month = as.Date(cut(open_date, breaks = "month"))) %>%
ggplot(., aes(month)) + geom_bar()
# Inspections by month (color to help see seasonal pattern)
inspections %>%
filter(year(open_date) > 2008) %>% # More recent data
mutate(month = as.Date(cut(open_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(inspec = length(month)) %>%
ggplot(., aes(month, inspec, fill = as.factor(month(month)))) + geom_bar(stat = "identity") + ggtitle("OSHA inspections by month")
# Violations by month
violations %>%
filter(year(issuance_date) > 2008) %>%
mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(viols = length(current_penalty)) %>%
ggplot(., aes(month, viols)) + geom_bar(stat = "identity") + ggtitle("OSHA violations by month")
# Penalties by month
violations %>%
filter(!is.na(current_penalty),
year(issuance_date) > 2008) %>%
mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(fine = sum(current_penalty)) %>%
ggplot(., aes(month, fine)) + geom_bar(stat = "identity") + ggtitle("OSHA fines by month")
# Look ONLY at federal OSHA inspections
inspections %>%
filter(substr(reporting_id, 3, 3) != 5) %>% # Remove state plan states
filter(year(open_date) > 2008) %>% # More recent data
mutate(month = as.Date(cut(open_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(inspec = length(month)) %>%
ggplot(., aes(month, inspec, fill = as.factor(month(month)))) + geom_bar(stat = "identity") + ggtitle("OSHA federal office inspections by month")
inspections %>%
filter(year(open_date) > 2008, substr(reporting_id, 3, 3) != 5) %>%
select(activity_nr) %>%
inner_join(violations, by = "activity_nr") %>%
mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(viols = length(current_penalty)) %>%
ggplot(., aes(month, viols)) + geom_bar(stat = "identity") + ggtitle("OSHA federal violations by month")
inspections %>%
filter(year(open_date) > 2008, substr(reporting_id, 3, 3) != 5) %>%
select(activity_nr) %>%
inner_join(violations, by = "activity_nr") %>%
filter(!is.na(current_penalty),
year(issuance_date) > 2008) %>%
mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(fine = sum(current_penalty)) %>%
ggplot(., aes(month, fine, fill = as.factor(month(month)))) + geom_bar(stat = "identity") + ggtitle("OSHA Federal office fines by month")
# Just look at programmatic inspections
# Limit to planned, program related
inspections %>%
filter(substr(reporting_id, 3, 3) != 5) %>% # Remove state plan states
filter(year(open_date) > 2008, insp_type %in% c("H", "I", "K")) %>% # More recent data
mutate(month = as.Date(cut(open_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(inspec = length(month)) %>%
ggplot(., aes(month, inspec, fill = as.factor(month(month)))) + geom_bar(stat = "identity") + ggtitle("OSHA federal office inspections by month")
# By industry
naics <- read_csv("naics_codes_major.csv")
naics <- naics %>% mutate(naics_code = as.character(naics_code))
inspections %>%
filter(substr(reporting_id, 3, 3) != 5) %>% # Remove state plan states
filter(year(open_date) > 2008) %>% # More recent data
mutate(month = as.Date(cut(open_date, breaks = "month")),
maj_ind = substr(naics_code, 1, 2)) %>%
left_join(naics, by = c("maj_ind" = "naics_code")) %>%
group_by(month, major_ind) %>%
summarize(inspec = length(month)) %>%
ggplot(., aes(month, inspec, fill = as.factor(month(month)))) + geom_bar(stat = "identity") + facet_wrap(~major_ind)
# mining
inspections %>%
filter(substr(reporting_id, 3, 3) != 5) %>% # Remove state plan states
filter(year(open_date) > 2008, # More recent data
substr(naics_code, 1, 2) == 21) %>% # Mining industry
mutate(month = as.Date(cut(open_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(inspec = length(month)) %>%
ggplot(., aes(month, inspec, fill = as.factor(month(month)))) + geom_bar(stat = "identity")
violations %>%
  filter(!is.na(current_penalty)) %>%
  mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
  group_by(month) %>%
  summarize(fine = sum(current_penalty)) %>%
  arrange(desc(fine)) %>%
  View()
# Fines
violations %>%
filter(!is.na(current_penalty), year(issuance_date) > 2012) %>%
mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(fine = sum(current_penalty)) %>%
ggplot(., aes(month, fine)) + geom_bar(stat = "identity") + ggtitle("OSHA fines issued per month")
violations %>%
filter(!is.na(current_penalty), year(issuance_date) > 2012) %>%
mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(fine = sum(current_penalty)) %>% View("fines")
violations %>%
filter(year(issuance_date) > 2012) %>%
mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(viols = length(current_penalty)) %>% View("violations")
violations %>% filter(issuance_date > as.Date("2017-03-01")) %>% arrange(desc(issuance_date))
violations %>%
filter(year(issuance_date) > 2012) %>%
mutate(month = as.Date(cut(issuance_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(viols = length(current_penalty)) %>%
ggplot(., aes(month, viols)) + geom_bar(stat = "identity") + ggtitle("OSHA violations by month")
rm(violations)
load("inspections.RData")
# Inspections by month (color to help see seasonal pattern)
inspections %>%
filter(year(open_date) > 2008) %>%
mutate(month = as.Date(cut(open_date, breaks = "month"))) %>%
group_by(month) %>%
summarize(inspec = length(month)) %>%
ggplot(., aes(month, inspec, fill = as.factor(month(month)))) + geom_bar(stat = "identity") + ggtitle("OSHA inspections by month")
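# A complementary view of the seasonal pattern (a sketch; assumes the same
# `inspections` data loaded above): compare each calendar month across years.
inspections %>%
  filter(year(open_date) > 2008) %>%
  mutate(mo = month(open_date, label = TRUE)) %>%
  count(yr = year(open_date), mo) %>%
  ggplot(., aes(mo, n)) + geom_boxplot() +
  ggtitle("Monthly OSHA inspection counts across years, 2009-present")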
---- file: /Resources/OPUS-book_opt-2014-Springer-Cortez-Rcode/Solutions/s6-1.R (CSC-801-StochasticOptimization/Course, no license) ----
source("hill.R") # load the blind search methods
source("mo-tasks.R") # load MO bag prices task
source("lg-ga.R") # load tournament function
# lexicographic hill climbing, assumes minimization goal:
lhclimbing=function(par,fn,change,lower,upper,control,
...)
{
for(i in 1:control$maxit)
{
par1=change(par,lower,upper)
if(control$REPORT>0 &&(i==1||i%%control$REPORT==0))
cat("i:",i,"s:",par,"f:",eval(par),"s'",par1,"f:",
eval(par1),"\n")
pop=rbind(par,par1) # population with 2 solutions
I=tournament(pop,fn,k=2,n=1,m=2)
par=pop[I,]
}
if(control$REPORT>=1) cat("best:",par,"f:",eval(par),"\n")
return(list(sol=par,eval=eval(par)))
}
# lexico. hill climbing for all bag prices, one run:
D=5; C=list(maxit=10000,REPORT=10000) # 10000 iterations
s=sample(1:1000,D,replace=TRUE) # initial search
ichange=function(par,lower,upper) # integer value change
{ hchange(par,lower,upper,rnorm,mean=0,sd=1) }
LEXI=c(0.1,0.1) # explicitly defined lexico. tolerances
eval=function(x) c(-profit(x),produced(x))
b=lhclimbing(s,fn=eval,change=ichange,lower=rep(1,D),
upper=rep(1000,D),control=C)
cat("final ",b$sol,"f(",profit(b$sol),",",produced(b$sol),")\n")
---- file: /ARMA.R (andresrg99/EQUIPO_MACRO, no license) ----
# We work with the differenced prices
library(lubridate)
library(readr)
library(tidyverse)
library(ggplot2)
library(ggthemes)
library(plyr)
library(forecast)
library(stats)
library(tseries)
library(performance)
library(quantmod)
library(lmtest)
library(moments)
library(dynlm)
library(fpp2)
library(readxl)
library(mlr)
# DATA FROM INVESTING.COM
# Dollars per barrel
# Read and clean the data
PWTI <- read_csv("Precios WTI_2.csv")
PWTI$Fechas <- parse_date_time(PWTI$Date, "mdy")
View(PWTI)
# 1607 rows
# Build our time series
# Confirm that the negative price was removed
precio.ts=ts(PWTI$Price,start=2015,frequency=365)
precio.ts[] <- rev(precio.ts)
plot(precio.ts,main="Oil price",col="blue")
# Differencing
diflogprecios.ts=diff(log(precio.ts))
plot(diflogprecios.ts)
# Check stationarity
adf.test(diflogprecios.ts,alternative="stationary") # our series is stationary
# Next we inspect the autocorrelation (ACF) and partial autocorrelation
# (PACF); these help us decide how many autoregressive terms and
# moving-average terms to use in our ARIMA model.
plot(diflogprecios.ts,type="o",lty="dashed",col="red",main="Time series")
# In the plots below, look for the spikes that cross the confidence bands
# of the ACF and PACF to determine the number of moving-average terms and
# autoregressive terms, respectively.
par(mfrow=c(2,1),mar=c(4,4,4,1)+.1)
acf(diflogprecios.ts) # number of moving-average terms: 2 observed
pacf(diflogprecios.ts) ## partial autocorrelation: number of AR terms, we have 5
# So that the lag axis matches the frequency:
acf(ts(diflogprecios.ts,frequency=1))
pacf(ts(diflogprecios.ts,frequency=1))
# Now we move on to the autoregressions
modelo1<-dynlm(precio.ts~L(precio.ts),data=precio.ts) # L = one lag
summary(modelo1)
# We can clearly see that the first lag is significant.
# Next we fit 30 lags, because our series is daily.
modelo2<-dynlm(precio.ts~L(precio.ts,1:30),data=precio.ts) # L(., 1:30) = 30 lags
summary(modelo2)
# Most of the earlier lags are not significant. Therefore, after
# settling on the number of autoregressive and moving-average terms,
# we fit our ARMA model.
# We now fit the ARIMA on the original time series!
# order = c(autoregressive terms, differences, moving-average terms)
modeloARMA=arima(precio.ts,order=c(5,1,2)) # fitted to the original series
modeloARMA
tsdiag(modeloARMA)
#=====================================================
# NOT STRICTLY NECESSARY?
# LJUNG-BOX: p-value above .05, as also seen in the tsdiag plot
Box.test(residuals(modeloARMA),type="Ljung-Box") # > .05, so the residuals are white noise
# CONFIRMING WHITE NOISE: zero mean, constant variance, uncorrelated errors
# Inspecting the model graphically
error=residuals(modeloARMA)
par(mfrow=c(1,1))
plot(error) # errors have mean 0
jarque.bera.test(error) # > 0.05, therefore the residuals are normal
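# A natural next step (a sketch, not in the original script): forecast with
# the `forecast` package, which is already loaded above. The 30-day horizon
# is an illustrative choice.
pronostico <- forecast(modeloARMA, h = 30)
plot(pronostico)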
---- file: /man/HS.recAR.Rd (ShotaNishijima/frasyr, GPL-3.0-only) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/future.r
\encoding{UTF-8}
\name{HS.recAR}
\alias{HS.recAR}
\title{Recruitment function assuming a hockey-stick (HS) stock-recruitment relationship}
\usage{
HS.recAR(ssb, vpares, rec.resample = NULL, rec.arg = list(a = 1000, b =
1000, sd = 0.1, rho = 0, resid = 0))
}
\arguments{
\item{ssb}{Spawning stock biomass}
\item{vpares}{VPA output (fitted VPA result object)}
}
\description{
Recruitment function assuming a hockey-stick (HS) stock-recruitment relationship
}
---- file: /src/at.15.de.8.diy.R (maizeumn/atlas, no license) ----
source("br.fun.r")
require(DESeq2)
require(lmtest) # lrtest() is called in get_bic() below
require(stats4) # mle() is called in get_bic() below
require(MASS)   # glm.nb() is called in the GLM test section
require(edgeR)
dirw = file.path(dird, "42_de_old")
fi = file.path(dird, '41_qc/10.rda')
x = load(fi)
tm = tm %>% inner_join(th[,1:2], by = 'SampleID') %>%
group_by(Tissue) %>% nest()
th = th %>% inner_join(tl, by = 'SampleID') %>%
select(-paired, -Treatment) %>%
group_by(Tissue) %>% nest()
#{{{ # proof of concept for NB
mu = 20
x = 0:(mu*4)
y1 = dpois(x, lambda = mu)
tp = tibble(model = 'poisson', x = x, y = y1)
for (prob in c(0.1,0.2,0.5,0.7,0.9)) {
size = mu * prob / (1-prob)
y = dnbinom(x, size=size, prob=prob)
tp1 = tibble(model = sprintf("nbinom: size=%.1f, prob=%g", size, prob), x=x, y=y)
tp = rbind(tp, tp1)
}
tp$model = factor(tp$model, levels = unique(tp$model))
p <- ggplot(tp) +
geom_line(aes(x=x,y=y,color=model)) +
geom_vline(xintercept = 20, size = 0.3) +
theme(legend.position = 'top', legend.direction = 'vertical') +
scale_color_brewer(palette = "Set1")
fo = file.path(dirw, 'nb.pdf')
ggsave(p, filename = fo, width = 6, height = 6)
#}}}
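# Quick numeric check of the parameterisation above (illustrative): for
# NB(size, prob), mean = size*(1-prob)/prob, so size = mu*prob/(1-prob)
# recovers the target mean mu for any prob.
mu = 20; prob = 0.5
size = mu * prob / (1 - prob)        # = 20
c(mean = size * (1 - prob) / prob,   # = 20, matches mu
  var  = size * (1 - prob) / prob^2) # = mu/prob = 40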
tissue = "ear_v14"
#{{{ identify DEGs using DESeq2
tc = th %>% filter(Tissue == tissue, Genotype %in% gts) %>% arrange(SampleID)
tcd = data.frame(tc)
rownames(tcd) = tc$SampleID
tw = tm %>%
filter(Tissue == tissue, Genotype %in% gts) %>%
select(SampleID, gid, ReadCount) %>%
spread(SampleID, ReadCount) %>%
mutate(totalRC = rowSums(.[grep("BR", names(.))], na.rm = T)) %>%
filter(totalRC >= 10) %>%
select(-totalRC)
twd = data.frame(tw[,-1])
rownames(twd) = tw$gid
stopifnot(identical(tcd$SampleID, colnames(twd)))
dds = DESeqDataSetFromMatrix(countData=twd, colData=tcd, design = ~ Genotype)
#dds = estimateSizeFactors(dds1)
dds = DESeq(dds, fitType = 'parametric')
disp = dispersions(dds)
res = results(dds, contrast = c("Genotype", "Mo17", "B73"), pAdjustMethod = "fdr")
td = as_tibble(data.frame(res)) %>%
add_column(disp = disp) %>%
mutate(tissue = tissue,
gid = rownames(res),
log2MB = log2FoldChange,
DE_B = padj < .05 & log2MB < -1,
DE_M = padj < .05 & log2MB > 1) %>%
select(tissue, gid, disp, DE_B, DE_M, log2MB, pvalue, padj) %>%
replace_na(list(DE_B = F, DE_M = F)) %>%
mutate(DE = ifelse(padj<.05, "DE", "non-DE"))
#}}}
#{{{ identify DEGs using mle+dnbinom+lrtest
tx = as_tibble(counts(dds, normalized = T)) %>%
mutate(gid = tw$gid) %>%
gather(SampleID, nRC, -gid) %>%
left_join(tc[,c("SampleID","Genotype")], by = 'SampleID') %>%
group_by(gid, Genotype) %>%
summarise(nRC = list(nRC)) %>%
ungroup() %>%
spread(Genotype, nRC) %>%
mutate(disp = disp)
tx2 = as_tibble(counts(dds, normalized = T)) %>%
mutate(gid = tw$gid) %>%
gather(SampleID, nRC, -gid) %>%
left_join(tc[,c("SampleID","Genotype")], by = 'SampleID') %>%
group_by(gid, Genotype) %>%
summarise(nRC = mean(nRC)) %>%
ungroup() %>%
spread(Genotype, nRC) %>%
mutate(disp = disp)
get_bic <- function(i, tt) {
#{{{
xb = unlist(tt$B73[i]); xm = unlist(tt$Mo17[i])
size = 1/tt$disp[i]
bicd = NA; bicn = NA; mode = NA
probb.s = size / (size + mean(xb))
probm.s = size / (size + mean(xm))
xb = round(xb); xm = round(xm)
LLd <- function(probb, probm) {
if(probb > 0 & probb < 1 & probm > 0 & probm < 1)
-sum(dnbinom(xb, size, prob = probb, log = T) +
dnbinom(xm, size, prob = probm, log = T))
else 100
}
LLn <- function(probb) {
if(probb > 0 & probb < 1)
-sum(dnbinom(xb, size, prob = probb, log = T) +
dnbinom(xm, size, prob = probb, log = T))
else 100
}
fitd = mle(LLd, start = list(probb = probb.s, probm = probm.s),
method = "L-BFGS-B", lower = c(1e-5,1e-5), upper = c(1-1e-5,1-1e-5),
nobs = length(xb)+length(xm))
fitn = mle(LLn, start = list(probb = probb.s),
method = "L-BFGS-B", lower = c(1e-5), upper = c(1-1e-5),
nobs = length(xb)+length(xm))
#coef(fitc)
lrt = lrtest(fitd, fitn)
pval = lrt[2,5]
bic = BIC(fitd, fitn)
bicd = bic$BIC[1]; bicn = bic$BIC[2]
tb = as_tibble(bic) %>%
add_column(mode = c('DE', 'non-DE')) %>% arrange(BIC)
c('bicd'=bicd, 'bicn'=bicn, 'mode'=tb$mode[1], 'pval'=pval)
#}}}
}
require(BiocParallel)
bpparam <- MulticoreParam()
y = bplapply(1:nrow(tx), get_bic, tx, BPPARAM = bpparam)
tb = do.call(rbind.data.frame, y) %>% as_tibble()
colnames(tb) = c("bicd", "bicn", "mode", 'pval')
tb = tb %>% mutate(bicd = as.numeric(bicd),
bicn = as.numeric(bicn),
pval = as.numeric(pval),
padj = p.adjust(pval, "fdr"),
DE = ifelse(mode=='DE' & padj < 0.05, "DE", "non-DE"))
stopifnot(nrow(tb) == nrow(tx))
table(tibble(DE_DESeq2 = td$DE, DE_DIY = tb$DE))
#}}}
#{{{ use GLM to identify additive/dominant pattern
t_de = tm %>% inner_join(th, by = 'Tissue') %>%
mutate(res = map2(data.x, data.y, run_de_test)) %>%
mutate(deseq = map(res, 'deseq'), edger = map(res, 'edger')) %>%
select(Tissue, deseq, edger)
fo = file.path(dirw, '10.rda')
save(t_de, file = fo)
#}}}
#{{{
fi = file.path(dirw, '10.rda')
x = load(fi)
tx = t_de %>% select(Tissue, deseq) %>% unnest() %>%
filter(padj.mb < .01) %>%
mutate(padj.hh = ifelse(log2mb<0, padj.hb, padj.hm),
log2hh = ifelse(log2mb<0, log2hb, log2hm),
padj.hl = ifelse(log2mb<0, padj.hm, padj.hb),
           log2hl = ifelse(log2mb<0, log2hm, log2hb)) %>%
mutate(dom = ifelse(padj.fm<.01,
ifelse(log2hm<0,
ifelse(padj.hl<.01, ifelse(log2hl<0, 'BLP','PD_L'), 'LP'),
ifelse(padj.hh<.01, ifelse(log2hh>0, 'AHP','PD_H'), 'HP')),
'MP'))
doms= c("BLP","LP","PD_L","MP","PD_H","HP","AHP")
tp = tx %>% dplyr::count(Tissue, dom) %>%
mutate(Tissue=factor(Tissue, levels = tissues23)) %>%
mutate(dom = factor(dom, levels = doms))
tpx = tp %>% group_by(Tissue) %>% summarise(n = sum(n)) %>% ungroup() %>%
mutate(lab = sprintf("%s (%d)", Tissue, n))
p = ggplot(tp, aes(Tissue, n, fill=dom)) +
geom_bar(stat='identity',position='stack') +
scale_x_discrete(breaks = tpx$Tissue, labels = tpx$lab, expand=c(0,0)) +
scale_fill_simpsons() +
coord_flip() +
otheme(xtext=T,ytext=T)
fo = file.path(dirw, 'test2.pdf')
ggsave(p, file=fo, width=6, height=6)
tp %>% group_by(Tissue, dom) %>%
summarise(n = n(), q5=quantile(log2hm,.05), q25=quantile(log2hm,.25),
q50=quantile(log2hm,.5), q75=quantile(log2hm,.75),
q95=quantile(log2hm,.95))
p = ggplot(tp) +
#geom_point(aes(x=asinh(mp), y=asinh(BxM), color=log(padj))) +
geom_density(aes(x = log2hm, fill = dom), alpha=.5) +
scale_x_continuous(limits = c(-3,3)) +
scale_fill_aaas() +
scale_color_viridis()
ggsave(p, file=file.path(dirw,'test.pdf'), width=8,height=8)
#}}}
#{{{ glm test
y = tm %>% filter(Tissue == tissue) %>% select(-Tissue) %>% unnest() %>%
inner_join(vh[,c('SampleID','Genotype','ac','hybrid')], by = 'SampleID')
y0 = y %>% filter(gid == vm$gid[1])
fit0 = glm.nb(ReadCount ~ 1, data=y0)
fit1 = glm.nb(ReadCount ~ Genotype, data=y0)
fit2 = glm.nb(ReadCount ~ ac, data=y0)
anova(fit0, fit1, fit2, test = 'Chisq')
#}}}
## File: R/nagler-fd-sims.R (repo: leeper/unnecessary)
# load packages
library(tidyverse)
library(magrittr)
# set seed
set.seed(589)
# load data
turnout_df <- haven::read_dta("data/scobit.dta") %>%
filter(newvote != -1) %>%
mutate(case_priority = sample(1:n())) %>%
glimpse()
# fit model
f <- newvote ~ poly(neweduc, 2, raw = TRUE) + closing + poly(age, 2, raw = TRUE) + south + gov
fit <- glm(f, data = turnout_df, family = binomial(link = "probit"))
# print coef. estimates
texreg::screenreg(fit)
# simulation parameters
n_c <- 250 # number of cases for which to compute the quantity of interest
n_sims <- 5000
sample_size <- c(100, 200, 400, 800)
# compute simulation requisites
beta <- coef(fit)
turnout_df %<>%
mutate(p = predict(fit, type = "response")) %>%
glimpse()
pred_df <- filter(turnout_df, case_priority <= n_c)
pred1_df <- pred_df %>%
mutate(closing = closing + sd(closing))
X_pred <- model.matrix(f, data = pred_df)
X1_pred <- model.matrix(f, data = pred1_df)
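# X_pred and X1_pred differ only in `closing` (shifted up one standard
# deviation), so pnorm(X1 %*% b) - pnorm(X %*% b) yields a per-case first
# difference. The same shifted-design-matrix idea on made-up data
# (d_toy and b_toy are hypothetical):
d_toy  <- data.frame(closing = c(10, 20), age = c(40, 60))
d_toy1 <- transform(d_toy, closing = closing + sd(closing))
X0_toy <- model.matrix(~ closing + age, d_toy)
X1_toy <- model.matrix(~ closing + age, d_toy1)
b_toy  <- c(-1, 0.02, 0.01)                        # made-up probit coefficients
pnorm(X1_toy %*% b_toy) - pnorm(X0_toy %*% b_toy)  # first difference per case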
# do simulation
bias_df <- NULL
for (j in 1:length(sample_size)){
qi_df <- NULL
beta_hat_mat <- matrix(NA, nrow = n_sims, ncol = length(beta))
fd_mle <- fd_avg <- matrix(NA, nrow = n_sims, ncol = n_c)
sim_df <- turnout_df %>%
filter(case_priority <= sample_size[j])
cat(paste0("\nWorking on sample size ", sample_size[j], "...\n"))
progress <- progress_estimated(n_sims)
for (i in 1:n_sims) {
sim_df %<>%
mutate(y_sim = rbinom(sample_size[j], 1, p))
sim_f <- update(f, y_sim ~ .)
sim_fit <- glm(sim_f, data = sim_df,
family = binomial(link = "probit"))
# extract estimates
beta_hat_mat[i, ] <- coef(sim_fit)
Sigma_hat <- vcov(sim_fit)
# simulate beta
beta_tilde <- MASS::mvrnorm(1000, beta_hat_mat[i, ], Sigma_hat)
# compute tau-hat
fd_mle[i, ] <- pnorm(X1_pred%*%beta_hat_mat[i, ]) - pnorm(X_pred%*%beta_hat_mat[i, ])
fd_avg[i, ] <- apply(pnorm(X1_pred%*%t(beta_tilde)) - pnorm(X_pred%*%t(beta_tilde)), 1, mean)
progress$tick()$print()
}
# predicted probability
e_beta <- apply(beta_hat_mat, 2, mean)
tau_beta <- pnorm(X1_pred%*%beta) - pnorm(X_pred%*%beta)
tau_e_beta <- pnorm(X1_pred%*%e_beta) - pnorm(X_pred%*%e_beta)
e_tau_beta_mle <- apply(fd_mle, 2, mean)
e_tau_beta_avg <- apply(fd_avg, 2, mean)
# compute biases
ci_bias <- tau_e_beta - tau_beta
ti_bias <- e_tau_beta_mle - tau_e_beta
sim_bias <- e_tau_beta_avg - e_tau_beta_mle
bias_df_j <- data.frame(sample_size = sample_size[j],
tau = tau_beta,
ci_bias = ci_bias,
ti_bias = ti_bias,
sim_bias = sim_bias,
case_id = pred_df$case_priority)
bias_df <- bind_rows(bias_df, bias_df_j)
}
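# The bias terms above decompose the error in a quantity of interest under a
# nonlinear transformation g (here g = pnorm): coefficient-induced bias
# (g at the mean coefficient vs. g at the truth), transformation-induced bias
# (the Jensen gap from plugging in beta-hat), and simulation-induced bias
# (averaging over draws vs. the MLE plug-in). A one-dimensional sketch with
# made-up numbers (beta_true, beta_draws are hypothetical):
beta_true  <- 0.5
beta_draws <- rnorm(5000, mean = beta_true, sd = 0.4)  # stand-in for beta-hat across resamples
g          <- pnorm
ci_toy <- g(mean(beta_draws)) - g(beta_true)           # coefficient-induced
ti_toy <- mean(g(beta_draws)) - g(mean(beta_draws))    # transformation-induced (Jensen gap)
c(coef_induced = ci_toy, transf_induced = ti_toy)      # both shrink as sd(beta_draws) -> 0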
# tidy the data
tall_bias_df <- bias_df %>%
gather(concept, bias, ends_with("_bias")) %>%
mutate(concept = fct_relevel(concept, c("ci_bias", "ti_bias", "sim_bias")),
concept = fct_recode(concept,
`Coefficient-Induced Bias` = "ci_bias",
`Transformation-Induced Bias` = "ti_bias",
`Simulation-Induced Bias` = "sim_bias"),
sample_size_fct = factor(paste0("N = ", sample_size))) %>%
glimpse() %>%
write_rds("data/nagler-fd-bias.rds") %>%
write_csv("data/nagler-fd-bias.csv")
% File: man/est2SLSCoefCov.Rd (repo: zackfisher/MIIVsem)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/est2SLSCoefCov.R
\name{est2SLSCoefCov}
\alias{est2SLSCoefCov}
\title{Estimate the 2SLS Coefficient Covariance Matrix}
\usage{
est2SLSCoefCov(
d,
poly.mat = NULL,
cov.mat = NULL,
mean.vec = NULL,
acov = NULL,
acov.sat = NULL,
r = NULL
)
}
\description{
Estimate the 2SLS coefficient covariance matrix.
}
\keyword{internal}
## File: work/w-cal_part1Pg_part2Pg.R (repo: shunw/R-coding)
# This script calculates the part1 and part2 page sums.
#first step: clean data ---> move the NA rows, and sort the data by"D" and "B.Pg"
#second step: add Totalsum ---> cumsum the X..of.pg by D
#third step: add the part1sum/ part2sum ---> cumsum the X..of.pg by D and by part1
#fourth step: add the part1beg/ part1 end/ part2 beg/ part2 end
#fifth step: write the csv file.
#++++++++++++++++++++++++++++++++++++++++++++
# FIRST
#++++++++++++++++++++++++++++++++++++++++++++
#set the correct path
setwd("file_path")
#get the raw data from csv file.
raw_0<-read.csv("souce_file.csv", fill=TRUE, header=TRUE, sep=",")
#clear the raw data from NA rows
raw<-raw_0[!is.na(raw_0$J.ID), ]
#sort the data
raw_ord<-raw[order(raw$"D", raw$"B.Pg"), ]
#++++++++++++++++++++++++++++++++++++++++++++
# SECOND
#++++++++++++++++++++++++++++++++++++++++++++
#calculate the total sum including part1 and part2
require(data.table)
raw_sum<-data.table(raw_ord)
raw_sum<-within(raw_sum, {
Totalsum<-ave(X..of.Pg, D, FUN=cumsum)
})
#++++++++++++++++++++++++++++++++++++++++++++
# THIRD
#++++++++++++++++++++++++++++++++++++++++++++
#calculate the accumulated page by part1 and part2
raw_sum[raw_sum$Input=="part1", part1sum:=cumsum(X..of.Pg), by=c("D", "I")]
raw_sum[raw_sum$Input=="part2", part2sum:=cumsum(X..of.Pg), by=c("D", "I")]
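# The two := calls above compute an in-place cumulative sum within groups.
# The same pattern on a throwaway table (dt_toy is hypothetical; data.table
# is already attached above):
dt_toy <- data.table(grp = c("a", "a", "b", "b"), pg = c(2, 3, 1, 4))
dt_toy[, run := cumsum(pg), by = grp]
dt_toy  # run = 2, 5, 1, 5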
#++++++++++++++++++++++++++++++++++++++++++++
# FOURTH
#++++++++++++++++++++++++++++++++++++++++++++
#Add part1/ part2 Begin/ End col
raw_sum$part1_Beg<-NA
raw_sum$part1_End<-NA
raw_sum$part1_Beg<-as.integer(raw_sum$part1_Beg)
raw_sum$part1_End<-as.integer(raw_sum$part1_End)
raw_sum$part2_Beg<-NA
raw_sum$part2_End<-NA
raw_sum$part2_Beg<-as.integer(raw_sum$part2_Beg)
raw_sum$part2_End<-as.integer(raw_sum$part2_End)
#part1 and part2 Begin/ End
raw_sum[raw_sum$Input=="part1", ]$part1_Beg=raw_sum[raw_sum$Input=="part1", ]$part1sum-raw_sum[raw_sum$Input=="part1", ]$X..of.Pg+1
raw_sum[raw_sum$Input=="part1", ]$part1_End=raw_sum[raw_sum$Input=="part1", ]$part1sum
raw_sum[raw_sum$Input=="part2", ]$part2_Beg=raw_sum[raw_sum$Input=="part2", ]$part2sum-raw_sum[raw_sum$Input=="part2", ]$X..of.Pg+1
raw_sum[raw_sum$Input=="part2", ]$part2_End=raw_sum[raw_sum$Input=="part2", ]$part2sum
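# Begin/End above follow from the running total: End = cumulative sum,
# Beg = cumulative sum - count + 1. A toy check with hypothetical page counts:
pg_toy  <- c(3, 2, 4)            # pages per job
end_toy <- cumsum(pg_toy)        # 3 5 9
beg_toy <- end_toy - pg_toy + 1  # 1 4 6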
#============================================================
# Fill the NA item in the sum col
#============================================================
raw_sum[is.na(raw_sum$part2sum), ]$part2sum=raw_sum[is.na(raw_sum$part2sum), ]$Totalsum-raw_sum[is.na(raw_sum$part2sum), ]$part1sum
raw_sum[is.na(raw_sum$part1sum), ]$part1sum=raw_sum[is.na(raw_sum$part1sum), ]$Totalsum-raw_sum[is.na(raw_sum$part1sum), ]$part2sum
#++++++++++++++++++++++++++++++++++++++++++++
# FIFTH
#++++++++++++++++++++++++++++++++++++++++++++
#============================================================
# write the output file
#============================================================
write.csv(raw_sum, file="test_output.csv", row.names=FALSE)
% File: man/errors_are_warnings.Rd (repo: peterhurford/handlr, MIT license)
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/errors.R
\name{errors_are_warnings}
\alias{errors_are_warnings}
\title{Convert errors to warnings.}
\usage{
errors_are_warnings(exp)
}
\arguments{
\item{exp}{expression. The expression to run.}
}
\description{
Convert errors to warnings.
}
## File: Model NEW 040217.R (repo: ChandSooran/DataScienceCapstone)
## Model 5.0 - REDUX using Modified Kneser-Ney - Starting March 2017
## Initialize libraries
library(tm)
library(stringi)
library(stringr)
library(quanteda)
library(tictoc)
library(ggplot2)
library(caret)
library(AppliedPredictiveModeling)
library(data.table)
library(plyr)
## Define directories
zipurl <- "https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip"
zipfiledirectory <- c("C://Chand Sooran/Johns Hopkins/Capstone 5.0/Course Dataset Zipfile/")
workingdirectory1 <- c("C://Chand Sooran/Johns Hopkins/Capstone 5.0/Week 1/")
workingdirectory2 <- paste(workingdirectory1, "final", "en_US", sep = "/")
workingdirectory3 <- c("C://Chand Sooran/Johns Hopkins/Capstone 5.0/Profanity")
workingdirectory4 <- c("C://Chand Sooran/Johns Hopkins/Capstone 5.0/Model 5.0/Data/")
## Set working directory
setwd(workingdirectory4)
## Initialize time tracking
TotalTime <- 0
## RETRIEVE DATA
## Set working directory
setwd(workingdirectory1) # Week 1
## Set filenames for download
coursedatafile = paste(zipfiledirectory, "Coursera-Swiftkey.zip", sep = "/")
courseunzippedfile = "final"
filenames = c("en_US.blogs.txt", "en_US.twitter.txt", "en_US.news.txt")
## Download zip file
tic()
if(!file.exists(coursedatafile)){
download.file(zipurl, coursedatafile)
}
ExecTime <- toc()
ZipTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ZipTime
## Create function for obtaining files into the working directory
getfiles <- function(x){
if(!file.exists(paste(workingdirectory1, x, sep = "/"))){
if(!file.exists(paste(workingdirectory1, courseunzippedfile, sep ="/"))){
unzip(zipfile = coursedatafile)
file.copy(paste(workingdirectory2, x, sep = "/"), workingdirectory1)
} else {
file.copy(paste(workingdirectory2, x, sep = "/"), workingdirectory1)
}
}
}
## Get files
sapply(filenames, getfiles)
## Get profane words
if(!exists("ProfaneWords")){
ProfaneWords <- read.csv(paste(workingdirectory3, "Profane Words.txt", sep = "/"))
ProfaneWords <- as.vector(t(ProfaneWords))
ProfaneWords <- gsub("\\s","", ProfaneWords)
setwd(workingdirectory1)
}
## Read the three files using readLines
tic()
if(!exists("Twitter")){
Twitter <- readLines(con = "en_US.twitter.txt", n = -1L, skipNul = TRUE, encoding = "UTF-8")
}
ExecTime <- toc()
TwitterReadTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TwitterReadTime
tic()
if(!exists("Blogs")){
Blogs <- readLines(con = "en_US.blogs.txt", n = -1L, skipNul = TRUE, encoding = "UTF-8")
}
ExecTime <- toc()
BlogsReadTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + BlogsReadTime
tic()
if(!exists("News")){
News <- readLines(con = "en_US.news.txt", n = -1L, skipNul = TRUE, warn = FALSE, encoding = "UTF-8")
}
ExecTime <- toc()
NewsReadTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + NewsReadTime
## SAMPLE 8.5% OF THE COMBINED DATASETS
## Make and save a file sampling 8.5% of all three datasets together
tic()
if(!exists("All")){
All <- c(Twitter, Blogs, News)
SampleProportion <- 0.085 # Set the percentage to sample ****
All <- sample(All, size = round(SampleProportion * length(All)))
All <- gsub("#\\S+", "", All) # Remove hashtags
All <- gsub("@\\S+", "", All) # Remove mentions
All <- gsub("\\S+\\d\\S+", "", All) # Remove digits
All <- gsub("\\d\\S+", "", All) ## Remove digits
All <- gsub("\\S+\\d", "", All) ## Remove digits
All <- gsub("\\brt\\b|\\bRT\\b", "", All) ## Remove "rt"/"RT" tokens
setwd(workingdirectory4)
save(All, file = "All.RData")
}
ExecTime <- toc()
SampleTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + SampleTime
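## The gsub() chain above drops hashtags, mentions, digit-bearing tokens and
## retweet markers. A quick check of the hashtag/mention/digit steps on a
## made-up line (tweet_toy is hypothetical):
tweet_toy <- "see @user re #deal at 9pm"
tweet_toy <- gsub("#\\S+", "", tweet_toy)   # drop hashtags
tweet_toy <- gsub("@\\S+", "", tweet_toy)   # drop mentions
tweet_toy <- gsub("\\d\\S+", "", tweet_toy) # drop tokens starting with a digit
tweet_toy                                   # "see  re  at " -- only plain words remain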
## MAKE THE TRAINING, VALIDATION, AND TESTING DATA SETS
## Set the fractions of the dataframe you want to split in training
fractionTraining <- 0.6
fractionValidation <- 0.2
fractionTest <- 0.2
## Compute sample sizes, rounded down to next integer with floor()
sampleSizeTraining <- floor(fractionTraining * length(All))
sampleSizeValidation <- floor(fractionValidation * length(All))
sampleSizeTest <- floor(fractionTest * length(All))
## Create the randomly sampled indices, using setdiff() to avoid overlapping subsets of indices
indexTraining <- sort(sample(seq_len(length(All)),size = sampleSizeTraining))
indexNotTraining <- setdiff(seq_len(length(All)), indexTraining) ## Take the dataset All, remove Training items
indexValidation <- sort(sample(indexNotTraining, size = sampleSizeValidation))
indexTest <- setdiff(indexNotTraining, indexValidation)
## Make the three vectors for training, validation, and testing
Training <- All[indexTraining]
Validation <- All[indexValidation]
Test <- All[indexTest]
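## The three index sets are built with sample() and setdiff(), so they should
## partition seq_len(length(All)) exactly. A cheap sanity check:
stopifnot(length(intersect(indexTraining, indexValidation)) == 0,
          length(intersect(indexValidation, indexTest)) == 0,
          length(intersect(indexTraining, indexTest)) == 0,
          length(indexTraining) + length(indexValidation) + length(indexTest) == length(All))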
## Save three files
tic()
setwd(workingdirectory4)
save(Training, file = "Training.RData")
save(Validation, file = "Validation.RData")
save(Test, file = "Test.RData")
ExecTime <- toc()
SaveFoldsTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + SaveFoldsTime
## MAKE THE CORPUSES
## Make the training corpus
tic()
if(!exists("TrainingCorpus")){
TrainingCorpus <- corpus(Training,
docvars = data.frame(party = names(All)))
save(TrainingCorpus, file = "TrainingCorpus.RData")
}
ExecTime <- toc()
TrainingCorpusTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TrainingCorpusTime
## Make the validation corpus
tic()
if(!exists("ValidationCorpus")){
ValidationCorpus <- corpus(Validation,
docvars = data.frame(party = names(All)))
save(ValidationCorpus, file = "ValidationCorpus.RData")
}
ExecTime <- toc()
ValidationCorpusTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValidationCorpusTime
## Make the testing corpus
tic()
if(!exists("TestCorpus")){
TestCorpus <- corpus(Test,
docvars = data.frame(party = names(All)))
save(TestCorpus, file = "TestCorpus.RData")
}
ExecTime <- toc()
TestCorpusTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TestCorpusTime
## MAKE THE DOCUMENT FREQUENCY MATRICES, INCLUDING TOKENIZATION
## Make the Training dfm
tic()
if(!exists("TrainingDFM")){
TrainingDFM <- dfm(TrainingCorpus, stem = FALSE, ignoredFeatures = stopwords("english"))
save(TrainingDFM, file = "TrainingDFM.RData")
}
ExecTime <- toc()
TrainingDFMTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TrainingDFMTime
## Make the Validation dfm
tic()
if(!exists("ValidationDFM")){
ValidationDFM <- dfm(ValidationCorpus, stem = FALSE, ignoredFeatures = stopwords("english"))
save(ValidationDFM, file = "ValidationDFM.RData")
}
ExecTime <- toc()
ValidationDFMTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValidationDFMTime
## Make the Testing dfm
tic()
if(!exists("TestDFM")){
TestDFM <- dfm(TestCorpus, stem = FALSE, ignoredFeatures = stopwords("english"))
save(TestDFM, file = "TestDFM.RData")
}
ExecTime <- toc()
TestDFMTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TestDFMTime
## MAKE UNIGRAM DATAFRAMES
## Make the Training unigram dataframe
tic()
if(!exists("TrainingUnigramDF")){
TrainingUnigram <- tokenize(TrainingCorpus, ngrams = 1L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
TrainingUnigramDFM <- dfm(TrainingUnigram, stem = FALSE,
ignoredFeatures = stopwords("english"))
TrainingUnigramDF <- data.frame(Content = features(TrainingUnigramDFM), Frequency = colSums(TrainingUnigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
TrainingUnigramDF <- TrainingUnigramDF[with(TrainingUnigramDF, order(Frequency, decreasing = TRUE)),]
save(TrainingUnigramDF, file = "TrainingUnigramDF.RData")
}
ExecTime <- toc()
TrainingUnigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TrainingUnigramTime
## Make the Validation unigram dataframe
tic()
if(!exists("ValidationUnigramDF")){
ValidationUnigram <- tokenize(ValidationCorpus, ngrams = 1L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
ValidationUnigramDFM <- dfm(ValidationUnigram, stem = FALSE,
ignoredFeatures = stopwords("english"))
ValidationUnigramDF <- data.frame(Content = features(ValidationUnigramDFM), Frequency = colSums(ValidationUnigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
save(ValidationUnigramDF, file = "ValidationUnigramDF.RData")
}
ExecTime <- toc()
ValidationUnigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValidationUnigramTime
## Make the Testing unigram dataframe
tic()
if(!exists("TestUnigramDF")){
TestUnigram <- tokenize(TestCorpus, ngrams = 1L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
TestUnigramDFM <- dfm(TestUnigram, stem = FALSE,
ignoredFeatures = stopwords("english"))
TestUnigramDF <- data.frame(Content = features(TestUnigramDFM), Frequency = colSums(TestUnigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
save(TestUnigramDF, file = "TestUnigramDF.RData")
}
ExecTime <- toc()
TestUnigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TestUnigramTime
## MAKE BIGRAM DATAFRAMES
## Make Training bigram dataframe
tic()
if(!exists("TrainingBigramDF")){
TrainingBigram <- tokenize(TrainingCorpus, ngrams = 2L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
TrainingBigramDFM <- dfm(TrainingBigram,
stem = FALSE, ignoredFeatures = stopwords("english"))
TrainingBigramDF <- data.frame(Content = features(TrainingBigramDFM), Frequency = colSums(TrainingBigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
save(TrainingBigramDF, file = "TrainingBigramDF.RData")
}
ExecTime <- toc()
TrainingBigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TrainingBigramTime
## Make Validation bigram dataframe
tic()
if(!exists("ValidationBigramDF")){
ValidationBigram <- tokenize(ValidationCorpus, ngrams = 2L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
ValidationBigramDFM <- dfm(ValidationBigram, stem = FALSE,
ignoredFeatures = stopwords("english"))
ValidationBigramDF <- data.frame(Content = features(ValidationBigramDFM), Frequency = colSums(ValidationBigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
save(ValidationBigramDF, file = "ValidationBigramDF.RData")
}
ExecTime <- toc()
ValidationBigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValidationBigramTime
## Make Testing bigram dataframe
tic()
if(!exists("TestBigramDF")){
TestBigram <- tokenize(TestCorpus, ngrams = 2L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
TestBigramDFM <- dfm(TestBigram, stem = FALSE,
ignoredFeatures = stopwords("english"))
TestBigramDF <- data.frame(Content = features(TestBigramDFM), Frequency = colSums(TestBigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
save(TestBigramDF, file = "TestBigramDF.RData")
}
ExecTime <- toc()
TestBigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TestBigramTime
## MAKE TRIGRAM DATAFRAMES
## Make Training trigram dataframe
tic()
if(!exists("TrainingTrigramDF")){
TrainingTrigram <- tokenize(TrainingCorpus, ngrams = 3L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
TrainingTrigramDFM <- dfm(TrainingTrigram, stem = FALSE,
ignoredFeatures = stopwords("english"))
TrainingTrigramDF <- data.frame(Content = features(TrainingTrigramDFM), Frequency = colSums(TrainingTrigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
save(TrainingTrigramDF, file = "TrainingTrigramDF.RData")
}
ExecTime <- toc()
TrainingTrigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TrainingTrigramTime
## Make Validation trigram dataframe
tic()
if(!exists("ValidationTrigramDF")){
ValidationTrigram <- tokenize(ValidationCorpus, ngrams = 3L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
ValidationTrigramDFM <- dfm(ValidationTrigram, stem = FALSE,
ignoredFeatures = stopwords("english"))
ValidationTrigramDF <- data.frame(Content = features(ValidationTrigramDFM), Frequency = colSums(ValidationTrigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
save(ValidationTrigramDF, file = "ValidationTrigramDF.RData")
}
ExecTime <- toc()
ValidationTrigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValidationTrigramTime
## Make Testing trigram dataframe
tic()
if(!exists("TestTrigramDF")){
TestTrigram <- tokenize(TestCorpus, ngrams = 3L, skip = 0L,
removePunct = TRUE, removeNumbers = TRUE)
TestTrigramDFM <- dfm(TestTrigram, stem = FALSE,
ignoredFeatures = stopwords("english"))
TestTrigramDF <- data.frame(Content = features(TestTrigramDFM), Frequency = colSums(TestTrigramDFM),
row.names = NULL, stringsAsFactors = FALSE)
save(TestTrigramDF, file = "TestTrigramDF.RData")
}
ExecTime <- toc()
TestTrigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TestTrigramTime
## CALCULATE THE KNESER-NEY BIGRAM PROBABILITIES FOR THE TRAINING DATA SET WITH ASSUMED VALUE OF d
## Need to calculate five inputs:
## 1. # of words occurring before word w in a bigram (i.e. number of bigram types, word w completes)
## "TrainingBigramEndCount"
## 2. total # of bigram types
## 3. counts observed for each bigram type
## 4. counts observed for each unigram type
## 5. # of word types that can follow word w(i-1) in a bigram, i.e. # of bigrams that w(i-1) begins
## Remove numbers from alpha-numeric characters
TrainingBigramDF$Content <- gsub("[0-9]+", "", TrainingBigramDF$Content)
## Remove all rows in Unigram where Content is either "#" or "@"
TrainingUnigramDF <- TrainingUnigramDF[!TrainingUnigramDF$Content == "#",]
TrainingUnigramDF <- TrainingUnigramDF[!TrainingUnigramDF$Content == "@",]
## *** SEPARATE BIGRAMS AND TRIGRAMS INTO SEPARATE WORDS ***
## Separate bigrams into separate words by removing concatenator "_", sorted by second word
## Calculate the # of words occurring before each word w completing a bigram
if(!exists("SeparateBigramTime")) {
tic()
TrainingBigramDF$First <- sapply(strsplit(TrainingBigramDF$Content, "\\_"),"[",1) # Strip first word
TrainingBigramDF$First <- tolower(TrainingBigramDF$First) # Make first word lower case
TrainingBigramDF <- TrainingBigramDF[!TrainingBigramDF$First == "#",]
TrainingBigramDF <- TrainingBigramDF[!TrainingBigramDF$First == "@",]
TrainingBigramDF$Second <- sapply(strsplit(TrainingBigramDF$Content, "\\_"), "[", 2) # Strip second word
TrainingBigramDF$Second <- tolower(TrainingBigramDF$Second) # Make second word lower case
TrainingBigramDF <- TrainingBigramDF[!TrainingBigramDF$Second == "#",]
TrainingBigramDF <- TrainingBigramDF[!TrainingBigramDF$Second == "@",]
TrainingBigramDF$Content <- iconv(TrainingBigramDF$Content, "latin1", "ASCII", sub = "") # Remove foreign characters
TrainingBigramDF$First <- iconv(TrainingBigramDF$First, "latin1", "ASCII", sub = "")
TrainingBigramDF$Second <- iconv(TrainingBigramDF$Second, "latin1", "ASCII", sub = "")
TrainingBigramDF[TrainingBigramDF == ""] <- NA # Remove blanks
TrainingBigramDF <- TrainingBigramDF[complete.cases(TrainingBigramDF),] # Remove NAs
TrainingBigramDF <- TrainingBigramDF[with(TrainingBigramDF, order(Second, First)),] # NEW Sort alphabetically
NumberWordsBeforeTrainingBigramDF <- aggregate(TrainingBigramDF["Content"], by = TrainingBigramDF[c("First","Second")], FUN = length) # Aggregate by number for number of unique bigrams with common second word
NumberWordsBeforeTrainingBigramDF <- aggregate(NumberWordsBeforeTrainingBigramDF["First"], by = NumberWordsBeforeTrainingBigramDF["Second"], FUN = length) # Aggregate by number for number of bigrams by second word
names(NumberWordsBeforeTrainingBigramDF) <- c("Second", "Before")
save(TrainingBigramDF, file = "TrainingBigramDF.RData")
save(NumberWordsBeforeTrainingBigramDF, file = "NumberWordsBeforeTrainingBigramDF.RData")
ExecTime <- toc()
SeparateBigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + SeparateBigramTime
}
## Separate trigrams into 3 columns, removing concatentor "_"
## Calculate the number of words occurring before each word w completing a trigram
if(!exists("SeparateTrigramTime")) {
tic()
TrainingTrigramDF$First <- sapply(strsplit(TrainingTrigramDF$Content, "\\_"),"[",1)
TrainingTrigramDF$First <- tolower(TrainingTrigramDF$First)
TrainingTrigramDF <- TrainingTrigramDF[!TrainingTrigramDF$First == "#",]
TrainingTrigramDF <- TrainingTrigramDF[!TrainingTrigramDF$First == "@",]
TrainingTrigramDF$Second <- sapply(strsplit(TrainingTrigramDF$Content, "\\_"), "[", 2)
TrainingTrigramDF$Second <- tolower(TrainingTrigramDF$Second)
TrainingTrigramDF <- TrainingTrigramDF[!TrainingTrigramDF$Second == "#",]
TrainingTrigramDF <- TrainingTrigramDF[!TrainingTrigramDF$Second == "@",]
TrainingTrigramDF$Third <- sapply(strsplit(TrainingTrigramDF$Content, "\\_"), "[", 3)
TrainingTrigramDF$Third <- tolower(TrainingTrigramDF$Third)
TrainingTrigramDF <- TrainingTrigramDF[!TrainingTrigramDF$Third == "#",]
TrainingTrigramDF <- TrainingTrigramDF[!TrainingTrigramDF$Third == "@",]
TrainingTrigramDF$Content <- iconv(TrainingTrigramDF$Content, "latin1", "ASCII", sub = "") # Remove foreign characters
TrainingTrigramDF$First <- iconv(TrainingTrigramDF$First, "latin1", "ASCII", sub = "")
TrainingTrigramDF$Second <- iconv(TrainingTrigramDF$Second, "latin1", "ASCII", sub = "")
TrainingTrigramDF$Third <- iconv(TrainingTrigramDF$Third, "latin1", "ASCII", sub = "")
TrainingTrigramDF[TrainingTrigramDF == ""] <- NA # Remove blanks
TrainingTrigramDF <- TrainingTrigramDF[complete.cases(TrainingTrigramDF),] # Remove NAs
TrainingTrigramDF <- TrainingTrigramDF[with(TrainingTrigramDF, order(Third, Second, First)),] # NEW
save(TrainingTrigramDF, file = "TrainingTrigramDF.RData")
ExecTime <- toc()
SeparateTrigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + SeparateTrigramTime
}
## Calculate the total number of bigram types
NumberBigramTypes <- length(unique(TrainingBigramDF$Content))
## *** CALCULATE CONTINUATION PROBABILITIES ***
if(!exists("ContProbTime")){
tic()
NumberWordsBeforeTrainingBigramDF$ContProb <- NumberWordsBeforeTrainingBigramDF$Before / sum(NumberWordsBeforeTrainingBigramDF$Before)
NumberWordsBeforeTrainingBigramDF <- NumberWordsBeforeTrainingBigramDF[ ,-2]
save(NumberWordsBeforeTrainingBigramDF, file = "NumberWordsBeforeTrainingBigramDF.RData")
ExecTime <- toc()
ContProbTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ContProbTime
}
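## The continuation probability above is (distinct left-contexts of w) /
## (total distinct bigram types), not raw frequency. A tiny sketch with
## hypothetical bigrams (bi_toy, tab_toy are made-up names):
bi_toy <- data.frame(First  = c("san", "new", "old", "in"),
                     Second = c("francisco", "york", "york", "york"))
tab_toy <- aggregate(First ~ Second, data = bi_toy, FUN = function(x) length(unique(x)))
tab_toy$ContProb <- tab_toy$First / nrow(bi_toy)
tab_toy  # "york" follows 3 distinct words -> ContProb 0.75; "francisco" -> 0.25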
## *** CALCULATE LAMBDAS ***
## Calculate the number of word types that can follow an individual word w
if(!exists("CompletedTime")){
tic()
TrainingBigramDF <- TrainingBigramDF[with(TrainingBigramDF, order(First,Second)),] # Re-order bigrams by first word
NumberWordsAfterTrainingBigramDF <- aggregate(TrainingBigramDF["Content"],
by = TrainingBigramDF[c("First","Second")], FUN = length)
NumberWordsAfterTrainingBigramDF <- aggregate(NumberWordsAfterTrainingBigramDF["Second"],
by = NumberWordsAfterTrainingBigramDF["First"],
FUN = length)
save(NumberWordsAfterTrainingBigramDF, file = "NumberWordsAfterTrainingBigramDF.RData")
ExecTime <- toc()
CompletedTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + CompletedTime
}
## Move the unigram dataframe content to all lower case
TrainingUnigramDF$Content <- tolower(TrainingUnigramDF$Content)
## Clean up Unigram
TrainingUnigramDF$Content <- iconv(TrainingUnigramDF$Content, "latin1", "ASCII", sub = "")
## Get rid of blank content in Unigram
TrainingUnigramDF$Content[TrainingUnigramDF$Content == ""] <- NA
TrainingUnigramDF <- TrainingUnigramDF[complete.cases(TrainingUnigramDF),]
TrainingUnigramDF <- TrainingUnigramDF[with(TrainingUnigramDF, order(Content)),]
save(TrainingUnigramDF, file = "TrainingUnigramDF.RData")
## Make new Unigram variable
NumberWordsTrainingUnigramDF <- aggregate(TrainingUnigramDF["Frequency"],
by = TrainingUnigramDF["Content"],
FUN = sum)
save(NumberWordsTrainingUnigramDF, file = "NumberWordsTrainingUnigramDF.RData")
# Make new dataframe for merge activity
NewTrainingUnigramDF <- data.frame(NumberWordsTrainingUnigramDF$Content, NumberWordsTrainingUnigramDF$Frequency)
names(NewTrainingUnigramDF) <- c("First", "Unigram")
save(NewTrainingUnigramDF, file = "NewTrainingUnigramDF.RData")
## Add unigram count for words that begin bigrams
NewAfterTrainingBigramDF <- merge (x = NumberWordsAfterTrainingBigramDF,
y = NewTrainingUnigramDF,
by = "First")
names(NewAfterTrainingBigramDF) <- c("First", "Bigram", "Unigram")
## Initialize d
d <- 0.88
## Calculate lambdas
NewAfterTrainingBigramDF$Lambda <- d * NewAfterTrainingBigramDF$Bigram / NewAfterTrainingBigramDF$Unigram
NewAfterTrainingBigramDF <- NewAfterTrainingBigramDF[ ,-(2:3)]
save(NewAfterTrainingBigramDF, file = "NewAfterTrainingBigramDF.RData")
## CALCULATE KNESER-NEY PROBABILITIES
if(!exists("PKNTime")){
tic()
TrainingPKN <- TrainingBigramDF
TrainingPKN <- merge(x = TrainingPKN, y = NewAfterTrainingBigramDF, by = "First")
names(TrainingPKN) <- c("First", "Content", "Bigram Frequency", "Second", "Lambda")
TrainingPKN <- merge(x = TrainingPKN, y = NumberWordsBeforeTrainingBigramDF, by = "Second")
TrainingPKN <- merge(x = TrainingPKN, y = NewTrainingUnigramDF, by = "First")
names(TrainingPKN) <- c("First", "Second", "Content", "Bigram Frequency", "Lambda", "ContProb", "Unigram Frequency")
TrainingPKN$MinMax <- TrainingPKN$`Bigram Frequency` - d
TrainingPKN$MinMax <- TrainingPKN$MinMax / TrainingPKN$`Unigram Frequency`
TrainingPKN$PKN <- TrainingPKN$MinMax + (TrainingPKN$Lambda * TrainingPKN$ContProb)
save(TrainingPKN, file = "TrainingPKN.RData")
ExecTime <- toc()
PKNTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + PKNTime
}
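## Putting the pieces together, the interpolated Kneser-Ney estimate used
## above is P_KN(w2|w1) = max(c(w1 w2) - d, 0)/c(w1) + lambda(w1) * P_cont(w2).
## A one-bigram numeric sketch with made-up counts (all *_toy values are
## hypothetical):
d_toy       <- 0.88
c_bigram    <- 12   # hypothetical count of "new york"
c_unigram   <- 40   # hypothetical count of "new"
n_followers <- 5    # distinct words seen after "new"
p_cont      <- 0.75 # hypothetical continuation probability of "york"
lambda_toy  <- d_toy * n_followers / c_unigram
pkn_toy     <- (c_bigram - d_toy) / c_unigram + lambda_toy * p_cont
pkn_toy     # 0.3605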
## CALCULATE PROBABILITIES FOR VALIDATION SET
## Remove numbers from alpha-numeric content
ValidationBigramDF$Content <- gsub("[0-9]+", " ",ValidationBigramDF$Content)
## Remove all rows in Unigram where content is either "@" or "#"
ValidationUnigramDF <- ValidationUnigramDF[!ValidationUnigramDF$Content == "#",]
ValidationUnigramDF <- ValidationUnigramDF[!ValidationUnigramDF$Content == "@",]
## *** SEPARATE BIGRAMS AND TRIGRAMS INTO SEPARATE WORDS ***
## Separate bigrams into separate words by removing concatenator "_", sorted by second word
## Calculate the # of words occurring before each word w completing a bigram
if(!exists("ValSeparateBigramTime")) {
tic()
ValidationBigramDF$First <- sapply(strsplit(ValidationBigramDF$Content, "\\_"),"[",1) # Strip first word
ValidationBigramDF$First <- tolower(ValidationBigramDF$First) # Make first word lower case
ValidationBigramDF <- ValidationBigramDF[!ValidationBigramDF$First == "#",]
ValidationBigramDF <- ValidationBigramDF[!ValidationBigramDF$First == "@",]
ValidationBigramDF$Second <- sapply(strsplit(ValidationBigramDF$Content, "\\_"), "[", 2) # Strip second word
ValidationBigramDF$Second <- tolower(ValidationBigramDF$Second) # Make second word lower case
ValidationBigramDF <- ValidationBigramDF[!ValidationBigramDF$Second == "#",]
ValidationBigramDF <- ValidationBigramDF[!ValidationBigramDF$Second == "@",]
ValidationBigramDF$Content <- iconv(ValidationBigramDF$Content, "latin1", "ASCII", sub = "") # Remove foreign characters
ValidationBigramDF$First <- iconv(ValidationBigramDF$First, "latin1", "ASCII", sub = "")
ValidationBigramDF$Second <- iconv(ValidationBigramDF$Second, "latin1", "ASCII", sub = "")
ValidationBigramDF[ValidationBigramDF == ""] <- NA # Remove blanks
ValidationBigramDF <- ValidationBigramDF[complete.cases(ValidationBigramDF),] # Remove NAs
ValidationBigramDF <- ValidationBigramDF[with(ValidationBigramDF, order(Second, First)),] # NEW Sort alphabetically
NumberWordsBeforeValidationBigramDF <- aggregate(ValidationBigramDF["Content"], by = ValidationBigramDF[c("First","Second")], FUN = length) # Aggregate by number for number of unique bigrams with common second word
NumberWordsBeforeValidationBigramDF <- aggregate(NumberWordsBeforeValidationBigramDF["First"], by = NumberWordsBeforeValidationBigramDF["Second"], FUN = length) # Aggregate by number for number of bigrams by second word
names(NumberWordsBeforeValidationBigramDF) <- c("Second", "Before")
save(ValidationBigramDF, file = "ValidationBigramDF.RData")
save(NumberWordsBeforeValidationBigramDF, file = "NumberWordsBeforeValidationBigramDF.RData")
ExecTime <- toc()
ValSeparateBigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValSeparateBigramTime
}
## Separate trigrams into 3 columns, removing concatenator "_"
## Calculate the number of words occurring before each word w completing a trigram
if(!exists("ValSeparateTrigramTime")) {
tic()
ValidationTrigramDF$First <- sapply(strsplit(ValidationTrigramDF$Content, "\\_"),"[",1)
ValidationTrigramDF$First <- tolower(ValidationTrigramDF$First)
ValidationTrigramDF <- ValidationTrigramDF[!ValidationTrigramDF$First == "#",]
ValidationTrigramDF <- ValidationTrigramDF[!ValidationTrigramDF$First == "@",]
ValidationTrigramDF$Second <- sapply(strsplit(ValidationTrigramDF$Content, "\\_"), "[", 2)
ValidationTrigramDF$Second <- tolower(ValidationTrigramDF$Second)
ValidationTrigramDF <- ValidationTrigramDF[!ValidationTrigramDF$Second == "#",]
ValidationTrigramDF <- ValidationTrigramDF[!ValidationTrigramDF$Second == "@",]
ValidationTrigramDF$Third <- sapply(strsplit(ValidationTrigramDF$Content, "\\_"), "[", 3)
ValidationTrigramDF$Third <- tolower(ValidationTrigramDF$Third)
ValidationTrigramDF <- ValidationTrigramDF[!ValidationTrigramDF$Third == "#",]
ValidationTrigramDF <- ValidationTrigramDF[!ValidationTrigramDF$Third == "@",]
ValidationTrigramDF$Content <- iconv(ValidationTrigramDF$Content, "latin1", "ASCII", sub = "") # Remove foreign characters
ValidationTrigramDF$First <- iconv(ValidationTrigramDF$First, "latin1", "ASCII", sub = "")
ValidationTrigramDF$Second <- iconv(ValidationTrigramDF$Second, "latin1", "ASCII", sub = "")
ValidationTrigramDF$Third <- iconv(ValidationTrigramDF$Third, "latin1", "ASCII", sub = "")
ValidationTrigramDF[ValidationTrigramDF == ""] <- NA # Remove blanks
ValidationTrigramDF <- ValidationTrigramDF[complete.cases(ValidationTrigramDF),] # Remove NAs
ValidationTrigramDF <- ValidationTrigramDF[with(ValidationTrigramDF, order(Third, Second, First)),] # Sort alphabetically
save(ValidationTrigramDF, file = "ValidationTrigramDF.RData")
ExecTime <- toc()
ValSeparateTrigramTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValSeparateTrigramTime
}
## Sort Validation bigram
ValidationBigramDF <- ValidationBigramDF[with(ValidationBigramDF, order(First,Second)),] # Re-order bigrams by first word
## Convert the unigram content to all lower case (assign the result, otherwise it is discarded)
ValidationUnigramDF$Content <- tolower(ValidationUnigramDF$Content)
## Clean up Unigram
ValidationUnigramDF$Content <- iconv(ValidationUnigramDF$Content, "latin1", "ASCII", sub = "")
## Get rid of blank content in Unigram
ValidationUnigramDF$Content[ValidationUnigramDF$Content == ""] <- NA
ValidationUnigramDF <- ValidationUnigramDF[complete.cases(ValidationUnigramDF),]
ValidationUnigramDF <- ValidationUnigramDF[with(ValidationUnigramDF, order(Content)),]
save(ValidationUnigramDF, file = "ValidationUnigramDF.RData")
## Make new Unigram variable
NumberWordsValidationUnigramDF <- aggregate(ValidationUnigramDF["Frequency"],
by = ValidationUnigramDF["Content"],
FUN = sum)
save(NumberWordsValidationUnigramDF, file = "NumberWordsValidationUnigramDF.RData")
# Make new dataframe for merge activity
if(!exists("NewValidationUnigramDF")) {
tic()
NewValidationUnigramDF <- data.frame(NumberWordsValidationUnigramDF$Content, NumberWordsValidationUnigramDF$Frequency)
names(NewValidationUnigramDF) <- c("First", "Unigram")
save(NewValidationUnigramDF, file = "NewValidationUnigramDF.RData")
ExecTime <- toc()
NewValUniTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + NewValUniTime
}
## Calculate KN probabilities for validation bigrams that exist in the training set
CommonValidationPKN <- merge(x = TrainingPKN, y = ValidationBigramDF, by = c("First", "Second"))
CommonValidationPKN <- CommonValidationPKN[c(-4,-7,-8,-10,-11)]
CommonValidationPKN <- CommonValidationPKN[c(3,1,2,4,5,6)]
names(CommonValidationPKN) <- c("Content", "First", "Second", "Lambda", "ContProb", "PKN")
## Isolate validation bigrams not in the training set
if(!exists("ValidationResidual")) {
tic()
ValidationCheck <- CommonValidationPKN
ValidationResidual <- setdiff(ValidationBigramDF$Content, ValidationCheck$Content)
ValidationResidual <- as.data.frame(ValidationResidual, stringsAsFactors = FALSE)
names(ValidationResidual) <- "Content"
ValidationResidual <- merge(x = ValidationResidual, y = ValidationBigramDF, by = "Content")
ValidationResidual <- ValidationResidual[-2]
save(ValidationResidual, file = "ValidationResidual.RData")
ExecTime <- toc()
ValResidualTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValResidualTime
}
## Calculate ValidationResidual lambdas for those first words existing in Training set
if(!exists("ValidationLambdaTrue")) {
tic()
ValidationLambdaCheck <- ValidationResidual$First
ValidationLambdaTrue <- ValidationLambdaCheck[ValidationLambdaCheck %in% NewAfterTrainingBigramDF$First]
ValidationLambdaTrue <- as.data.frame(ValidationLambdaTrue, stringsAsFactors = FALSE)
names(ValidationLambdaTrue) <- "First"
ValidationLambdaTrue <- merge(x = ValidationLambdaTrue, y = NewAfterTrainingBigramDF, by = "First")
ValidationLambdaTrue <- ValidationLambdaTrue[!duplicated(ValidationLambdaTrue),]
save(ValidationLambdaTrue, file = "ValidationLambdaTrue.RData")
ExecTime <- toc()
TrueLambdaTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + TrueLambdaTime
}
## Calculate ValidationResidual lambdas for first words not existing in Training set
if(!exists("ValidationResidualLambda")) {
tic()
ValidationLambdaFalse <- ValidationLambdaCheck[- which(ValidationLambdaCheck %in% ValidationLambdaTrue$First)]
ValidationLambdaFalse <- as.data.frame(ValidationLambdaFalse, stringsAsFactors = FALSE)
names(ValidationLambdaFalse) <- "First"
ValidationLambdaFalseZero <- as.vector(matrix(2.422684e-02, nrow = length(ValidationLambdaFalse$First)))
ValidationLambdaFalse <- cbind(ValidationLambdaFalse, ValidationLambdaFalseZero)
names(ValidationLambdaFalse) <- c("First", "Lambda")
ValidationResidualLambda <- rbind.data.frame(ValidationLambdaTrue, ValidationLambdaFalse)
ValidationResidualLambda <- ValidationResidualLambda[!duplicated(ValidationResidualLambda),]
save(ValidationResidualLambda, file = "ValidationResidualLambda.RData")
ExecTime <- toc()
ValLambdaTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValLambdaTime
}
## Calculate the continuation probabilities for second words existing in the training set
if(!exists("ValidationContTrue")){
tic()
ValidationContCheck <- ValidationResidual$Second
ValidationContTrue <- ValidationContCheck[ValidationContCheck %in% NumberWordsBeforeTrainingBigramDF$Second]
ValidationContTrue <- as.data.frame(ValidationContTrue, stringsAsFactors = FALSE)
names(ValidationContTrue) <- "Second"
ValidationContTrue <- merge(x = ValidationContTrue, y = NumberWordsBeforeTrainingBigramDF, by = "Second")
ValidationContTrue <- ValidationContTrue[!duplicated(ValidationContTrue),]
save(ValidationContTrue, file = "ValidationContTrue.RData")
ExecTime <- toc()
ContTrueTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ContTrueTime
}
## Calculate the continuation probabilities for second words not in the training set
if(!exists("ValidationResidualContProb")){
tic()
ValidationContFalse <- ValidationContCheck[- which(ValidationContCheck %in% ValidationContTrue$Second)]
ValidationContFalse <- as.data.frame(ValidationContFalse, stringsAsFactors = FALSE)
names(ValidationContFalse) <- "Second"
ValidationContFalseZero <- as.vector(matrix(4.915e-06, nrow = length(ValidationContFalse$Second)))
ValidationContFalse <- cbind(ValidationContFalse, ValidationContFalseZero)
names(ValidationContFalse) <- c("Second", "ContProb")
ValidationContFalse <- ValidationContFalse[!duplicated(ValidationContFalse),]
save(ValidationContFalse, file = "ValidationContFalse.RData")
## Make continuation probabilities for all second words in Validation set
ValidationResidualContProb <- rbind.data.frame(ValidationContTrue, ValidationContFalse)
ValidationResidualContProb <- ValidationResidualContProb[!duplicated(ValidationResidualContProb),]
save(ValidationResidualContProb, file = "ValidationResidualContProb.RData")
ExecTime <- toc()
ValContFalseTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValContFalseTime
}
## Calculate KN probabilities for bigrams not in the training set
if(!exists("ValidationResidualPKN")){
tic()
ValidationResidualPKN <- merge(x = ValidationResidual, y = ValidationResidualLambda, by = "First")
ValidationResidualPKN <- merge(x = ValidationResidualPKN, y = ValidationResidualContProb, by = "Second")
ValidationResidualPKN <- ValidationResidualPKN[c("Content","First","Second","Lambda","ContProb")]
ValidationPKNProduct <- ValidationResidualPKN$Lambda * ValidationResidualPKN$ContProb
ValidationResidualPKN <- cbind(ValidationResidualPKN, ValidationPKNProduct)
ValidationResidualPKN <- ValidationResidualPKN[with(ValidationResidualPKN, order(First, Second)),]
names(ValidationResidualPKN) <- c("Content","First","Second","Lambda","ContProb", "PKN")
save(ValidationResidualPKN, file = "ValidationResidualPKN.RData")
ExecTime <- toc()
ValidationPKNTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValidationPKNTime
}
## Make full PKN matrix for Validation set
if(!exists("ValidationPKN")){
tic()
ValidationPKN <- rbind(CommonValidationPKN, ValidationResidualPKN)
save(ValidationPKN, file = "ValidationPKN.RData")
ExecTime <- toc()
ValPKNTime <- ExecTime$toc - ExecTime$tic
TotalTime <- TotalTime + ValPKNTime
}
## CALCULATE PERPLEXITY FOR VALIDATION SET, OPTIMIZING FOR d
ValidationPKN$LogPKN <- -log2(ValidationPKN$PKN)
ValidationEntropy <- sum(ValidationPKN$LogPKN) / length(ValidationPKN$LogPKN)
ValidationPerplexity <- 2^ValidationEntropy
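## Illustrative aside (hypothetical probabilities, not from the data):
## perplexity is 2^H where H is the mean negative log2 probability, e.g.
## p_toy <- c(0.5, 0.25, 0.25)
## 2^(mean(-log2(p_toy)))   # = 2^(5/3), about 3.17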
## MAKE BACKOFF MODEL STARTING WITH TRIGRAM, THEN USING KNESER-NEY
## Make TrainingTrigramDF Check vector to check against
TrainingTrigramDF$Check <- paste(TrainingTrigramDF$First, TrainingTrigramDF$Second, sep = "_")
Sentence <- "Research into effects of poverty"
## Remove punctuation
punct <- '[]\\?!\"\'#$%&(){}+*/:;,._`|~\\[<=>@\\^-]'
Sentence <- gsub(punct, "", Sentence)
## To lower case
Sentence <- tolower(Sentence)
## Extract trigram words
SentenceWords <- unlist(strsplit(Sentence, " "))
## If there is an available bigram at the end of the fragment, isolate it
if(length(SentenceWords) >= 2) {
TrigramSearch <- c(SentenceWords[length(SentenceWords)-1], SentenceWords[length(SentenceWords)])
TrigramSearch <- paste(TrigramSearch[1], TrigramSearch[2], sep ="_")
} else {
TrigramSearch <- NULL
}
## Extract last word of final bigram
if(length(SentenceWords) >= 1) {
BigramSearch <- SentenceWords[length(SentenceWords)]
} else {
BigramSearch <- NULL
}
TrigramCheck <- subset(TrainingTrigramDF, Check == TrigramSearch)
## Check to see if the bigram from the sentence exists in any trigrams in the Training set
if(nrow(TrigramCheck) > 0) {
TrigramCheck <- TrigramCheck[TrigramCheck$Frequency == max(TrigramCheck$Frequency),]
}
## Make BigramCheck
BigramCheck <- subset(TrainingPKN, First == BigramSearch)
BigramCheck <- BigramCheck[, -(5:8)]
names(BigramCheck) <- c("First", "Second", "Content", "Frequency", "PKN")
BigramCheck <- BigramCheck[with(BigramCheck, order(Frequency, PKN, decreasing = TRUE)),] # Sort by Frequency, then PKN (order() takes multiple keys directly, not c())
BigramCheck <- BigramCheck[complete.cases(BigramCheck),]
## SCENARIO I:
## Trigram search produces results
## There is only one trigram produced with max frequency
if(nrow(TrigramCheck) > 0) { # Trigram search produces results
if(length(TrigramCheck$Frequency) == 1) { # Only one trigram produced with max frequency
Result <- TrigramCheck$Third
print(paste("There was at least one trigram, leading to the word:", Result))
}
}
## SCENARIO II:
## Trigram search produces results
## There are multiple trigrams produced with max frequency
if(nrow(TrigramCheck) > 0) { # Trigram search produces results
if(length(TrigramCheck$Frequency) > 1) { # More than one trigram produced with max frequency
BigramLookup <- paste(TrigramCheck$Second, TrigramCheck$Third, sep = "_")
BigramLookup <- as.data.frame(BigramLookup, stringsAsFactors = FALSE)
names(BigramLookup) <- "Content"
BigramLookupTest <- merge(x = BigramCheck, y = BigramLookup, by = "Content")
BigramLookupTest <- BigramLookupTest[BigramLookupTest$Frequency == max(BigramLookupTest$Frequency),]
BigramLookupTest <- BigramLookupTest[with(BigramLookupTest, order(PKN, decreasing = TRUE)),]
Result <- BigramLookupTest$Second[1]
print(paste("There was at least one trigram, resulting in the word:", Result))
}
}
## SCENARIO III:
## Trigram search does not produce any results
## There is a relevant bigram
if(nrow(TrigramCheck) == 0){ # Trigram search produces no results
if(nrow(BigramCheck) > 0) { # Bigram search produces results
BigramCheckAdjusted <- BigramCheck[BigramCheck$Frequency == max(BigramCheck$Frequency),]
BigramCheckAdjusted <- BigramCheckAdjusted[with(BigramCheckAdjusted, order(PKN, decreasing = TRUE)),]
BigramCheckAdjusted <- BigramCheckAdjusted[complete.cases(BigramCheckAdjusted),]
Result <- BigramCheckAdjusted$Second[1]
print(paste("There were no trigrams, but bigrams suggested:", Result))
}
}
## SCENARIO IV
## Trigram search does not produce any results
## There is no relevant bigram
if(nrow(TrigramCheck) == 0) { # Trigram search produces no results
if(nrow(BigramCheck) == 0) { # Bigram search produces no results
TrainingUnigramDF <- TrainingUnigramDF[with(TrainingUnigramDF, order(Frequency, decreasing = TRUE)),]
Result <- TrainingUnigramDF$Content[1]
print(paste("There were no trigrams or bigrams, so we chose the most frequent Unigram:", Result))
}
}
|
85b09b86b02005f4e2c3160ef2d0ae8689c0627a | 93c7ad720188eec7a3c17b79f5133d9157b530b5 | /R_lecture/Day_06/Day6_1(Web).R | b3fe8679a475c172a3ba2cfb66448d0c07e39a54 | [] | no_license | won-spec/TIL | e1d51d7d1854015b6061d65d9481e8d0fda91089 | 2eee5d96f79b9342cd14dc392e84b226470d6bcd | refs/heads/master | 2022-12-13T01:00:20.591113 | 2019-12-14T11:55:10 | 2019-12-14T11:55:10 | 226,786,238 | 0 | 0 | null | 2022-12-08T03:18:22 | 2019-12-09T04:42:52 | Jupyter Notebook | UTF-8 | R | false | false | 1,902 | r | Day6_1(Web).R | # Week 2
#Web crawling and scraping
#Building data with an external API (JSON)
#What does a computer need for data communication? => a LAN card (NIC : Network Interface Card)
#Connecting several computers through their NICs like this creates a
#network.
#LAN ( Local Area Network )
#LAN of LANs / a network of networks => the Internet (the physical framework)
#Various services are defined and used on top of it:
#=> service for transferring files : FTP
#=> service for exchanging mail : SMTP
#=> service for publishing content so clients can view it : HTTP
#   HTTPS : HTTP service with security (secure) added
#Protocol? => an agreement, a set of rules, for exchanging data
#A language is itself a kind of protocol
#########################################################################
#A web service fundamentally has a CS (Client-Server) architecture.
#To build a web system:
#1. Download a web server program (Tomcat): Tomcat 7, 64-bit
#2. An IDE (development tool/environment) is needed to write the HTML, CSS
#   and JavaScript served to clients, plus the server-side programs
#   Download Eclipse => Java development tool / Enterprise Edition
#After deploying the project through the server-side program, the client connects as follows / Tomcat's port number : 8080
# URL : http://<IP address>:<port number>/<project root>/<file name>
#ex) http://60.162.125.23:8080/testabc/test.html
#ex) http://localhost:8080/testabc/test.html => look it up on the local machine
#Create a project and files in WebStorm
#=> a web server is needed; WebStorm plays the same role as Tomcat.
#WebStorm's port number is 63342
#=> it configures everything automatically and publishes to the web, even opening the client browser and connecting
c8f49b74e87b3da2a706945ae0a614348a730b6c | 660b3ce26917f021fbfa611d1a7883cdf4ce47a2 | /capeb/api/data/stats/generate_csv.r | 632c2ca4cae968ddfd22b35a2e104a7bf1e5c45e | [
"MIT"
] | permissive | mok33/HyblabDDJ2018 | 2edea7096c347f1d9cc32c09b200c793ad66a3cc | a8734a788fca04b31ed94d4d247c19e13a6ef8f4 | refs/heads/master | 2021-09-06T18:08:44.862322 | 2018-02-09T13:31:17 | 2018-02-09T13:31:17 | 117,953,438 | 0 | 1 | null | 2018-01-18T08:17:13 | 2018-01-18T08:17:12 | null | ISO-8859-1 | R | false | false | 13,147 | r | generate_csv.r | data = read.csv('../raw/CAPEBPaysDelaLoire_2014-2017.csv', header=TRUE, sep=";", encoding ="UTF-8")
data2 = read.csv('Marchés_publics2017.csv', header=TRUE, sep=",", encoding ="UTF-8")
attach(data2)
max(Oui, na.rm = T)
min(Oui, na.rm = T)
attach(data)
data$Code.postal
mean(Oui, na.rm = T)
getAttYear <- function(y, l_atr){
l_atr = c(l_atr, "X..Date")
return(subset(data, X..Date == y, select = l_atr))
}
d17
conj = matrix(nrow = 0, ncol = 2)
colnames(conj) = c("EPCI", "Conjoncture.calculée")
epcis
for(epci in epcis){
epci_set = subset(d17, intercommunalite.2017_EPCI==epci)
conj = rbind(conj, c(epci, mean(epci_set$Conjoncture.calculée, na.rm = T)))
}
write.csv(file="ConjonctureEPCI.csv",x=conj,row.names=FALSE,quote = FALSE,fileEncoding ="UTF-8")
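## Note: a compact alternative to the loop above (returns a data frame rather
## than a character matrix, so the result is not byte-identical) would be:
## conj <- aggregate(Conjoncture.calculée ~ intercommunalite.2017_EPCI, data, mean)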
mean(conj[,2], na.rm = T)
epci_set[,0]
conj
data$Conjoncture.calculée
data$X..Date = as.numeric(substring(data$X..Date, 7, 10))
annee = unique(data$X..Date)
annee
table(data$Développement.durable)
DD = table(subset(data, data$X..Date == 2016,select = c("intercommunalite.2017_EPCI", "Développement.durable")))
colnames(DD)[1] = "Pas de réponse"
DD
#write.csv(file="Développement_durable2016.csv",x=DD,quote = FALSE,fileEncoding ="UTF-8")
data$Inter
data$Marchés.publics
MP = table(subset(data, data$X..Date == 2017,select = c("intercommunalite.2017_EPCI","Marchés.publics")))
colnames(MP)[1] = "Pas de réponse"
MP
#write.csv(file="Marchés_publics2017.csv",x=MP, quote = FALSE,fileEncoding ="UTF-8")
data$Zone.intervention
ZI = aggregate( Zone.intervention ~ intercommunalite.2017_EPCI, getAttYear(2017, c("intercommunalite.2017_EPCI", "Zone.intervention")), mean)
ZI[,1] = as.numeric(ZI[,1])
ZI[,2] = as.numeric(ZI[,2])
zi = matrix(nrow = nrow(ZI), ncol=2)
zi[,1] = as.numeric(ZI[,1])
zi[,2] = as.numeric(ZI[,2])
colnames(zi) = c("Epci", "Zone intervention moyenne")
zi
#write.csv(file="Zone_intervention2017.csv",x=ZI,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
AC = table(subset(data, data$X..Date == 2017,select = c("intercommunalite.2017_EPCI","Activité")))
AC
#write.csv(file="Activité2017.csv",x=AC,quote = FALSE,fileEncoding ="UTF-8")
data$intercommunalite.2017_EPCI
contrats = aggregate( cbind(CDD,CDI,Apprentis,Intérimaires) ~ intercommunalite.2017_EPCI + X..Date,subset(data,select = c("intercommunalite.2017_EPCI", "X..Date","CDD", "CDI")), FUN=sum)
#write.csv(file="Contrats_2014-2017.csv",x=contrats,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
data$CA.réalisé
AC = table(subset(data,select = c("intercommunalite.2017_EPCI","X..Date","CA.réalisé")))
AC
#write.csv(file="CA_2014-2017.csv",x=AC, quote = FALSE,fileEncoding ="UTF-8")
data$CA.Batiments.neufs + data$CA.Logements.neufs
epcis = unique(data$intercommunalite.2017_EPCI)
acts = unique(data$Activité)
acts
Bneufs = unique(data$CA.Batiments.neufs)
Lneufs = unique(data$CA.Logements.neufs)
evo_car = unique(data$Evolution.Carnet.de.commandes)
data$CA.Réhabilitation.entretien
evo_car = c('A la baisse', 'A la hausse', 'Stable')
cols = c('EPCI', 'Activité', 'Freq', 'Marché Principale', 'Freq', 'Evolution.Carnet.de.commandes' , 'Freq', 'Contrat', 'Moyenne')
length(cols)
sunburst = matrix(nrow = 0, ncol = 3)
subset(data, X..Date == 2017, select = c("intercommunalite.2017_EPCI","X..Date","CA.réalisé"))
colnames(sunburst) = c('ecpi', 'chemin', 'count')
type_mp = c('CA.Logements.neufs', 'CA.Batiments.neufs', 'CA.Réhabilitation.entretien')
contrats = c('CDD', 'CDI', 'Apprentis', 'Intérimaires')
sl = c('intercommunalite.2017_EPCI', 'Activité', type_mp, 'Evolution.Carnet.de.commandes', 'CDI', 'CDD', 'Apprentis', 'Intérimaires')
epcis
for(epci in epcis){
epci_set = subset(data, intercommunalite.2017_EPCI==epci, select=sl)
for(act in acts){
act_set = subset(epci_set, Activité == act, select=sl)
mp_neuf_bat = subset(act_set, CA.Batiments.neufs > CA.Réhabilitation.entretien & CA.Batiments.neufs > CA.Logements.neufs, select=sl[6:10]) # vectorised & (not &&) so subset() filters row-wise
mp_neuf_log = subset(act_set, CA.Logements.neufs > CA.Batiments.neufs & CA.Logements.neufs > CA.Réhabilitation.entretien, select=sl[6:10])
mp_entretien = subset(act_set, CA.Réhabilitation.entretien >= CA.Logements.neufs & CA.Réhabilitation.entretien >= CA.Batiments.neufs, select=sl[6:10])
id_mp = 0
#for(mp in list(mp_neuf_bat, mp_neuf_log, mp_entretien)){
#if(nrow(mp) > 0){
for(car in evo_car){
car_set = subset(act_set, Evolution.Carnet.de.commandes == car, select=sl[7:10])
if(nrow(car_set) > 0){
for(c in contrats){
eff = car_set[,c]
chemin = c(act, car, c)
#freq = c(nrow(act_set), nrow(mp), nrow(car_set), mean(eff[eff >= 1], na.rm = TRUE))
row = c(epci, paste(chemin,collapse="&"), mean(eff[eff >= 1], na.rm = TRUE))
#row = c(epci, act, , toString(type_mp[id_mp]), , car, , c, )
sunburst = rbind(sunburst, row)
}
}
}
#}
#id_mp = id_mp + 1
#}
}
}
sunburst
eff
chemin
write.csv(file="sunburst.csv",x=sunburst,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
length(car_set)
mean(car_set[,c])
sunburst
mps[2]
mp_neuf_bat
epci
nrow(act_set)
mp_neuf_log
mp
nrow(mp_neuf_bat)
nrow(mp_entretien)
length(mp)
act_set
sl[6:10]
nrow(mp_neuf_bat)
act_set[,'CDD']
mp_neuf_bat[,'Activité']
id_mp
sunburst
colnames(data)
d =subset(data, Evolution.Carnet.de.commandes == 'A la baisse', select=colnames(data))
d$CDD
d[,'CDI']
length(car_set[,'Activité'])
length(mp_neuf_bat)
mp[,'CDD']
d[,'Code.postal']
act
data$CDD
#Region-wide stats
#business climate ("conjoncture")
#investment
#intervention zone
#public procurement ("marchés publics")
stat_region = matrix(nrow = 0, ncol = 4)
colnames(stat_region) = c('Conjoncture.calculée.Moy', 'Investissement.Moy.Oui', 'Zone.intervention.Moy', 'Marchés.publics.Max')
cc = mean(data$Conjoncture.calculée, na.rm=T)
im = table(data$Investissement)[-1]
im = unname(im/sum(im))
dm = mean(data$Zone.intervention, na.rm =T)
mp = table(data$Marchés.publics)[-1]
mp
mp = unname(mp/sum(mp))
mp
stat_region = rbind(c(cc, im[2], dm, mp[2]),stat_region)
write.csv(file="stats_region.csv",x=stat_region,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
sum(sum(data$Zone.intervention - mean(data$Zone.intervention, na.rm = T))^2)
conj = aggregate(Conjoncture.calculée ~ intercommunalite.2017_EPCI,subset(data,select = c("intercommunalite.2017_EPCI", "Conjoncture.calculée")), FUN=mean)
contrats
table(data$Sujets.intérêt)[-1] + table(data$Sujets.intérêt_1)[-1]
table(data[,316])
m = c()
table(data[307,])
colnames(data)
for(i in 307:316){
for(d in getAttYear(2017, colnames(data))[,i]){
m = c(d, m)
}
}
reps = unique(m)
reps = reps[-1]
rep
dd = matrix(nrow = 0, ncol = 3)
colnames(dd) = c("epci", "Aspects", "Count")
d17 = getAttYear(2017, colnames(data))
colnames(d17)[307:316]
table(m)
for(epci in epcis){
sub = subset(d17, intercommunalite.2017_EPCI == epci, select = colnames(d17))
m = c()
for(i in 307:316){
for(d in sub[,i]){
if(d != ""){
m = c(d, m)
}
}
}
tab = table(m)
tab = (tab/sum(tab)) * 100
i = 1
for(v in tab){
dd = rbind(dd, c(epci, names(tab)[i], tab[i]))
i = i + 1
}
}
m
dd
tab
m
dd
sub = subset(d17, intercommunalite.2017_EPCI == 200071934, select = colnames(d17))
sub[, 'intercommunalite.2017_EPCI']
m = c()
for(i in 307:316){
for(d in data[,i]){
m = c(d, m)
}
}
m
names(tab)[1]
data$Suje
dd
colnames(data)[316]
table(data[,315])
dd
write.csv(file="DD_2017.csv",x=dd,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
annee
data
annee
epcis = unique(data$intercommunalite.2017_EPCI)
data$X..Date
annee
acts = unique(data$Activité)
act
subset(data, X..Date == 2014)$X..Date
evo_nb = matrix(nrow = 0, ncol=length(acts) + 2)
colnames(evo_nb) = c('epci', 'annee', levels(acts))
for(epci in epcis){
sepci = subset(data, intercommunalite.2017_EPCI == epci, select = c('X..Date','Activité', 'Nb.recr..envisagés'))
for(a in annee){
r = c(epci, a)
sa = subset(sepci, X..Date == a)
for(act in acts){
sact = subset(sa, Activité == act)
r = c(r, sum(sact$Nb.recr..envisagés, na.rm = T))
}
evo_nb = rbind(evo_nb, r)
}
}
annee
evo_nb
act
row
data$Nb.recr..envisagés
write.csv(file="recrutement_Activité2014_2017.csv",x=evo_nb,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
write.csv(evo_nb)
ids = match(c('Freins.MP', 'Freins.MP_1', 'Freins.MP_2'), colnames(data))
cloud = c()
for(id in ids){
for(r in data[,id]){
cloud = c(cloud, r)
}
}
cloud_e[length(cloud_e) == 0]
words = unique(cloud)
words
cloud_epci = matrix(nrow=0, ncol=3)
colnames(cloud_epci) = c('epci', 'Freins.MP', 'Freq')
for(epci in epcis){
sub = subset(data, intercommunalite.2017_EPCI == epci, select = c('Freins.MP', 'Freins.MP_1', 'Freins.MP_2'))
cloud_e = c()
for(fr in c('Freins.MP', 'Freins.MP_1', 'Freins.MP_2')){
for(r in sub[,fr]){
if(r != ""){
cloud_e = c(cloud_e, r)
}
}
}
i = 1
tab = table(cloud_e)
#rm_empty = match("", names(tab))
tab = tab/(sum(tab))
for(f in tab){
cloud_epci = rbind(cloud_epci, c(epci, names(tab)[i], f))
i = i + 1
}
}
length("")
tab
cloud_epci
table(cloud_e)
cloud_e
tab
match(c(""), names(tab))
rm_empty
cloud_epci
write.csv(file="FreinsMP.csv",x=cloud_epci,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
names(tab)[1] = ""
data$Freins.MP_2
cloud = c(data$Freins.MP,data$Freins.MP_1,data$Freins.MP_2)
table(cloud)
data$Investissement
data
data$Difficultés.MP_1
cloud_epci = matrix(nrow=0, ncol=3)
colnames(cloud_epci) = c('epci', 'Difficultés.MP', 'Freq')
for(epci in epcis){
sub = subset(data, intercommunalite.2017_EPCI == epci, select =c('Difficultés.MP', 'Difficultés.MP_1', 'Difficultés.MP_2'))
e = c()
for(fr in c('Difficultés.MP', 'Difficultés.MP_1', 'Difficultés.MP_2')){
for(r in sub[,fr]){
e = c(e, r)
}
}
i = 1
tab = table(e)
print(sum(tab/sum(tab)))
#rm_empty = match("", names(tab))
for(f in tab){
cloud_epci = rbind(cloud_epci, c(epci, names(tab)[i], f/sum(tab)))
i = i + 1
}
}
cloud_e
cloud_epci
length("")
tab
cloud_epci
table(cloud_e)
cloud_e
tab
match(c(""), names(tab))
rm_empty
cloud_epci
write.csv(file="DifficultésMP.csv",x=cloud_epci,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
data$Difficultés.MP
aspects = unique(m)
aspects = aspects[3:10]
aspects
match("Interet.Qualité.des.matériaux_2", colnames(data))
aspects = c('Accessibilité', 'Assainissement', 'Déchets', 'Eco.construction', 'EcoEnergie', 'EnR', 'Qualité.de.l.U.0092.air', 'Qualité.de.l.U.0092.eau', 'Qualité.des.matériaux')
interets = matrix(nrow = 0, ncol = 9)
for(asp in aspects){
merg = c()
for(pl in c('', '_1', '_2')){
for(r in data[, paste("Interet.", asp,pl ,sep = "")]){
merg = c(merg, r)
}
}
interets = cbind(interets, merg)
}
aspects
interets
data$Interet.Qualité.des.matériaux_2
dd_i = matrix(nrow = 0, ncol = 4)
colnames(dd_i) = c('epci', 'aspect', 'interet', 'freq')
d17 = subset(data, X..Date == 2017)
epcis
for(epci in epcis){
sub = subset(d17, intercommunalite.2017_EPCI == epci)
for(asp in aspects){
merg = c()
for(pl in c('', '_1', '_2')){
for(r in sub[, paste("Interet.", asp, pl ,sep = "")]){
if(r != ""){
merg = c(merg, r)
}
}
}
tab = table(merg)
tab = (tab/sum(tab)) * 100
i = 1
for(r in tab){
if(r > 0){
dd_i = rbind(c(epci, asp, names(tab)[i], r), dd_i)
}
i = i + 1
}
}
}
write.csv(file="DD_INTERET_2017.csv",x=dd_i,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
dd_i
names(tab)[2] == ""
sub$Code.postal
data[, paste("Interet.", asp, pl ,sep = "")]
names(merg)[0]
pl
tab
colnames(data)[307]
data$De
d17 = subset(data, X..Date == 2014)
d17$Re
table(d17$Recrutement.envisagé)
rec_env = matrix(nrow = 0, ncol = 3)
colnames(rec_env) = c('Epci', 'Année', 'Oui')
annee
for(epci in epcis){
sub = subset(data, intercommunalite.2017_EPCI == epci, select = c('X..Date', 'Recrutement.envisagé'))
for(a in annee){
s = subset(sub, X..Date == a, select = c('Recrutement.envisagé'))
tab = table(s[,1])
print(tab)
if(length(tab) >= 3){
tab = tab[-1]
}
tab = tab/sum(tab)
rec_env = rbind(rec_env, c(epci,a , tab[2] * 100))
}
}
write.csv(file="EvoRecrutementEnv2014_2017.csv",x=rec_env,row.names=FALSE, quote = FALSE,fileEncoding ="UTF-8")
s
tab
rec_env
tab
sub[,1]
|
22884360601952298f5640aebf5fe93fed8e395a | 9c8cf90abe2d29d2301852aa8b9306d9c3f87983 | /run_analysis.R | 49f45af0b91b998f0025a2d86a77c4e1edd8b410 | [] | no_license | dchang99/ActivityRecognition | a435ef97eaad60732566b29a7d25122f7b57225b | 4a51efe09c2743e98fa183bf8354c3fbdc0eba0e | refs/heads/master | 2020-03-27T23:00:52.908714 | 2014-11-22T13:51:52 | 2014-11-22T13:51:52 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,760 | r | run_analysis.R | run_analysis <- function() {
# Read variable names
headers <- read.table("features.txt", col.names=c("id", "name"),
colClasses=c("integer", "character"))
# Read test and train data sets
dataTest <- read.table("./test/X_test.txt", colClasses="numeric")
dataTrain <- read.table("./train/X_train.txt", colClasses="numeric")
# Assign variable names to corresponding column names of test
# and train data sets
names(dataTest) <- headers$name
names(dataTrain) <- headers$name
# Read activity identifiers
activitiesTest <- read.table("./test/y_test.txt", col.names="activityId",
colClasses="integer")
activitiesTrain <- read.table("./train/y_train.txt", col.names="activityId",
colClasses="integer")
# Read subject identifiers
subjectTest <- read.table("./test/subject_test.txt", col.names="subjectId",
colClasses="integer")
subjectTrain <- read.table("./train/subject_train.txt", col.names="subjectId",
colClasses="integer")
# Bind subject, activity, and motion data
dataTest <- cbind(subjectTest, activitiesTest, dataTest)
dataTrain <- cbind(subjectTrain, activitiesTrain, dataTrain)
# Filter columns containing mean() and std() values
colSet <- c("subjectId", "activityId",
grep("(mean|std)\\(\\)", headers$name,
ignore.case=TRUE, value=TRUE))
dataTest <- dataTest[, colSet]
dataTrain <- dataTrain[, colSet]
# Combine test and train data
data <- rbind(dataTest, dataTrain)
# Remove parentheses from column names for compatibility
names(data) <- gsub("()", "", names(data), fixed=TRUE)
# Expand "t" and "f" in variable names to "time" and "freq"
# for readability
names(data) <- gsub("^t", "time", names(data))
names(data) <- gsub("^f", "freq", names(data))
# Read activity labels
labels <- read.table("activity_labels.txt", col.names=c("id", "name"),
colClasses=c("integer", "character"))
# Assign activity labels as level names for activityId
data$activityId <- as.factor(data$activityId)
levels(data$activityId) <- labels$name
# Output mean of each variable grouped by subject and activity
aggregate(data[,-(1:2)], FUN=mean,
by=list(subjectId=data$subjectId, activityId=data$activityId))
}
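## Example usage (assumes features.txt, activity_labels.txt and the test/ and
## train/ folders from the source dataset are in the working directory):
## tidyMeans <- run_analysis()
## write.table(tidyMeans, "tidy_means.txt", row.names = FALSE)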
|
e05c1913ffc86bf55867cb9ef9f235c79e1dbfe0 | f05ce58140ec9316e2449cda141d3df089fdd363 | /src/main/java/time_series/m5.7/m5.7.data.R | bea3148665835f7efcf586f18c85cd70e8f31399 | [] | no_license | zhekunz2/Stan2IRTranslator | e50448745642215c5803d7a1e000ca1f7b10e80c | 5e710a3589e30981568b3dde8ed6cd90556bb8bd | refs/heads/master | 2021-08-05T16:55:24.818560 | 2019-12-03T17:41:51 | 2019-12-03T17:41:51 | 225,680,232 | 0 | 0 | null | 2020-10-13T17:56:36 | 2019-12-03T17:39:38 | Java | UTF-8 | R | false | false | 693 | r | m5.7.data.R | kcal_per_g <-
c(0.49, 0.47, 0.56, 0.89, 0.92, 0.8, 0.46, 0.71, 0.68, 0.97, 0.84, 0.62, 0.54, 0.49, 0.48,
0.55, 0.71)
neocortex_perc <-
c(55.16, 64.54, 64.54, 67.64, 68.85, 58.85, 61.69, 60.32, 69.97, 70.41, 73.4, 67.53,
71.26, 72.6, 70.24, 76.3, 75.49)
log_mass <-
c(0.667829372575655, 1.65822807660353, 1.68082790852077, 0.920282753143692, -0.385662480811985,
-2.12026353620009, -0.755022584278033, -1.13943428318836, 0.438254930931155, 1.17557332980424,
2.50959926237837, 1.68082790852077, 3.56896915744138, 4.37487613064504, 3.70721041079866, 3.49983535155915,
4.00642368084963)
n <- 17
a_mean <- 0
a_scale <- 100
bn_mean <- 0
bn_scale <- 1
bm_mean <- 0
bm_scale <- 1
sigma_scale <- 0.5
|
299d7cbf59936e59c54e3b5f7bcaf0622cf0c444 | f1ad76fa058a2235d3adb05ccefc6b262570478e | /man/auto_name.Rd | b86c7b3b36e4ab5e5d9ca45b6b414c7f3eae58fd | [
"CC-BY-3.0",
"MIT"
] | permissive | Ostluft/rOstluft.plot | 863f733b949dd37e5eaf1d8c1e197596242ef072 | fbed7ce639ae6778e24c13773b73344942ca7dc2 | refs/heads/master | 2022-11-16T12:56:44.199402 | 2020-03-23T11:12:02 | 2020-03-23T11:12:02 | 180,803,285 | 3 | 0 | null | null | null | null | UTF-8 | R | false | true | 961 | rd | auto_name.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/utils.R
\name{auto_name}
\alias{auto_name}
\title{Ensure that all elements of a list of expressions are named}
\usage{
auto_name(exprs)
}
\arguments{
\item{exprs}{A list of expressions.}
}
\value{
A named list of expressions
}
\description{
Nearly identical to \code{\link[rlang:exprs_auto_name]{rlang::exprs_auto_name()}}, but \code{\link[rlang:as_name]{rlang::as_name()}}
is used instead of \code{\link[rlang:as_label]{rlang::as_label()}}. For string items the string will
be returned without wrapping in double quotes. The naming of functions and
formulas is not optimal; it is better to name them manually.
}
\examples{
funs <- list(
"mean",
function(x) stats::quantile(x, probs = 0.95),
~ stats::quantile(., probs = 0.95),
q95 = function(x) stats::quantile(x, probs = 0.95)
)
auto_name(funs)
# exprs_autoname adds double quotes to strings
rlang::exprs_auto_name(funs)
}
|
d244b4843ce59abe26633aadeea36e0d5b89bb76 | 3ac522af39cc180a268485fa59a448c9a4debb49 | /CS 6313 - Statistical Methods for Data Science/inclass/sep5.R | e965e98966ee785d234bfcab3063cf7f78a08be0 | [] | no_license | Nandangonchikar/Masters-CS-UT-Dallas | 5a27fa19082112a0a083c048da9da2f26f09a4c8 | 0047d092bd2d3b08859fc5651898f20709746e2c | refs/heads/master | 2022-03-19T06:41:38.255850 | 2019-12-14T23:31:07 | 2019-12-14T23:31:07 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,887 | r | sep5.R | ########################
# sample size computation #
########################
alpha <- 0.05
epsilon <- 0.03
# > (qnorm((1-alpha/2))/(2*epsilon))^2
# [1] 1067.072
# >
# round up to the next integer
# > ceiling((qnorm((1-alpha/2))/(2*epsilon))^2)
# [1] 1068
# >
########################
# Binomial distribution #
########################
# simulate 1 draw of X ~ Binomial (size, prob)
y <- rbinom(1, 10, 0.25)
# simulate 100 draws of X
x <- rbinom(100, 10, 0.25)
# > head(x)
# [1] 2 0 2 4 1 1
# > tail(x)
# [1] 5 3 5 3 3 1
# >
# P(X = 5) --- PMF
dbinom(5, 10, 0.25)
# P(X <= 5) --- CDF
pbinom(5, 10, 0.25)
########################
# Normal distribution #
########################
# X ~ Normal (0, 1)
# simulate 1000 draws of X
?rnorm
x <- rnorm(1000) # by default, mean = 0, sd = 1
# make a histogram of draws and superimpose Normal (0, 1) density
hist(x, probability = T)
curve(dnorm(x), add = T, xlab = "x", ylab = "density")
# P(|X| > 1) -- exact
pnorm(-1) + 1 - pnorm(1)
# P(|X| > 1) -- Monte Carlo estimate
mean(abs(x) > 1)
# E(X) = 0, var(X) = 1 -- exact
# Monte Carlo estimate of E(X)
mean(x)
# Monte Carlo estimate of var(X)
var(x)
# see the effect of N by increasing to, say, 5000, 10000 and 50000
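# A sketch of the suggested experiment above (the values of N are illustrative):
# rerun the Monte Carlo estimates with larger sample sizes and watch the
# estimates of P(|X| > 1), E(X) and var(X) settle toward 0.3173, 0 and 1
for (N in c(5000, 10000, 50000)) {
  x <- rnorm(N)
  cat("N =", N, " P(|X|>1) ~", mean(abs(x) > 1),
      " mean ~", mean(x), " var ~", var(x), "\n")
}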
#########################
# computing probabilities #
#########################
?pnorm
pnorm(0, mean = 0, sd = 1) # CDF
punif(0.5, min = 0, max = 1) # CDF
pexp(1, rate = 2) # CDF
#########################
# computing quantiles #
#########################
?qnorm
qnorm(0.975, mean = 0, sd = 1)
qnorm(0.5, mean = 0, sd = 1)
#########################
# normal Q-Q plot #
#########################
x <- rnorm(100, mean = 0, sd = 1)
qqnorm(x)
qqline(x)
########################
# Monte Carlo estimation #
########################
# coin toss experiment
# X = indicator of heads (1) in one toss. X ~ Bernoulli (p = P(X = 1))
# simulate 10 draws from Bernoulli with p = 0.25
?rbinom
rbinom(n = 10, size = 1, prob = 0.25)
# simulate 1000 draws from Bernoulli with p = 0.8 and take their mean
x <- rbinom(n = 1000, size = 1, prob = 0.8)
mean(x)
# repeat 100 times the process of simulating 1000 draws from Bernoulli with
# p = 0.8 and taking their average --- this will give 100 averages, all close
# to p because of Law of Large Numbers
p.1k <- replicate(100, mean(rbinom(1000, 1, 0.8)))
# repeat 100 times the process of simulating 10000 draws from Bernoulli with
# p = 0.8 and taking their average --- this will give 100 averages, all close
# to p because of Law of Large Numbers
p.10k <- replicate(100, mean(rbinom(10000, 1, 0.8)))
# compare the distributions of the averages based on 1000 and 10000 draws
boxplot(p.1k, p.10k)
abline(h=0.8)
# check normality of the averages (predicted by Central Limit Theorem)
qqnorm(p.1k)
qqline(p.1k)
# Will the normal approximation for the average be good if p = 0.001? Check.
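# One way to check, sketched with the same replicate/rbinom pattern as above:
# with p = 0.001 and n = 1000 the expected number of successes is only 1,
# so the CLT approximation for the average is expected to be poor
p.rare <- replicate(100, mean(rbinom(1000, 1, 0.001)))
qqnorm(p.rare)
qqline(p.rare)
# the Q-Q plot shows a discrete, right-skewed pattern rather than a straight line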
|
56af59656576a629bfa52f99cd587617828433d2 | 380ce6ddf7d0480cf4c938c373130dac331cc7e2 | /made4 coding 16april2018.R | 34a56e47b4f70d9f3e4891b8daafd1f6801e45ab | [
"MIT"
] | permissive | JohnBarlowVT/Food_Scraps_Resistome | 33cf50b8e5b1035f959b1a2253fa641e32a23132 | 336f69164bce7df967cafc8d7d218a4faa66386e | refs/heads/master | 2020-03-11T16:02:12.621545 | 2018-04-25T18:39:58 | 2018-04-25T18:39:58 | 130,103,923 | 0 | 0 | null | 2018-04-18T18:39:19 | 2018-04-18T18:16:15 | JavaScript | UTF-8 | R | false | false | 6,866 | r | made4 coding 16april2018.R | # data exploration experiments using made4 package (version 1.52.0)
# made4 = Multivariate analysis of microarray data using ADE4
# code for coinertia analysis of antimicrobial resistance gene (ARGs) data from the food scraps metagenomics project
# 16april2018
#JWB
#---------------------------------------------------------
# REFERENCES:
# I used the following resources to learn about using made4
# 1. Bioconductor made4
# https://www.bioconductor.org/packages/release/bioc/html/made4.html
# http://www.bioconductor.org/packages//2.7/bioc/vignettes/made4/inst/doc/introduction.pdf
# 2. Culhane AC, Thioulouse J, Perriere G, Higgins DG.(2005) MADE4: an R package for multivariate analysis of gene expression data. Bioinformatics 21(11):2789-90.
# note the introduction and reference manual can be opened from within R
#
# "made4 is useful for Multivariate data analysis and graphical display of microarray data. Functions include between group analysis and coinertia analysis. It contains functions that require ade4 package."
#---------------------------------------------------------------------
# before the preliminaries
# recent versions of made4 and associated packages were built under R version 3.4.4, so like me you might need to upgrade from an older R version or else when you download made4 it will throw a warning message - I don't know if updating was absolutely necessary, but I did not chance it as my last R update was about 6 months old...
# I found a package to make the process of upgrading R a bit easier - the installr package - it seemed to work well and included an option to upgrade packages when it ran
# preference is to run installr in the R Gui console, not from R studio
# here is a link on updating R with installr http://bioinfo.umassmed.edu/bootstrappers/bootstrappers-courses/courses/rCourse/Additional_Resources/Updating_R.html
# you can install the installr package from RStudio
install.packages("installr")
# then close RStudio and open RGui
# in RGui open the installr library and run updateR
# library(installr)
# updateR()
#----------------------------------------------------------------------
# the preliminaries
# download made4 package - made4 package uses ade4 package, scatterplot3d package, RColorBrewer and gplots
# once your R version is up to date, then on to the preliiminaries of installing made4 and associated packages - ade4 and scatterplot3d appear to install automatically when you install made4
install.packages("made4")
# ggtree and ape are a couple phylogenetics packages I have been exploring for our work, # not using in this exercise for now
#install.packages("ggtree") # ggtree not available for most recent version
# install.packages("ape")
## Korin suggests one needs to install made4 as below from the biocLite source, not via install.packages(), but install.packages seemed to work for me (although it loaded an older version and to get the most recent version I had to download direct from the made4 bioconductor site
#source("https://bioconductor.org/biocLite.R")
#biocLite("made4")
#biocLite("BiocUpgrade")
setwd("C:/Users/jbarlow/Documents/Computational Biology/Food_Scraps/Food_Scraps_Resistome") #modify for your particular wd
getwd()
#load made4 - associated packages ade4, RColorBrewer, gplots, and scatterplot3d load automatically when you load made4
library(made4)
# I also loaded other packages for graphical exploration of the data sets to compare some graphics to overview graphics created by made4 overview function
library (reshape2)
library(tidyr)
library(dplyr)
library (ggplot2)
library (ggthemes)
library (patchwork)
library(readr)
#### co-inertia analysis (cia) using the made4 package
### loading ARG data file
res_mat <- read.csv("res_mat_abun.csv")
head(res_mat)
str(res_mat)
### cia analysis won't run if some samples have no data, i.e. the HOSP, WOCA, and SWCA samples all have zero abundance for ARGs; the following code identifies those instances to be removed
none <- lapply(res_mat, function(x) all(x == 0))
which(none == "TRUE")
## removing sites with no observations for ARGs
res_mat2 <- res_mat[,-c(1,5,11,12,16,22,25)]
res_mat2[] <- lapply(res_mat2[], as.numeric)
## need to force name column as rownames of the matrix to get the labels into the test
rownames(res_mat2) <- res_mat$Name
head(res_mat2)
str(res_mat2)
## loading bacterial taxonomy data
# same code as above compiled for bac data
bac_mat <- read.csv("bac_mat_abun.csv")
bac_mat2 <- bac_mat[,-c(1,5,11,12,16,22,25)]
rownames(bac_mat2) <- bac_mat$Name
### loading virulence gene dataset
vf_mat <- read.csv("vf_mat_abun.csv")
vf_mat2 <- vf_mat[,-c(1,5,11,12,16,22,25)]
rownames(vf_mat2) <- vf_mat$Name
head(vf_mat2)
#made4 has an overview function which generates a boxplot, histogram and hierachial tree of the data - demonstrated
overview(res_mat2)
# the histogram is compiled across all "sites" - big skew - lots of "0" values
# might want to see these data by site in a lattice
# I explored the overview function, it does not seem possible to generate separate histograms by site inside this package
# so generate in ggplot2
# the imported files are data frames
res_mat_all <- res_mat[,-c(1)]
res_mat_all[] <- lapply(res_mat_all[], as.numeric)
rownames(res_mat_all) <- res_mat$Name
head(res_mat_all)
str(res_mat_all)
.<-gather(res_mat_all, key="Site", value="Abundance")
head(.)
str(.)
# note this did not generate a column with the gene names - need to append this code perhaps eventually, but really not important to see the distribution on the frequency of gene counts
# histogram on gene frequency by sites in lattice
g1<- ggplot(data=., mapping=aes(x=Abundance,fill=I("tomato"), color=I("black"))) + geom_histogram()+ theme_minimal()
g2<- g1+facet_wrap(~Site, dir="v", nrow=2)
#ggsave(filename="AGRhisto.jpg", plot=g2, device=jpeg) #jpeg not working
ggsave(filename="AGRhisto.pdf", plot=g2, device=pdf, width=40, height=20, units="cm", dpi=300) #posted to project web page
# actually using made4
c <- cia(bac_mat2, res_mat2, cia.nf=2, cia.scan=FALSE, nsc=TRUE)
c$coinertia$RV
#0.445
plot.cia(c)
# virulence and ARGs
c2 <- cia(vf_mat2, res_mat2, cia.nf=2, cia.scan=FALSE, nsc=TRUE)
c2$coinertia$RV
#0.647
plot.cia(c2)
# virulence and bacteria
c3 <- cia(vf_mat2, bac_mat2, cia.nf=2, cia.scan=FALSE, nsc=TRUE)
c3$coinertia$RV
#0.358
c3$coinertia
plot.cia(c3)
# check out what the other parameters could be used for
c4 <- cia(vf_mat, bac_mat, cia.nf=2, cia.scan=FALSE, nsc=TRUE)
c4$coinertia$RV
#0.19
#testing other functions
res.coa<-ord(res_mat2)
res.coa
plot(res.coa)
plotgenes(res.coa)
topgenes(res.coa, axis=1, n=5)
a<-topgenes(res.coa, end="neg")
b<-topgenes(res.coa, end="pos")
comparelists(a,b)
|
eb469432062e6b7647203ba10712ac113f63c151 | d354e58efb0f8804fe501766abbef6e25be48aec | /MetaMultiSKAT/R/Transform_Harmonize.R | 2e4c272bebb2a238d25015952317da53888094a6 | [] | no_license | diptavo/MetaMultiSKAT | 246ec0ca32649d5e5975e8882129b7c37a7623eb | 4e89920d1681a8df2edaf556316b1ac61863a997 | refs/heads/master | 2020-04-01T00:45:35.646646 | 2019-01-22T09:42:25 | 2019-01-22T09:42:25 | 152,712,175 | 6 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,114 | r | Transform_Harmonize.R | Transform.MultiSKAT <- function(S1,Sigma_g, Sigma_p){
m <- ncol(S1$Score.Object$Score.Matrix); n.pheno <- nrow(S1$Score.Object$Score.Matrix)
Sc <- mat.sqrt(Sigma_p)%*%S1$Score.Object$Score.Matrix%*%mat.sqrt(Sigma_g)
Ls <- mat.sqrt(Sigma_g%x%Sigma_p)
S1$Regional.Info.Pheno.Adj <- t(Ls)%*%S1$Regional.Info.Pheno.Adj%*%(Ls)
S1$Score.Object$Score.Matrix.kern <- Sc; S1$Method.Sigma_P = Sigma_p; S1$Method.Sigma_g = Sigma_g;
Q <- sum(Sc^2)
m1 <- m*n.pheno
S1$Test.Stat <- Q;
S1$p.value <- SKAT:::Get_Davies_PVal(Q/2, S1$Regional.Info.Pheno.Adj, NULL)$p.value
return(S1)
}
Harmonize.Test.Info <- function(Info.list){
n.studies <- length(Info.list)
n.pheno <- nrow(Info.list[[1]]$Score.Object$Score.Matrix)
snp.list <- vector("list",n.studies)
for(i in 1:n.studies){
snp.list[[i]] <- colnames(Info.list[[i]]$Score.Object$Score.Matrix);
}
all.snps <- Reduce(union,snp.list)
for(i in 1:n.studies){
m1 <- match(all.snps,snp.list[[i]])
Info.list[[i]]$Score.Object$Score.Matrix <- Info.list[[i]]$Score.Object$Score.Matrix[,m1];
Info.list[[i]]$Score.Object$Score.Matrix.kern <- Info.list[[i]]$Score.Object$Score.Matrix.kern[,m1];
Info.list[[i]]$Score.Object$Score.Matrix[which(is.na(Info.list[[i]]$Score.Object$Score.Matrix))] = 0
Info.list[[i]]$Score.Object$Score.Matrix.kern[which(is.na(Info.list[[i]]$Score.Object$Score.Matrix.kern))] = 0
colnames(Info.list[[i]]$Score.Object$Score.Matrix) <- all.snps
colnames(Info.list[[i]]$Score.Object$Score.Matrix.kern) <- all.snps
}
m = length(all.snps);
for(i in 1:n.studies){
r1 = match(all.snps,snp.list[[i]])
r2 <- r1
for(idx in 2:n.pheno){
r2 = c(r2,(r1 + (idx-1)*length(snp.list[[i]])))
}
Info.list[[i]]$Regional.Info.Pheno.Adj <- Info.list[[i]]$Regional.Info.Pheno.Adj[r2,r2]
Info.list[[i]]$Regional.Info.Pheno.Adj[which(is.na(Info.list[[i]]$Regional.Info.Pheno.Adj))] = 0
colnames(Info.list[[i]]$Regional.Info.Pheno.Adj) = rownames(Info.list[[i]]$Regional.Info.Pheno.Adj) = rep(all.snps,n.pheno);
}
return(Info.list)
}
|
6702c8fe2b9832bbb3ddcbd7a98a4ddc06fd2468 | 037415285031e0ff3cc3891c70824946ee45b352 | /한이진/R/R프로그래밍/list0429.R | 370f1ccf48d5c86c15d81d9d1b3e6fd66f3ab3d8 | [] | no_license | kb-ict/20210111_AI_BigData_class | f157634baf0614e443046d699a90b31dc15a8b26 | 0516b1a322cb43602086b689b7b7d93e238024db | refs/heads/main | 2023-06-16T18:17:53.421908 | 2021-07-16T05:30:03 | 2021-07-16T05:30:03 | 346,164,949 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,079 | r | list0429.R | #List 자료구조: 다른 자료형과 자료구조(벡터, 행열, 리스트, 데이터프레임)를 객체로 생성
list1 <- list(c(1,2,3), c("제니","리사","로제"),TRUE,12.5)
list1
list2 <- list(c("제니","로제","리사"),c(20,30,40))
names(list2) <- c('NAME','AGE')
list2
print(list2[1])
print(list2$NAME)
print(list2$NAME[1])
print(list2$AGE[3])
blackpink <- list(name=c("제니","로제","리사","지수"),age=c(26,25,25,27),address=c("뉴질랜드","호주","태국","서울"),
gender=c("여자","여자","여자","여자"),home=c("YG","yg","와이쥐","Yg"))
blackpink
blackpink$name
blackpink$name[1]
blackpink$address[2]
# Change values
blackpink$age[1] <-100
blackpink$address[4] <-"한국"
blackpink
# Data frame
id <- c(1,2,3,4,5)
gender <- c('m','F','m','F','F')
age <- c(25,32,45,51,12)
addr <- c('대구','서울','수원','울산','부산')
datavalue <- data.frame(id,gender,age,addr)
datavalue
mode(datavalue)
class(datavalue)
View(datavalue)
# Data frame editor
dataval <-edit(data.frame())
dataval
id_r1 <- c('a1','a2','a3','a4')
name_r1 <-c('제니','리사','로제','지수')
stu_r1 <- data.frame(id_r1,name_r1)
stu_r1
stu_r2 <- data.frame(id_r2 = c('b1','b2','b3','b4'), name_r2=c('한이','한수','한동','다발'))
stu_r2
names(stu_r1) <- c('ID','NAME')
names(stu_r2) <- names
# Row binding (rbind)
studRbind <- rbind(stu_r1,stu_r2)
studRbind
# Column binding (cbind)
stu_c1 <- data.frame(id=c("c1","c2","c3"),name=c('김씨','한씨','홍씨'))
names(stu_c1) <- c('ID','NAME')
stu_c1
stu_c2 <- data.frame(age= c(20,30,40),gender=c('M','F','F'))
names(stu_c2) <- c('AGE','GENDER')
stu_c2
studCbind <- cbind(stu_c1,stu_c2)
studCbind
# Inner join
stu_j1 <- data.frame(id=c("c1","c2","c3"),name=c('김씨','한씨','홍씨'))
names(stu_j1) <- c('ID','NAME')
stu_j1
stu_j2 <- data.frame(id=c("c2","c3","c4"),gender=c('M','F','F'))
names(stu_j2) <- c('ID','GENDER')
stu_j2
studJoin <- merge(x= stu_j1, y=stu_j2, by='ID')
studJoin
# Install a library
install.packages('stringr')
library(stringr) # load the library after installing it
strData <- c('제니26리사25로제25')
# stringr library functions
#str_extract
str_extract(strData,'[1-9]{2}')
#str_extract_all
str_extract_all(strData, '[1-9]{2}')
strData1<- 'hongkd1051eess1002you25감감찬2055'
str_extract_all(strData1,'[a-z]{3}') # extract runs of exactly 3 consecutive letters
str_extract_all(strData1,'[a-z]{3,}') # extract runs of 3 or more consecutive letters
str_extract_all(strData1,'[a-z]{3,5}') # extract runs of 3 to 5 consecutive letters
# Extract specific strings
str_extract_all(strData1,'hong')
str_extract_all(strData1,'25')
# Extract Hangul (Korean) strings
str_extract_all(strData1,'[가-힣]{4}')
# Extract alphabetic strings
str_extract_all(strData1,'[a-z]{3}')
# Extract digits
str_extract_all(strData1,'[0-9]{3}')
# Extract strings that do NOT contain certain characters
# [^a-z]: extract everything except lowercase letters
str_extract_all(strData1,'[^a-z]')
str_extract_all(strData1,'[^a-z]{4}')
# Extract strings excluding Hangul
str_extract_all(strData1,'[^가-힣]{5}')
# Extract strings excluding digits
str_extract_all(strData1,'[^0-9]{3}')
name <-'홍길동1234이순신5678김길동1011'
str_extract_all(name,'\\w{8,}') # extract runs of 8 or more word characters
str_extract_all(name, '\\d')
str_match_all(name,'\\d')
# Return the string length
size <- str_length(name)
size
# Start and end index values of a match
str_locate(strData1,'감감찬')
# String slicing
# extract the substring from index 1 up to (string length - 10)
strDatasub <- str_sub(strData1, 1, str_length(strData1)-10)
strDatasub
# Convert the string to uppercase
upstr <- str_to_upper(strDatasub); upstr
# Convert to lowercase
str_to_lower(upstr)
# Resident registration number (Korean national ID)
jumin <- '961116-2904567'
# Extract the front part of the ID number
str_extract(jumin,'[0-9]{6}-')
str_extract_all(jumin,'[0-9]{6}')
# Extract the back part of the ID number
str_extract(jumin,'[0-9]{6}-[1-4][0-9]{6}')
# mtcars: 1974 US automobile magazine data frame
mtcars
# View the structure
# shows the data column by column
str(mtcars)
# Top 6 rows of the data
head(mtcars)
# Bottom 6 rows of the data
tail(mtcars)
# Print the number of rows and columns
dim(mtcars)
# Length of the data structure (number of columns)
length(mtcars)
# Number of elements in the column (number of rows)
length(mtcars$cyl)
# Print the column names
names(mtcars)
class(mtcars)
mode(mtcars)
sapply(mtcars,class)
# Extract strings
str <-"홍길동35이순신45유관순25"
str_extract(str,'[1-9]{3}')
str_extract_all(str,"[1-9]{3}")
# Regular expressions
string <- "hongkd105leess1002you25강감찬2055"
str_extract_all(string,'[a-z]{3,5}')
# Extract matching strings
str_extract_all(string,"[0-9]{4}")
# Regular expression excluding specific characters
str_extract_all(string,'[^0-9]{3}')
#
jumin <- "123456-9234567"
str_extract(jumin,"[0-9]{6}-[9][0-9]{6}")
str_extract_all(jumin,"\\d{6}-[923]\\d{6}")
email <- "gksdlwls123@naver.com"
str_extract(email,"\\w{11}[@]\\w{5}")
name <- "홍길동1234,이순신5678,강감찬1012"
str_extract_all(name,"\\w{7,}")
string_sub<- str_sub(string,5,str_length(string))
string_sub
|
d5c6fb5c40b924df8f6b4558fa4c04223c891e44 | 17e579f385870baa035f6ae338774b856a4b05dc | /imageManipulationSF.R | 541e6ff0b4017839277e95073efca4db5697dad6 | [] | no_license | ryanetracy/spatial-frequency-filtering-in-R | fd535dd1c83a5ff2e36ea2b1bb8d19dc11f73e90 | 346e9986dc1531be0c3a2274c88bc8f87d74f0a7 | refs/heads/master | 2022-08-21T20:36:41.625527 | 2020-05-19T22:56:37 | 2020-05-19T22:56:37 | 265,387,181 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,543 | r | imageManipulationSF.R | # #install the BiocManager suite
# if (!requireNamespace("BiocManager", quietly = TRUE))
# install.packages("BiocManager")
#
# BiocManager::install("EBImage")
library(EBImage)
files <- list.files(path="LIST FILE PATH WHERE IMAGES ARE FOUND", pattern=".jpg", all.files=T, full.names=T, no.. = T)
#make empty lists for the base files, BSF images, LSF images, and HSF images
imgFile <- list()
imgBSF <- list()
imgLSF <- list()
imgHSF <- list()
#set up LSF filtering
lsf <- makeBrush(31, shape = "gaussian", sigma = 5) #generate a 31 x 31 Gaussian brush with sigma = 5
lsf <- lsf/sum(lsf) #normalize the kernel so its weights sum to 1
#set up HSF filtering
hsf <- matrix(-1, nc = 3, nr = 3) #set the 3 x 3 kernel
hsf[2,2] <- 8.55 #change the center value of the kernel
#set up a loop to:
#(1) set all images to gray scale (serve as BSF images)
#(2) set all images to LSF
#(3) set all images to HSF
for (i in 1:length(files)) {
#load all the images into the empty list imgFile
imgFile[[i]] <- readImage(files[[i]])
#set to grayscale (creates "BSF" images)
imgBSF[[i]] <- imgFile[[i]]
colorMode(imgBSF[[i]]) <- Grayscale
#apply lsf filtering
imgLSF[[i]] <- filter2(imgBSF[[i]], lsf)
#apply hsf filtering
imgHSF[[i]] <- filter2(imgBSF[[i]], hsf)
}
#view all BSF images
for (i in 1:length(imgBSF)) {
displayBSF <- EBImage::display(imgBSF[[i]])
print(displayBSF)
}
#view all LSF images
for (i in 1:length(imgLSF)) {
displayLSF <- EBImage::display(imgLSF[[i]])
print(displayLSF)
}
#view all hsf images
for (i in 1:length(imgHSF)) {
displayHSF <- EBImage::display(imgHSF[[i]])
print(displayHSF)
}
#write loops to save the image files (NOTE: all images will be saved to current working directory)
#BSF images
for (i in 1:length(imgBSF)) {
#set as array
imgBSFArray <- as.array(imgBSF)
#define the file name for use in writeImage
fileNameBSF <- paste("NAME FOR SAVING FILE", i, ".jpeg", sep = "")
#save the images
writeImage(imgBSFArray[[i]], files = fileNameBSF)
}
#LSF images
for (i in 1:length(imgLSF)) {
#set as array
imgLSFArray <- as.array(imgLSF)
#define file name
fileNameLSF <- paste("NAME FOR SAVING FILE", i, ".jpeg", sep = "")
#save
writeImage(imgLSFArray[[i]], files = fileNameLSF)
}
#HSF images
for (i in 1:length(imgHSF)) {
#set as array
imgHSFArray <- as.array(imgHSF)
#define file name
fileNameHSF <- paste("NAME FOR SAVING FILE", i, ".jpeg", sep = "")
#save the images
writeImage(imgHSFArray[[i]], files = fileNameHSF)
}
|
5d81780f9557382d780663af77f8098a40c2a1e7 | 7e1218a7155e15f7d5c59aeb14c245ca5aa10f0b | /man/SEP_FIM.Rd | 1ee43f22853b24f738f23147adf0bd64e24411a5 | [
"MIT"
] | permissive | fishinfo/FiSh | b64af3af74dec77350d94f0fb5fd49bb967b3385 | 8bd59194569a4f133ec1ad700160a8a789b69f1e | refs/heads/master | 2020-08-11T02:55:12.117433 | 2020-01-02T10:38:40 | 2020-01-02T10:38:40 | 214,477,143 | 1 | 1 | null | null | null | null | UTF-8 | R | false | true | 1,391 | rd | SEP_FIM.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/SEP_FIM.R
\name{SEP_FIM}
\alias{SEP_FIM}
\title{Fisher-Shannon method}
\usage{
SEP_FIM(x, h, log_trsf=FALSE, resol=1000, tol = .Machine$double.eps)
}
\arguments{
\item{x}{Univariate data.}
\item{h}{Value of the bandwidth for the density estimate}
\item{log_trsf}{Logical flag: if \code{TRUE} the data are log-transformed (used for skewed data), in this case
the data should be positive. By default, \code{log_trsf = FALSE}.}
\item{resol}{Number of equally-spaced points, over which function approximations are computed and integrated.}
\item{tol}{A tolerance to avoid dividing by zero values.}
}
\value{
A table with one row containing:
\itemize{
\item \code{SEP} Shannon Entropy Power.
\item \code{FIM} Fisher Information Measure.
\item \code{FSC} Fisher-Shannon Complexity
}
}
\description{
Non-parametric estimates of the Shannon Entropy Power (SEP), the Fisher Information Measure (FIM) and the
Fisher-Shannon Complexity (FSC), using kernel density estimators with Gaussian kernel.
}
\examples{
library(KernSmooth)
x <- rnorm(1000)
h <- dpik(x)
SEP_FIM(x, h)
}
\references{
F. Guignard, M. Laib, F. Amato, M. Kanevski, Advanced analysis of temporal
data using Fisher-Shannon information : theoretical development and
application to geoscience
}
|
edcc1a8facd6b677e9f6489452c40445c2888b0c | 875c89121e065a01ffe24d865f549d98463532f8 | /man/example.spatial.Rd | f281eb8c2e2501c63bd013c1b498886f6818a01c | [] | no_license | hugomflavio/actel | ba414a4b16a9c5b4ab61e85d040ec790983fda63 | 2398a01d71c37e615e04607cc538a7c154b79855 | refs/heads/master | 2023-05-12T00:09:57.106062 | 2023-05-07T01:30:19 | 2023-05-07T01:30:19 | 190,181,871 | 25 | 6 | null | 2021-03-31T01:47:24 | 2019-06-04T10:42:27 | R | UTF-8 | R | false | true | 1,174 | rd | example.spatial.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/z_examples.R
\name{example.spatial}
\alias{example.spatial}
\title{Example spatial data}
\format{
A data frame with 18 rows and 6 variables:
\describe{
\item{Station.name}{The name of the hydrophone station or release site}
\item{Latitude}{The latitude of the hydrophone station or release site in WGS84}
\item{Longitude}{The longitude of the hydrophone station or release site in WGS84}
\item{x}{The x coordinate of the hydrophone station or release site in EPSG 32632}
\item{y}{The y coordinate of the hydrophone station or release site in EPSG 32632}
\item{Array}{If documenting a hydrophone station, the array to which the station belongs.
If documenting a release site, the first array(s) where the fish is expected to be detected.}
\item{Section}{The Section to which the hydrophone station belongs (irrelevant for the release sites).}
\item{Type}{The type of spatial object (must be either 'Hydrophone' or 'Release')}
}
}
\source{
Data collected by the authors.
}
\description{
A dataset containing the positions of the deployed hydrophone stations and release sites.
}
\keyword{internal}
|
1fc13edbeb8443948adf8e349aa6f2578e468caf | d68cfab4a9b2876f4cdfd0c2c8da3b8b5da99157 | /Week5/DataScience05_ModelingInR_WernerColangelo.R | 85ca1e22aa87a341bb440e9b7b1eb416f3dff3c9 | [] | no_license | alfl2/IntroToDS-Fall2014 | 920291e7cc35958bff9e081f2e7f7053b5239a4b | 202360a9c24fd8a91c732cbaf1d66847585c5adb | refs/heads/master | 2021-01-17T08:52:50.424430 | 2014-12-06T23:15:32 | 2014-12-06T23:15:32 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,118 | r | DataScience05_ModelingInR_WernerColangelo.R | # DataSience05_ModelingInR.R
# Complete the code:
# Classify outcomes using Naive Bayes
# Show Confusion Matrix for Naive Bayes
# DATASCI 250: Introduction to Data Science (4690)
# Autumn 14
# Instructor: Ernst Henle
#
# Homework 5
#
# Submitted by:
# Werner Colangelo
# wernercolangelo@gmail.com
# Clear objects from Memory
rm(list=ls())
# Clear Console:
cat("\014")
# Set repeatable random seed
randomNumberSeed <- 4
set.seed(randomNumberSeed)
# Add functions and objects from ModelingHelper.R
source("DataScience05_ModelingHelper.R")
# Install and Load Packages (may already be installed)
installAndLoadModeling()
# Get cleaned data
modelingData <- GetDemoData(1)
dataframe <- modelingData$dataframe
# Data also included a formula and the number of the target column
formula <- modelingData$formula
targetColumnNumber <- modelingData$targetColumnNumber
# Partition data between training and testing sets
# Replace the following line with a function that partitions the data correctly
# print("WrongWayPartition")
# dataframeSplit <- wrongWayPartition(dataframe, fractionOfTest=0.4)
print("SlowAndExact")
dataframeSplit <- SlowAndExact(dataframe, fractionOfTest=0.4)
# print("QuickAndDirty")
# dataframeSplit <- QuickAndDirty(dataframe, fractionOfTest=0.4)
testSet <- dataframeSplit$testSet
trainSet <-dataframeSplit$trainSet
# Actual Test Outcomes
actual <- testSet[,targetColumnNumber]
positiveState <- 1
isPositive <- actual == positiveState
###################################################
# Logistic Regression (glm, binomial)
# http://data.princeton.edu/R/glms.html
# http://www.statmethods.net/advstats/glm.html
# http://stat.ethz.ch/R-manual/R-patched/library/stats/html/glm.html
# http://www.stat.umn.edu/geyer/5931/mle/glm.pdf
# Create logistic regression; family="binomial" means logistic regression
glmModel <- glm(formula, data=trainSet, family="binomial")
# Predict the outcomes for the test data
predictedProbabilities.GLM <- predict(glmModel, newdata=testSet, type="response")
###################################################
# Naive Bayes
# http://cran.r-project.org/web/packages/e1071/index.html
# http://cran.r-project.org/web/packages/e1071/e1071.pdf
# Create Naive Bayes model
# nBModel <- naiveBayes(formula, data = dataframe, laplace = 0 , trainSet)
nBModel <- naiveBayes(formula, trainSet)
# Predict the outcomes for the test data
#predictedProbabilities.nB <- predict(nBModel, type = "raw", newdata=testSet, threshold = 0.5)
predictedProbabilities.nB <-predict(nBModel, newdata=testSet, type="raw")
###################################################
# Confusion Matrix
threshold = 0.5
#Confusion Matrix for Logistic Regression
predicted.GLM <- as.numeric(predictedProbabilities.GLM > threshold)
print("Confusion Matrix for Logistic Regression")
table(predicted.GLM, actual)
#Confusion Matrix for Naive Bayes
predicted.nB <- as.numeric(predictedProbabilities.nB[,2] > threshold)
print("Confusion Matrix Naive Bayes")
table(predicted.nB, actual)
###################################################
# WrongWayPartition used
# Confusion Matrix for Logistic Regression
# actual
# predicted.GLM 0 1
# 0 41 45
# 1 54 91
# Confusion Matrix Naive Bayes
# actual
# predicted.nB 0 1
# 0 94 122
# 1 1 14
# SlowAndExact used
# Confusion Matrix for Logistic Regression
# actual
# predicted.GLM 0 1
# 0 14 14
# 1 46 157
# Confusion Matrix Naive Bayes
# actual
# predicted.nB 0 1
# 0 58 100
# 1 2 71
# Answers
# 1F - Note the confusion matrix. Which is better or worse?
# SlowAndExact produces many fewer false negatives, although it still produced false positives. But
# the reduction in false negatives was much larger than the increase in false positives, so it
# does seem to be a better was of partitioning the data.
#2B - How many rows are there in the outcomes? How many columns do you want/need?
# There are 231 rows and 2 columns.
# The first column is the probability that the output is 0 for that input row.
# The second column is the probability that the output is 1 for that input row.
# We just want to take the second column.
|
812539a595726b0169fdc7719aa251a02c2ab60e | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/xfun/examples/parse_only.Rd.R | 0b62520f3fe565081e41477ecc4f3ab398230749 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 230 | r | parse_only.Rd.R | library(xfun)
### Name: parse_only
### Title: Parse R code and do not keep the source
### Aliases: parse_only
### ** Examples
library(xfun)
parse_only("1+1")
parse_only(c("y~x", "1:5 # a comment"))
parse_only(character(0))
|
8cd3a564db3c249b568112aabb7e261c1805e86d | f755b2cb723c372c3313e6c5a09205d7208decc9 | /test/v0.0.2/testObjectives.R | 55dcaf1f698320e991e61316ee0770987fe0ffe8 | [] | no_license | AcuoFS/acuo-allocation | f25febe5f7d3f5d129c2a14678d65020f932befb | a72da9fff6020a51c25deaacde95c92c92908a0d | refs/heads/master | 2018-10-05T08:12:22.993237 | 2018-04-04T06:50:27 | 2018-04-04T06:50:27 | 67,778,974 | 2 | 0 | null | null | null | null | UTF-8 | R | false | false | 821 | r | testObjectives.R | library('RUnit')
setwd("E://ACUO/projects/acuo-allocation/test/v0.0.2")
test.suite1 = defineTestSuite("example",
dirs = file.path("testObjectives"),
testFileRegexp = 'objectivesTests.R')
test.suite2 = defineTestSuite("example",
dirs = file.path("testObjectives"),
testFileRegexp = 'objectivesTests2.R')
test.suite3 = defineTestSuite("example",
dirs = file.path("testObjectives"),
testFileRegexp = 'objectivesTests3.R')
test.result1 <- runTestSuite(test.suite1)
test.result2 <- runTestSuite(test.suite2)
test.result3 <- runTestSuite(test.suite3)
printTextProtocol(test.result1)
printTextProtocol(test.result2)
printTextProtocol(test.result3)
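# Optionally, inspect the results programmatically (e.g. to fail a CI run).
# Aggregating the three suites this way is a sketch, not part of the original:
errs <- sapply(list(test.result1, test.result2, test.result3),
               function(r) getErrors(r)$nErr + getErrors(r)$nFail)
stopifnot(all(errs == 0))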
|
39f3055e31a194fea12a7dccc6a1abcf9400fbd2 | 7be68c0b3218e14f6af748209b31d2e1c6f6b047 | /R/add_initialedit_module.R | 96ab1130091471b21eff5746012013662a82a8ed | [
"Apache-2.0"
] | permissive | jhk0530/eCRF-SMCcath | 1a27da64e35da0f7b13c21acd18da7fdc48c09bb | cd331a798203b90fcf881e28c8cef1e5a9e0236d | refs/heads/main | 2023-07-02T19:00:29.241116 | 2021-07-24T03:26:38 | 2021-07-24T03:26:38 | 351,021,491 | 0 | 0 | Apache-2.0 | 2021-03-24T09:34:21 | 2021-03-24T09:34:21 | null | UTF-8 | R | false | false | 20,052 | r | add_initialedit_module.R |
#' Enroll Add & Edit Module
#'
#' Module to add & initial edit
#'
#' @importFrom shiny observeEvent showModal modalDialog removeModal fluidRow column textInput numericInput selectInput modalButton actionButton reactive eventReactive
#' @importFrom shinyFeedback showFeedbackDanger hideFeedback showToast
#' @importFrom shinyjs enable disable
#' @importFrom lubridate with_tz
#' @importFrom DBI dbExecute
#'
#' @param modal_title string - the title for the modal
#' @param car_to_edit reactive returning a 1-row data frame of the record to
#' edit, or NULL when adding a new record
#' @param modal_trigger reactive trigger to open the modal (Add or Edit buttons)
#' @param tbl table to write to: "rct" (stratified randomization) or "pros" (prospective study)
#' @param sessionid identifier of the current user session, stored in the
#' created_by/modified_by audit columns
#' @param data reactive returning the current table, used to derive the next
#' available pid and the per-stratum counts
#' @param rd data frame holding the pre-generated randomization list (pid, Group)
#' @return None
#'
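#' @examples
#' \dontrun{
#' # Illustrative server-side wiring; the id and the objects passed here are
#' # assumptions for the sketch, not names defined by this package:
#' callModule(add_initialedit_module, "patient_add",
#'            modal_title = "Add patient",
#'            car_to_edit = reactive(NULL),
#'            modal_trigger = reactive(input$add_patient),
#'            tbl = "rct", sessionid = "user01",
#'            data = patients_table, rd = randomization_list)
#' }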
add_initialedit_module <- function(input, output, session, modal_title, car_to_edit, modal_trigger, tbl = "rct", sessionid, data, rd = rd) {
ns <- session$ns
observeEvent(modal_trigger(), {
hold <- car_to_edit()
if (tbl == "rct") {
showModal(
modalDialog(
h3("Stratified randomization"),
uiOutput(ns("pidui")),
fluidRow(
column(
width = 3,
radioButtons(
ns("DM_random"),
"DM",
c("No","Yes"),
inline = T,
selected = character(0)
),
),
column(
width = 3,
radioButtons(
ns("STEMI_random"),
"AMI Type",
c("NSTEMI", "STEMI"),
inline = T,
selected = character(0)
),
),
column(
width = 3,
radioButtons(
ns('Center'),
'Center',
c('삼성서울병원','전남대병원'),
inline = T,
selected = character(0)
)
),
column(
width = 3,
radioButtons(
ns("Sex"),
"Sex",
choices = c("M", "F"),
selected = character(0),
inline = T
)
)
),
fluidRow(
column(
width = 3,
textInput(
ns("Initial"),
"Initial",
value = ifelse(is.null(hold), "", hold$Initial)
)
),
column(
width = 3,
dateInput(
ns("Index_PCI_Date"),
"Index PCI Date",
value = NULL,
language = "kr"
)
),
column(
width = 3,
dateInput(
ns("Agree_Date"),
"동의서 서명일",
value = NULL,
language = "kr"
)
),
column(
width = 3,
dateInput(
ns("Birthday"),
"Birthday",
value = as.character(lubridate::as_date(NA)),
language = "kr"
)
)
),
h3("Inclusion"),
fluidRow(
column(
width = 6,
radioButtons(
ns("in_1"),
"1. 만 19세 이상",
choices = c("Yes", "No"),
selected = character(0),
inline = T
)
),
column(
width = 6,
actionButton(
ns("CYfA"),
"Check Yes for All",
class = "btn btn-default"
)
)
),
radioButtons(
ns("in_2"),
# "2. 관상동맥 질환으로 경피적 관상동맥 중재시술이 필요한 환자",
"2. 임상시험의 시험군 및 대조군의 정의 및 시술의 위험성을 인지하고 임상시험 참가에 환자 또는 법정 대리인이 자발적으로 동의한 경우",
choices = c("Yes", "No"),
selected = character(0),
inline = T
),
radioButtons(
ns("in_3"),
"3. Type 1 급성심근경색으로 진단된 환자 (ST 분절 상승 심근경색 또는 비 ST 분절 상승 심근경색)",
choices = c("Yes", "No"),
selected = character(0),
inline = T
),
h5("1) 심근 효소(troponin)의 값이 참고치의 상위 99% 이상 상승 (above the 99th percentile upper reference limit)"),
h5("2) 심근 허혈을 시사하는 증상 혹은 심전도 변화"),
radioButtons(
ns("in_4"),
"4. 심부전 발생 고 위험 환자 (아래 두 가지 기준 중 하나 이상 만족하는 경우)",
choices = c("Yes", "No"),
selected = character(0),
inline = T
),
h5("1) 좌심실 구혈율 (left ventricular ejection fraction) <50% 또는"),
h5("2) 폐 울혈의 증상이나 징후가 있어 치료가 필요한 경우"),
br(),
h3("Exclusion"),
actionButton(
ns("CNfA"),
"Check No for All",
class = "btn btn-default"
),
radioButtons(
ns("ex_1"),
# "1. 시술자에 의해 표적혈관의 협착이 관상동맥 중재시술에 적합하지 않다고 판단되는 경우(Target lesion not amenable for PCI by operators decision)",
"1. 시술자에 의해 표적혈관의 협착이 관상동맥 중재시술에 적합하지 않다고 판단되는 경우",
choices = c("Yes", "No"),
selected = character(0),
inline = T
),
radioButtons(
ns("ex_2"),
# "2. 심혈관성 쇼크 상태인 경우 (Cardiogenic shock (Killip class IV) at presentation)",
"2. 무작위 배정 전 심정지로 심폐소생술이 필요한 경우",
choices = c("Yes", "No"),
selected = character(0),
inline = T
),
radioButtons(
ns("ex_3"),
# "3. 다음 약제에 과민성이 있거나, 투약의 금기사항이 있는 경우(aspirin, clopidogrel, ticagrelor, prasugrel, heparin, everolimus, zotarolimus, biolimus, sirolimus)",
"3. 혈전용해술 이후 구제적 관상동맥 중재술/용이성 관상동맥 중재술을 시행한 경우",
choices = c("Yes", "No"),
selected = character(0),
inline = T
),
fluidRow(
column(
width = 6,
radioButtons(
ns("ex_4"),
"4. 과거에 심근경색이 있었던 경우",
choices = c("Yes", "No"),
selected = character(0),
inline = T
)
),
column(
width = 6,
radioButtons(
ns("ex_5"),
"5. SGLT-2 억제제를 지속 복용중인 환자",
choices = c("Yes", "No"),
selected = character(0),
inline = T
)
)
),
fluidRow(
column(
width = 6,
radioButtons(
ns("ex_6"),
# "6. 비 심장질환으로 기대 여명이 1년 미만이거나 치료에 순응도가 낮을 것으로 기대되는 자(조사자가 의학적인 판단으로 정함)",
"6. 사구체 여과율 30 ml/min/1.73m2 미만이거나 투석중인 환자",
choices = c("Yes", "No"),
selected = character(0),
inline = T
)
),
column(
width = 6,
radioButtons(
ns("ex_7"),
# "7. 연구 참여를 거부한 환자",
"7. 1형 당뇨병을 앓고 있는 경우",
choices = c("Yes", "No"),
selected = character(0),
inline = T
)
)
),
fluidRow(
column(
width = 6,
radioButtons(
ns("ex_8"),
# "7. 연구 참여를 거부한 환자",
"8. SGLT-2 억제제에 과민성이 있는 환자",
choices = c("Yes", "No"),
selected = character(0),
inline = T
)
),
column(
width = 6,
radioButtons(
ns("ex_9"),
# "7. 연구 참여를 거부한 환자",
"9. 임산부 및 수유부",
choices = c("Yes", "No"),
selected = character(0),
inline = T
)
)
),
radioButtons(
ns("ex_10"),
# "7. 연구 참여를 거부한 환자",
"10. 비 심장질환으로 인하여 기대여명이 1년 이내이거나 치료에 순응도가 낮을 것으로 기대되는 자 (조사자가 의학적인 판단으로 정함)",
choices = c("Yes", "No"),
selected = character(0),
inline = T
),
title = modal_title,
size = "l",
footer = list(
modalButton("Cancel"),
actionButton(
ns("submit"),
"Randomization & submit",
class = "btn btn-primary",
style = "color: white"
)
)
)
)
} else {
showModal(
modalDialog(
h3("Prospective study"),
uiOutput(ns("pidui")),
fluidRow(
column(
width = 3,
textInput(
ns("Initial"),
"Initial",
value = ifelse(is.null(hold), "", hold$Initial)
)
),
column(
width = 3,
dateInput(
ns("Index_PCI_Date"),
"Index PCI Date",
value = NULL,
language = "kr"
)
),
column(
width = 3,
dateInput(
ns("Agree_Date"),
"동의서 서명일",
value = NULL,
language = "kr"
)
),
column(
width = 3,
dateInput(
ns("Birthday"),
"Birthday",
value = as.character(lubridate::as_date(NA)),
language = "kr"
)
)
),
radioButtons(
ns("Sex"),
"Sex",
choices = c("M", "F"),
selected = character(0),
inline = T
),
title = modal_title,
size = "m",
footer = list(
modalButton("Cancel"),
actionButton(
ns("submit"),
"Submit",
class = "btn btn-primary",
style = "color: white"
)
)
)
)
}
    # Validate the modal inputs and toggle the submit button accordingly
    # (`shinyFeedback` flags missing required fields)
observe({
req(!is.null(input$Initial))
if (input$Initial == "") {
shinyFeedback::showFeedbackDanger(
"Initial",
text = "환자 이니셜 입력"
)
} else {
shinyFeedback::hideFeedback("Initial")
}
if (length(input$Index_PCI_Date) == 0) {
shinyFeedback::showFeedbackDanger(
"Index_PCI_Date",
text = "Index_PCI_Date 입력"
)
} else {
shinyFeedback::hideFeedback("Index_PCI_Date")
}
if (length(input$Birthday) == 0) {
shinyFeedback::showFeedbackDanger(
"Birthday",
text = "Birthday 입력"
)
}
else {
shinyFeedback::hideFeedback("Birthday")
}
if (tbl == "rct"){
if (!is.null(input$pid) && (length(input$DM_random) != 0) && (length(input$STEMI_random) != 0) && (length(input$Birthday) != 0) && (length(input$Index_PCI_Date)) != 0 && (length(input$Agree_Date) != 0) &&
(input$Initial != "") && (length(input$Sex) != 0) && (length(input$in_1) != 0) && (length(input$in_2) != 0) && (length(input$in_3) != 0) && (length(input$in_4) != 0) &&
(length(input$ex_1) != 0) && (length(input$ex_2) != 0) && (length(input$ex_3) != 0) && (length(input$ex_4) != 0) && (length(input$ex_5) != 0) && (length(input$ex_6) != 0) && (length(input$ex_7) != 0) && (length(input$ex_8) != 0) && (length(input$ex_9) != 0) && (length(input$ex_10) != 0)) {
shinyjs::enable("submit")
} else{
shinyjs::disable("submit")
}
} else{
if ((length(input$Birthday) != 0) && (length(input$Index_PCI_Date) != 0) && (length(input$Agree_Date) != 0) && (input$Initial != "") && (length(input$Sex) != 0)) {
shinyjs::enable("submit")
} else{
shinyjs::disable("submit")
}
}
})
})
output$pidui <- renderUI({
idlist <- choices.group <- NULL
if (tbl == "rct") {
req(input$DM_random)
req(input$STEMI_random)
type.strata <- ifelse(
input$DM_random == "No",
ifelse(input$STEMI_random == "NSTEMI", "R-NDNST", "R-NDST"),
ifelse(input$STEMI_random == "NSTEMI", "R-DNST", "R-DST")
)
pid.group <- grep(type.strata, rd$pid, value = T)
      data.strata <- subset(data(), DM == input$DM_random & AMI_Type == input$STEMI_random)
      # idlist <- setdiff(pid.group, data()$pid)
      idlist <- setdiff(paste0("R-", 1:100000), data()$pid)
      if (nrow(data.strata) >= length(pid.group)) {
        ## Randomization list for this stratum is exhausted: fall back to a coin flip
        choices.group <- ifelse(rbinom(1, 1, 0.5) == 1, "SGLT-inhibitor", "Control")
      } else {
        choices.group <- rd[rd$pid %in% pid.group, ]$Group[nrow(data.strata) + 1]
}
} else {
idlist <- setdiff(paste0("P-", 1:100000), data()$pid)
choices.group <- ""
}
hold <- car_to_edit()
choices.pid <- ifelse(is.null(hold), idlist[1], hold$pid)
tagList(
fluidRow(
column(
width = 6,
selectInput(
session$ns("pid"),
"pid",
choices = choices.pid,
selected = choices.pid
)
),
hidden(
column(
width = 6,
radioButtons(
session$ns("Group"),
"Group",
choices.group,
choices.group[1],
inline = T
)
)
)
)
)
})
observeEvent(input$CYfA, {
updateRadioButtons(session, "in_1", selected = "Yes")
updateRadioButtons(session, "in_2", selected = "Yes")
updateRadioButtons(session, "in_3", selected = "Yes")
updateRadioButtons(session, "in_4", selected = "Yes")
})
observeEvent(input$CNfA, {
updateRadioButtons(session, "ex_1", selected = "No")
updateRadioButtons(session, "ex_2", selected = "No")
updateRadioButtons(session, "ex_3", selected = "No")
updateRadioButtons(session, "ex_4", selected = "No")
updateRadioButtons(session, "ex_5", selected = "No")
updateRadioButtons(session, "ex_6", selected = "No")
updateRadioButtons(session, "ex_7", selected = "No")
updateRadioButtons(session, "ex_8", selected = "No")
updateRadioButtons(session, "ex_9", selected = "No")
updateRadioButtons(session, "ex_10", selected = "No")
})
edit_car_dat <- reactive({
hold <- car_to_edit()
dat <- list(
"pid" = input$pid,
"Group" = input$Group,
"Center" = input$Center,
# Essentials
"Initial" = input$Initial,
"Birthday" = lubridate::as_date(input$Birthday),
"Age" = as.period(interval(start = lubridate::as_date(input$Birthday), end = Sys.Date()))$year,
"Sex" = input$Sex,
"Agree_Date" = lubridate::as_date(input$Agree_Date),
"Index_PCI_Date" = lubridate::as_date(input$Index_PCI_Date)
# index_PCI_Date 필수 입력
# "Index_PCI_Date" = ifelse(is.null(input$Index_PCI_Date), "", as.character(input$Index_PCI_Date)),
)
if (tbl == "rct") {
dat$DM <- input$DM_random
dat$AMI_Type <- input$STEMI_random
}
time_now <- as.character(lubridate::with_tz(Sys.time(), tzone = "UTC"))
if (is.null(hold)) {
      # adding a new record
dat$created_at <- time_now
dat$created_by <- sessionid
} else {
      # editing an existing record
dat$created_at <- as.character(hold$created_at)
dat$created_by <- hold$created_by
}
dat$modified_at <- time_now
dat$modified_by <- sessionid
return(dat)
})
validate_edit <- eventReactive(input$submit, {
dat <- edit_car_dat()
# Logic to validate inputs...
dat
})
observeEvent(validate_edit(), {
removeModal()
dat <- validate_edit()
hold <- car_to_edit()
# sqlsub <- paste(paste0(names(dat$data), "=$", 1:length(dat$data)), collapse = ",")
# [1] "pid" "Group" "CENTER" "DM" "AMI_Type" "Initial"
# [7] "Birthday" "Age" "Sex" "Agree_Date" "Index_PCI_Date" "Withdrawal"
# [13] "SGLT" "Withdrawal_date" "Withdrawal_reason" "Date_adm" "Height" "Weight"
# [19] "BMI" "BSA_adm"
code.sql <- paste0(
"INSERT INTO ",
tbl,
" (pid, 'Group', Center, Initial, ",
" Birthday, Age, Sex, Agree_Date, Index_PCI_Date, DM, AMI_Type, created_at, created_by, modified_at, modified_by)",
" VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)")
# " (pid, 'Group', Index_PCI_Date, Agree_Date, Initial, Age, Birthday, Sex, CENTER, DM, AMI_Type, created_at, created_by, modified_at, modified_by) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)")
if (tbl == "pros") {
code.sql <- paste0(
"INSERT INTO ",
tbl,
" (pid, 'Group', Center, Initial, ",
" Birthday, Age, Sex, Agree_Date, Index_PCI_Date, created_at, created_by, modified_at, modified_by)",
" VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)")
# " (pid, 'Group', Index_PCI_Date, Agree_Date, Initial, Age, Birthday, Sex, CENTER, created_at, created_by, modified_at, modified_by) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)")
}
tryCatch(
{
dbExecute(conn, code.sql, params = unname(dat))
session$userData$mtcars_trigger(session$userData$mtcars_trigger() + 1)
showToast("success", paste0(modal_title, " Successs"))
},
error = function(error) {
msg <- paste0(modal_title, " Error")
# print `msg` so that we can find it in the logs
print(msg)
# print the actual error to log it
print(error)
# show error `msg` to user. User can then tell us about error and we can
      # quickly identify where it came from based on the value in `msg`
showToast("error", msg)
}
)
})
}
|
711b3051ad277802b88aceb494a904f694fe7f37 | 611ba9b2fc085dfb7556f1aa7441b9f47acf82cc | /Digit Recognition - SVM/digit_recognition_svm_v2.R | b4d0710fb467571e19f3de178f8f5d21b2ca7504 | [] | no_license | arunnalpet/PGDDS | 121e39f4a5e68c5f9bef80e4c1fec18eff26c212 | 1da89215b102306d5ce8b3d9b9cc4bdc133b27f3 | refs/heads/master | 2020-04-09T12:20:42.173471 | 2019-08-08T06:32:12 | 2019-08-08T06:32:12 | 160,345,608 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,645 | r | digit_recognition_svm_v2.R | # Assignment: Digit Recognition using SVM
# Objective: Develop SVM classification model to identify the digits based on pixel values.
# Load required libraries
library(caret)
library(kernlab)
library(dplyr)
library(readr)
library(ggplot2)
library(gridExtra)
library(caTools)
# Source the data. Please set this according to your data set location.
setwd('C:/Project/IIITB/4. Predictive Analysis II/Module 3 - Digit Recognition SVM Assignment')
# test and train data don't have headers, so let R generate them instead.
# Test data is given separately, so the train data is used for model building and cross validation.
digit_data <- read.csv('mnist_train.csv', header = FALSE)
test_data <- read.csv('mnist_test.csv', header = FALSE)
#####################
# Understand the data
#####################
dim(digit_data)
# It has 60k rows and 785 columns (1 label + 784 pixel values)!
str(digit_data)
# All are integers.
# The first column V1 is the dependent variable (the digit label).
# All the remaining columns hold pixel data; plotting graphs for such data doesn't yield anything useful.
# Check for missing values
sapply(digit_data, function(x) sum(is.na(x)))
sum(is.na(digit_data))
# There are no NAs, so the data is clean
# Look for columns that contain only zeros.
sapply(digit_data, function(x) sum(x))
# Check how many columns have zeros.
emptyCols = colnames(digit_data)[colSums(digit_data)==0]
length(emptyCols)
# 67 columns contain only 0s.
# Check the same for test data.
emptyCols = colnames(test_data)[colSums(test_data)==0]
length(emptyCols)
# 116 columns contain only 0s.
# Since both train and test data have different set of columns with 0s, will leave it as it is.
# Will not remove empty columns.
# digit_data <- digit_data[,-which(names(digit_data) %in% emptyCols)]
# test_data <- test_data[,-which(names(test_data) %in% emptyCols)]
##################
# Prepare the data
##################
# Convert the target class to factor
digit_data$V1 <- factor(digit_data$V1)
test_data$V1 <- factor(test_data$V1)
# Check how many samples we have for each digit.
digit_group <- group_by(digit_data,V1)
count(digit_group)
# This is a large data set, so work with a 15% sample of the training data.
# If 15% is still too slow, reduce the fraction further and then proceed.
set.seed(100)
indices = sample(1:nrow(digit_data), 0.15*nrow(digit_data))
train = digit_data[indices,]
# Verifying the count of random samples, make sure we have captured enough samples of each digit.
count(group_by(train,V1))
################
# Model Building
################
######Linear Model######
# Test out the simple linear model first.
# Scaling is not required here, as all the columns represent pixel data.
Model_linear <- ksvm(V1~ ., data = train, scale = FALSE, kernel = "vanilladot")
# Evaluate the model against the separate test data.
Eval_linear<- predict(Model_linear, test_data)
# Check the accuracy
confusionMatrix(Eval_linear,test_data$V1)
# We get accuracy of about 91%
# Also notice that 9&4, 9&7, 8&3 are the major ones that are not classified properly.
###### RBF Kernel ######
Model_RBF <- ksvm(V1~ ., data = train, scale = FALSE, kernel = "rbfdot")
# Check out the RBF model built.
Model_RBF
# It has a very low sigma value:
# sigma = 1.64178908877282e-07
# C = 1
# That means the kernel is leveraging only a small amount of non-linearity
Eval_RBF<- predict(Model_RBF, test_data)
#confusion matrix - RBF Kernel
confusionMatrix(Eval_RBF,test_data$V1)
# Accuracy has improved to 95%
# Classification has improved significantly over the linear model.
# We could stop here, or vary the C and sigma parameters to get better accuracy.
####### Hyperparameter and cross-validation ########
# On an i7 processor this took about 40 minutes to finish.
# If it is very slow, reduce the number of folds here and proceed.
# With verboseIter = TRUE we can see the progress of each fold.
trainControl <- trainControl(method="cv", number=5, verboseIter = TRUE)
# Define the evaluation metric
metric <- "Accuracy"
# expand.grid takes the set of hyperparameters that we shall pass to our model.
set.seed(7)
# The two hyperparameter grids below are commented out, as testing all of them consumes a lot of time.
# grid <- expand.grid(.sigma=c(0.025, 0.05), .C=c(0.1,0.5,1,2) )
# This yielded very low accuracy, hence not considering it.
# grid <- expand.grid(.sigma=c(0,0.0000001,0.000001), .C=c(0.1,0.5,1,2) )
# Getting about 92% accuracy, can be made better.
# Matching the sigma here with the one we found in RBF evaluation.
grid <- expand.grid(.sigma=c(1.6e-07,1.64e-07,1e-07), .C=c(1:4) )
#train function takes Target ~ Predictors, Data, Method = Algorithm
#Metric = Type of metric, tuneGrid = Grid of Parameters,
# trControl = Our traincontrol method.
fit.svm <- train(V1~., data=train, method="svmRadial", metric=metric,
tuneGrid=grid, trControl=trainControl)
# Selecting tuning parameters
# Fitting sigma = 1.64e-07, C = 4 on full training set
# Got accuracy of 96.31% with these tuned hyperparameters.
print(fit.svm)
plot(fit.svm)
# Use the best of the cross-validated tuning parameters and try it on test data.
Eval_CV<- predict(fit.svm, test_data)
#confusion matrix - RBF Kernel
confusionMatrix(Eval_CV,test_data$V1)
# Accuracy of 96.31% over the test data.
############
# Conclusion
############
# Step1: SVM model was built with simple linear kernel, yielded about 91% Accuracy.
# Step2: SVM model was built with RBF kernel using default tuning parameters; yielded about 95% Accuracy.
# Step3: Since RBF yielded better results, performed Cross Validation using RBF by tuning hyperparameters.
# Test Run 1:
# Tuning Parameters: CV=2, sigma=c(0.025, 0.05), .C=c(0.1,0.5,1,2)
# Results: Very low Accuracy.
# Interpretation: Model is not good. Need to tune hyperparameters.
# Test Run 2:
# Tuning Parameters: CV=5, sigma=c(0,0.0000001,0.000001), .C=c(0.1,0.5,1,2)
# Results: Accuracy of 92%
# Interpretation: Model is doing good prediction after significantly reducing the value of sigma.
# Need to tune more to get better Accuracy.
# Test Run 3:
# Tuning Parameters: .sigma=c(1.6e-07,1.64e-07,1e-07), .C=c(1:4)
# Results: Good accuracy with 96.31%
# Interpretation: final tuned parameters are: sigma = 1.64e-07, C = 4
# Ran the model against the test data and got 96.31% Accuracy, which is good. Hence stopping further tuning.
# 'fit.svm' is the final model.
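# To reuse the tuned model without re-training, it can be persisted with
# base R serialization (the file name below is illustrative):
# saveRDS(fit.svm, "svm_digit_model.rds")
# fit.svm <- readRDS("svm_digit_model.rds")
# The selected hyperparameters are also available directly via fit.svm$bestTune.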
|
ee4504b174bc9726dc80042de7e448d2f312e2d5 | cab774f05204f8e24c91d1358cf27f6a894f3adb | /wombatR/R/convert.pedigree.multigeneration.R | 24c866f4e63926b3fbea4bbcc5a958dadb6103d8 | [] | no_license | mdjbru-R-packages/wombatR | 7a931e9839bd5086f1afe31ea84f47ee51c42129 | 251c4cb26a3402ce63cc2b36ab74ac05bb4894b8 | refs/heads/master | 2021-01-21T07:53:40.138365 | 2014-10-03T14:06:19 | 2014-10-03T14:06:19 | 25,148,687 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,959 | r | convert.pedigree.multigeneration.R | #' @title Convert parents and offspring identities to unique identities
#'
#' @description Convert a pedigree with independent numbering for the identities
#' of parents and offsprings to a multigeneration compatible
#' pedigree, where identical numbers are not shared between different
#' individuals between the offspring and parent columns.
#'
#' @details Convert the id for animal, father and mother from an independent
#' format (i.e. id start at 1 for animal, father and mother) to
#' a multigeneration compatible format (i.e. no overlap between
#' offspring and parent identities).
#'
#' Note: 0 in id is considered as NA.
#'
#' IMPORTANT: This should NOT be used on multigeneration pedigrees
#' where offspring and parents id are related, since such relations
#' will be broken during the recoding of the id.
#'
#' @param data a \code{data.frame} containing the data. The id should be
#' integers. Id set to 0 are considered as NA.
#' @param animal.id,father.id,mother.id the names of the columns containing the
#' animal, father and mother identity information (\code{strings})
#'
#' @return A \code{data.frame} with the same data with updated id.
#'
#' @examples
#' # generate a random pedigree structure for 20 offsprings, using 5 different
#' # sires and 7 different dams
#' set.seed(4)
#' animal_id = 1:20
#' sire_id = sample(1:5, size = 20, replace = T)
#' dam_id = sample(1:7, size = 20, replace = T)
#'
#' # generate some random traits
#' weight = rnorm(n = 20, mean = 50)
#' length = rnorm(n = 20, mean = 100)
#'
#' # assemble the information into one data frame
#' ped_pheno_data = data.frame(animal_id, sire_id, dam_id, weight, length)
#' ped_pheno_data
#'
#' # convert the pedigree part of the data frame
#' ped_pheno_data2 = convert.pedigree.multigeneration(data = ped_pheno_data,
#' animal.id = "animal_id", father.id = "sire_id", mother.id = "dam_id")
#' ped_pheno_data2
#'
#' @export
#'
convert.pedigree.multigeneration = function(data, animal.id, father.id,
mother.id) {
# get the ids
data.id = data[, c(animal.id, father.id, mother.id)]
# convert 0 to NA in the id
data.id[data.id == 0] = NA
animals = data.id[, animal.id]
fathers = data.id[, father.id]
mothers = data.id[, mother.id]
# determine the number of individuals within each category
n.animals = max(animals, na.rm = T)
n.fathers = max(fathers, na.rm = T)
n.mothers = max(mothers, na.rm = T)
# recode the ids
fathers = fathers
mothers = mothers + n.fathers
animals = animals + n.fathers + n.mothers
ped = data.frame(animals = animals, fathers = fathers, mothers = mothers)
ped[is.na(ped)] = 0
# replace the columns in the original data frame
data[, animal.id] = ped$animals
data[, father.id] = ped$fathers
data[, mother.id] = ped$mothers
# return
data
}
|
f6797db755b95a30e5c1c56a1b22bbf8fa56b7d4 | 6c07db4a56830892a21412d3d0cca5128d7dc542 | /cachematrix.R | 3e974b106bc1051945607b558766ac81966388aa | [] | no_license | neurofen/ProgrammingAssignment2 | 974186a6b6e341d432963619f80692a4b7bf01f8 | e933a1f2e1cc6a01b6df2ad78a7835fe22ffd877 | refs/heads/master | 2021-01-18T16:54:59.799583 | 2015-01-25T09:56:20 | 2015-01-25T09:56:20 | 29,799,014 | 0 | 0 | null | 2015-01-25T01:51:24 | 2015-01-25T01:51:24 | null | UTF-8 | R | false | false | 1,817 | r | cachematrix.R | ## Create an cachable matrix object which caches the inversion of a given invertable matrix.
## Using cacheSolve, calculate the inverse of the cacheable matrix. Repeating solve on the
## same cacheable matrix should return the cached inverse, calculated the first time
## cacheSolve was called.
##
## Usage:
## cached.matrix <- makeCacheMatrix(matrix(1:4,2,2))
##
## Calling cacheSolve for first time will trigger calculation of matrix inverse
##
## cacheSolve(cached.matrix)
##
## [,1] [,2]
## [1,] -2 1.5
## [2,] 1 -0.5
##
## Subsequent calls will returned cached inverse
##
## cacheSolve(cached.matrix)
##
## getting cached inverse of matrix
## [,1] [,2]
## [1,] -2 1.5
## [2,] 1 -0.5
## Given an invertible matrix, return a cacheable
## matrix object that can cache its inverse
makeCacheMatrix <- function(x = matrix()) {
cache <- NULL
# set value of matrix
set <- function(y) {
x <<- y
cache <<- NULL
}
# get value of matrix
get <- function() x
# set value of reverse matrix
setInverse <- function(inverse) cache <<- inverse
# get value of reverse matrix
getInverse <- function() cache
list(set = set, get = get, setInverse = setInverse, getInverse = getInverse)
}
## Return the inverse of an cacheMatrix object. If the inverse has already
## been calculated (and the matrix has not changed),
## then the cachesolve should retrieve the inverse from the cache.
cacheSolve <- function(x, ...) {
inverseFunc <- x$getInverse()
if(!is.null(inverseFunc)) {
message("getting cached inverse of matrix")
return(inverseFunc)
}
data <- x$get()
matrixInverse <- solve(data, ...)
x$setInverse(matrixInverse)
## Return a matrix that is the inverse of 'x'
matrixInverse
}
|
ab6e75222151b2b14952e14d0e7b2336e274e0e3 | a560269290749e10466b1a29584f06a2b8385a47 | /Notebooks/r/andybega/notebook6b7f424ee9/notebook6b7f424ee9.R | c529830d0906f98611e8b3c3671191a28096fd5a | [] | no_license | nischalshrestha/automatic_wat_discovery | c71befad1aa358ae876d5494a67b0f4aa1266f23 | 982e700d8e4698a501afffd6c3a2f35346c34f95 | refs/heads/master | 2022-04-07T12:40:24.376871 | 2020-03-15T22:27:39 | 2020-03-15T22:27:39 | 208,379,586 | 2 | 1 | null | null | null | null | UTF-8 | R | false | false | 175 | r | notebook6b7f424ee9.R | library("rio")
library("ggplot2")
library("randomForest")
train <- import("../input/train.csv")
test <- import("../input/test.csv")
str(train)
list.files("../input") |
6457224f28fb1cf330008f4c4fc03498060e19f2 | fd2cd35a789adc3a1e4c83cd7798c6385118c068 | /scripts_R_genericos/explorando_tempo_interna.R | 1c42b7f0a342d73c5961b3d36617862ad632e062 | [] | no_license | covid19br/central_covid | 1ce07ad6a086304983aa97caee9243bfc37367ee | 74e1ca39e4307fc43a8c510d7e98825ae1816d91 | refs/heads/master | 2023-05-11T14:45:02.202142 | 2023-03-22T21:20:51 | 2023-03-22T21:20:51 | 263,488,734 | 12 | 1 | null | 2020-07-10T18:42:29 | 2020-05-13T00:55:54 | HTML | ISO-8859-1 | R | false | false | 4,243 | r | explorando_tempo_interna.R | library(ISOweek)
library(dplyr)
library(tidyr)
library(lubridate)
library(readr)
#source("../nowcasting/fct/get.last.date.R")
#source("../nowcasting/fct/read.sivep.R")
## Reading the data ###
data.dir <- "../dados/SIVEP-Gripe/"
last.date <- get.last.date(data.dir)
dados <- read.sivep(dir = data.dir, escala = "pais", data = get.last.date(data.dir))
###### assigning age classes ########
dados$nu_idade_n <- as.numeric(dados$nu_idade_n)
dados <- dados %>%
  mutate(age_clas = case_when(nu_idade_n >= 1 & nu_idade_n <= 19 ~ "age_0_19",
                              nu_idade_n >= 20 & nu_idade_n <= 39 ~ "age_20_39",
                              nu_idade_n >= 40 & nu_idade_n <= 59 ~ "age_40_59",
nu_idade_n >= 60 ~ "age_60"))
###############COVID##########################
covid <- dados %>%
filter(hospital == 1) %>%
filter(pcr_sars2 == 1 | classi_fin == 5) %>%
filter(evolucao == 1 | evolucao == 2) %>%
filter(!is.na(age_clas)) %>%
filter (dt_sin_pri<=dt_interna)%>%
filter (dt_sin_pri<=dt_entuti) %>%
select(dt_sin_pri, dt_interna, dt_entuti, evolucao, age_clas, sg_uf)
### filtering epidemiological weeks 13 to 27 ###
# Classifying the epidemiological week by state
## Brazilian epidemiological week
covid$week <- epiweek(covid$dt_sin_pri) #### epidemiological week starting on Sunday
covid<- covid %>% filter (week<28)%>%
filter (week>12)
### getting the hospitalization date ###
covid$dt_interna<-as.character(covid$dt_interna)
covid$dt_entuti<-as.character(covid$dt_entuti)
covid$hospi<-ifelse(is.na(covid$dt_interna), covid$dt_entuti, covid$dt_interna)
covid$hospi<-ymd(covid$hospi)
#### computing days between first symptoms and hospitalization ###
covid$tempo_inter<-as.numeric(covid$hospi-covid$dt_sin_pri)
### removing inconsistent records ###
covid<-covid %>% filter (tempo_inter<360) %>%
filter (tempo_inter>=0)
### filtering states ###
estados<-c("SP", "RS", "SC", "PR", "MG", "RJ", "BA", "AM", "MA", "PA", "AL", "PE", "CE", "ES", "PI", "PB","GO", "MS", "MT")
df_covid<- covid %>% filter (sg_uf %in% estados)
#### mean time from first symptoms to hospitalization, by state ###
tabela<- covid %>%
group_by(sg_uf) %>%
summarise(mean=mean(tempo_inter, na.rm=TRUE), sd=sd(tempo_inter, na.rm = TRUE))
tabela2<- covid %>%
group_by(sg_uf, age_clas) %>%
summarise(mean=mean(tempo_inter, na.rm=TRUE), sd=sd(tempo_inter, na.rm = TRUE))
tabela3<- covid %>%
group_by(sg_uf, week) %>%
summarise(mean=mean(tempo_inter, na.rm=TRUE), sd=sd(tempo_inter, na.rm = TRUE))%>%
as.data.frame()
class(tabela3)
df_covid <- covid %>%
filter(sg_uf %in% estados)
ggplot(df_covid, aes(x = factor(sg_uf), y = tempo_inter))+
geom_boxplot(trim=FALSE, fill="gray") +
xlab ("Estados")+
ylab ("dias entre primeiros sintomas e internação")+
#stat_summary(fun.data=mean_sdl, mult=1, geom="pointrange", color="red")+
coord_cartesian(ylim = c(0, 30))+
theme_bw()
ggplot(df_covid, aes(x = factor(age_clas), y = tempo_inter))+
facet_wrap(~sg_uf, ncol=3)+
geom_boxplot(fill="gray") +
xlab ("Faixa etária")+
ylab ("dias entre primeiros sintomas e internação")+
#stat_summary(fun.data=mean_sdl, mult=1, geom="pointrange", color="red")+
coord_cartesian(ylim = c(0, 25))+
theme_bw()
#ggplot(covid, aes(x = tempo_inter))+
# geom_histogram(aes(y=..density..))+
# facet_wrap(~sg_uf, ncol=4)+
# coord_cartesian(ylim = c(0, 0.15))+
# theme_bw()
tabela3<- tabela3 %>% filter (sg_uf%in% estados)
ggplot(tabela3, aes(x= week, y=mean))+
geom_line()+
facet_wrap(~sg_uf, ncol=4)+
xlab ("Semana de primeiro sintoma")+
ylab ("dias entre primeiros sintomas e internação")+
theme_bw()
explorando<- df_covid %>% filter (sg_uf=="MS")%>%
filter (week==18)
#write.csv(tabela, file = paste0("output/dados/", "summary_covid_IFHR","_", last.date,".csv"),
# row.names = FALSE)
#write.csv(tabela2, file = paste0("output/dados/", "summary_srag_IFHR","_", last.date,".csv"),
# row.names = FALSE)
|
2372c9aa89eb86d70844e73dba69774a43ac0c1f | 58ab1de8a6eb3c9eea7e5e4ce65a6ca92cf68487 | /man/pairs.ridge.Rd | 41b1ef879e3fddd581fca6d078cfeb9b1cf8ef26 | [] | no_license | friendly/genridge | 419312c910f63eb1c2d46a09f27d950cf76a11e7 | 6727b4ec074829d34b5f57ebe362386f80708ddb | refs/heads/master | 2023-08-08T17:06:58.576278 | 2023-08-08T15:31:11 | 2023-08-08T15:31:11 | 105,555,707 | 3 | 1 | null | 2023-08-03T18:46:42 | 2017-10-02T16:10:20 | R | UTF-8 | R | false | true | 2,839 | rd | pairs.ridge.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/pairs.ridge.R
\name{pairs.ridge}
\alias{pairs.ridge}
\title{Scatterplot Matrix of Bivariate Ridge Trace Plots}
\usage{
\method{pairs}{ridge}(
x,
variables,
radius = 1,
lwd = 1,
lty = 1,
col = c("black", "red", "darkgreen", "blue", "darkcyan", "magenta", "brown",
"darkgray"),
center.pch = 16,
center.cex = 1.25,
digits = getOption("digits") - 3,
diag.cex = 2,
diag.panel = panel.label,
fill = FALSE,
fill.alpha = 0.3,
...
)
}
\arguments{
\item{x}{A \code{ridge} object, as fit by \code{\link{ridge}}}
\item{variables}{Predictors in the model to be displayed in the plot: an
integer or character vector, giving the indices or names of the variables.}
\item{radius}{Radius of the ellipse-generating circle for the covariance
ellipsoids.}
\item{lwd, lty}{Line width and line type for the covariance ellipsoids.
Recycled as necessary.}
\item{col}{A numeric or character vector giving the colors used to plot the
covariance ellipsoids. Recycled as necessary.}
\item{center.pch}{Plotting character used to show the bivariate ridge
estimates. Recycled as necessary.}
\item{center.cex}{Size of the plotting character for the bivariate ridge
estimates}
\item{digits}{Number of digits to be displayed as the (min, max) values in
the diagonal panels}
\item{diag.cex}{Character size for predictor labels in diagonal panels}
\item{diag.panel}{Function to draw diagonal panels. Not yet implemented:
just uses internal \code{panel.label} to write the variable name and ranges.}
\item{fill}{Logical vector: Should the covariance ellipsoids be filled?
Recycled as necessary.}
\item{fill.alpha}{Numeric vector: alpha transparency value(s) for filled
ellipsoids. Recycled as necessary.}
\item{\dots}{Other arguments passed down}
}
\value{
None. Used for its side effect of plotting.
}
\description{
Displays all possible pairs of bivariate ridge trace plots for a given set
of predictors.
}
\examples{
longley.y <- longley[, "Employed"]
longley.X <- data.matrix(longley[, c(2:6,1)])
lambda <- c(0, 0.005, 0.01, 0.02, 0.04, 0.08)
lridge <- ridge(longley.y, longley.X, lambda=lambda)
pairs(lridge, radius=0.5, diag.cex=1.75)
data(prostate)
py <- prostate[, "lpsa"]
pX <- data.matrix(prostate[, 1:8])
pridge <- ridge(py, pX, df=8:1)
pairs(pridge)
}
\references{
Friendly, M. (2013). The Generalized Ridge Trace Plot:
Visualizing Bias \emph{and} Precision. \emph{Journal of Computational and
Graphical Statistics}, \bold{22}(1), 50-68,
doi:10.1080/10618600.2012.681237,
\url{https://www.datavis.ca/papers/genridge-jcgs.pdf}
}
\seealso{
\code{\link{ridge}} for details on ridge regression as implemented here
\code{\link{plot.ridge}}, \code{\link{traceplot}} for other plotting methods
}
\author{
Michael Friendly
}
\keyword{hplot}
|
07d520f910fd8cb9e6a6e3db8b7c98bd152ef969 | 58567152aadcf24d632e5e5ecbc458a1de83f95f | /01_scripts/01_analisis_encuesta.R | 76a6a830ab099fb4fd22def26bd4d455530bc3f9 | [] | no_license | quishqa/Curso_R_UNALM_ciencias | 9558d3f294ab9b76a2ffe9b99b0f183c1dd7d389 | 2355f53fe4f264ad0a1679318fad8bb7fc113672 | refs/heads/main | 2023-04-28T14:10:32.870204 | 2021-05-15T21:52:10 | 2021-05-15T21:52:10 | 350,546,649 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,030 | r | 01_analisis_encuesta.R | # Un curso introductorio de R
# Analysis of the survey responses
# Reading the csv file
respuestas <- read.table("02_data/respuestas27.csv",
header = T,
sep = ",",
stringsAsFactors = F)
# Renaming the header columns
names(respuestas) <- c("date", "name", "last.name",
"age", "district", "molinero",
"faculty", "year", "program",
"prog.lang", "os", "labs",
"Excel", "R", "why")
# Computing some summary proportions and means
edad_media <- mean(respuestas$age)
molineros <- (prop.table(table(respuestas$molinero))["Sí"] +
prop.table(table(respuestas$molinero))["Sí, y ya me gradué"] ) * 100
district <- prop.table(table(respuestas$district))
faculty <- prop.table(table(respuestas$faculty)) * 100
graduados <- prop.table(table(respuestas$year))["Ya me gradué"] * 100
program <- prop.table(table(respuestas$program)) * 100
os <- prop.table(table(respuestas$os)) * 100
labs <- prop.table(table(respuestas$labs)) * 100
excel <- prop.table(table(respuestas$Excel)) * 100
r <- prop.table(table(respuestas$R)) * 100
r
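The repeated `prop.table(table(...)) * 100` idiom above converts raw answer counts into percentages. A tiny self-contained illustration with made-up answers (hypothetical data, not the real survey responses):

```r
# Hypothetical responses standing in for one survey column.
respuestas_os_demo <- c("Windows", "Windows", "Linux", "Mac")

# table() counts each distinct answer; prop.table() rescales the counts
# to proportions that sum to 1; * 100 turns them into percentages.
pct <- prop.table(table(respuestas_os_demo)) * 100
pct[["Windows"]]  # 50
```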
# Making some figures
library(scales)
fac <- sort(faculty, decreasing = T)
bp <- barplot(fac, col = alpha("red", 0.7),
axes=F, ylim = c(0, 100),
border=NA,
font.lab=2,
cex.names = 0.85, width=c(0.1, 0.1, 0.1, 0.1, 0.1))
mtext('Qué estudias?', side = 3, adj = 0,
line = 1.2, cex = 1.75, font = 2) # Adding the main title
mtext('Frecuencia (%)', side = 3, adj=0, cex = 1.25, font =3) # Add the subtitle
text(x = bp, y = fac + 3.5, labels = fac, cex = 1.25) # Adding % to each bar
RespuestasBarplot <- function(table, title, bar_col, ylim=c(0,100), size = 0.85, sorted=T){
if (sorted){
table_sort <- sort(table, decreasing = T)
} else {
table_sort <- table
}
bp <- barplot(table_sort, col = alpha(bar_col, 0.7),
axes=F, ylim=ylim,
border=NA, font.lab=2,
cex.names=size)
mtext(title, side = 3, adj = 0,
line = 1.2, cex = 1.75, font = 2) # Adding the main title
mtext('Frecuencia (%)', side = 3, adj=0, cex = 1.25, font =3) # Add the subtitle
text(x = bp, y = table_sort + 3.5, labels = table_sort, cex = 1.25) # Adding % to each bar
}
png("04_figs/01_que_estudias.png", res=300, units="in", width=7, height=5)
RespuestasBarplot(faculty, "Qué estudias?", "red",size=0.8)
dev.off()
png("04_figs/02_sabes_programar.png", res = 300, units="in", width=7, height=5 )
RespuestasBarplot(program[c("Sí", "Algo", "Nada")], "Sabes programar?", "blue", sorted = F)
dev.off()
png("04_figs/03_practicas.png", res=300, units="in", width = 7, height = 5)
fac <- sort(labs, decreasing = T)
bp <- barplot(fac, col = alpha("orange", 0.7),
axes=F, ylim = c(0, 100),
border=NA,
font.lab=2,
cex.names = 0.85,
names = c("Excel", "Lenguaje \n Programación",
"Google Sheet", "Otro Soft", "\n \n No hago Prácticas \n me defiendo \n Parcial y Final"))
mtext('Qué usas para hacer tus prácticas?', side = 3, adj = 0,
line = 1.2, cex = 1.75, font = 2) # Adding the main title
mtext('Frecuencia (%)', side = 3, adj=0, cex = 1.25, font =3) # Add the subtitle
text(x = bp, y = fac + 3.5, labels = format(fac, digits=0), cex = 1.25) #
dev.off()
png("04_figs/04_excel.png", res=300, units="in", width = 7, height = 5)
fac <- sort(excel, decreasing = T)
bp <- barplot(fac, col = alpha("forestgreen", 0.7),
axes=F, ylim = c(0, 100),
border=NA,
font.lab=2,
cex.names = 0.85,
names = c("Lo justo", "Como calculadora", "Monstro en computación"))
mtext('Cuál es tu nivel de Excel?', side = 3, adj = 0,
line = 1.2, cex = 1.75, font = 2) # Adding the main title
mtext('Frecuencia (%)', side = 3, adj=0, cex = 1.25, font =3) # Add the subtitle
text(x = bp, y = fac + 3.5, labels = format(fac, digits=0), cex = 1.25) #
dev.off()
# Time of day at which the survey was answered
date_res <- as.POSIXct(strptime(respuestas$date, format = "%m/%d/%Y %H:%M:%S"),
tz = "America/Sao_Paulo")
attributes(date_res)$tzone <- "America/Lima"
date_res_per <- as.POSIXlt(date_res)
hour_res <- as.data.frame(table(date_res_per$hour), stringsAsFactors = F)
names(hour_res) <- c("hour", "freq")
hour_freq <- data.frame(hour=as.character(0:23))
hour <- merge(hour_freq, hour_res, all=T)
hour$hour <- as.numeric(hour$hour)
hour <- hour[order(hour$hour), ]
png("04_figs/05_hora_de_respuesta.png", res=300, units="in", width = 7, height = 5)
plot(hour$hour, hour$freq, ylim = c(0, 8), col = "red", pch=19, cex=1.25,
xlab = "Horas", ylab = "Frecuencia", axes = F,
main= "Hora del día de respuesta de la encuesta")
segments(hour$hour,0, hour$hour, hour$freq, col="red")
axis(2)
axis(1, at=seq(0,23, 2), labels = seq(0,23, 2))
box()
dev.off() |
0445aa97a4b28dfd1c1125cd7beda2ab71ad96f0 | c5649a93aeb636af7c2335da5c43b16aab909e0b | /src/titer-plot.R | 2aafdbb4fcacba497cf39087502589720c3b9d09 | [
"MIT"
] | permissive | hilldr/astrovirus | dc33f1dc872e433c8adc542cd4777c64cb4cd51f | 92626644884f9c7aba0bddef4a7e419bff6eda4d | refs/heads/master | 2020-03-15T01:00:33.263252 | 2019-04-19T18:57:13 | 2019-04-19T18:57:13 | 131,883,292 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,722 | r | titer-plot.R | ## R script for plotting the results of correlation analysis
## between viral titer and RNA-seq gene counts
## David R. Hill
## -----------------------------------------------------------------------------
library(magrittr)
## load data
data <- readr::read_csv(file = "../results/counts_metadata_titer-correlation.csv")
## calculate position for 'r' on facet plots
#data.position <- data %>% dplyr::group_by(SYMBOL) %>%
# dplyr::summarise(position = max(count))
#data <- data %>% dplyr::left_join(data.position, by = 'SYMBOL')
## subset to top 10 positively correlated
data.pos <- data[which(data$SYMBOL %in% unique(data$SYMBOL)[1:12]),]
## calculate position for 'r' on facet plots
data.position <- data.pos %>% dplyr::group_by(SYMBOL) %>%
dplyr::summarise(position = max(count))
data.pos <- data.pos %>% dplyr::left_join(data.position, by = 'SYMBOL')
## top 10 negatively correlated
data.neg <- data[which(data$SYMBOL %in% unique(data$SYMBOL)[(length(unique(data$SYMBOL)) - 11):length(unique(data$SYMBOL))]),]
data.position <- data.neg %>% dplyr::group_by(SYMBOL) %>%
dplyr::summarise(position = min(count))
data.neg <- data.neg %>% dplyr::left_join(data.position, by = 'SYMBOL')
## genes of interest subset
genes <- readr::read_csv(file = "../results/gene_list.csv", col_names = FALSE)
data.genes <- data[which(data$SYMBOL %in% genes$X1),]
data.position <- data.genes %>% dplyr::group_by(SYMBOL) %>%
dplyr::summarise(position = max(count))
data.genes <- data.genes %>% dplyr::left_join(data.position, by = 'SYMBOL')
## Plots -----------------------------------------------------------------------
## Positive correlations
library(ggplot2)
library(scales)
source("ggplot2-themes.R")
plot1 <- ggplot(data = data.pos[data.pos$virus == "VA1",],
aes(x = PFU_well, y = count)) +
geom_smooth(
colour = "grey70",
fill = "grey80",
linetype = "dashed",
size = 1,
method = "lm",
formula = y ~ x,
level = 0.95
) +
geom_point(shape = 21, size = 5, aes(fill = as.factor(hpi))) +
facet_wrap(~SYMBOL, scales = "free_y") +
scale_x_log10(
labels = trans_format("log10", math_format(10^.x))) +
annotation_logticks(sides = "b", size = 1,
short = unit(.75,"mm"),
mid = unit(1,"mm"),
long = unit(2,"mm")) +
xlab("genome copies/well") + ylab("TPM") +
scale_fill_brewer("HPI", palette = "Reds") +
geom_text(
size = 5,
aes(x = 2e3,
y = position,
label = paste0("r = ", round(titer_correlation, digits = 3)))) +
theme(
strip.text = element_text(size = 24),
axis.text = element_text(size = 12),
axis.title = element_text(size = 24),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_rect(fill = "white"),
panel.border = element_rect(fill = NA,
color = "grey70",
size = 1),
plot.title = element_text(size = 45,
face = "bold",
hjust = 0),
)
## print to open PNG device
png(filename = "../img/titer-gene-correlation-plot_POSITIVE.png", width = 1200, height = 900)
print(plot1)
dev.off()
plot2 <- ggplot(data = data.neg[data.neg$virus == "VA1",], aes(x = PFU_well, y = count)) +
geom_smooth(
colour = "grey70",
fill = "grey80",
linetype = "dashed",
size = 1,
method = "lm",
formula = y ~ x,
level = 0.95
) +
geom_point(shape = 21, size = 5, aes(fill = as.factor(hpi))) +
facet_wrap(~SYMBOL, scales = "free_y") +
scale_x_log10(
labels = trans_format("log10", math_format(10^.x))) +
annotation_logticks(sides = "b", size = 1,
short = unit(.75,"mm"),
mid = unit(1,"mm"),
long = unit(2,"mm")) +
xlab("genome copies/well") + ylab("TPM") +
scale_fill_brewer("HPI", palette = "Reds") +
geom_text(
size = 5,
aes(x = 2e3,
y = position,
label = paste0("r = ", round(titer_correlation, digits = 3)))) +
theme(
strip.text = element_text(size = 24),
axis.text = element_text(size = 12),
axis.title = element_text(size = 24),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_rect(fill = "white"),
panel.border = element_rect(fill = NA,
color = "grey70",
size = 1),
plot.title = element_text(size = 45,
face = "bold",
hjust = 0),
)
## print to open PNG device
png(filename = "../img/titer-gene-correlation-plot_NEGATIVE.png", width = 1200, height = 900)
print(plot2)
dev.off()
plot3 <- ggplot(data = data.genes[data.genes$virus == "VA1",],
aes(x = PFU_well, y = count)) +
geom_smooth(
colour = "grey70",
fill = "grey80",
linetype = "dashed",
size = 1,
method = "lm",
formula = y ~ x,
level = 0.95
) +
geom_point(shape = 21, size = 5, aes(fill = as.factor(hpi))) +
facet_wrap(~SYMBOL, scales = "free_y") +
scale_x_log10(
labels = trans_format("log10", math_format(10^.x))) +
annotation_logticks(sides = "b", size = 1,
short = unit(.75,"mm"),
mid = unit(1,"mm"),
long = unit(2,"mm")) +
xlab("genome copies/well") + ylab("TPM") +
scale_fill_brewer("HPI", palette = "Reds") +
geom_text(
size = 5,
aes(x = 2e3,
y = position,
label = paste0("r = ", round(titer_correlation, digits = 3)))) +
theme(
strip.text = element_text(size = 24),
axis.text = element_text(size = 12),
axis.title = element_text(size = 24),
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
panel.background = element_rect(fill = "white"),
panel.border = element_rect(fill = NA,
color = "grey70",
size = 1),
plot.title = element_text(size = 45,
face = "bold",
hjust = 0),
)
## print to open PNG device
png(filename = "../img/titer-gene-correlation-plot_gene_list.png", width = 2400, height = 1800)
print(plot3)
dev.off()
|
e8d4dfff93ab2ad82ad31a531b0890671409b719 | da683d1ea1b7001b1b26c7d2b4a18b21652c9677 | /plot2.R | d5e098b580e0037c1a9c44dc2d6ab2447115fa43 | [] | no_license | sayanbanerjee32/ExData_Plotting1 | 7b78e97de1f650073ee55bbdd2c4c1173ecf9aba | a68551f702a36f3c2dacc8755a42073a054e81ab | refs/heads/master | 2021-01-12T18:58:22.082674 | 2015-06-06T19:18:52 | 2015-06-06T19:18:52 | 36,974,861 | 0 | 0 | null | 2015-06-06T09:35:07 | 2015-06-06T09:35:07 | null | UTF-8 | R | false | false | 856 | r | plot2.R | require(dplyr)
require(lubridate)
#reading data file
hpcDF<- read.table("household_power_consumption.txt",header = TRUE, sep = ";")
#filtering on dates
hpcFilteredDF <- filter(hpcDF, as.Date(Date,"%d/%m/%Y") ==as.Date("1/2/2007","%d/%m/%Y") |
as.Date(Date,"%d/%m/%Y") ==as.Date("2/2/2007","%d/%m/%Y"))
# adding another column as a date-time object
hpcFWithDateDF <- mutate(hpcFilteredDF, datetime = parse_date_time(paste(Date,Time),
"%d/%m/%Y %H:%M:%S"))
#plotting the line
par(mfrow = c(1, 1))
with(hpcFWithDateDF,plot(datetime,
as.numeric(levels(Global_active_power))[Global_active_power],
ylab="Global Active Power (kilowatts)",
xlab="",
type="l")
)
#copying the plot in the file
dev.copy(png,file="plot2.png")
dev.off()
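The `parse_date_time()` call above needs lubridate; base R can build the same datetime column with `strptime()`. A standalone sketch on toy strings in the file's Date/Time formats (hypothetical values, not rows from the real dataset):

```r
# Toy Date and Time strings in the dataset's "%d/%m/%Y" and "%H:%M:%S" formats.
d  <- c("1/2/2007", "2/2/2007")
tm <- c("00:00:00", "23:59:00")

# strptime() parses the pasted strings; tz is pinned so the result is
# reproducible regardless of the machine's locale.
datetime <- as.POSIXct(strptime(paste(d, tm), format = "%d/%m/%Y %H:%M:%S",
                                tz = "UTC"),
                       tz = "UTC")
format(datetime[1], "%Y-%m-%d")  # "2007-02-01"
```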
|
eb15e319dd17e4da17f114d7670952b4d7e7c60d | 8d041410c9eadf7344fb3c9410eb0176b90e2bc7 | /metabolomics_FC/unsupervised_learning_time_points.R | 3f6c552fcb46ab3a8721a5bb29669c342d686f28 | [] | no_license | easonfg/overfeeding | 0e169403730d3252e2b74f250052d5e87672ebb7 | 120f0e1eba798f688619624874d49641e6d01a4c | refs/heads/main | 2023-04-09T08:56:53.332355 | 2021-04-08T16:12:31 | 2021-04-08T16:12:31 | 355,973,680 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 7,414 | r | unsupervised_learning_time_points.R | library(dplyr)
library(tibble)
### read data
overfeed.data = read.csv('Re__Obesity_data_set/overfeeding_metabolomics.csv', row.names = 1,
header = F, stringsAsFactors = FALSE, sep=',', dec='.')
pheno.data = read.csv('cytof/phenotypes.csv',
header = T, stringsAsFactors = FALSE, sep=',', dec='.')
pheno.data
pheno.data$sspg.didff = pheno.data$SSPG2 - pheno.data$SSPG1
pheno.data$wt.diff = pheno.data$Wtwk4 - pheno.data$Base.Wt
pheno.data$lab. = sub('-', '_', pheno.data$lab.)
pheno.data$lab. = sub('/', '_', pheno.data$lab.)
pheno.data = rename(pheno.data, subject.id = 'lab.')
### get rid of last column which contains 1 number
overfeed.data = data.frame(t(overfeed.data[,-ncol(overfeed.data)]))
### change all columns that should be numbers to numeric
# first, exclude the columns that should remain factors
num.col = seq(1,ncol(overfeed.data))[!seq(1, ncol(overfeed.data)) %in% c(1,4,5,6)]
# overfeed.data[1:10,1:10]
# change all remaining to numeric
overfeed.data[,num.col] = sapply(num.col,
function(x) as.numeric(as.character(overfeed.data[,x])))
overfeed.data = rename(overfeed.data, subject.id = 'SUBJECT.ID')
str(overfeed.data )
overfeed.data[1:10,1:10]
is.na(overfeed.data[1:10,1:10])
overfeed.data[1:10,1:10]
## change time point to numbers
overfeed.data$TIME.POINT = rep(c(0,2,4,8), nrow(overfeed.data)/4)
library(lme4)
# get the metabolite columns
metabolite_cols = colnames(overfeed.data)[7:ncol(overfeed.data)]
pipeline = function(timepoint) {
library(dplyr)
rownames(overfeed.data) = paste0(overfeed.data$subject.id, '_', overfeed.data$TIME.POINT)
rownames(overfeed.data)
pca.data = overfeed.data %>%
filter(TIME.POINT == 0 | TIME.POINT == timepoint)
# select(7:ncol(overfeed.data))
row.names(pca.data)
all.base.data = inner_join(pheno.data, pca.data, by = 'subject.id')
all.base.data = pca.data
all.base.data[1:10, 1:20]
# pca.met.data = pca.data %>% select(7:ncol(overfeed.data))
pca.met.data = all.base.data[complete.cases(all.base.data),]
sum(colSums(is.na(pca.met.data)) > 0)
## filter for unique columns
pca.met.data = Filter(function(x)(length(unique(x))>1), pca.met.data)
pca.met.data[1:10, 1:20]
# pca.met.data = data.frame(pca.met.data) %>% remove_rownames() %>% column_to_rownames(var = 'subject.id')
pca.met.data[1:10, 1:20]
str(pca.met.data)
pca.res = prcomp(pca.met.data %>% select(-c(1:6)), scale = T)
library("factoextra")
fviz_eig(pca.res)
# fviz_pca_ind(pca.res, axes = c(3,4),
jpeg(paste('metabolomics_FC/clustering/pca_', timepoint, '.jpeg', sep = ''),
units="in", width=10, height=10, res=500)
pca.plot = fviz_pca_ind(pca.res,
# col.ind = "cos2", # Color by the quality of representation
# col.ind = as.factor(pca.met.data$GENDER), # Color by the quality of representation
# col.ind = (pca.met.data$BMI.x), # Color by the quality of representation
# col.ind = (pca.met.data$GROUP.NAME), # Color by the quality of representation
col.ind = as.factor(pca.met.data$TIME.POINT), # Color by the quality of representation
# col.ind = (pca.met.data$Ethnicity), # Color by the quality of representation
# gradient.cols = c("#00AFBB", "#E7B800", "#FC4E07"),
# gradient.cols = c('red', 'white', 'blue'),
repel = TRUE # Avoid text overlapping
)
print(pca.plot)
dev.off()
library(RColorBrewer)
heat_colors <- rev(brewer.pal(7, "RdYlBu"))
### Run pheatmap using the metadata data frame for the annotation
library(pheatmap)
jpeg(paste('metabolomics_FC/clustering/heatmap_', timepoint, '.jpeg', sep = ''),
units="in", width=10, height=10, res=500)
pheatmap(t(scale(pca.met.data[,20:ncol(pca.met.data)])),
annotation_col = data.frame(row.names = rownames(pca.met.data),
# groups = pca.met.data$GROUP.NAME,
# Gender = pca.met.data$Gender,
# Ethnicity = pca.met.data$Ethnicity),
TIME.POINT = as.factor(pca.met.data$TIME.POINT)),
color = heat_colors,
cluster_rows = T,
show_rownames = F,
border_color = NA,
fontsize = 10,
scale = "row",
fontsize_row = 10,
height = 20)
dev.off()
}
pipeline(2)
pipeline(4)
pipeline(8)
library(dplyr)
rownames(overfeed.data) = paste0(overfeed.data$subject.id, '_', overfeed.data$TIME.POINT)
rownames(overfeed.data)
pca.data = overfeed.data
# select(7:ncol(overfeed.data))
row.names(pca.data)
all.base.data = inner_join(pheno.data, pca.data, by = 'subject.id')
all.base.data = pca.data
all.base.data[1:10, 1:20]
# pca.met.data = pca.data %>% select(7:ncol(overfeed.data))
pca.met.data = all.base.data[complete.cases(all.base.data),]
sum(colSums(is.na(pca.met.data)) > 0)
## filter for unique columns
pca.met.data = Filter(function(x)(length(unique(x))>1), pca.met.data)
pca.met.data[1:10, 1:20]
# pca.met.data = data.frame(pca.met.data) %>% remove_rownames() %>% column_to_rownames(var = 'subject.id')
pca.met.data[1:10, 1:20]
str(pca.met.data)
pca.res = prcomp(pca.met.data %>% select(-c(1:6)), scale = T)
library("factoextra")
fviz_eig(pca.res)
# fviz_pca_ind(pca.res, axes = c(3,4),
jpeg(paste('metabolomics_FC/clustering/pca_', 'all.timepoints', '.jpeg', sep = ''),
units="in", width=10, height=10, res=500)
pca.plot = fviz_pca_ind(pca.res,
# col.ind = "cos2", # Color by the quality of representation
# col.ind = as.factor(pca.met.data$GENDER), # Color by the quality of representation
# col.ind = (pca.met.data$BMI.x), # Color by the quality of representation
# col.ind = (pca.met.data$GROUP.NAME), # Color by the quality of representation
col.ind = as.factor(pca.met.data$TIME.POINT), # Color by the quality of representation
# col.ind = (pca.met.data$Ethnicity), # Color by the quality of representation
# gradient.cols = c("#00AFBB", "#E7B800", "#FC4E07"),
# gradient.cols = c('red', 'white', 'blue'),
repel = TRUE # Avoid text overlapping
)
print(pca.plot)
dev.off()
library(RColorBrewer)
heat_colors <- rev(brewer.pal(7, "RdYlBu"))
### Run pheatmap using the metadata data frame for the annotation
library(pheatmap)
jpeg(paste('metabolomics_FC/clustering/heatmap_', 'all.timepoints', '.jpeg', sep = ''),
units="in", width=10, height=10, res=500)
pheatmap(t(scale(pca.met.data[,20:ncol(pca.met.data)])),
annotation_col = data.frame(row.names = rownames(pca.met.data),
# groups = pca.met.data$GROUP.NAME,
# Gender = pca.met.data$Gender,
# Ethnicity = pca.met.data$Ethnicity),
TIME.POINT = as.factor(pca.met.data$TIME.POINT)),
color = heat_colors,
cluster_rows = T,
show_rownames = F,
border_color = NA,
fontsize = 10,
scale = "row",
fontsize_row = 10,
height = 20)
dev.off()
|
bc5ee948fea5f11e9d8219378fac2ef4a6946928 | 08e9461d78579eaf46fac27e7bda3c7419313b22 | /workflow/scripts/01download.R | 18022361a682c7940e9e7e04e9e3915ad4a6607a | [] | no_license | CristianRiccio/celegans-pathogen-adaptation | 509bd196397d8611c7e6d382fd470165b5237501 | 73cfed6ecd79f03511ccbbe3a2923d68839d163a | refs/heads/master | 2023-05-31T15:58:26.239064 | 2023-05-20T21:33:40 | 2023-05-20T21:33:40 | 192,746,214 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,126 | r | 01download.R | ### Download data from iRODS, rename it and transform it into fq.gz
### PacBio
## The lines below are shell commands (snakemake + iRODS icommands) kept as
## notes; run them in a terminal, not in R.
# snakemake --jobs 4 output/reads/dna/pacbio/{BIGb248,BIGb306,BIGb446,BIGb468}/a.subreads.bam --cluster-config cluster.json --cluster "bsub -n {cluster.nCPUs} -R {cluster.resources} -c {cluster.tCPU} -G {cluster.Group} -q {cluster.queue} -o {cluster.output} -e {cluster.error} -M {cluster.memory}"
# 5569STDY7708519 BIGb248
# imeta qu -z seq -d sample = '5569STDY7708519'
# imeta ls -d /seq/pacbio/r54316_20190125_142157/2_B01/lima_output.bc1008_BAK8A--bc1008_BAK8A.bam
# library name: DN546911R-A1
# 5569STDY7708520 BIGb306
# imeta qu -z seq -d sample = '5569STDY7708520'
# imeta ls -d /seq/pacbio/r54316_20190125_142157/2_B01/lima_output.bc1009_BAK8A--bc1009_BAK8A.bam
# DN546911R-B1
# 5569STDY7708521 BIGb446
# imeta qu -z seq -d sample = '5569STDY7708521'
# imeta ls -d /seq/pacbio/r54316_20190125_142157/2_B01/lima_output.bc1010_BAK8A--bc1010_BAK8A.bam
# DN546911R-C1
# 5569STDY7708522 BIGb468
# imeta qu -z seq -d sample = '5569STDY7708522'
# imeta ls -d /seq/pacbio/r54316_20190125_142157/2_B01/lima_output.bc1012_BAK8A--bc1012_BAK8A.bam
# DN546911R-D1 |
d758636d84a7f98316a72246d3b80464ccf0191c | 816e64bd2c1c3ed1eb9c28b3cc73da43bfb0b57c | /cost_summary.R | fa160f7de2f7d808927da2bed5f8c812873b3c95 | [] | no_license | imeikle/storm-data | 320f1fd4957eb963493a4c873d9db4abf53ccaed | 86aae709f3a5eac0615d84bf382b0ed9cf2989c8 | refs/heads/master | 2016-09-11T01:37:16.656479 | 2014-12-20T12:16:16 | 2014-12-20T12:16:16 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 6,592 | r | cost_summary.R | storm <- read.csv("repdata-data-StormData.csv.bz2", stringsAsFactors = FALSE)
library(dplyr)
# Remove all those events which did not incur costs
storm_cost_significant <- filter(storm, (PROPDMG != 0 & CROPDMG != 0))
# Replace duplicates caused by case variance
storm_cost_significant$PROPDMGEXP <- sapply(storm_cost_significant$PROPDMGEXP,toupper)
storm_cost_significant$CROPDMGEXP <- sapply(storm_cost_significant$CROPDMGEXP,toupper)
# Convert to factor so that the is.na function works later
#storm_cost_significant$PROPDMGEXP <- factor(storm_cost_significant$PROPDMGEXP)
#storm_cost_significant$CROPDMGEXP <- factor(storm_cost_significant$CROPDMGEXP)
# Method to expand the exponents. Does it match "3" and "5"?
exp <- function (e) if (is.null(e) || is.na(e)) 1 else if (e == 'K') 1000 else if (e == 'M') exp('K') * 1000 else if (e == 'B') exp('M') * 1000 else 1
# Replace with my own version
# NB: the letter codes must be tested *before* calling as.integer(),
# because as.integer("K") coerces to NA and would otherwise fall
# through to the default multiplier of 1.
expand <- function(x) {
  if (x == "K") {
    return (1000)
  } else if (x == "M") {
    return (10^6)
  } else if (x == "B") {
    return (10^9)
  } else {
    n <- suppressWarnings(as.integer(x))
    if (is.na(n)) {
      return (1)
    } else {
      return (10^n)
    }
  }
}
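As a quick sanity check of the exponent expansion, here is a compact standalone restatement (named `expand_demo` so it does not clobber the version above). The letter cases are tested before `as.integer()`, because `as.integer("K")` coerces to `NA`:

```r
expand_demo <- function(x) {
  # "K"/"M"/"B" per the NWS convention; bare digits are powers of ten;
  # anything unrecognised multiplies the damage value by 1.
  if (x %in% c("K", "M", "B")) {
    return(c(K = 1e3, M = 1e6, B = 1e9)[[x]])
  }
  n <- suppressWarnings(as.integer(x))
  if (is.na(n)) 1 else 10^n
}

stopifnot(expand_demo("K") == 1e3,
          expand_demo("B") == 1e9,
          expand_demo("5") == 1e5,
          expand_demo("")  == 1)
```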
# exp2 <- function(x) {
# if (is.na(as.integer(x))) {
# return (1)
# } else {
# if (x == "K") {
# return (1000)
# } else {
# if (x == "M") {
# return (10^6)
# } else {
# if (x == "B") {
# return (10^9)
# } else {
# print(paste(x,":",as.integer(x),":",10^(as.integer(x))))
# return(10^(as.integer(x)))
# }
# }
# }
# }
# }
tpd <- with(storm_cost_significant, PROPDMG * mapply(expand, PROPDMGEXP))
tcd <- with(storm_cost_significant, CROPDMG * mapply(expand, CROPDMGEXP))
# then combine with data frame as below
storm_cost_sum <- cbind(storm_cost_significant[c(1,2,7,8,25:28)],
PropertyCost = tpd, CropCost = tcd)
# Either:
# vexp <- Vectorize(exp)
#
# total_crop_dmg <- storm_cost_significant$CROPDMG * vexp(storm_cost_significant$CROPDMGEXP)
# total_property_dmg <- storm_cost_significant$PROPDMG * vexp(storm_cost_significant$PROPDMGEXP)
#
# # Or:
# total_crop_dmg <- with(storm_cost_significant, CROPDMG * mapply(exp,CROPDMGEXP))
# total_property_dmg <- with(storm_cost_significant, PROPDMG * mapply(exp,PROPDMGEXP))
#
#
# storm_cost_sum <- cbind(storm_cost_significant[c(1,2,7,8,25:28)],
# PropertyCost = total_property_dmg, CropCost = total_crop_dmg)
property_cost <- storm_cost_sum %>%
group_by(EVTYPE) %>%
summarise(Prop_Cost = sum(PropertyCost)) %>%
arrange(desc(Prop_Cost))
crop_cost <- storm_cost_sum %>%
group_by(EVTYPE) %>%
summarise(Crop_Cost = sum(CropCost)) %>%
arrange(desc(Crop_Cost))
Costs <- merge(property_cost, crop_cost)
Costs <- Costs %>%
mutate(Total_Cost = Prop_Cost + Crop_Cost) %>%
arrange(desc(Total_Cost))
# ALTERNATIVELY:
# Order the exponents as factors
storm_cost_significant$PROPDMGEXP <- factor(storm_cost_significant$PROPDMGEXP,
levels = c("", "0", "3", "5", "K", "M", "B"), ordered = TRUE)
storm_cost_significant$CROPDMGEXP <- factor(storm_cost_significant$CROPDMGEXP,
levels = c("", "0", "K", "M", "B"), ordered = TRUE)
# test <- storm_cost_significant %>%
# group_by(EVTYPE) %>%
# arrange(PROPDMGEXP)
# Group together the exponents and sum over them
# Problems arise because some entries add up to more than the exponent value represents
property <- storm_cost_significant %>%
group_by(PROPDMGEXP, EVTYPE) %>%
summarise(cost = sum(PROPDMG)) %>%
arrange(PROPDMGEXP, cost, EVTYPE)
# Reverse to give a descending value of exponent
property[rev(seq(1:nrow(property))),]
crops <- storm_cost_significant %>%
group_by(CROPDMGEXP, EVTYPE) %>%
summarise(cost = sum(CROPDMG)) %>%
arrange(CROPDMGEXP, cost, EVTYPE)
# Reverse to give a descending value of exponent
crops[rev(seq(1:nrow(crops))),]
# This only captures events where there are both F & I
storm_dead <- filter(storm, FATALITIES != 0 & INJURIES != 0)
states_dead <- storm_dead %>%
select(STATE, EVTYPE, FATALITIES, INJURIES) %>%
group_by(STATE) %>%
filter(FATALITIES == max(FATALITIES)) %>%
arrange(STATE)
storm_dead1 <- filter(storm, FATALITIES != 0 & INJURIES != 0) %>%
mutate(CASUALTIES = FATALITIES + INJURIES) %>%
select(STATE, EVTYPE, CASUALTIES)
states_cas <- storm_dead1 %>%
select(STATE, EVTYPE, CASUALTIES) %>%
group_by(STATE) %>%
filter(CASUALTIES == max(CASUALTIES)) %>%
arrange(STATE)
test <- test %>%
select(STATE, EVTYPE, FATALITIES, INJURIES) %>%
group_by(STATE) %>%
filter(FATALITIES == max(FATALITIES)) %>%
arrange(STATE)
storm_ei_state <- storm_ei %>%
mutate(Total_Cost = PropertyCost + CropCost) %>%
select(STATE, EVTYPE, Total_Cost) %>%
group_by(STATE) %>%
filter(Total_Cost == max(Total_Cost)) %>%
arrange(STATE)
#ggplot(test, aes(factor(STATE), weight = FATALITIES, fill = factor(EVTYPE))) +
# geom_bar(position="dodge")
ggplot(test, aes(STATE, FATALITIES, fill = EVTYPE)) +
geom_bar(position="dodge", stat = "identity") + theme(axis.text.x = element_text(angle = 270))
states_gg <- ggplot(test, aes(STATE, FATALITIES, fill = EVTYPE)) +
geom_bar(position="dodge", stat = "identity") +
theme(axis.text.x = element_text(angle = 270)) +
guides(fill = guide_legend(keyheight = 0.8, keywidth = 0.5))
states_gg
ei_states_gg <- ggplot(storm_ei_state, aes(STATE, Total_Cost, fill = EVTYPE)) +
geom_bar(position="dodge", stat = "identity") +
theme(axis.text.x = element_text(angle = 270)) + scale_y_sqrt() +
guides(fill = guide_legend(keyheight = 0.8, keywidth = 0.5))
ei_states_gg
|
87c0862fc8f7c4443ec0fa0f6e815ff35a7e391c | 4fefba17f572330a88a693b8bd7b6a4d0f0c2c75 | /best.R | 7c43b741f1368e797028bafebf70714cd2f174b6 | [] | no_license | TerraSetzler/best | f6fe20e1e7d57bd856ea924b5cb58d01c14c5e46 | 12a05702b4234f4f3c87971f1deb3d91b65748d7 | refs/heads/master | 2021-01-10T15:12:19.053719 | 2015-10-31T17:41:29 | 2015-10-31T17:41:29 | 45,266,045 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,194 | r | best.R | best <- function(state, outcome) {
data2 <- read.csv("outcome-of-care-measures.csv", na.strings = "Not Available")
data1 <- data2[which(data2$State == state),]
    if(!(outcome %in% c("heart attack", "heart failure", "pneumonia"))) {
stop("invalid outcome")
}
if(outcome == "heart attack") {
z <- min(as.numeric(paste(data1$Hospital.30.Day.Death..Mortality..Rates.from.Heart.Attack)), na.rm = TRUE)
data4 <- data1[which(as.numeric(data1$Hospital.30.Day.Death..Mortality..Rates.from.Heart.Attack) == z),]
return(as.character(data4$Hospital.Name))
}
if(outcome == "heart failure") {
z <- min(as.numeric(paste(data1$Hospital.30.Day.Death..Mortality..Rates.from.Heart.Failure)), na.rm = TRUE)
data4 <- data1[which(as.numeric(data1$Hospital.30.Day.Death..Mortality..Rates.from.Heart.Failure) == z),]
return(as.character(data4$Hospital.Name))
}
if(outcome == "pneumonia") {
z <- min(as.numeric(paste(data1$Hospital.30.Day.Death..Mortality..Rates.from.Pneumonia)), na.rm = TRUE)
data4 <- data1[which(as.numeric(data1$Hospital.30.Day.Death..Mortality..Rates.from.Pneumonia) == z),]
return(as.character(data4$Hospital.Name))
}
} |
9cac9c7ca636eb9c697ecfe8c673a1f8468f76cf | 08705790131f97a4c6fc4c7f8824fc91b67eb5ed | /PackageTryV3/R/SplitGroup.R | 265b1f20a30ebb39ce6639a90e8c314591a1227b | [] | no_license | Durenlab/PackageTry | e6d76b85ed083dfd2ee22d4ff67f717c0579dcfe | 843f5c1d4f8fa4eaabcc86b13d7cd27afbd834c6 | refs/heads/main | 2023-06-13T07:43:23.598894 | 2021-07-07T18:36:47 | 2021-07-07T18:36:47 | 382,151,249 | 0 | 1 | null | 2021-07-06T16:40:15 | 2021-07-01T20:39:51 | C++ | UTF-8 | R | false | false | 123 | r | SplitGroup.R | "SplitGroup" <- function(foldername,barcord,W3,H,Reg_symbol_name,Reg_peak_name,cluster){
UseMethod("SplitGroup");
}
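`UseMethod("SplitGroup")` dispatches on the class of the first argument, so a `SplitGroup.<class>` (or `SplitGroup.default`) method must exist somewhere in the package for calls to succeed. A minimal standalone illustration of that S3 dispatch pattern; `SplitGroupDemo` and its method body are hypothetical, not part of this package:

```r
SplitGroupDemo <- function(foldername, barcord, W3, H,
                           Reg_symbol_name, Reg_peak_name, cluster) {
  UseMethod("SplitGroupDemo")
}

# Fallback method: here it just groups element indices by cluster label,
# standing in for whatever the real method would write under `foldername`.
SplitGroupDemo.default <- function(foldername, barcord, W3, H,
                                   Reg_symbol_name, Reg_peak_name, cluster) {
  split(seq_along(cluster), cluster)
}

groups <- SplitGroupDemo(NULL, NULL, NULL, NULL, NULL, NULL,
                         cluster = c(1, 1, 2, 2, 2))
length(groups)  # 2
```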
|
da2e07f926ec2f717765092217780f8db5ae019e | c283c9dac2440ad037a106b69fb659c54c5e9f5c | /code/13.R | 6a571e81f93fae830759b7046502c190f230920e | [] | no_license | ykx-ykx/work2 | f46ca88dfafb0568fb0f16343439450306dc0a77 | 9ff4d41dfcd19d96875ed6d451782fbeb0655b69 | refs/heads/main | 2023-04-01T10:50:09.475033 | 2021-04-04T02:49:29 | 2021-04-04T02:49:29 | 354,437,493 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 8,562 | r | 13.R | library(ggplot2)
library(NbClust)
library(reshape2)
library(ggsignif)
library(dplyr)
# miRNA target genes
miRNA_target<-read.delim("G:/colon_cancer/c3.mir.mirdb.v7.2.symbols.gmt",header = FALSE)
target<-miRNA_target[c(274,524,1104,760),]
mir_29a<-na.omit(unique(t(cbind(target[1,2:1071],target[2,2:1071]))))
mir_29a<-as.data.frame(mir_29a[-435,])
mir_891a<-as.data.frame(t(target[3,2:128]))
mir_548v<-as.data.frame(t(target[4,2:138]))
colnames(mir_29a)<-"mir_29a"
# Read the gene classification
Group<-read.delim("G:/Newcon/cancer_gene_census.txt")
# Read the immune genes
inna<-read.delim("G:/Newcon/innate.txt")
inna<-toupper(t(inna))
mir_29a1<-merge(mir_29a,Group,by.x = "mir_29a",by.y="Gene.Symbol")
mir_891a1<-merge(mir_891a,Group,by.x = "1104",by.y="Gene.Symbol")
mir_548v1<-merge(mir_548v,Group,by.x = "760",by.y="Gene.Symbol")
mir_29ai<-intersect(t(mir_29a),inna)
mir_891ai<-intersect(t(mir_891a),inna)
mir_548vi<-intersect(t(mir_548v),inna)
mir_29a1[56:98,1]<-mir_29ai
mir_29a1[56:98,2]<-"Immune Gene"
mir_548v1[7:11,1]<-mir_548vi
mir_548v1[7:11,2]<-"Immune Gene"
mir_891a1[11:17,1]<-mir_891ai
mir_891a1[11:17,2]<-"Immune Gene"
mir_29a1[,1]<-"mir-29a"
mir_548v1[,1]<-"mir-548v"
mir_891a1[,1]<-"mir-891a"
colnames(mir_29a1)<-c("Mir","Group")
colnames(mir_548v1)<-c("Mir","Group")
colnames(mir_891a1)<-c("Mir","Group")
res<-rbind(mir_29a1,mir_548v1,mir_891a1)
res[which(res$Group=="Immune Gene"),2]<-1
res[which(res$Group=="oncogene"),2]<-2
res[which(res$Group=="TSG"),2]<-3
res[which(res$Group=="fusion"),2]<-4
res$Mir<- factor(res$Mir, levels=c("mir-29a", "mir-891a", "mir-548v"), ordered=TRUE)
percent<-c("0%","25%","50%","75%","100%")
ggplot(data = res, mapping = aes(x = Mir, fill = Group)) +
geom_bar(position = 'fill',width = 0.3)+
scale_fill_manual(values = c("#85C1E9","#F1948A","#7DCEA0","#DCDCDC"),
breaks=c("1", "2", "3","4"),
labels=c("Immune Gene", "Oncogene", "TSG","Other Gene"))+
scale_y_continuous(expand = c(0,0),labels=percent)+
theme(panel.background = element_rect(fill = NA),
panel.border = element_blank(),
axis.line = element_line(size = 0.8),
axis.title = element_text(colour = "black",size = 14),
axis.text = element_text(colour = "black",size = 14),
plot.title = element_text(hjust = 0.5),
legend.text = element_text(size = rel(1.2)),
panel.grid.major=element_blank(),panel.grid.minor=element_blank())+
theme(legend.position = "bottom",legend.direction = "horizontal")+
labs(x="",y="Frequency of genes")+
guides(fill=guide_legend(title=NULL))+
scale_x_discrete(breaks=c("mir-29a", "mir-891a", "mir-548v"),
labels=c("miR-29a", "miR-891a", "miR-548v"))+
annotate(geom = "text",x = 1, y = 0.8,label="44%",size=6)+
annotate(geom = "text",x = 1, y = 0.51,label="10%",size=6)+
annotate(geom = "text",x = 1, y = 0.39,label="15%",size=6)+
annotate(geom = "text",x = 1, y = 0.16,label="31%",size=6)+
annotate(geom = "text",x = 2, y = 0.8,label="41%",size=6)+
annotate(geom = "text",x = 2, y = 0.53,label="12%",size=6)+
annotate(geom = "text",x = 2, y = 0.25,label="47%",size=6)+
annotate(geom = "text",x = 3, y = 0.8,label="45%",size=6)+
annotate(geom = "text",x = 3, y = 0.5,label="10%",size=6)+
annotate(geom = "text",x = 3, y = 0.37,label="18%",size=6)+
annotate(geom = "text",x = 3, y = 0.15,label="27%",size=6)
ggsave("G:/图片/14.tiff", dpi=600)
###########
setwd("G:/colon")
# Process the data: for dead samples, use day_to_death as overall survival;
# for alive samples, use day_to_last_follow_up.
# Finally, keep only the sample and survival-time columns.
phen<-read.delim("TCGA-COAD.GDC_phenotype.tsv")
phen<-phen[,c(1,93,39,75)]
# Remove unusable records
phen<-phen[which(phen$pathologic_M!=""),]
phen<-phen[which(phen$pathologic_M!="MX"),]
# Keep only stage III and IV data
d1<-phen[which(phen$tumor_stage.diagnoses=="stage iv"),]
d2<-phen[which(phen$tumor_stage.diagnoses=="stage iva"),]
d3<-phen[which(phen$tumor_stage.diagnoses=="stage ivb"),]
d4<-phen[which(phen$tumor_stage.diagnoses=="stage iii"),]
d5<-phen[which(phen$tumor_stage.diagnoses=="stage iiia"),]
d6<-phen[which(phen$tumor_stage.diagnoses=="stage iiib"),]
d7<-phen[which(phen$tumor_stage.diagnoses=="stage iiic"),]
phen<-rbind(d1,d2,d3,d4,d5,d6,d7)
phen["Group"]<-NA
phen[which(phen$tumor_stage.diagnoses=="stage iv"),5]<-"Distance Metastases"
phen[which(phen$tumor_stage.diagnoses=="stage iva"),5]<-"Distance Metastases"
phen[which(phen$tumor_stage.diagnoses=="stage ivb"),5]<-"Distance Metastases"
phen[which(phen$tumor_stage.diagnoses=="stage iii"),5]<-"Lymph Node Metastasis"
phen[which(phen$tumor_stage.diagnoses=="stage iiia"),5]<-"Lymph Node Metastasis"
phen[which(phen$tumor_stage.diagnoses=="stage iiib"),5]<-"Lymph Node Metastasis"
phen[which(phen$tumor_stage.diagnoses=="stage iiic"),5]<-"Lymph Node Metastasis"
phen<-na.omit(phen)
phen$submitter_id.samples<-gsub(pattern="-", replacement=".", phen$submitter_id.samples)
# Keep only the 3 significant miRNAs
miRNA<-read.delim("batch_mir.txt",header = T)
miRNA<-miRNA[c(567,209,756),]
miRNA<-as.data.frame(t(miRNA))
colnames(miRNA)<-miRNA[1,]
miRNA<-miRNA[-1,]
miRNA['Name']<-rownames(miRNA)
rownames(miRNA)<-1:428
miR_phen<-merge(miRNA,phen,by.x = "Name",by.y = "submitter_id.samples")
miR_phen<-miR_phen[,-c(5,6)]
miR_phen1<-as.data.frame(apply(miR_phen[,2:4], 2, as.numeric))
miR_phen[,2:4]<-miR_phen1
# Hierarchical clustering
df<-scale(miR_phen[,2:4])
dist<-dist(df,method = "euclidean")
hc<-hclust(dist,method = "ward.D2")
nc<-NbClust(df,distance = "euclidean",min.nc = 2,max.nc = 15,method = "average")
par(mfcol=c(1,1))
clusters<-cutree(hc,k=2)
mir<-miR_phen
mir["cul"]<-clusters
colnames(mir)<-c("Name","mir-548v","mir-29a","mir-891a","Status","Transfer","Group")
sub1<-mir[which(mir$Group==1),]
sub2<-mir[which(mir$Group==2),]
mir$Group<-as.factor(mir$Group)
levels(mir$Group)<-c("S1","S2")
mir<-mir[order(mir$Transfer,decreasing=T),]
mir$Transfer<-factor(mir$Transfer,levels =c("Lymph Node Metastasis","Distance Metastases"))
#############################
mir1<-mir[,c(1,7)]
score<-read.table("G:/colorectal_imm_sco.txt",header = T)
score$ID<-gsub(pattern="-", replacement=".", score$ID)
mir1$Name<-gsub('.{1}$', '', mir1$Name)
mir2<-merge(mir1,score,by.x = "Name",by.y = "ID")
mir2<-mir2[,-1]
sub1<-subset(mir2,Group%in%"S1")
sub2<-subset(mir2,Group%in%"S2")
p<-data.frame(Name="P",Stromal_score=NA,Immune_score =NA,ESTIMATE_score=NA)
for (i in 2:4) {
p[1,i]<-wilcox.test(sub1[,i],sub2[,i])$p.value
}
c<-matrix(data=NA,ncol=2,nrow=3)
for(i in 2:4){
m1<-median(sub1[,i])
m2<-median(sub2[,i])
c[i-1,1]<-colnames(sub1)[i]
c[i-1,2]<-log2(m1/m2)
}
#log-fold-change
c<-as.data.frame(c)
colnames(c)<-c("imma","m1/m2")
mir2<-melt(mir2)
mir2$Group<-as.character(mir2$Group)
compaired<-list("Stromal_score")
ggplot(mir2) +
aes(x = variable, y = value, fill = Group) +
geom_boxplot(outlier.colour = NA,width=0.5)+
scale_fill_manual(values = c("#2ECC71","#F1C40F"))+
geom_jitter(aes(fill=Group),width =0.2,size=0.5)+
geom_signif(y_position=c(1600), xmin=c(1.87), xmax=c(2.125), color="red" ,
annotation=c("LFC= -2.74"), tip_length=c(0.04,0.15), size=0.5,
textsize = 3.8, vjust = -0.3) +
geom_signif(y_position=c(1250), xmin=c(0.87), xmax=c(1.125), color="red" ,
annotation=c("LFC= -0.310"), tip_length=c(0.05,0.02), size=0.5,
textsize = 3.8, vjust = -0.3) +
geom_signif(y_position=c(2490), xmin=c(2.87), xmax=c(3.125), color="red" ,
annotation=c("LFC= -0.72"), tip_length=c(0.1,0.03), size=0.5,
textsize = 3.8, vjust = -0.3) +
theme_bw()+
theme(plot.title = element_text(hjust = 0.5,face="bold",size =14))+
labs(x=NULL,y="value")+
theme(axis.title = element_text(colour = "black",size = 14),
axis.text = element_text(colour = "black",size = 14),
panel.grid.major=element_blank(),panel.grid.minor=element_blank())+
theme(legend.position = c(0.02, 0.98),
legend.justification = c(0, 1),
legend.direction = "vertical",
#legend.key.width=unit(.6,"inches"),
legend.background=element_rect(colour="#566573",size=0.4),
legend.key.height=unit(.2,"inches"),
legend.text=element_text(colour="black",size=13),
legend.title=element_blank())
ggsave("G:/图片/19.tiff", dpi=600)

% ---- man/saga_rastermasking.Rd (repo: VB6Hobbyst7/r_package_qgis, license: MIT) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/saga_rastermasking.R
\name{saga_rastermasking}
\alias{saga_rastermasking}
\title{QGIS algorithm Raster masking}
\usage{
saga_rastermasking(
GRID = qgisprocess::qgis_default_value(),
MASK = qgisprocess::qgis_default_value(),
MASKED = qgisprocess::qgis_default_value(),
...,
.complete_output = TRUE
)
}
\arguments{
\item{GRID}{\code{raster} - Grid. Path to a raster layer.}
\item{MASK}{\code{raster} - Mask. Path to a raster layer.}
\item{MASKED}{\code{rasterDestination} - Masked Grid. Path for new raster layer.}
\item{...}{further parameters passed to \code{qgisprocess::qgis_run_algorithm()}}
\item{.complete_output}{logical specifing if complete out of \code{qgisprocess::qgis_run_algorithm()} should be used (\code{TRUE}) or first output (most likely the main) should read (\code{FALSE}). Default value is \code{TRUE}.}
}
\description{
QGIS Algorithm provided by SAGA Raster masking (saga:rastermasking)
}
\details{
\subsection{Outputs description}{
\itemize{
\item MASKED - outputRaster - Masked Grid
}
}
}
|
# ---- R/cause_params_sims.R (repo: sangyoonstat/causeSims) ----
#'@title Estimate CAUSE parameters for simulated data
#'@param dat A simulated data frame created with sum_stats
#'@param null_wt Null weight in dirichlet prior on mixing parameters
#'@param no_ld Run with the nold data (T/F)
#'@return
#'@export
cause_params_sims <- function(dat, null_wt = 10, no_ld=FALSE, max_candidates=Inf){
if(no_ld) dat <- process_dat_nold(dat)
X <- dat %>%
select(snp, beta_hat_1, seb1, beta_hat_2, seb2) %>%
new_cause_data(.)
params <- est_cause_params(X, X$snp, null_wt = null_wt, max_candidates = max_candidates)
return(params)
}
|
# ---- FishCreekAlevinParentage2014.R (repo: krshedd/SEAK-Chum-Parentage) ----
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### 2014 Fish Creek Parentage Analysis ####
# Kyle Shedd Mon Jul 11 11:14:11 2016
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
date()
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### Introduction ####
# The goal of this script is to perform parentage analysis on chum salmon
# adults (2013) and alevin (2014). Parentage analysis will be performed with
# different sets of SNPs to select a final marker set.
# 1) Gating measures (HWE and LD)
# 2) Ranking measures (MAF)
# 3) Parentage analysis (subsets of SNPs)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### Specific Objectives for Parentage Analysis ####
# This script will:
# 1) Import adult/alevin data
# 2) Define potential parents and offspring
# 3) Perform a data QC on mixtures
# 4) Prepare FRANz input files
# 5) Summarize FRANz results
# 6) Run fullsnplings for alevin
# 7) Generate plots and tables of results
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### Initial Setup ####
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rm(list = ls(all = TRUE))
options(java.parameters = "-Xmx100g")
setwd("V:/Analysis/5_Coastwide/Multispecies/Alaska Hatchery Research Program/SEAK Chum")
source("H:/R Source Scripts/Functions.GCL_KS.R")
source("C:/Users/krshedd/Documents/R/Functions.GCL.R")
username <- "krshedd"
password <- "********"
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### Get Data from LOKI ####
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
## Get collection SILLYs
SEAKChumSillys <- c("CMFISHCR14a", "CMFISHCR13", "CMFISHCRT13", "CMADMCR13", "CMSAWCR13", "CMPROSCR13")
dput(x = SEAKChumSillys, file = "Objects/SEAKChumSillys.txt")
## Create Locus Control
CreateLocusControl.GCL(markersuite = "ChumParentage2015_284SNPs", username = username, password = password)
## Save original LocusControl
loci284 <- LocusControl$locusnames
mito.loci <- which(LocusControl$ploidy == 1)
dir.create("Objects")
dput(x = LocusControl, file = "Objects/OriginalLocusControl.txt")
dput(x = loci284, file = "Objects/loci284.txt")
dput(x = mito.loci, file = "Objects/mito.loci.txt")
#~~~~~~~~~~~~~~~~~~
## Pull all data for each silly code and create .gcl objects for each
# sillyvec = SEAKChumSillys; username = username; password = password
LOKI2R.GCL(sillyvec = SEAKChumSillys, username = username, password = password) # Had to bust open the function and run line by line with `options(java.parameters = "-Xmx100g")` as opposed to `10g`, otherwise hit GC overhead and run out of heap space
rm(username, password)
objects(pattern = "\\.gcl")
## Save unaltered .gcl's as back-up:
dir.create("Raw genotypes")
dir.create("Raw genotypes/OriginalCollections")
invisible(sapply(SEAKChumSillys, function(silly) {dput(x = get(paste(silly, ".gcl", sep = '')), file = paste("Raw genotypes/OriginalCollections/" , silly, ".txt", sep = ''))} )); beep(8)
## Original sample sizes by SILLY
collection.size.original <- sapply(SEAKChumSillys, function(silly) get(paste(silly, ".gcl", sep = ""))$n)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### Clean workspace; dget .gcl objects and Locus Control ####
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rm(list = ls(all = TRUE))
setwd("V:/Analysis/5_Coastwide/Multispecies/Alaska Hatchery Research Program/SEAK Chum")
# This sources all of the new GCL functions to this workspace
source("C:/Users/krshedd/Documents/R/Functions.GCL.R")
source("H:/R Source Scripts/Functions.GCL_KS.R")
## Get objects
LocusControl <- dget(file = "Objects/OriginalLocusControl.txt")
SEAKobjects <- list.files(path = "Objects", recursive = FALSE)
SEAKobjects <- SEAKobjects[!SEAKobjects %in% c("OriginalLocusControl.txt")]
SEAKobjects
invisible(sapply(SEAKobjects, function(objct) {assign(x = unlist(strsplit(x = objct, split = ".txt")), value = dget(file = paste(getwd(), "Objects", objct, sep = "/")), pos = 1) })); beep(2)
## Get un-altered mixtures
invisible(sapply(SEAKChumSillys, function(silly) {assign(x = paste(silly, ".gcl", sep = ""), value = dget(file = paste(getwd(), "/Raw genotypes/OriginalCollections/", silly, ".txt", sep = "")), pos = 1)} )); beep(2)
objects(pattern = "\\.gcl")
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### Add Metatdata from OceanAK ####
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
require(data.table)
# Add bottle data from OceanAK - CMFISHCR14a
oceanak.CMFISHCR14a.dt <- fread(input = "OceanAK/Bulk Tissue Inventory 20160726.txt")
oceanak.CMFISHCR14a.df <- data.frame(oceanak.CMFISHCR14a.dt)
str(oceanak.CMFISHCR14a.df)
bottles <- oceanak.CMFISHCR14a.df$Bottle.Name # bottle names
number.fish.bottle <- apply(oceanak.CMFISHCR14a.df, 1, function(btl) {length(btl["First.Vial"]:btl["Last.Vial"])} ) # n fish per bottle
bottle.id <- rep(bottles, number.fish.bottle) # vector of bottle names
length(bottle.id) == CMFISHCR14a.gcl$n # confirm equal
CMFISHCR14a.gcl$attributes$BOTTLE_ID <- bottle.id # add bottle ID to each individual
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Sillys to filter OceanAK Salmon Biological Fact for
writeClipboard(paste(SEAKChumSillys, collapse = ";"))
# Only have data for CMADMCR13, CMFISHCR13, CMPROSCR13, and CMSAWCR13 (not CMFISHCRT13 since they were tagged alive)
oceanak.dt <- fread(input = "OceanAK/Salmon Biological Data 20160726.txt") # freaky fast
str(oceanak.dt)
# Convert to data.frame
oceanak.df <- data.frame(oceanak.dt)
str(oceanak.df)
# Create data keys for both (barcode + position)
oceanak.df$Key <- paste(oceanak.df$DNA.Tray.Code, oceanak.df$DNA.Tray.Well.Code, sep = "_")
CMFISHCR13.gcl$attributes$Key <- paste(CMFISHCR13.gcl$attributes$DNA_TRAY_CODE, CMFISHCR13.gcl$attributes$DNA_TRAY_WELL_CODE, sep = "_")
CMFISHCRT13.gcl$attributes$Key <- paste(CMFISHCRT13.gcl$attributes$DNA_TRAY_CODE, CMFISHCRT13.gcl$attributes$DNA_TRAY_WELL_CODE, sep = "_")
CMADMCR13.gcl$attributes$Key <- paste(CMADMCR13.gcl$attributes$DNA_TRAY_CODE, CMADMCR13.gcl$attributes$DNA_TRAY_WELL_CODE, sep = "_")
CMPROSCR13.gcl$attributes$Key <- paste(CMPROSCR13.gcl$attributes$DNA_TRAY_CODE, CMPROSCR13.gcl$attributes$DNA_TRAY_WELL_CODE, sep = "_")
CMSAWCR13.gcl$attributes$Key <- paste(CMSAWCR13.gcl$attributes$DNA_TRAY_CODE, CMSAWCR13.gcl$attributes$DNA_TRAY_WELL_CODE, sep = "_")
#~~~~~~~~~~~~~~~~~~
# Note that floy tag individuals from CMFISHCRT13 were not pulled into CMFISHCR13 by LOKI2R
table(CMFISHCR13.gcl$attributes$PK_TISSUE_TYPE)
# Which floy tag individuals were recapured and had their otoliths read? (CMFISHCR13 individuals with tissue = floy tag)
oceanak.CMFISHCR13.dt <- fread(input = "OceanAK/GEN_SAMPLED_FISH_TISSUE 20160726.txt")
oceanak.CMFISHCR13.df <- data.frame(oceanak.CMFISHCR13.dt)
str(oceanak.CMFISHCR13.df); dim(oceanak.CMFISHCR13.df)
oceanak.CMFISHCR13.df$Key <- paste(oceanak.CMFISHCR13.df$DNA_TRAY_CODE, oceanak.CMFISHCR13.df$DNA_TRAY_WELL_CODE, sep = "_")
# CMFISHCRT13.recaptures.Key <- CMFISHCRT13.gcl$attributes$Key[match(oceanak.CMFISHCR13.df$CAPTURE_LOCATION, CMFISHCRT13.gcl$attributes$CAPTURE_LOCATION)]
# Pool CMFISHCRT13 fish that were recaptured
CMFISHCRT13.recapture.IDs <- list(CMFISHCRT13 = na.omit(as.character(CMFISHCRT13.gcl$attributes$FK_FISH_ID[
match(oceanak.CMFISHCR13.df$CAPTURE_LOCATION, CMFISHCRT13.gcl$attributes$CAPTURE_LOCATION)]
)))
PoolCollections.GCL(collections = "CMFISHCRT13", loci = loci284, CMFISHCRT13.recapture.IDs, newname = "CMFISHCRT13_recapture")
CMFISHCRT13_recapture.gcl$attributes$Key <- paste(CMFISHCRT13_recapture.gcl$attributes$DNA_TRAY_CODE, CMFISHCRT13_recapture.gcl$attributes$DNA_TRAY_WELL_CODE, sep = "_")
str(CMFISHCRT13_recapture.gcl)
# Note that the Key codes for the axillaries do not match the otoliths
table(oceanak.CMFISHCR13.df$Key %in% oceanak.df$Key)
table(CMFISHCRT13_recapture.gcl$attributes$Key %in% oceanak.df$Key)
# Need to replace the dimnames for counts/scores with "true" fish numbers and also replace attributes
dimnames(CMFISHCRT13_recapture.gcl$counts)[[1]] <- oceanak.CMFISHCR13.df$FK_FISH_ID
dimnames(CMFISHCRT13_recapture.gcl$scores)[[1]] <- oceanak.CMFISHCR13.df$FK_FISH_ID
names(CMFISHCRT13_recapture.gcl$attributes)
names(oceanak.CMFISHCR13.df)
str(CMFISHCRT13_recapture.gcl$attributes)
CMFISHCRT13_recapture.gcl$attributes[, c("FK_FISH_ID", "DNA_TRAY_CODE", "DNA_TRAY_WELL_CODE", "DNA_TRAY_WELL_POS", "Key")] <-
oceanak.CMFISHCR13.df[, c("FK_FISH_ID", "DNA_TRAY_CODE", "DNA_TRAY_WELL_CODE", "DNA_TRAY_WELL_POS", "Key")]
str(CMFISHCRT13_recapture.gcl)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Matching
SEAKChumSillys.otoltihs <- c("CMFISHCR13", "CMFISHCRT13_recapture", "CMADMCR13", "CMPROSCR13", "CMSAWCR13")
# Create list of matches by silly
SEAKChumSillys.otoltihs.match <- sapply(SEAKChumSillys.otoltihs, function(silly) {
match(get(paste(silly, ".gcl", sep = ''))$attributes$Key, oceanak.df$Key)
}, simplify = FALSE)
str(SEAKChumSillys.otoltihs.match)
str(oceanak.df)
save.image(file = "FishCreekAlevinParentage2014.RData")
# Add field/otolith metadata using match
require(qdap)
sapply(SEAKChumSillys.otoltihs, function(silly) {
my.gcl <- get(paste(silly, ".gcl", sep = ''))
atts <- c("Sex", "Length.Mm", "Otolith.Mark.Present", "Otolith.Mark.ID", "Otolith.Mark.Status.Code")
my.gcl$attributes <- cbind(my.gcl$attributes, oceanak.df[SEAKChumSillys.otoltihs.match[[silly]], atts])
my.gcl$attributes$Natural.Hatchery <- mgsub(pattern = c("NO", "YES"), replacement = c("N", "H"), text.var = my.gcl$attributes$Otolith.Mark.Present)
my.gcl$attributes$Natural.Hatchery[my.gcl$attributes$Natural.Hatchery == ""] = "U"
assign(x = paste(silly, ".gcl", sep = ''), value = my.gcl, pos = 1)
})
str(CMFISHCR13.gcl)
table(CMFISHCR13.gcl$attributes$Otolith.Mark.Present); table(CMFISHCR13.gcl$attributes$Natural.Hatchery)
unique(CMFISHCR13.gcl$attributes$Otolith.Mark.Present)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Save post-match, pre-QC .gcl objects
dir.create("Raw genotypes/PostMetadataPreQC")
invisible(sapply(SEAKChumSillys.otoltihs, function(silly) {dput(x = get(paste(silly, ".gcl", sep = '')), file = paste("Raw genotypes/PostMetadataPreQC/" , silly, ".txt", sep = ''))} )); beep(8)
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### Clean workspace; dget .gcl objects and Locus Control ####
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
rm(list = ls(all = TRUE))
setwd("V:/Analysis/5_Coastwide/Multispecies/Alaska Hatchery Research Program/SEAK Chum")
# This sources all of the new GCL functions to this workspace
source("C:/Users/krshedd/Documents/R/Functions.GCL.R")
source("H:/R Source Scripts/Functions.GCL_KS.R")
## Get objects
LocusControl <- dget(file = "Objects/OriginalLocusControl.txt")
SEAKobjects <- list.files(path = "Objects", recursive = FALSE)
SEAKobjects <- SEAKobjects[!SEAKobjects %in% c("OriginalLocusControl.txt")]
SEAKobjects
invisible(sapply(SEAKobjects, function(objct) {assign(x = unlist(strsplit(x = objct, split = ".txt")), value = dget(file = paste(getwd(), "Objects", objct, sep = "/")), pos = 1) })); beep(2)
## Get un-altered mixtures
setwd("V:/Analysis/5_Coastwide/Multispecies/Alaska Hatchery Research Program/SEAK Chum/Raw genotypes/PostMetadataPreQC")
invisible(sapply(list.files(), function(silly.txt) {
silly <- unlist(strsplit(x = silly.txt, split = ".txt"))
assign(x = paste(silly, ".gcl", sep = ''), value = dget(file = silly.txt), pos = 1)
})); beep (5)
objects(pattern = "\\.gcl")
setwd("V:/Analysis/5_Coastwide/Multispecies/Alaska Hatchery Research Program/SEAK Chum")
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#### Data QC/Massage ####
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SEAKChumSillys.all <- sapply(objects(pattern = "\\.gcl"), function(gclobject) {unlist(strsplit(x = gclobject, split = ".gcl"))} )
samp.size <- sapply(SEAKChumSillys.all, function(silly) {get(paste(silly, ".gcl", sep = ''))$n} )
require(xlsx); require(beepr)
SEAKChumSillys.all
SEAKChumSillys.all_SampleSizes <- matrix(data = NA, nrow = length(SEAKChumSillys.all), ncol = 5,
dimnames = list(SEAKChumSillys.all, c("Genotyped", "Alternate", "Missing", "Duplicate", "Final")))
#### Check loci
## Get sample size by locus
Original_SEAKChumSillys.all_SampleSizebyLocus <- SampSizeByLocus.GCL(SEAKChumSillys.all, loci284)
min(Original_SEAKChumSillys.all_SampleSizebyLocus) # 0
sort(apply(Original_SEAKChumSillys.all_SampleSizebyLocus,1,min)/apply(Original_SEAKChumSillys.all_SampleSizebyLocus,1,max)) # Several under 0.8
table(apply(Original_SEAKChumSillys.all_SampleSizebyLocus,1,min)/apply(Original_SEAKChumSillys.all_SampleSizebyLocus,1,max) < 0.8) # all 7 SILLY's with at least one locus fail
# Remove loci that failed for all individuals
loci2remove <- loci284[apply(Original_SEAKChumSillys.all_SampleSizebyLocus, 2, sum) == 0]
loci277 <- loci284[!loci284 %in% loci2remove]
# Percent of individuals genotyped per locus
Original_SEAKChumSillys.all_SampleSizebyLocus <- SampSizeByLocus.GCL(SEAKChumSillys.all, loci277)
SEAKChumSillys.all.percent.per.locus <- apply(Original_SEAKChumSillys.all_SampleSizebyLocus, 2, function(locus) {locus / samp.size} )
require(lattice)
new.colors <- colorRampPalette(c("black", "white"))
levelplot(SEAKChumSillys.all.percent.per.locus, col.regions = new.colors, xlab = "SILLY", ylab = "Locus", at = seq(0, 1, length.out = 100), scales = list(x = list(rot = 90)), aspect = "fill") # aspect = "iso" will make squares
# View histogram of number of individuals genotyped per loci
sapply(SEAKChumSillys.all, function(silly) {hist(as.numeric(Original_SEAKChumSillys.all_SampleSizebyLocus[silly, ]), main = silly, col = 8, xlab = "Sample size per locus")} )
#### Check individuals
### Initial
## Get number of individuals per silly before removing missing loci individuals
Original_SEAKChumSillys.all_ColSize <- sapply(paste(SEAKChumSillys.all, ".gcl", sep = ''), function(x) get(x)$n)
SEAKChumSillys.all_SampleSizes[, "Genotyped"] <- Original_SEAKChumSillys.all_ColSize
### Alternate
## Indentify alternate species individuals
ptm <- proc.time()
SEAKChumSillys.all_Alternate <- FindAlternateSpecies.GCL(sillyvec = SEAKChumSillys.all, species = "chum"); beep(8)
proc.time() - ptm
## Remove Alternate species individuals
RemoveAlternateSpecies.GCL(AlternateSpeciesReport = SEAKChumSillys.all_Alternate, AlternateCutOff = 0.5, FailedCutOff = 0.5); beep(2)
## Get number of individuals per silly after removing alternate species individuals
ColSize_SEAKChumSillys.all_PostAlternate <- sapply(paste(SEAKChumSillys.all, ".gcl", sep = ''), function(x) get(x)$n)
SEAKChumSillys.all_SampleSizes[, "Alternate"] <- Original_SEAKChumSillys.all_ColSize-ColSize_SEAKChumSillys.all_PostAlternate
### Missing
## Remove individuals with >20% missing data
SEAKChumSillys.all_MissLoci <- RemoveIndMissLoci.GCL(sillyvec = SEAKChumSillys.all, proportion = 0.8); beep(8)
## Get number of individuals per silly after removing missing loci individuals
ColSize_SEAKChumSillys.all_PostMissLoci <- sapply(paste(SEAKChumSillys.all, ".gcl", sep = ''), function(x) get(x)$n)
SEAKChumSillys.all_SampleSizes[, "Missing"] <- ColSize_SEAKChumSillys.all_PostAlternate-ColSize_SEAKChumSillys.all_PostMissLoci
### Duplicate
## Check within collections for duplicate individuals.
SEAKChumSillys.all_DuplicateCheck95MinProportion <- CheckDupWithinSilly.GCL(sillyvec = SEAKChumSillys.all, loci = loci277, quantile = NULL, minproportion = 0.95); beep(8)
SEAKChumSillys.all_DuplicateCheckReportSummary <- sapply(SEAKChumSillys.all, function(x) SEAKChumSillys.all_DuplicateCheck95MinProportion[[x]]$report)
## Remove duplicate individuals
SEAKChumSillys.all_RemovedDups <- RemoveDups.GCL(SEAKChumSillys.all_DuplicateCheck95MinProportion)
## Get number of individuals per silly after removing duplicate individuals
ColSize_SEAKChumSillys.all_PostDuplicate <- sapply(paste(SEAKChumSillys.all, ".gcl", sep = ''), function(x) get(x)$n)
SEAKChumSillys.all_SampleSizes[, "Duplicate"] <- ColSize_SEAKChumSillys.all_PostMissLoci-ColSize_SEAKChumSillys.all_PostDuplicate
### Final
SEAKChumSillys.all_SampleSizes[, "Final"] <- ColSize_SEAKChumSillys.all_PostDuplicate
SEAKChumSillys.all_SampleSizes
dput(x = SEAKChumSillys.all_SampleSizes, file = "Objects/SEAKChumSillys.all_SampleSizes.txt")
dir.create("Output")
write.xlsx(SEAKChumSillys.all_SampleSizes, file = "Output/SEAKChumSillys.all_SampleSizes.xlsx")
## Save post-QC .gcl's as back-up:
dir.create(path = "Raw genotypes/PostQCCollections")
invisible(sapply(SEAKChumSillys.all, function(silly) {dput(x = get(paste(silly, ".gcl", sep = '')), file = paste("Raw genotypes/PostQCCollections/" , silly, ".txt", sep = ''))} )); beep(8)
# Percent of individuals genotyped per locus
Original_SEAKChumSillys.all_postQC_SampleSizebyLocus <- SampSizeByLocus.GCL(SEAKChumSillys.all[-7], loci277)
samp.size <- sapply(SEAKChumSillys.all[-7], function(silly) {get(paste(silly, ".gcl", sep = ''))$n} )
SEAKChumSillys.all.percent.per.locus <- apply(Original_SEAKChumSillys.all_postQC_SampleSizebyLocus, 2, function(locus) {locus / samp.size} )
require(lattice)
new.colors <- colorRampPalette(c("black", "white"))
levelplot(SEAKChumSillys.all.percent.per.locus, col.regions = new.colors, xlab = "SILLY", ylab = "Locus", at = seq(0, 1, length.out = 100), scales = list(x = list(rot = 90)), aspect = "fill") # aspect = "iso" will make squares
# Hatchery vs. Natural per silly
sapply(SEAKChumSillys.all, function(silly) {table(get(paste(silly, ".gcl", sep = ''))$attributes$Natural.Hatchery)} )
|
# ---- PDF Estimation.R (repo: maglavis138/WeeklyRecApp) ----
library(MASS)
library(fitdistrplus)
remove_outliers <- function(x, na.rm = TRUE, ...) {
qnt <- quantile(x, probs=c(.25, .75), na.rm = na.rm, ...)
H <- 1.5 * IQR(x, na.rm = na.rm)
y <- x
y[x < (qnt[1] - H)] <- NA
y[x > (qnt[2] + H)] <- NA
y
}
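# Quick illustration of remove_outliers() on toy values (not project data):
# observations outside the Tukey fences [Q1 - 1.5*IQR, Q3 + 1.5*IQR] become NA.
remove_outliers(c(1, 2, 3, 4, 100))  # 100 lies above the upper fence, so it is returned as NA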
plot_histogram <- function(data){
hist(data, # histogram
col = "peachpuff", # column color
border = "black",
prob = TRUE, # show densities instead of frequencies
# xlim = c(36,38.5),
# ylim = c(0,3),
xlab = "Variable",
main = "Articles")
lines(density(data), # density plot
lwd = 2, # thickness of line
col = "chocolate3")
abline(v = mean(data),
col = "royalblue",
lwd = 2)
abline(v = median(data),
col = "red",
lwd = 2)
legend(x = "topright", # location of legend within plot area
c("Density plot", "Mean", "Median"),
col = c("chocolate3", "royalblue", "red"),
lwd = c(2, 2, 2))
}
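# Example call on simulated values (illustrative only; any numeric vector works):
# plot_histogram(rnorm(500, mean = 37, sd = 0.5))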
article_data <- DataArticles[which(DataArticles$date >= "2017-07-01" & DataArticles$date < "2017-08-01" & DataArticles$repost == 0 & DataArticles$post_source_type == "native"),]$link_clicks
video_data <- DataVideos[which(DataVideos$date >= "2017-05-01" & DataVideos$date < "2017-08-01" & DataVideos$video_meme == 0),]$post_video_views
video_meme_data <- DataVideos[which(DataVideos$date >= "2017-05-01" & DataVideos$date < "2017-08-01" & DataVideos$video_meme == 1),]$post_video_views
article_data <- remove_outliers(DataArticles[which(DataArticles$date >= "2016-01-01" & DataArticles$repost == 0),]$post_reach)
article_data <- article_data[!is.na(article_data)]
video_data <- remove_outliers(DataVideos[which(DataVideos$date >= "2016-01-01" & DataVideos$repost == 0 & DataVideos$video_meme == 0),]$post_reach)
video_data <- video_data[!is.na(video_data)]
video_meme_data <- remove_outliers(DataVideos[which(DataVideos$date >= "2016-01-01" & DataVideos$repost == 0 & DataVideos$video_meme == 1),]$post_reach)
video_meme_data <- video_meme_data[!is.na(video_meme_data)]
photo_data <- remove_outliers(DataPhotos[which(DataPhotos$date >= "2016-01-01" & DataPhotos$repost == 0),]$post_reach)
photo_data <- photo_data[!is.na(photo_data)]
plotdist(article_data, histo = TRUE, demp = TRUE)
fln_articles <- fitdist(article_data, "lnorm")
par(mfrow = c(2, 2))
plot.legend <- c("lognormal")
denscomp(list(fln_articles), legendtext = plot.legend)
qqcomp(list(fln_articles), legendtext = plot.legend)
cdfcomp(list(fln_articles), legendtext = plot.legend)
ppcomp(list(fln_articles), legendtext = plot.legend)
plotdist(log(article_data), histo = TRUE, demp = TRUE)
n_articles <- fitdist(log(article_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_articles), legendtext = plot.legend)
qqcomp(list(n_articles), legendtext = plot.legend)
cdfcomp(list(n_articles), legendtext = plot.legend)
ppcomp(list(n_articles), legendtext = plot.legend)
# Back-transform the normal fit on the log scale to get lognormal quantiles/probabilities
exp(qnorm(0.05, mean = n_articles$estimate[1], sd = n_articles$estimate[2]))
exp(qnorm(0.95, mean = n_articles$estimate[1], sd = n_articles$estimate[2]))
1 - pnorm(log(median(article_data)), mean = n_articles$estimate[1], sd = n_articles$estimate[2])
set.seed(0)
monte_carlo_articles <- numeric(10000)  # pre-allocate; the original indexed an undefined object
for(i in 1:10000){
  monte_carlo_articles[i] <- sum(exp(rnorm(155, mean = n_articles$estimate[1], sd = n_articles$estimate[2])))
}
plotdist((monte_carlo_articles), histo = TRUE, demp = TRUE)
sd(monte_carlo_articles)
plotdist(log(monte_carlo_articles), histo = TRUE, demp = TRUE)
n_monte_carlo_articles <- fitdist(log(monte_carlo_articles), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_monte_carlo_articles), legendtext = plot.legend)
qqcomp(list(n_monte_carlo_articles), legendtext = plot.legend)
cdfcomp(list(n_monte_carlo_articles), legendtext = plot.legend)
ppcomp(list(n_monte_carlo_articles), legendtext = plot.legend)
exp(qnorm(0.05, mean = n_monte_carlo_articles$estimate[1], sd = n_monte_carlo_articles$estimate[2]))
exp(qnorm(0.95, mean = n_monte_carlo_articles$estimate[1], sd = n_monte_carlo_articles$estimate[2]))
plotdist(video_data, histo = TRUE, demp = TRUE)
fln_videos <- fitdist(video_data, "lnorm")
par(mfrow = c(2, 2))
plot.legend <- c("lognormal")
denscomp(list(fln_videos), legendtext = plot.legend)
qqcomp(list(fln_videos), legendtext = plot.legend)
cdfcomp(list(fln_videos), legendtext = plot.legend)
ppcomp(list(fln_videos), legendtext = plot.legend)
plotdist(log(video_data), histo = TRUE, demp = TRUE)
n_videos <- fitdist(log(video_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_videos), legendtext = plot.legend)
qqcomp(list(n_videos), legendtext = plot.legend)
cdfcomp(list(n_videos), legendtext = plot.legend)
ppcomp(list(n_videos), legendtext = plot.legend)
exp(qnorm(0.05, mean = n_videos$estimate[1], sd = n_videos$estimate[2]))
exp(qnorm(0.95, mean = n_videos$estimate[1], sd = n_videos$estimate[2]))
sum(exp(rnorm(62, mean = n_videos$estimate[1], sd = n_videos$estimate[2])))
1 - pnorm(log(mean(video_data)), mean = n_videos$estimate[1], sd = n_videos$estimate[2])
plotdist(video_meme_data, histo = TRUE, demp = TRUE)
fln_video_memes <- fitdist(video_meme_data, "lnorm")
par(mfrow = c(2, 2))
plot.legend <- c("lognormal")
denscomp(list(fln_video_memes), legendtext = plot.legend)
qqcomp(list(fln_video_memes), legendtext = plot.legend)
cdfcomp(list(fln_video_memes), legendtext = plot.legend)
ppcomp(list(fln_video_memes), legendtext = plot.legend)
plotdist(log(video_meme_data), histo = TRUE, demp = TRUE)
n_video_memes <- fitdist(log(video_meme_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_video_memes), legendtext = plot.legend)
qqcomp(list(n_video_memes), legendtext = plot.legend)
cdfcomp(list(n_video_memes), legendtext = plot.legend)
ppcomp(list(n_video_memes), legendtext = plot.legend)
exp(qnorm(0.05, mean = n_video_memes$estimate[1], sd = n_video_memes$estimate[2]))
exp(qnorm(0.95, mean = n_video_memes$estimate[1], sd = n_video_memes$estimate[2]))
sum(exp(rnorm(62, mean = n_video_memes$estimate[1], sd = n_video_memes$estimate[2])))
1 - pnorm(log(mean(video_meme_data)), mean = n_video_memes$estimate[1], sd = n_video_memes$estimate[2])
pnorm(log(40574.16), mean = n_video_memes$estimate[1], sd = n_video_memes$estimate[2])
plotdist(photo_data, histo = TRUE, demp = TRUE)
fln_photos <- fitdist(photo_data, "lnorm")
par(mfrow = c(2, 2))
plot.legend <- c("lognormal")
denscomp(list(fln_photos), legendtext = plot.legend)
qqcomp(list(fln_photos), legendtext = plot.legend)
cdfcomp(list(fln_photos), legendtext = plot.legend)
ppcomp(list(fln_photos), legendtext = plot.legend)
gofstat(fln_videos)
fln_articles$estimate[1]
video_meme_data_aug <- DataVideos[which(DataVideos$date >= "2016-01-01" & DataVideos$date < "2017-01-01" & DataVideos$video_meme == 1),]$post_video_views
hist(video_meme_data_aug, prob=TRUE, breaks = 200, xlim = c(0, 5000000))
curve(dlnorm(x, meanlog = n_video_memes$estimate[1], sdlog = n_video_memes$estimate[2]), add=TRUE)
## MODEL ------------------------------------------------------------------------------------------------------------------
library(MASS)
library(fitdistrplus)
# 1. Data ----------------------------------
date_range <- c("2017-05-01", "2017-08-01")
# post_source_type must be combined with & -- passed as a second argument, which() silently ignores it (it lands on arr.ind)
article_data <- DataArticles[which(DataArticles$date >= date_range[1] & DataArticles$date < date_range[2] & DataArticles$repost == 0 & DataArticles$post_source_type == "native"),]$link_clicks
article_repost_data <- DataArticles[which(DataArticles$date >= date_range[1] & DataArticles$date < date_range[2] & DataArticles$repost == 1 & DataArticles$post_source_type == "native"),]$link_clicks
video_data <- DataVideos[which(DataVideos$date >= date_range[1] & DataVideos$date < date_range[2] & DataVideos$video_meme == 0 & DataVideos$repost == 0 & DataVideos$post_source_type == "native"),]$post_video_views
video_repost_data <- DataVideos[which(DataVideos$date >= date_range[1] & DataVideos$date < date_range[2] & DataVideos$video_meme == 0 & DataVideos$repost == 1 & DataVideos$post_source_type == "native"),]$post_video_views
video_meme_data <- DataVideos[which(DataVideos$date >= date_range[1] & DataVideos$date < date_range[2] & DataVideos$video_meme == 1 & DataVideos$repost == 0 & DataVideos$post_source_type == "native"),]$post_video_views
video_meme_repost_data <- DataVideos[which(DataVideos$date >= date_range[1] & DataVideos$date < date_range[2] & DataVideos$video_meme == 1 & DataVideos$repost == 1 & DataVideos$post_source_type == "native"),]$post_video_views
meme_data <- DataPhotos[which(DataPhotos$date >= date_range[1] & DataPhotos$date < date_range[2] & DataPhotos$repost == 0 & DataPhotos$post_source_type == "native"),]$post_reach
meme_repost_data <- DataPhotos[which(DataPhotos$date >= date_range[1] & DataPhotos$date < date_range[2] & DataPhotos$repost == 1 & DataPhotos$post_source_type == "native"),]$post_reach
# 2. Dist. Estimation ----------------------------------
n_articles <- fitdist(log(article_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_articles), legendtext = plot.legend)
qqcomp(list(n_articles), legendtext = plot.legend)
cdfcomp(list(n_articles), legendtext = plot.legend)
ppcomp(list(n_articles), legendtext = plot.legend)
n_videos <- fitdist(log(video_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_videos), legendtext = plot.legend)
qqcomp(list(n_videos), legendtext = plot.legend)
cdfcomp(list(n_videos), legendtext = plot.legend)
ppcomp(list(n_videos), legendtext = plot.legend)
n_video_memes <- fitdist(log(video_meme_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_video_memes), legendtext = plot.legend)
qqcomp(list(n_video_memes), legendtext = plot.legend)
cdfcomp(list(n_video_memes), legendtext = plot.legend)
ppcomp(list(n_video_memes), legendtext = plot.legend)
n_memes <- fitdist(log(meme_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_memes), legendtext = plot.legend)
qqcomp(list(n_memes), legendtext = plot.legend)
cdfcomp(list(n_memes), legendtext = plot.legend)
ppcomp(list(n_memes), legendtext = plot.legend)
n_articles_repo <- fitdist(log(article_repost_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_articles_repo), legendtext = plot.legend)
qqcomp(list(n_articles_repo), legendtext = plot.legend)
cdfcomp(list(n_articles_repo), legendtext = plot.legend)
ppcomp(list(n_articles_repo), legendtext = plot.legend)
n_videos_repo <- fitdist(log(video_repost_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_videos_repo), legendtext = plot.legend)
qqcomp(list(n_videos_repo), legendtext = plot.legend)
cdfcomp(list(n_videos_repo), legendtext = plot.legend)
ppcomp(list(n_videos_repo), legendtext = plot.legend)
n_video_memes_repo <- fitdist(log(video_meme_repost_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_video_memes_repo), legendtext = plot.legend)
qqcomp(list(n_video_memes_repo), legendtext = plot.legend)
cdfcomp(list(n_video_memes_repo), legendtext = plot.legend)
ppcomp(list(n_video_memes_repo), legendtext = plot.legend)
n_memes_repo <- fitdist(log(meme_repost_data), "norm")
par(mfrow = c(2, 2))
plot.legend <- c("normal")
denscomp(list(n_memes_repo), legendtext = plot.legend)
qqcomp(list(n_memes_repo), legendtext = plot.legend)
cdfcomp(list(n_memes_repo), legendtext = plot.legend)
ppcomp(list(n_memes_repo), legendtext = plot.legend)
# 3. Conf. Intervals -------------------------------------------
alpha <- 0.05
exp(qnorm(alpha/2, mean = n_articles$estimate[1], sd = n_articles$estimate[2]))
exp(qnorm(1 - alpha/2, mean = n_articles$estimate[1], sd = n_articles$estimate[2]))
exp(qnorm(alpha/2, mean = n_articles_repo$estimate[1], sd = n_articles_repo$estimate[2]))
exp(qnorm(1 - alpha/2, mean = n_articles_repo$estimate[1], sd = n_articles_repo$estimate[2]))
exp(qnorm(alpha/2, mean = n_videos$estimate[1], sd = n_videos$estimate[2]))
exp(qnorm(1 - alpha/2, mean = n_videos$estimate[1], sd = n_videos$estimate[2]))
exp(qnorm(alpha/2, mean = n_videos_repo$estimate[1], sd = n_videos_repo$estimate[2]))
exp(qnorm(1 - alpha/2, mean = n_videos_repo$estimate[1], sd = n_videos_repo$estimate[2]))
exp(qnorm(alpha/2, mean = n_video_memes$estimate[1], sd = n_video_memes$estimate[2]))
exp(qnorm(1 - alpha/2, mean = n_video_memes$estimate[1], sd = n_video_memes$estimate[2]))
exp(qnorm(alpha/2, mean = n_video_memes_repo$estimate[1], sd = n_video_memes_repo$estimate[2]))
exp(qnorm(1 - alpha/2, mean = n_video_memes_repo$estimate[1], sd = n_video_memes_repo$estimate[2]))
exp(qnorm(alpha/2, mean = n_memes$estimate[1], sd = n_memes$estimate[2]))
exp(qnorm(1 - alpha/2, mean = n_memes$estimate[1], sd = n_memes$estimate[2]))
exp(qnorm(alpha/2, mean = n_memes_repo$estimate[1], sd = n_memes_repo$estimate[2]))
exp(qnorm(1 - alpha/2, mean = n_memes_repo$estimate[1], sd = n_memes_repo$estimate[2]))
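The sixteen `exp(qnorm(...))` lines above all apply one pattern: back-transform a normal quantile of the log-scale fit. A small helper avoids the repetition (a sketch; it assumes the `fitdist` objects store `c(mean, sd)` in `$estimate`, as used throughout this script):

```r
# Interval on the original scale for a normal fit to log-transformed data;
# `fit` is assumed to be a fitdistrplus::fitdist object ($estimate = c(mean, sd)).
lnorm_interval <- function(fit, alpha = 0.05) {
  exp(qnorm(c(alpha / 2, 1 - alpha / 2),
            mean = fit$estimate[1], sd = fit$estimate[2]))
}
```

`lnorm_interval(n_articles)` then reproduces the first pair of values above in one call.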
# 4. Monte Carlo Sim. ------------------------------------------
set.seed(0)
monte_carlo_articles <- NA
monte_carlo_videos <- NA
monte_carlo_video_memes <- NA
monte_carlo_memes <- NA
monte_carlo_articles_repo <- NA
monte_carlo_videos_repo <- NA
monte_carlo_video_memes_repo <- NA
monte_carlo_memes_repo <- NA
for(i in 1:100000){
monte_carlo_articles[i] <- sum(exp(rnorm(155, mean = n_articles$estimate[1], sd = n_articles$estimate[2])))
monte_carlo_videos[i] <- sum(exp(rnorm(62, mean = n_videos$estimate[1], sd = n_videos$estimate[2])))
monte_carlo_video_memes[i] <- sum(exp(rnorm(62, mean = n_video_memes$estimate[1], sd = n_video_memes$estimate[2])))
monte_carlo_memes[i] <- sum(exp(rnorm(186, mean = n_memes$estimate[1], sd = n_memes$estimate[2])))
monte_carlo_articles_repo[i] <- sum(exp(rnorm(30, mean = n_articles_repo$estimate[1], sd = n_articles_repo$estimate[2])))
monte_carlo_videos_repo[i] <- sum(exp(rnorm(15, mean = n_videos_repo$estimate[1], sd = n_videos_repo$estimate[2])))
monte_carlo_video_memes_repo[i] <- sum(exp(rnorm(15, mean = n_video_memes_repo$estimate[1], sd = n_video_memes_repo$estimate[2])))
monte_carlo_memes_repo[i] <- sum(exp(rnorm(60, mean = n_memes_repo$estimate[1], sd = n_memes_repo$estimate[2])))
}
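An equivalent, more compact formulation of the loop above uses `replicate()`; sketched below for one post type, with the others following the same pattern (again assuming the `fitdist` `$estimate = c(mean, sd)` layout used throughout this script):

```r
# One simulated season total: the sum of n_posts lognormal draws from the
# normal fit to the log-scale data.
sim_total <- function(n_posts, fit) {
  sum(exp(rnorm(n_posts, mean = fit$estimate[1], sd = fit$estimate[2])))
}
# e.g., replacing the articles part of the loop:
# monte_carlo_articles <- replicate(100000, sim_total(155, n_articles))
```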
monte_carlo_page <- monte_carlo_articles + monte_carlo_videos + monte_carlo_video_memes + monte_carlo_memes + monte_carlo_articles_repo + monte_carlo_videos_repo + monte_carlo_video_memes_repo + monte_carlo_memes_repo
plotdist(monte_carlo_page, histo = TRUE, demp = TRUE)
plotdist(monte_carlo_articles, histo = TRUE, demp = TRUE)
plotdist(monte_carlo_videos, histo = TRUE, demp = TRUE)
plotdist(monte_carlo_video_memes, histo = TRUE, demp = TRUE)
plotdist(monte_carlo_memes, histo = TRUE, demp = TRUE)
plot_histogram(monte_carlo_articles)
plot_histogram(monte_carlo_videos)
plot_histogram(monte_carlo_video_memes)
plot_histogram(monte_carlo_memes)
|
3029d3e9902178395a4ac6ffb3170b85274b4b0b | 966cec6374ae11cf5cad5e819c06047a252bd085 | /suvivalanalysis.R | 1ff98ed7310199b882d952d86092e3980c25c6da | [] | no_license | zhangliyin666/TCGAbiolinks | 2e91575d53ea72fd8e98a2802a5a39708ef62e8c | dec87fa946e714c5c8ae37db8ac649df7dc5a102 | refs/heads/master | 2023-03-31T23:27:07.309322 | 2021-04-11T10:03:30 | 2021-04-11T10:03:30 | 355,908,938 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 492 | r | suvivalanalysis.R | TCGAbiolinks::getGDCprojects()$project_id
library(TCGAbiolinks)  # needed for GDCquery_clinic() and TCGAanalyze_survival()
clin.LUAD <- GDCquery_clinic("TCGA-LUAD", "clinical", save.csv = TRUE)
#write.csv(clin.LUAD,file = "LUAD_clinical.csv")
library(survminer)
TCGAanalyze_survival(clin.LUAD,
clusterCol="gender",
risk.table = FALSE,
xlim = c(100,1000),
ylim = c(0.4,1),
conf.int = FALSE,
pvalue = TRUE,
color = c("Dark2"))
|
2b7828ac2390e034b9b1e6b2849bff5fc4f70b7f | 796b959d64ca828fc1d6ceb028a1244fd779bcf4 | /LSuperior/LS_ICE_Analysis.R | fc427addd5763edc25e70b58794acaf362138f00 | [] | no_license | droglenc/WiDNR_Creel | 96a29fd4820d6db4bc77ef325ce29bbbb1545e3d | 4118b4cd48f12cabf1dc48fdef22d617554dd21c | refs/heads/master | 2020-04-11T18:55:40.400660 | 2020-01-28T13:39:04 | 2020-01-28T13:39:04 | 162,016,946 | 0 | 0 | null | 2019-10-14T00:43:48 | 2018-12-16T15:54:30 | HTML | UTF-8 | R | false | false | 3,103 | r | LS_ICE_Analysis.R | #=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=
#
# PROGRAM TO ANALYZE LAKE SUPERIOR ICE CREEL
#
# DIRECTIONS:
# 1. Create a "LS_ICE_YEAR" folder (where YEAR is replaced with the year
# to be analyzed ... e.g., LS_ICE_2019) inside "LSuperior" folder.
# 2. Use Access macro to extract interview, fdays, count, and fish data files
# into a "data" folder inside the folder from 1.
# 3. Enter the year for the analysis here.
YEAR <- 2019
# 4. Make TRUE below to combine resultant CSV files for all routes to one file.
COMBINE_CSV_FILES <- TRUE
# 5. Source this script (and choose the information file in the dialog box).
# 6. See resulting files in folder from 1 ... the html file is the overall
# report and the CSV files are intermediate data files that may be loaded
# into a database for future analyses.
#
# R VERSIONS (CONVERTED FROM EXCEL):
# XXX, 2019 (version 1 - Derek O)
#
#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=#=-=
#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!
#
# DO NOT CHANGE THESE UNLESS THE ACCESS DATABASE MACRO HAS CHANGED!!!
#
#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!
CNTS_FILE <- "qry_ice_counts_4R.xlsx"
FDAY_FILE <- "qry_ice_fdays_4R.xlsx"
INTS_FILE <- "qry_ice_interviews_4R.xlsx"
FISH_FILE <- "qry_ice_fish_4R.xlsx"
#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!
#
# DO NOT CHANGE ANYTHING BENEATH HERE (unless you know what you are doing)!!!
#
#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!#!-!
# Create working directory for helper files and rmarkdown template.
WDIR <- file.path(here::here(),"LSuperior")
## Allows user to choose the appropriate folder created in 1 above.
message("!! Choose YEAR folder file in the dialog box (may be behind other windows) !!")
RDIR <- choose.dir(default=WDIR)
# Open the interviews file to find ROUTEs with interviews
intvs <- readxl::read_excel(file.path(RDIR,"data",INTS_FILE))
ROUTE <- unique(intvs$ROUTE)
## Iterate through the routes, produce an HTML report and intermediate CSV files
for (LOCATION in ROUTE) {
# Handle slashes in location names
LOCATION2 <- gsub("/","",LOCATION)
message("Creating report and data files for '",LOCATION,
"' route ...",appendLF=FALSE)
# Create a name for the report output file ("Analysis_" + location + year).
OUTFILE <- paste0(LOCATION2,"_Ice_",YEAR,"_Report.html")
# Render the markdown report file with the information from above
rmarkdown::render(input=file.path(WDIR,"Helpers","LS_Ice_Analysis_Template.Rmd"),
params=list(LOC=LOCATION,LOC2=LOCATION2,YR=YEAR,
WDIR=WDIR,RDIR=RDIR),
output_dir=RDIR,output_file=OUTFILE,
output_format="html_document",
clean=TRUE,quiet=TRUE)
  # Show the file in a browser
utils::browseURL(file.path(RDIR,OUTFILE))
message(" Done")
}
if (COMBINE_CSV_FILES) combineCSV(RDIR,YEAR)
|
0c3e31fd1016f4a820fd682a09799e841ce6b2cc | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/TrajDataMining/examples/owMeratniaByCollection.Rd.R | 0c046b7f200af49f27d5b3f8d81c5cf7d6d6df8c | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 476 | r | owMeratniaByCollection.Rd.R | library(TrajDataMining)
### Name: owMeratniaByCollection
### Title: Ow Meratnial By Collection
### Aliases: owMeratniaByCollection
### owMeratniaByCollection,TracksCollection,numeric,numeric-method
### ** Examples
library(magrittr)
library(sp)
library(ggplot2)
ow <-owMeratniaByCollection(tracksCollection,13804.84 ,0.03182201) %>% coordinates()
df <- data.frame(x=ow[,1],y=ow[,2])
ggplot(df,aes(x=x,y=y))+geom_path(aes(group = 1), arrow = arrow(),color='blue')
|
fa2cfe4df2c5c918a9e31ae6fed3fc85cc6eb574 | ce68a85c4a6c5d474a6a574c612df3a8eb6685f7 | /book/packt/Mastering.Data.Analysis.with.R/08 - Polishing Data.R | b406c3571d9587ea3d7905255757a7708c8a3aed | [] | no_license | xenron/sandbox-da-r | c325b63114a1bf17d8849f076bfba22b6bdb34a3 | c217fdddc26ed523b3860e2000afc699afac55a2 | refs/heads/master | 2020-04-06T06:58:17.049181 | 2016-08-24T06:16:32 | 2016-08-24T06:16:32 | 60,466,314 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 4,455 | r | 08 - Polishing Data.R | ## Extracted code chunks from
##
## Gergely Daróczi (2015): Mastering Data Analysis with R.
##
## Chapter #8: Polishing Data. pp. 169-192.
##
##
## This file includes the code chunks from the above mentioned
## chapter except for the leading ">" and "+" characters, which
## stand for the prompt in the R console. The prompt was
## intentionally removed here along with arbitrary line-breaks,
## so that you copy and paste the R expressions to the R console
## in a more convenient and seamless way.
##
## Code chunks are grouped here by the printed pages of the book.
## Two hash signs at the beginning of a line stands for a page
## break, while an extra empty line between the code chunks
## represents one or more paragraphs in the original book between
## the examples for easier navigation.
##
## Sometimes extra instructions starting with a double hash are
## also provided on how to run the below expressions.
##
##
## Find more information on the book at http://bit.ly/mastering-R
## and you can contact me on Twitter and GitHub by the @daroczig
## handle, or mail me at daroczig@rapporter.net
##
library(hflights)
table(complete.cases(hflights))
##
prop.table(table(complete.cases(hflights))) * 100
sort(sapply(hflights, function(x) sum(is.na(x))))
mean(cor(apply(hflights, 2, function(x) as.numeric(is.na(x)))), na.rm = TRUE)
##
Funs <- Filter(is.function, sapply(ls(baseenv()), get, baseenv()))
names(Filter(function(x) any(names(formals(args(x))) %in% 'na.rm'), Funs))
##
names(Filter(function(x) any(names(formals(args(x))) %in% 'na.rm'), Filter(is.function, sapply(ls('package:stats'), get, 'package:stats'))))
myMean <- function(...) mean(..., na.rm = TRUE)
mean(c(1:5, NA))
myMean(c(1:5, NA))
##
library(rapportools)
mean(c(1:5, NA))
detach('package:rapportools')
mean(c(1:5, NA))
library(Defaults)
setDefaults(mean.default, na.rm = TRUE)
mean(c(1:5, NA))
setDefaults(mean, na.rm = TRUE)
##
mean
formals(mean)
unDefaults(ls)
##
na.omit(c(1:5, NA))
na.exclude(c(1:5, NA))
x <- rnorm(10); y <- rnorm(10)
x[1] <- NA; y[2] <- NA
exclude <- lm(y ~ x, na.action = 'na.exclude')
omit <- lm(y ~ x, na.action = 'na.omit')
round(residuals(exclude), 2)
round(residuals(omit), 2)
##
m <- matrix(1:9, 3)
m[which(m %% 4 == 0, arr.ind = TRUE)] <- NA
m
na.omit(m)
mean(hflights$ActualElapsedTime)
mean(hflights$ActualElapsedTime, na.rm = TRUE)
mean(na.omit(hflights$ActualElapsedTime))
##
library(microbenchmark)
NA.RM <- function() mean(hflights$ActualElapsedTime, na.rm = TRUE)
NA.OMIT <- function() mean(na.omit(hflights$ActualElapsedTime))
microbenchmark(NA.RM(), NA.OMIT())
##
m[which(is.na(m), arr.ind = TRUE)] <- 0
m
ActualElapsedTime <- hflights$ActualElapsedTime
mean(ActualElapsedTime, na.rm = TRUE)
ActualElapsedTime[which(is.na(ActualElapsedTime))] <- mean(ActualElapsedTime, na.rm = TRUE)
mean(ActualElapsedTime)
library(Hmisc)
mean(impute(hflights$ActualElapsedTime, mean))
##
sd(hflights$ActualElapsedTime, na.rm = TRUE)
sd(ActualElapsedTime)
##
summary(iris)
library(missForest)
set.seed(81)
miris <- prodNA(iris, noNA = 0.2)
summary(miris)
##
iiris <- missForest(miris, xtrue = iris, verbose = TRUE)
##
str(iiris)
##
miris <- miris[, 1:4]
##
iris_mean <- impute(miris, fun = mean)
iris_forest <- missForest(miris)
diag(cor(iris[, -5], iris_mean))
diag(cor(iris[, -5], iris_forest$ximp))
##
detach('package:missForest')
detach('package:randomForest')
##
library(outliers)
outlier(hflights$DepDelay)
summary(hflights$DepDelay)
library(lattice)
bwplot(hflights$DepDelay)
IQR(hflights$DepDelay, na.rm = TRUE)
##
set.seed(83)
dixon.test(c(runif(10), pi))
model <- lm(hflights$DepDelay ~ 1)
model$coefficients
mean(hflights$DepDelay, na.rm = TRUE)
##
a <- 0.1
(n <- length(hflights$DepDelay))
(F <- qf(1 - (a/n), 1, n-2, lower.tail = TRUE))
(L <- ((n - 1) * F / (n - 2 + F))^0.5)
sum(abs(rstandard(model)) > L)
summary(lm(Sepal.Length ~ Petal.Length, data = miris))
##
lm(Sepal.Length ~ Petal.Length, data = iris)$coefficients
library(MASS)
summary(rlm(Sepal.Length ~ Petal.Length, data = miris))
##
f <- formula(Sepal.Length ~ Petal.Length)
cbind(
orig = lm(f, data = iris)$coefficients,
lm = lm(f, data = miris)$coefficients,
rlm = rlm(f, data = miris)$coefficients)
miris$Sepal.Length[1] <- 14
cbind(
orig = lm(f, data = iris)$coefficients,
lm = lm(f, data = miris)$coefficients,
rlm = rlm(f, data = miris)$coefficients)
|
be73ca73b931786b036dbc2126b9a4dc22deb8ad | 7c4b414da8b71d9f54242a07e7526cecf8020106 | /deaths_in_Finland/ui.R | 0e70089b3b70468f7893a5b84894341369b50439 | [] | no_license | zavolainen/developing_data_products_project | 53ef6116977c66dcb3e08fe3250808caba48d27c | 8fe0fcab615f981b316e839b113d6b6ab2758b0b | refs/heads/master | 2020-04-11T07:45:30.134502 | 2018-12-14T12:50:45 | 2018-12-14T12:50:45 | 161,621,323 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 582 | r | ui.R | library(shiny)
library(plotly)
ui <- fluidPage(
headerPanel("Deaths in Finland - years 1980-2017"),
sidebarPanel(
sliderInput("year", "Select the years you want to inspect", 1980, 2017, value = c(1980, 2017), sep = ""),
checkboxInput("total", "Show total", value = TRUE),
checkboxInput("male", "Show male", value = TRUE),
checkboxInput("female", "Show female", value = TRUE)
),
mainPanel(
plotlyOutput("plot1"),
textOutput("source")
)
) |
90e99acd0d219683a5e585177866cd055ab9505d | c0eaf1f57e6673445a8692c8d3be546b627ccd7f | /rfg_submission_code/code/ensemble/ens-105.r | 9dc294462701ee59526584590565c108c5c7f507 | ["BSD-3-Clause"] | permissive | nalu357/DiabetesRetinopathyDetection | fbc47428fc88cd29d70c167dd88180b73c941b8f | 391d1c1230d73733e07df8eaa111abd18b1d1170 | refs/heads/master | 2022-08-28T16:43:56.534318 | 2022-07-26T15:14:02 | 2022-07-26T15:14:02 | 186,412,136 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,742 | r | ens-105.r | library(glmnet)
library(ggplot2)
library(caret)
library(plyr)
# set.seed(2)
img_dir = '../../../../kaggle-eye/data/crop256x256/train'
args = commandArgs(TRUE)
if (length(args)) {
img_dir = args[1]
}
set.seed(6)
predictor_file_name = 'ens-105-predictors.csv'
predictor_dat = read.csv(predictor_file_name, stringsAsFactors=F)
print(nrow(predictor_dat))
left_right_join = function(dat) {
tmp = dat
tmp = merge(tmp[tmp$side == 'left', ],
tmp[tmp$side == 'right', ],
by = 'subj_id')
vv = predictor_dat$id
tmp1 = tmp[, c('subj_id',
'side.x',
'level.x',
paste(vv, '.x', sep=''),
paste(vv, '.y', sep=''))]
names(tmp1) = gsub('\\.x', '_1', names(tmp1))
names(tmp1) = gsub('\\.y', '_2', names(tmp1))
names(tmp1)[1:3] = c('subj_id', 'side', 'level')
tmp2 = tmp[, c('subj_id',
'side.y',
'level.y',
paste(vv, '.y', sep=''),
paste(vv, '.x', sep=''))]
names(tmp2) = gsub('\\.y', '_1', names(tmp2))
names(tmp2) = gsub('\\.x', '_2', names(tmp2))
names(tmp2)[1:3] = c('subj_id', 'side', 'level')
tmp = rbind(tmp1, tmp2)
for (i in 1:nrow(predictor_dat)) {
current_id = predictor_dat$id[i]
tmp = cbind(tmp, xxx = pmax(tmp[, paste(current_id, '1', sep='_')],
tmp[, paste(current_id, '2', sep='_')]))
names(tmp)[names(tmp) == 'xxx'] = paste(current_id, 'max', sep='_')
tmp = cbind(tmp, xxx = pmin(tmp[, paste(current_id, '1', sep='_')],
tmp[, paste(current_id, '2', sep='_')]))
names(tmp)[names(tmp) == 'xxx'] = paste(current_id, 'min', sep='_')
}
tmp$level = tmp$level + 1
tmp$side = as.numeric(tmp$side) - 1
tmp
}
max_min = function(dat) {
tmp = dat
tmp = cbind(tmp, xxx = apply(tmp[, grep('_1', names(tmp))], 1, max))
names(tmp)[names(tmp) == 'xxx'] = 'max_1'
tmp = cbind(tmp, xxx = apply(tmp[, grep('_1', names(tmp))], 1, min))
names(tmp)[names(tmp) == 'xxx'] = 'min_1'
tmp
}
train = dir(img_dir)
train = data.frame(
subj_id = gsub('_left|_right|\\.jpeg$', '', train),
side = gsub('^[0-9]*_|\\.jpeg$', '', train))
labels = read.csv('trainLabels.csv')
labels = transform(labels,
subj_id = gsub('_left|_right|\\.jpeg$', '', image),
side = gsub('^[0-9]*_|\\.jpeg$', '', image))
train = merge(train, labels)
for (i in 1:nrow(predictor_dat)) {
dat = read.csv(predictor_dat$val[i], header=F)
dat = transform(dat,
subj_id = gsub('_left|_right|\\.jpeg$', '', V1),
side = gsub('^[0-9]*_|\\.jpeg$', '', V1),
pred = V2)
dat = dat[,c('subj_id', 'side', 'pred')]
names(dat)[3] = predictor_dat$id[i]
train = merge(train, dat)
print(dim(train))
}
train$m46 = 0.5 * (train$m46 + train$m46_2)
train$m46_2 = NULL
predictor_dat = predictor_dat[predictor_dat$id != 'm46_2', ]
train = left_right_join(train)
train = max_min(train)
write.csv(train, 'ensemble_fitting_matrix.csv', row.names=F)
train_y = train$level
train_x = train[, !names(train) %in% c('subj_id', 'level', 'sz')]
train_x = model.matrix(~(0+.)^2, data=train_x)
dim(train_x)
set.seed(5)
fit = cv.glmnet(y=train_y,
x=train_x,
type.measure='mse',
nfolds=30,
alpha=0.6,
family='gaussian',
standardize=T,
nlambda=300,
lambda.min.ratio=0.001)
saveRDS(fit, '../../models/output3/models/ens105.rds')
fit = readRDS('../../models/output3/models/ens105.rds')
coefs = as.matrix(coef(fit))[as.matrix(coef(fit)) != 0]
names(coefs) = rownames(coef(fit))[as.matrix(coef(fit)) != 0]
bestIndx = which(fit$cvm == min(fit$cvm))
tmp = data.frame(var=names(coefs), coef=coefs)
rownames(tmp) = NULL
tmp[rev(order(tmp$coef)), ]
bestIndx
fit$cvm[bestIndx]
fit$lambda[bestIndx]
summary(fit$lambda)
preds = predict(fit, train_x, type='response')[, 1]
rslt = data.frame(pred = preds,
actual = train_y - 1)
table(rslt$actual)
prop.table(table(rslt$actual))
write.table(rslt, 'kappascan.tsv',
sep='\t', quote=F, na='', row.names=F, col.names=F)
# var coef
# 1 (Intercept) 0.3583172063
# 6 m53_1 0.0921488777
# 9 m52_no_bg_max 0.0885712307
# 4 m52_no_bg_1 0.0828727706
# 5 m51_no_bg_1 0.0820158333
# 11 m53_max 0.0789745074
# 3 m47_1 0.0507662479
# 10 m51_no_bg_max 0.0395980269
# 2 m46_1 0.0368135557
# 8 m47_max 0.0315050330
# 7 m46_max 0.0140580797
# 12 max_1 0.0130081083
# 23 m41_max:m52_no_bg_max 0.0058689359
# 20 m42_max:m53_max 0.0054791081
# 24 m41_max:m53_max 0.0049071226
# 19 m42_max:m52_no_bg_max 0.0045980725
# 29 m47_max:m53_max 0.0043879388
# 32 m52_no_bg_max:max_1 0.0043180213
# 28 m47_max:m52_no_bg_max 0.0042620666
# 33 m53_max:max_1 0.0037329151
# 27 cyc28_max:m53_max 0.0036576947
# 26 cyc28_max:m52_no_bg_max 0.0035159808
# 22 m41_max:m47_max 0.0033607143
# 31 m52_no_bg_max:m53_max 0.0032091486
# 18 m42_max:m47_max 0.0029636488
# 25 cyc28_max:m47_max 0.0027404610
# 13 min_1 0.0026252536
# 30 m47_max:max_1 0.0012280575
# 16 m53_1:m42_max 0.0012112265
# 14 m47_1:m42_max 0.0008127055
# 21 m42_max:max_1 0.0008016348
# 15 m47_1:m41_max 0.0006854084
# 17 m53_1:m41_max 0.0004574470
# [1] 288
# [1] 0.238064
# [1] 0.001754512
# best score is 0.84872806
# best cut off
# 2573 1.5064173
# 2927 2.200046
# 3262 2.9302843
# 3485 4.0473447
|
ce5c0c801224f8bd314dbe34011c0cbdbf24f807 | edbe9c10eec8f9c62598a6c55bc129e35261a699 | /Plot3.r | 38a41315f54a20e870d29381c503cbd466d99300 | [] | no_license | menrotProg/ExData_Plotting1 | effb2347c159f8f99973ddbc4cbde149ffb923fd | 00b771dfb4d32da53630e3b3c3791d8817087584 | refs/heads/master | 2021-01-22T21:23:08.906664 | 2015-06-06T03:28:20 | 2015-06-06T03:28:20 | 36,886,478 | 0 | 0 | null | 2015-06-04T17:52:40 | 2015-06-04T17:52:40 | null | UTF-8 | R | false | false | 897 | r | Plot3.r | rm(list=ls())
library(sqldf)
##
## Read from the dataset only the relevant 2 days, using SQL query
##
## Assumes dataset is in the working directory
##
wd <- read.csv.sql(
"household_power_consumption.txt",
sql = "select * from file where Date='1/2/2007' OR Date='2/2/2007' ",
sep=";",head=TRUE)
# create a POSIXct column, to serve as the X-axis
wd[, "DateTime"] <- as.POSIXct(strptime(paste(wd[, "Date"], wd[, "Time"], sep=" "), "%d/%m/%Y %H:%M:%S", tz=""))
##
## Plot 3 - multiple lines adjusted to png
##
png(file="plot3.png", width = 480, height = 480)
plot(wd$DateTime, wd$Sub_metering_1, type="l", col="black",
ylab="Energy Sub Metering", xlab="")
with(wd, lines(wd$DateTime, wd$Sub_metering_2, col="red"))
with(wd, lines(wd$DateTime, wd$Sub_metering_3, col="blue"))
legend("topright", legend=c(names(wd)[7:9]), col=c("black", "red", "blue"), lty=c(1,1))
dev.off()
|
a93d2ba92287e915c37984a35dbdd40c58e64111 | b251f9b356673d08b21093b59690021dd8653d48 | /man/repmat.Rd | 23212b8eb3af1c5987d554487379211fe25573f2 | [] | no_license | GarrettMooney/moonmisc | c501728302e35908f888028f9be4522921b08be3 | 0dee0c4e5b3b55d93721a4c70501e7322d44cd15 | refs/heads/master | 2020-03-24T13:38:25.650579 | 2019-10-19T18:22:05 | 2019-10-19T18:22:05 | 142,748,283 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 592 | rd | repmat.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/repmat.R
\name{repmat}
\alias{repmat}
\title{Rep Mat Function}
\usage{
repmat(X, m, n)
}
\arguments{
\item{X}{a numeric matrix}
\item{m}{number of row-wise replications}
\item{n}{number of column-wise replications}
}
\value{
numeric matrix with X replicated n-times across and m-times down
}
\description{
This function takes a matrix X and replicates it n times column-wise and m times row-wise, similarly to repmat in Matlab
}
\examples{
set.seed(1)
x = matrix(rnorm(6),2,3)
x
repmat(x,3,2)
}
\keyword{repmat}
|
02c253fee9df7d3679fcacf7fd32ba59447afb9b | aa3ba28eac4a2734ba2f0360820035eb643f06e0 | /Programs/SVM.r | 6acb8b72856af78cdef1b46754b149802f460e8e | [] | no_license | AlQatrum/ProjetFilRouge | 6094ea2383976ab7b7c9bcb6161a804848715172 | e8a7918900ff1a4ea56970011a11e6d2e2e8b24a | refs/heads/main | 2023-03-07T18:06:41.332953 | 2021-02-25T22:14:08 | 2021-02-25T22:14:08 | 311,359,108 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 3,441 | r | SVM.r | ########################################## Description ##############################################
# This script aims to create a SVM separating young + old from the others. #
#####################################################################################################
rm(list = ls())
### Libraries ###
library(data.table)
library(tibble)
library(tidyverse)
library(e1071) # For smv
### Parameters ###
Rep <- 'C:/Users/alexi/OneDrive/Documents/DocumentsPersonnels/SynchroDropbox/Dropbox/IODAA2020/Agro/ProjetFilRouge/ProjetFilRouge/'
AdultAges <- '(10,80]'
### Data ###
GenusLearning <- fread(paste0(Rep,'Data/WorkData/WithAgeClasses/GenusLearning.csv'))
GenusTest <- fread(paste0(Rep,'Data/WorkData/WithAgeClasses/GenusTest.csv'))
GenusValidation <- fread(paste0(Rep,'Data/WorkData/WithAgeClasses/GenusValidation.csv'))
SpeciesLearning <- fread(paste0(Rep,'Data/WorkData/WithAgeClasses/SpeciesLearning.csv'))
SpeciesTest <- fread(paste0(Rep,'Data/WorkData/WithAgeClasses/SpeciesTest.csv'))
SpeciesValidation <- fread(paste0(Rep,'Data/WorkData/WithAgeClasses/SpeciesValidation.csv'))
### Creating only two classes ###
GenusLearning <- GenusLearning %>%
mutate(OldYoung = ifelse(test = AgeClass != AdultAges, yes = 1, no = -1)) %>%
as_tibble()
GenusTest <- GenusTest %>%
mutate(OldYoung = ifelse(test = AgeClass != AdultAges, yes = 1, no = -1)) %>%
as_tibble()
GenusValidation <- GenusValidation %>%
mutate(OldYoung = ifelse(test = AgeClass != AdultAges, yes = 1, no = -1)) %>%
as_tibble()
SpeciesLearning <- SpeciesLearning %>%
mutate(OldYoung = ifelse(test = AgeClass != AdultAges, yes = 1, no = -1)) %>%
as_tibble()
SpeciesTest <- SpeciesTest %>%
mutate(OldYoung = ifelse(test = AgeClass != AdultAges, yes = 1, no = -1)) %>%
as_tibble()
SpeciesValidation <- SpeciesValidation %>%
mutate(OldYoung = ifelse(test = AgeClass != AdultAges, yes = 1, no = -1)) %>%
as_tibble()
### Removing [ and ] in colnames ###
GenusColNames <- colnames(GenusLearning) %>%
map(.x = ., .f = ~str_remove_all(string = .x, pattern = "[^[:alnum:][:blank:]_]")) %>%
unlist()
colnames(GenusLearning) <- GenusColNames
colnames(GenusTest) <- GenusColNames
colnames(GenusValidation) <- GenusColNames
SpeciesColNames <- colnames(SpeciesLearning) %>%
map(.x = ., .f = ~str_remove_all(string = .x, pattern = "[^[:alnum:][:blank:]_]")) %>%
unlist()
colnames(SpeciesLearning) <- SpeciesColNames
colnames(SpeciesTest) <- SpeciesColNames
colnames(SpeciesValidation) <- SpeciesColNames
### Modelising SVM ###
GenusFormula <- colnames(GenusLearning)[4:105] %>% # Taking only columns of interest
paste(collapse = '+')
SpeciesFormula <- colnames(SpeciesLearning)[4:73] %>% # Taking only columns of interest
paste(collapse = '+')
### Genus modelising
# Build the formula explicitly: eval(parse()) inside a formula would sum the columns
# numerically instead of adding model terms. Wrapping the response in factor() makes
# svm() fit a classifier rather than a regression.
GenusSVM <- svm(as.formula(paste("factor(OldYoung) ~", GenusFormula)), data = GenusLearning, kernel = 'linear', scale = TRUE, probability = TRUE)
print(GenusSVM)
# plot(GenusSVM, GenusLearning) # plot.svm needs a two-predictor formula slice when the data have more than two features
GenusTestPrediction <- predict(GenusSVM, GenusTest) %>%
cbind(GenusTest, .) %>%
select('OldYoung', '.')
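### Hedged sketch: a quick confusion table for the genus model on the held-out
### test set, using the two columns kept above ('OldYoung' = actual, '.' = predicted).
GenusConfusion <- table(Actual = GenusTestPrediction$OldYoung,
                        Predicted = GenusTestPrediction$.)
print(GenusConfusion)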
### Species modelising
# Build the formula explicitly and use a factor response so svm() fits a classifier
SpeciesSVM <- svm(as.formula(paste("factor(OldYoung) ~", SpeciesFormula)), data = SpeciesLearning, kernel = 'linear', scale = TRUE, probability = TRUE)
print(SpeciesSVM)
# plot(SpeciesSVM, SpeciesLearning) # plot.svm needs a two-predictor formula slice when the data have more than two features
SpeciesTestPrediction <- predict(SpeciesSVM, SpeciesTest) %>%
cbind(SpeciesTest, .) %>%
select('OldYoung', '.') |
bb1ed00deba1390b8310ea02e5f708ce69ab1749 | 8599857de719b36466687e4cd3ff258497fc3998 | /ARules/Groceries_arules.R | 0fcb27889b5df2d6ddc84e9d3cfe5faf4c9222db | [] | no_license | MenaliBagga/Data-Mining-Projects | b388c1048f3e5a5f6386dabf78f5103994c0324d | 1b469c374925860a0d72e1b722ec4a177c2d2de6 | refs/heads/master | 2020-05-15T07:53:23.217629 | 2019-04-28T06:50:55 | 2019-04-28T06:50:55 | 182,149,765 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 2,974 | r | Groceries_arules.R | library(arules)
library(arulesViz)
library(tidyverse)
data("Groceries")
head(Groceries)
grocery <- as(Groceries, 'data.frame')
head(grocery)
summary(Groceries)
# Density of 0.026 means 2.6 % are non-zero matrix cells.
# Dimensions: 9835 * 169
# Whole milk was purchased 2513 times which is 26% of the transactions
# 2519 transactions contained 1 item , while only 1 transaction had 32 items
# First quartile and median is 2 and 3 , which means that 25% of the transactions had 2 items and about half contained 3 items
itemFrequency(Groceries[,1:5])
itemFrequencyPlot(Groceries, support= 0.10)
#8 items have 10% of the support
itemFrequencyPlot(Groceries, support= 0.05)
# 28 items have atleast 5 % of the support
#Relative frequency of top 20 items
itemFrequencyPlot(Groceries, topN = 20)
#Visualizing first 5 transactions
image(Groceries[1:5])
#Random 100 transactions
image(sample(Groceries, 100))
#Apriori algorithm
# finding items that are sold at least three times a day; over a month that is about 90 transactions, so support = 90/9835
basket <- apriori(Groceries, parameter = list(support = 0.009, confidence = 0.25, minlen = 2))
basket
summary(basket)
#gave a set of 224 rules; the rule length distribution shows how many rules contain a given number of items
# 111 rules contain 2 items
# 113 rules contain 3 items
inspect(basket[1:10])
#The first ten rules are shown here, along with the support, confidence, and lift
#columns. The lift of a rule measures how much more likely an item or itemset is
#purchased relative to its typical rate of purchase, given that another item or itemset has been purchased.
# Sorting according to lift
inspect(sort(basket, by = "lift")[1:5])
#People who buy berries are about four times as likely to buy whipped/sour cream as other customers
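# The lift value above can be sanity-checked by hand, since
# lift(X => Y) = confidence(X => Y) / support(Y). Hedged sketch (the item label
# is assumed to match the rhs of the top rule printed above):
topRule <- head(sort(basket, by = "lift"), 1)
quality(topRule)$confidence / itemFrequency(Groceries)["whipped/sour cream"]
quality(topRule)$lift # should agree with the ratio above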
#taking subsets of Arules
#Sometimes the marketing team needs to promote a specific product, say they want to promote berries, and wants
#to find out how often and with which items berries are purchased. The subset function enables one to find
#subsets of transactions, items or rules. The %in% operator is used for exact matching.
#Suppose I want to see it for berries
berries <- subset(basket, items %in% "berries")
inspect(berries)
#Yoghurt and whipped/sour cream turned up with which Berries is purchased
#Scatter Plot for 224 rules
plot(basket)
plot(basket, measure=c("support", "lift"), shading="confidence")
#Shading by order (number of items contained in the rule)
plot(basket, shading="order", control=list(main = "Two-key plot"))
#Interactive Scatter Plot
plot(basket, measure=c("support", "lift"), shading="confidence", interactive=TRUE)
# group based visualization
plot(basket, method="grouped")
# graph based visualization
plot(basket, method="graph", control=list(type="items"))
|
1fe25b84132381f354e953030cc0d13383b137e7 | 131183f323a69c26a875c1e49f596a26c3783ee0 | /man/cnNeutralDepthFromHetSites.Rd | d0141f8793897a0a12659919d0119b46a96eef12 | [
"Apache-2.0"
] | permissive | chrisamiller/copyCat | 079eb1cf031c4bbec82655d52e29b531226c24b8 | c009dc05ed16f940bb2837a54577b56e3b3682c5 | refs/heads/master | 2021-07-21T07:55:09.551037 | 2021-07-15T15:26:02 | 2021-07-15T15:26:02 | 6,358,229 | 27 | 10 | NOASSERTION | 2021-02-24T16:57:08 | 2012-10-23T19:10:48 | R | UTF-8 | R | false | false | 1,801 | rd | cnNeutralDepthFromHetSites.Rd | \name{cnNeutralDepthFromHetSites}
\alias{cnNeutralDepthFromHetSites}
\title{
cnNeutralDepthFromHetSites
}
\description{
examines heterozygous SNP positions in fixed windows to identify
copy-number neutral regions. Finds the median read depth in these
regions and uses it as the baseline for copy number estimation.
}
\usage{
cnNeutralDepthFromHetSites(rdo, samtoolsFile, snpBinSize, peakWiggle=3, minimumDepth=20, maximumDepth=100, plot=FALSE)
}
\arguments{
\item{rdo}{
a readDepth object filled with read counts
}
\item{samtoolsFile}{
the path to a file containing the output of running 'samtools
mpileup' on the bam file
}
\item{snpBinSize}{
the size of the window to use for counting het sites and estimating
peaks. Default of 1MB should be reasonable for most human genomes.
}
\item{peakWiggle}{
The amount of "wiggle" allowed in classifying peaks. For example,
if peak wiggle is set to 5, a peak at 45% will be considered
cn-neutral, while a peak at 44% will not.
}
\item{minimumDepth}{
    The minimum depth of coverage required at a het SNP site. Prevents
    sampling error due to low coverage.
}
\item{maximumDepth}{
    The maximum depth of coverage allowed at a het SNP site. Prevents
    consideration of sites with aberrantly high depth due to mapping artifacts.
}
\item{plot}{
Whether to generate density plots of each peak for visual
review. Places these in the output directory, under plots/vafPlots
}
}
\value{
a number that represents the median depth of coverage in copy-number
neutral sites.
}
\examples{
# tumorSamtoolsFile = "samtools.mpileup.output"
# rdo@params$med = cnNeutralDepthFromHetSites(rdo, tumorSamtoolsFile,
# snpBinSize=1000000, plot=TRUE)
}
|
13cbf9acb72b056dfcb394d64bfb2c86a1932278 | 4879269434417ce69e70e3f1d43789ae3a9cc8e9 | /.checkpoint/2019-01-01/lib/x86_64-w64-mingw32/3.5.1/weathermetrics/doc/weathermetrics.R | 3b58f15d3997d95db90dc83e13d6ad50ebcd39e5 | [] | no_license | cghoehne/transport-uhi-phx | 53a091b71a04bd24d063aceec60e0abbd66434b6 | d16fb6c5f44740c5e77a26897b762d70d23c8dc9 | refs/heads/master | 2022-11-11T06:58:00.961572 | 2020-03-07T01:12:07 | 2020-03-07T01:12:07 | 149,355,287 | 2 | 1 | null | 2020-06-18T19:27:22 | 2018-09-18T21:39:25 | R | UTF-8 | R | false | false | 4,563 | r | weathermetrics.R | ## ----echo = FALSE--------------------------------------------------------
library(weathermetrics)
## ------------------------------------------------------------------------
#Convert from degrees Celsius to degrees Fahrenheit
data(lyon)
lyon$TemperatureF <- convert_temperature(lyon$TemperatureC,
old_metric = "celsius", new_metric = "fahrenheit")
lyon$DewpointF <- convert_temperature(lyon$DewpointC,
old_metric = "celsius", new_metric = "fahrenheit")
lyon
#Convert from degrees Fahrenheit to degrees Celsius
data(norfolk)
norfolk$TemperatureC <- convert_temperature(norfolk$TemperatureF,
old_metric = "f", new_metric = "c")
norfolk$DewpointC <- convert_temperature(norfolk$DewpointF,
old_metric = "f", new_metric = "c")
norfolk
#Convert from degrees Kelvin to degrees Celsius
data(angeles)
angeles$TemperatureC <- convert_temperature(angeles$TemperatureK,
old_metric = "kelvin", new_metric = "celsius")
angeles$DewpointC <- convert_temperature(angeles$DewpointK,
old_metric = "kelvin", new_metric = "celsius")
angeles
## ------------------------------------------------------------------------
data(lyon)
lyon$RH <- dewpoint.to.humidity(t = lyon$TemperatureC,
dp = lyon$DewpointC,
temperature.metric = "celsius")
lyon
## ------------------------------------------------------------------------
data(newhaven)
newhaven$DP <- humidity.to.dewpoint(t = newhaven$TemperatureF,
rh = newhaven$Relative.Humidity,
temperature.metric = "fahrenheit")
newhaven
## ------------------------------------------------------------------------
data(newhaven)
newhaven$DP <- humidity.to.dewpoint(t = newhaven$TemperatureF,
rh = newhaven$Relative.Humidity,
temperature.metric = "fahrenheit")
newhaven$DP_C <- convert_temperature(newhaven$DP, old_metric = "f", new_metric = "c")
newhaven
## ------------------------------------------------------------------------
data(suffolk)
suffolk$HI <- heat.index(t = suffolk$TemperatureF,
rh = suffolk$Relative.Humidity,
temperature.metric = "fahrenheit",
output.metric = "fahrenheit")
suffolk
## ------------------------------------------------------------------------
data(lyon)
lyon$HI_F <- heat.index(t = lyon$TemperatureC,
dp = lyon$DewpointC,
temperature.metric = "celsius",
output.metric = "fahrenheit")
lyon
## ------------------------------------------------------------------------
data(beijing)
beijing$knots <- convert_wind_speed(beijing$kmph,
old_metric = "kmph", new_metric = "knots")
beijing
data(foco)
foco$mph <- convert_wind_speed(foco$knots, old_metric = "knots",
new_metric = "mph", round = 0)
foco$mps <- convert_wind_speed(foco$knots, old_metric = "knots",
                               new_metric = "mps", round = NULL)
foco$kmph <- convert_wind_speed(foco$mph, old_metric = "mph",
new_metric = "kmph")
foco
## ------------------------------------------------------------------------
data(breck)
breck$Precip.mm <- convert_precip(breck$Precip.in,
old_metric = "inches", new_metric = "mm", round = 2)
breck
data(loveland)
loveland$Precip.in <- convert_precip(loveland$Precip.mm,
old_metric = "mm", new_metric = "inches", round = NULL)
loveland$Precip.cm <- convert_precip(loveland$Precip.mm,
old_metric = "mm", new_metric = "cm", round = 3)
loveland
## ------------------------------------------------------------------------
df <- data.frame(T = c(NA, 90, 85),
DP = c(80, NA, 70))
df$RH <- dewpoint.to.humidity(t = df$T, dp = df$DP,
temperature.metric = "fahrenheit")
df
## ------------------------------------------------------------------------
df <- data.frame(T = c(90, 90, 85),
DP = c(80, 95, 70))
df$heat.index <- heat.index(t = df$T, dp = df$DP,
temperature.metric = 'fahrenheit')
df
## ------------------------------------------------------------------------
data(suffolk)
suffolk$TempC <- convert_temperature(suffolk$TemperatureF,
old_metric = "f",
new_metric = "c",
round = 5)
suffolk$HI <- heat.index(t = suffolk$TemperatureF,
rh = suffolk$Relative.Humidity,
round = 3)
suffolk
|
4231fb7edb3c178f209fc797a185ed9615fa864f | 67f31a9f56d85ede80920358fe40462c2cb710ed | /man/seqv.Rd | db47d6461af6c4bcb6c623daab28d3e2b5d51b60 | [] | no_license | vh-d/VHtools | ff95b01424c210b3451f4ee63d5aaa016e553c2e | a7907e8ba370523ca92985fb73f734a3284896b8 | refs/heads/master | 2020-04-12T06:25:18.169942 | 2019-04-09T20:09:34 | 2019-04-09T20:09:34 | 60,918,606 | 0 | 0 | null | null | null | null | UTF-8 | R | false | true | 613 | rd | seqv.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/transform.R
\name{seqv}
\alias{seqv}
\title{seq function on vectors}
\usage{
seqv(from, to, ..., simplify = F)
}
\arguments{
\item{from, to}{vectors of the starting and (maximal) end values of the sequences.}
\item{simplify}{logical: whether the function returns a matrix/list (depending on the regularity of \code{from - to}) (\code{TRUE}) or a vector (\code{FALSE}).}
}
\description{
seq function on vectors
}
\details{
If the \code{from - to} vector is not constant and \code{simplify = F}, \code{seqv} returns a list.
}
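\examples{
## illustrative calls (hedged; the exact return shape depends on simplify, see Details):
# seqv(from = c(1, 2), to = c(3, 5))
# seqv(from = c(1, 2), to = c(3, 5), simplify = TRUE)
}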
|
194b8b9586653a59d8c8d2b3f42743c9047814f8 | 5e5fa7fa2f060f7ae836e274f77fbea23d5012b1 | /snoke_results.R | e9f646834402efd77eef0426b9fdad80a9fe7b83 | [] | no_license | BDSS-PSU/capr_poll | 59f1d9a072f92699bbbfc482fc24aeb0917f6aa5 | 0f2ec1be4595d872dca1ba9ef8b546253bf6e2d2 | refs/heads/master | 2016-09-01T08:48:44.622315 | 2016-02-23T20:36:04 | 2016-02-23T20:36:04 | 52,108,074 | 0 | 1 | null | 2016-02-20T15:24:29 | 2016-02-19T18:34:20 | Python | UTF-8 | R | false | false | 3,523 | r | snoke_results.R | load("~/Box Sync/CAPRpoll/CAPR-R-format/capr.5.ballots.charc.Rdata")
#####
## Remove time outliers (people who took longer than 30 minutes)
#####
capr_no_outliers = capr[capr$tot.time < 1800, ]
#---------------------------
# Considering time taken to complete ballots and total character lengths
#---------------------------
cor(capr_no_outliers$totalcharc, capr_no_outliers$tot.time) ## unsurprisingly correlated
#####
## create ballot time vectors for t.tests
#####
ballot1Times = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 1", "tot.time"]
ballot2ATimes = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 2A", "tot.time"]
ballot2BTimes = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 2B", "tot.time"]
ballot3ATimes = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 3A", "tot.time"]
ballot3BTimes = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 3B", "tot.time"]
#####
## simple pairwise t.tests for mean ballot time
#####
t.test(ballot2ATimes, ballot2BTimes) ## only one of real interest
t.test(ballot3ATimes, ballot3BTimes)
t.test(ballot1Times, ballot2ATimes)
t.test(ballot1Times, ballot2BTimes)
t.test(ballot1Times, ballot3ATimes)
t.test(ballot1Times, ballot3BTimes)
t.test(ballot3ATimes, ballot2BTimes)
t.test(ballot3BTimes, ballot2BTimes)
t.test(ballot3ATimes, ballot2ATimes)
t.test(ballot3BTimes, ballot2ATimes)
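#####
## The many pairwise t.tests above inflate the family-wise error rate; a hedged
## alternative (sketch) is base R's pairwise.t.test() with a Holm correction:
#####
pairwise.t.test(capr_no_outliers$tot.time, capr_no_outliers$ballot.five.cat,
                p.adjust.method = "holm")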
#####
## Multiple regression models for time, AIC model selection for important predictor variables
#####
timeLM = lm(tot.time ~ ballot.five.cat + gender + educ + race + votechoice + inputstate +
religpew + employ + pid3 + pid7 + marstat + ideo5 + faminc + pew_bornagain + birthyr +
pew_churatd,
data = capr_no_outliers)
stepTime = stepAIC(timeLM) ## chosen predictors: ballot, 7 point political scale, birth year
summary(stepTime)
anova(stepTime)
#-----------------------------------
#####
## create ballot character length vectors for t.tests
#####
ballot1Char = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 1", "totalcharc"]
ballot2AChar = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 2A", "totalcharc"]
ballot2BChar = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 2B", "totalcharc"]
ballot3AChar = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 3A", "totalcharc"]
ballot3BChar = capr_no_outliers[capr_no_outliers$ballot.five.cat == "Ballot 3B", "totalcharc"]
#####
## simple pairwise t.tests for mean ballot character length
#####
t.test(ballot2AChar, ballot2BChar) ##
t.test(ballot3AChar, ballot3BChar)
t.test(ballot1Char, ballot2AChar)
t.test(ballot1Char, ballot2BChar) ##
t.test(ballot1Char, ballot3AChar) ##
t.test(ballot1Char, ballot3BChar) ##
t.test(ballot3AChar, ballot2BChar)
t.test(ballot3BChar, ballot2BChar)
t.test(ballot3AChar, ballot2AChar) ##
t.test(ballot3BChar, ballot2AChar) ##
#####
## Multiple regression models for character length, AIC model selection for important predictor variables
#####
charLM = lm(totalcharc ~ ballot.five.cat + gender + educ + race + votechoice + inputstate +
religpew + employ + pid3 + pid7 + marstat + ideo5 + faminc + pew_bornagain + birthyr +
pew_churatd,
data = capr_no_outliers)
stepChar = stepAIC(charLM) ## chosen predictors: ballot, education, race, employment,
## 3 point political scale, 5 point ideology scale, birthyear
summary(stepChar)
anova(stepChar)
|
66743800e980b1c89170b683d66192b64eeea5a2 | 36efea92e1b51480f9301e18f2c3085b8a376ede | /cachematrix.R | 3cc97456e2224d9ad526e7e552ac0809f9341b19 | [] | no_license | vanqm/ProgrammingAssignment2 | abba0e9636c00ddeb32ed90a3ba5d5d3b6467d99 | b99385ffa6e647a1d29a417aa136d2b3c2e75771 | refs/heads/master | 2020-05-20T22:22:14.374422 | 2014-12-14T09:20:56 | 2014-12-14T09:20:56 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,759 | r | cachematrix.R |
# makeCacheMatrix: This function creates a special "matrix" object that can cache its inverse.
## set the value of the matrix
## get the value of the matrix
## set the value of the inverse (solve)
## get the value of the inverse (solve)
## cacheSolve: This function computes the inverse of the special "matrix" returned by makeCacheMatrix above.
## If the inverse has already been calculated (and the matrix has not changed),
## then cacheSolve should retrieve the inverse from the cache.
## Test case 1
# mat <- matrix(data = c(4,2,7,6), nrow = 2, ncol = 2)
# mat2 <- makeCacheMatrix(mat)
# cacheSolve(mat2)
## Result
# [,1] [,2]
# [1,] 0.6 -0.7
# [2,] -0.2 0.4
## Create a special matrix that is invertable
## x: is a matrix that is invertable - nrow == ncol
makeCacheMatrix <- function(x = matrix()) {
# From description of Project Assignment 2:
# For this assignment, assume that the matrix supplied is always invertible.
  # so we do not need to check whether the matrix is invertible
m <- NULL
# 'set' function will set the new matrix to 'x'
set <- function(y) {
# cache the value of matrix
x <<- y
m <<- NULL
}
# get the value of matrix that cached
get <- function() x
setsolve <- function(solve) m <<- solve
getsolve <- function() m
# return a list contains set, get, setsolve, getsolve functions
list(set = set, get = get,
setsolve = setsolve,
getsolve = getsolve)
}
## Return the inverse of matrix
## x: a list - return from 'makeCacheMatrix' function
cacheSolve <- function(x, ...) {
## Return a matrix that is the inverse of 'x'
m <- x$getsolve()
# check 'm' is NULL or not
if(!is.null(m)) {
message("getting cached data")
return(m)
}
data <- x$get()
m <- solve(data, ...)
x$setsolve(m)
# return the inverse
m
}
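## Hedged usage sketch: the second cacheSolve() call should retrieve the cached
## inverse and print "getting cached data".
# cm <- makeCacheMatrix(matrix(c(2, 0, 0, 2), 2, 2))
# cacheSolve(cm)   # computes the inverse and caches it
# cacheSolve(cm)   # returns the cached inverse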
|
88a867aa9734c3fe29a3273ac1004cb82cee15c8 | 08fd1cd25eb2d334f8e74e7daccae701a86956f8 | /_site/R_Scripts/0001_Natividad_Fish_Data_Raw_Clean_Up_2006_to_2012.R | 902f5928e9b95b5d61ff30178cc2805a14cb354b | [] | no_license | rbeas/baja-eco-data | 01e3b59b489a455c9c0508e37db9fc1ccb0398a3 | 2571e29c0e03a9f5f90416494448395682bef594 | refs/heads/master | 2020-12-24T15:04:44.018794 | 2015-02-13T18:59:08 | 2015-02-13T18:59:08 | 27,797,656 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 17,389 | r | 0001_Natividad_Fish_Data_Raw_Clean_Up_2006_to_2012.R | ########### Reef Check Fish Data Clean Up Process ###########
# Written by: C. A. Boch, May 29, 2014
# Micheli Laboratory, Hopkins Marine Station, Stanford University
# Note: the raw data was changed by the following to keep the data structure and format in its most basic form and structure
# Process the data to make sure the number of columns, column names and column order are all the same as previous files --> much easier to analyzie data in a single structure
############## Processing in Excel ##############
# Done in Excel
# 1. Make a duplicate copy of original xls file (keeps dates the same)
# 2. Deleted other worksheets (Hojas) in this duplicate file
# 3. saved the file from xls --> csv (this also gets rid of any pulldowns and formulas)
# 4. cleared the contents of all columns and rows outside of the dataframe.
# 5. Manually changed all date formats to m/dd/yy (easier in Excel)
## Now, the file is structure ready to be imported into R
############## Processing in R ##############
rm(list=ls())
library(zoo)
library(plyr)
library(gsubfn)
#1. read in the fish data file
#2. split the data by site name because each site has a different species list
#3. split the site data into years because the RC full list changed slightly in 2013
#4. split each year data by Location -- after this, each location and year now has unique transect numbers we can split by
#5. merge the full list with each TransectNumber observations --> this only fills in the GS information
#6a. fill in the rest of the column data information for each TransectNumber -- i.e. location, site, latitude, etc.
#6b. keep SizeClass and SexClass information as the same -- don't want to "fill in"
#7. recombine all the split data
#8. export as a table --> this is now the cleaned up processed data
### Note, although this fills in all the missing Genusspecies info, we still need to account for zeros. This can be done by turning the SizeClass information into binary and summing the 1's (total abundance count). 0's will then represent "not observed"
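### Hedged sketch of the zero-fill idea noted above, on toy data: merge one
### "page" of observations against a full species list, then code presence as 1/0.
### (fullList/page are illustrative names; the real lists are built further below)
# fullList <- data.frame(Genusspecies = c("Paralabraxclathratus", "Oxyjuliscalifornica"))
# page <- data.frame(Genusspecies = "Paralabraxclathratus", SizeClass = "20")
# filled <- merge(fullList, page, all.x = TRUE)
# filled$Observed <- as.integer(!is.na(filled$SizeClass)) # 1 = observed, 0 = not observed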
######## Read in all raw fish data from Isla Natividad Site (2006-2012)
NatividadFishesRaw <- read.csv("~/Desktop/NSF_CHN/Ecology/ReefCheck/Data_Raw/Natividad_fishesraw_2006_2012.csv", header=T, stringsAsFactors=F, fileEncoding="latin1")
################################################################################################################################################################################
#### Remove unnecessary columns of data
NatividadFishesRaw$codigo <- NULL
NatividadFishesRaw$tiempo.total <- NULL
NatividadFishesRaw$no..buceo <- NULL
NatividadFishesRaw$epoca <- NULL
NatividadFishesRaw$no..replica <- NULL
NatividadFishesRaw$sitio.en.extenso <- NULL
NatividadFishesRaw$prof.inicial..ft. <- NULL
NatividadFishesRaw$prof.final..ft. <- NULL
NatividadFishesRaw$prof.max..ft. <- NULL
NatividadFishesRaw$prof.X..ft. <- NULL
NatividadFishesRaw$temperatura...F. <- NULL
NatividadFishesRaw$Direccion <- NULL
NatividadFishesRaw$Observaciones <- NULL
NatividadFishesRaw$X <- NULL
# ################################################################################################################################################################################
#### Change the name of column headings
names(NatividadFishesRaw)[1]<-"Observer"
names(NatividadFishesRaw)[2]<-"Date"
names(NatividadFishesRaw)[3]<-"Year"
names(NatividadFishesRaw)[4]<-"TimeInitial"
names(NatividadFishesRaw)[5]<-"TimeFinal"
names(NatividadFishesRaw)[6]<-"TransectNumber"
names(NatividadFishesRaw)[7]<-"Location"
names(NatividadFishesRaw)[8]<-"Zone"
names(NatividadFishesRaw)[9]<-"DepthInitial_m"
names(NatividadFishesRaw)[10]<-"DepthFinal_m"
names(NatividadFishesRaw)[11]<-"DepthMax_m"
names(NatividadFishesRaw)[12]<-"MidTransDepth_m"
names(NatividadFishesRaw)[13]<-"Latitude"
names(NatividadFishesRaw)[14]<-"Longitude"
names(NatividadFishesRaw)[15]<-"Temperature_C"
names(NatividadFishesRaw)[16]<-"Visibility_m"
names(NatividadFishesRaw)[17]<-"Genusspecies"
names(NatividadFishesRaw)[18]<-"SizeClass"
names(NatividadFishesRaw)[19]<-"SpeciesPresOutTrans"
names(NatividadFishesRaw)[20]<-"SexClass"
################################################################################################################################################################################
#### Add column with Site and Country and a searchable date format
NatividadFishesRaw["Country"] <- "Mexico"
NatividadFishesRaw["Site"] <- "IslaNatividad"
NatividadFishesRaw$Observer <- sapply(NatividadFishesRaw$Observer, function(x) gsub("\u0096", "n", x, ignore.case=TRUE)) # replace the mis-encoded Spanish ñ with n
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("\u0096", "n", x, ignore.case=TRUE)) # replace the mis-encoded Spanish ñ with n
# ################################################################################################################################################################################
# #### Reorder the column order
NatividadFishesRaw <- NatividadFishesRaw[, c("Observer", "Date", "Year", "TimeInitial", "TimeFinal", "TransectNumber", "Location", "Site", "Zone", "Country", "DepthInitial_m", "DepthFinal_m", "DepthMax_m", "MidTransDepth_m", "Latitude", "Longitude", "Temperature_C", "Visibility_m", "Genusspecies", "SizeClass", "SexClass", "SpeciesPresOutTrans")]
#### Use unique() function to check for all unique entries column by column and change/replace if necessary
################################################################################################################################################################################
#### Replace old Location (sitioenextenso) data with new Location names
NatividadFishesRaw$Location <- sapply(NatividadFishesRaw$Location, function(x) gsub(" ", "", x))
NatividadFishesRaw$Location <- sapply(NatividadFishesRaw$Location, function(x) gsub("LaPlana/LasCuevas", "LaPlana", x))
NatividadFishesRaw$Location <- sapply(NatividadFishesRaw$Location, function(x) gsub("Puntaprieta", "PuntaPrieta", x)) #w/o comma between Natividad and Baja
################################################################################################################################################################################
#### Replace Zone designation from location information to Reserve or Fished (easier to remember) --> future designation should be based on GPS coordinates
NatividadFishesRaw$Zone <- ifelse(NatividadFishesRaw$Location %in% c("PuntaPrieta", "LaPlana"), "Reserve", ifelse(NatividadFishesRaw$Location %in% c("LaDulce", "Babencho", "LaBarrita", "MorroPrieto", "LaGuanera"), "Fished", ""))
###############################################################################################################################################################################
#### Remove n/a, n/d entries from the data -- not consistently entered
NatividadFishesRaw$SizeClass <- sapply(NatividadFishesRaw$SizeClass, function(x) gsub("n/a", "", x, ignore.case=TRUE))
NatividadFishesRaw$SizeClass <- sapply(NatividadFishesRaw$SizeClass, function(x) gsub("n/d", "", x, ignore.case=TRUE))
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("n/a", "", x, ignore.case=TRUE))
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("n/d", "", x, ignore.case=TRUE))
NatividadFishesRaw$DepthFinal_m <- sapply(NatividadFishesRaw$DepthFinal_m, function(x) gsub("n/d", "", x, ignore.case=TRUE))
NatividadFishesRaw$DepthMax_m <- sapply(NatividadFishesRaw$DepthMax_m, function(x) gsub("n/d", "", x, ignore.case=TRUE))
NatividadFishesRaw$Temperature_C <- sapply(NatividadFishesRaw$Temperature_C, function(x) gsub("n/d", "", x, ignore.case=TRUE))
################################################################################################################################################################################
#### Replace spanish Genusspecies with Latin names but need to split the dataframe because the species list changes in 2013
### Replace Spanish names with Latin Genusspecies names in 2006-2012 data
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub(" ", "", x)) # replace any spaces
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Blanco", "Caulolatilusprinceps", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Cabezon", "Scorpaenichthysmarmoratus", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Cabrillaamarilla", "Paralabraxclathratus", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("chopaverde", "Girellanigricans", x, ignore.case=TRUE)) # replace "chopaverde" before the bare "Chopa" below so the longer name is matched first
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Chopa", "Girellanigricans", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Chromis", "Chromispunctipinnis", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Guitarra", "Rhinobatosproductus", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Meronegro", "Stereolepisgigas", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Mojarranegra", "Embioticajacksoni", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Molva", "Ophiodonelongatus", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("naranjito", "Hypsypopsrubicundus", x, ignore.case=TRUE)) # different spelling found using unique()
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Perro", "Heterodontusfrancisci", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Rocote", "Sebastessp", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Roncador", "Anisotremusdavidsoni", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Sargacerito", "Halichoeressemicintus", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Senorita", "Oxyjuliscalifornica", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Verdillo", "Paralabraxnebulifer", x, ignore.case=TRUE))
NatividadFishesRaw$Genusspecies <- sapply(NatividadFishesRaw$Genusspecies, function(x) gsub("Vieja", "Semicossyphuspulcher", x, ignore.case=TRUE))
################################################################################################################################################################################
#### Categorize SizeClass into bins
# ###############################################################################################################################################################################
#### Replace unique entries of SexClass so all entries are either M (M), F (Female) or J (Juvenile), A (Adult)
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("macho", "M", x, ignore.case=TRUE))
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("hembra", "F", x, ignore.case=TRUE))
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("adulto", "A", x, ignore.case=TRUE))
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("juvenil", "J", x, ignore.case=TRUE))
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("j", "J", x)) # fold remaining single-letter lower-case codes to upper case
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("m", "M", x))
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("a", "A", x))
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("H", "F", x, ignore.case=TRUE)) # "H"/"h" (hembra) becomes "F"
NatividadFishesRaw$SexClass <- sapply(NatividadFishesRaw$SexClass, function(x) gsub("0", "", x)) # drop stray zero entries
# SexClass entries are now standardized to single upper-case codes
# print(unique(NatividadFishesRaw$SexClass)) #to check for any outliers
###############################################################################################################################################################################
#### Remove lobos and focas from dataframe--not on the RC list
NatividadFishesRaw <- with(NatividadFishesRaw, NatividadFishesRaw[!(Genusspecies == "Lobo" | is.na(Genusspecies)), ]) # remove rows with lobo because these are not on the RC species list but were entered in the data
NatividadFishesRaw <- with(NatividadFishesRaw, NatividadFishesRaw[!(Genusspecies == "Foca" | is.na(Genusspecies)), ]) # remove rows with foca because these are not on the RC species list but were entered in the data
# could do the same with Rhinobatosproductus but it can be taken out later in the analyses
################################################################################################################################################################################
#### Fill in data for zeros for all of RC Fish data
#### A bit complicated for the whole dataframe but worth it
r<-rle(c(NatividadFishesRaw$TransectNumber))$lengths # index the dataframe by TransectNumber --> TransectNumber were entered in clusters; essentially, this represents each page of the dataset entered!
NatividadFishesRaw.df <- split(NatividadFishesRaw, rep(seq_along(r), r)) ## this is the bread and butter!!! double checked that it works!
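# A toy sketch of the rle()/split() trick above (hypothetical data, not part
# of this survey): consecutive runs of the same TransectNumber become
# separate sub-data-frames, mirroring the data-entry "pages".
if (FALSE) {
  toy <- data.frame(TransectNumber = c(1, 1, 2, 2, 2, 1), Genusspecies = letters[1:6])
  r.toy <- rle(toy$TransectNumber)$lengths   # run lengths: 2, 3, 1
  split(toy, rep(seq_along(r.toy), r.toy))   # three sub-data-frames, one per run
}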
## Now, we need to fill in each subdataframe by the full RCGS list to fill in zeroes
### Below is the full list of Reef Check species from 2006 - 2012 in I. Natividad; list changed in 2013
RCGS <- c("Paralabraxclathratus", "Paralabraxnebulifer", "Hypsypopsrubicundus", "Chromispunctipinnis", "Girellanigricans", "Anisotremusdavidsoni", "Embioticajacksoni", "Semicossyphuspulcher", "Sebastessp", "Oxyjuliscalifornica", "Halichoeressemicintus", "Caulolatilusprinceps", "Heterodontusfrancisci", "Stereolepisgigas", "Mycteropercajordani", "Mycteropercaxenarcha", "Scorpaenichthysmarmoratus", "Ophiodonelongatus", "Squatinacalifornica", "Rhacochilusvacca")
RCGS <-as.data.frame(RCGS)
names(RCGS)[1] <- "Genusspecies"
################################################################################################################################################################################
# merge function to match the RCGS list with the raw data
CompleteFishRaw.df <- lapply(NatividadFishesRaw.df, function(x, y) {merge(x, y, by.x=names(x)[19], by.y=names(y)[1], all=T)}, RCGS)
# fill in the NA with the corresponding transect metadata -- i.e., location, site, latitude, etc...
library(zoo) # na.locf() used in FillNA() below comes from the zoo package
FillNA <- function(z){
  SizeSex = z[,20:22]
  # SizeSex[is.na(SizeSex)] <- "" # remove the NAs created by the above splitting
  YY = na.locf(z[,1:19], na.rm=TRUE, fromLast=TRUE) # fill NAs backwards from the next non-NA value
  YYY = na.locf(YY, na.rm=TRUE) # then fill any remaining NAs forwards
  NewFill = cbind(YYY, SizeSex) # combine all the columns back again
  return(NewFill) # return explicitly for clarity
}
CompleteFish.df<-lapply(CompleteFishRaw.df, FillNA)
CompleteFish<-do.call(rbind, CompleteFish.df)
NatividadFishes <- with(CompleteFish, CompleteFish[!(Genusspecies == "" | is.na(Genusspecies)), ]) # delete all rows with no Genusspecies in the dataframe. When the raw data is expanded to include zero observations for a trasect, the merging function creates an additional row that is unnecessary. So this function looks for cells with blank values in the Genusspecies column and deletes these blank rows.
#### Reorder the column order
NatividadFishes <- NatividadFishes[, c("Observer", "Date", "Year", "TimeInitial", "TimeFinal", "TransectNumber", "Location", "Site", "Zone", "Country", "DepthInitial_m", "DepthFinal_m", "DepthMax_m", "MidTransDepth_m", "Latitude", "Longitude", "Temperature_C", "Visibility_m", "Genusspecies", "SizeClass", "SexClass", "SpeciesPresOutTrans")]
NatividadFishes$Present <- ifelse(NatividadFishes$SizeClass == "NA", 0, 1) # add a column called Present with binary data: 1 if a fish species was recorded on the transect (indicated by a SizeClass entry), 0 otherwise
NatividadFishes$Present[is.na(NatividadFishes$Present)] <- "0" # ifelse() propagates NAs instead of returning 0, so replace the remaining NAs here
NatividadFishes[, "Present"] <- as.numeric(as.character(NatividadFishes[, "Present"])) # the binary data is stored as character, but we need numeric values to count the 1's and 0's; convert from character to numeric
NatividadFishes0612 <- NatividadFishes
write.table(NatividadFishes0612, "~/Desktop/NSF_CHN/Ecology/ReefCheck/Data_Clean/Natividad_fishes_clean_data_2006_to_2012.csv", sep=",", col.names=T, row.names=F)
|
7afe5c3f63664afd60fab00e4b7d0e1d6bd3ccc5 | e819b9628724e07f12d5e2138b0ee90aaf468113 | /scripts/large_occ_buffer_calc.R | 2f443e114451038331485e7b12e65142af3c29b7 | [] | no_license | ccmothes/endophyteDimensionsGit | 50989161bba4a372e36d5a2b20389833dc0e9ef2 | 03bba362050cd3678afb34468dfaa04e01239969 | refs/heads/main | 2023-07-09T20:35:55.174152 | 2021-08-20T20:49:06 | 2021-08-20T20:49:06 | 397,606,505 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,488 | r | large_occ_buffer_calc.R |
# Large OCC buffer calculation
# Set Up -----------------------------------------------------------
library(raster)
library(geodist)
library(cluster)
library(readr)
library(dplyr)
setwd("/scratch/projects/endophytedim/")
# read in occ
load("data/thinned_occ_large.RData")
specs <- thin.occ.large %>% pull(species) %>% unique()
##rename columns of updated file for geodist
names(thin.occ.large)[8:9] <- c("latitude", "longitude")
dist <- vector("list", length = length(specs))
for (i in 1:4){ # test run: only the first 4 species are processed here (matching the buffer_dist_test.csv output below)
s <- dplyr::filter(thin.occ.large, species == paste(specs[i]))
d <- geodist(as.matrix(s[,c("longitude","latitude")]), sequential = FALSE, measure = "haversine")
ag <-
agnes(d,
diss = TRUE,
metric = "euclidean",
method = "single")
if((max(ag$height)/1000) > 3330){
dist[[i]] <- max(ag$height[ag$height < (3330*1000)]/1000)
}
else{
dist[[i]] <- max(ag$height)/1000
}
print(i)
}
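# A toy sketch of the distance/clustering step above (hypothetical points;
# assumes geodist and cluster are loaded as in Set Up): the largest merge
# height of the single-linkage tree is the distance being capped.
if (FALSE) {
  pts <- data.frame(longitude = c(0, 0.1, 5), latitude = c(0, 0, 0))
  d.toy <- geodist(as.matrix(pts[, c("longitude", "latitude")]), measure = "haversine")
  ag.toy <- agnes(d.toy, diss = TRUE, metric = "euclidean", method = "single")
  max(ag.toy$height) / 1000   # largest merge height, in km
}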
dist <- unlist(dist)
## save distances with species names
buffer_dist <- data.frame(species = specs[1:4], distance_km = dist)
# left join with number of occurrences and cut distances in half to make range polygons
buffer_dist_final <- thin.occ.large %>% group_by(species) %>% count() %>%
filter(species %in% buffer_dist$species) %>% left_join(buffer_dist, by = "species") %>%
mutate(distance_km_half = distance_km/2) %>% rename(num_occ = n)
write.csv(buffer_dist_final, "data/buffer_dist_test.csv")
print("done")
|
cf292dd2d0b6a6736513f42b0b453b872309b50b | c56cb5069a959a5e4d555c4eae307ac73198add9 | /man/montecarlo.Rd | 3919acfc4896797585b3d63258edae02bc53ec15 | [] | no_license | cran/agricolae | f7b6864aa681ed1304a07405392b73c2182f8950 | cad9b43db1fcd0f528fe60b8a416d65767c74607 | refs/heads/master | 2023-07-26T22:33:36.776659 | 2023-06-30T20:30:06 | 2023-06-30T20:30:06 | 17,694,321 | 7 | 9 | null | 2015-12-31T20:15:37 | 2014-03-13T03:53:58 | R | UTF-8 | R | false | false | 1,091 | rd | montecarlo.Rd |
\name{montecarlo}
\alias{montecarlo}
\title{ Random generation by Montecarlo }
\description{Random generation from data, using the function density and its parameters
}
\usage{
montecarlo(data, k, ...)
}
\arguments{
\item{data}{ vector or object(hist, graph.freq) }
\item{k}{ number of simulations }
\item{\dots}{ Other parameters of the function density, only if data is vector }
}
\value{
Generates random numbers following the empirical distribution of the data.
}
\author{ Felipe de Mendiburu }
\seealso{\code{\link{density}} }
\examples{
library(agricolae)
r<-rnorm(50, 10,2)
montecarlo(r, k=100, kernel="epanechnikov")
# other example
h<-hist(r,plot=FALSE)
montecarlo(h, k=100)
# other example
breaks<-c(0, 150, 200, 250, 300)
counts<-c(10, 20, 40, 30)
op<-par(mfrow=c(1,2),cex=0.8,mar=c(2,3,0,0))
h1<-graph.freq(x=breaks,counts=counts,plot=FALSE)
r<-montecarlo(h1, k=1000) # simulate from the 100-observation population defined by h1 above
plot(h1,frequency = 3,ylim=c(0,0.008))
text(90,0.006,"Population\n100 obs.")
h2<-graph.freq(r,breaks,frequency = 3,ylim=c(0,0.008))
lines(density(r),col="blue")
text(90,0.006,"Montecarlo\n1000 obs.")
par(op)
}
\keyword{ manip }
|
f3e77da8cec990e29496432323ab5732125ef6b0 | 8b044e42b5e453ff3dbdf939b86083d1971bc160 | /submit.R | bbd991791684b6716e4442bcf9c14c02ac6fc9a9 | [] | no_license | walkabilly/pa_task_view | acebbbc0f5e7fc83714abe9448eb72e9ab8487a9 | 69b2f4ecf1c39135ba1abbd6a2f8261a600f7cd6 | refs/heads/master | 2023-06-27T21:43:48.596632 | 2023-06-16T16:08:04 | 2023-06-16T16:08:04 | 136,500,385 | 0 | 2 | null | 2019-11-28T15:49:41 | 2018-06-07T15:56:26 | null | UTF-8 | R | false | false | 746 | r | submit.R |
# Javad Khataei
# skhataeipour@mun
# 11/29/2019
# this code submit ctv file to CRAN
library(ctv)
r <- getOption("repos") # set CRAN mirror
r["CRAN"] <- "https://cloud.r-project.org"
options(repos=r)
ctv_file <- read.ctv("PhysicalActivity.ctv")
html_file <- "PhysicalActivity.html"
check_ctv_packages("PhysicalActivity.ctv") # run the check
ctv2html(ctv_file, file = html_file, reposname = "CRAN")
# test another ctv
x <- read.ctv(system.file("ctv", "Econometrics.ctv", package = "ctv"))
ctv::download.views(views = x ,coreOnly = T)
ctv::CRAN.views()
ctv::install.views("PhysicalActivity")
ctv::install.views("Econometrics")
ctv::repos_update_views(repos = "https://cran.r-project.org/web/views/")
|
db6f30039cdc56ae1e99bb3c16cb4e9fad7161fd | 4384af4add4a62c2e922704e6ef2f7729f33bef0 | /RCode/Analysis/3b_reModelsVizLagStruc.R | 94daecbe02246ae2fbbac7848bd48a9213db5a22 | [] | no_license | s7minhas/ForeignAid | 8e1cffcae5550cbfd08a741b1112667629b236d6 | 2869dd5e8c8fcb3405e45f7435de6e8ceb06602a | refs/heads/master | 2021-03-27T18:55:20.638590 | 2020-05-12T16:42:58 | 2020-05-12T16:42:58 | 10,690,902 | 1 | 1 | null | null | null | null | UTF-8 | R | false | false | 5,949 | r | 3b_reModelsVizLagStruc.R |
if(Sys.info()['user']=='s7m' | Sys.info()['user']=='janus829'){ source('~/Research/ForeignAid/RCode/setup.R') }
if(Sys.info()['user']=='cindycheng'){ source('~/Documents/Papers/ForeignAid/RCode/setup.R') }
################################################################
################################################################
# load data
load(paste0(pathData, '/iDataDisagg_wLags.rda'))
regData = iData[[1]]
# load model
dvs = c('humanitarianTotal', 'developTotal', 'civSocietyTotal', 'notHumanitarianTotal')
dvNames = paste(c('Humanitarian', 'Development', 'Civil Society', 'Non-Humanitarian'), 'Aid')
coefp_colors = c("Positive"=rgb(54, 144, 192, maxColorValue=255),
"Negative"= rgb(222, 45, 38, maxColorValue=255),
"Positive at 90"=rgb(158, 202, 225, maxColorValue=255),
"Negative at 90"= rgb(252, 146, 114, maxColorValue=255),
"Insig" = rgb(150, 150, 150, maxColorValue=255))
################################################################
################################################################
for(i in 1:length(dvs)){
stratMuIntMods = lapply(c('',paste0(2:6, '_')), function(x){
pth = paste0(pathResults, '/', dvs[i], '_fullSamp_gaussian_re_LstratMu_', x, 'interaction.rda')
load(pth) ; return(mods) })
names(stratMuIntMods) = paste0('Lag ', 1:6)
modSumm = do.call('rbind', lapply(1:length(stratMuIntMods), function(i){
x = stratMuIntMods[[i]]
x = rubinCoef(x)
summ = x[c(2,3,9),,drop=FALSE]
summ$lag = names(stratMuIntMods)[i]
summ$up95 = with(summ, beta + qnorm(.975)*se) ; summ$lo95 = with(summ, beta - qnorm(.975)*se)
summ$up90 = with(summ, beta + qnorm(.95)*se); summ$lo90 = with(summ, beta - qnorm(.95)*se)
summ$sig = NA
summ$sig[summ$lo90 > 0 & summ$lo95 < 0] = "Positive at 90"
summ$sig[summ$lo95 > 0] = "Positive"
summ$sig[summ$up90 < 0 & summ$up95 > 0] = "Negative at 90"
summ$sig[summ$up95 < 0] = "Negative"
summ$sig[summ$lo90 < 0 & summ$up90 > 0] = "Insig"
summ$varFacet = c('Strategic Distance', 'No. Disasters', 'Strategic Distance x No. Disasters')
return(summ) }))
tmp=ggplot(modSumm, aes(x=lag, y=beta, color=sig)) +
geom_hline(aes(yintercept=0), linetype='dashed', color='grey40') +
geom_point() +
geom_linerange(aes(ymin=lo95,ymax=up95), size=.3) +
geom_linerange(aes(ymin=lo90,ymax=up90), size=1) +
scale_color_manual(values=coefp_colors) +
ggtitle(dvNames[i]) +
facet_wrap(~varFacet, nrow=3, scales='free_x') +
ylab('') + xlab('') +
theme(
axis.ticks = element_blank(),
panel.border=element_blank(),
legend.position='none' )
ggsave(tmp, file=paste0(pathGraphics, '/', dvs[i], '_int_lagEffect.pdf'), width=8, height=7)
}
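# rubinCoef() is defined in the sourced setup.R; a hypothetical sketch of the
# Rubin's-rules pooling it presumably performs across the imputation-specific
# fits in 'mods' (betas/ses would hold one column per imputed data set):
if (FALSE) {
	pool_rubin <- function(betas, ses){
		m <- ncol(betas)
		qbar <- rowMeans(betas)             # pooled point estimates
		ubar <- rowMeans(ses^2)             # average within-imputation variance
		b <- apply(betas, 1, var)           # between-imputation variance
		data.frame(beta = qbar, se = sqrt(ubar + (1 + 1/m) * b))
	}
}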
################################################################
################################################################
# calc substantive effect
simPlots = list()
for(i in 1:length(dvs)){
stratMuIntMods = lapply(c('',paste0(c(3,5), '_')), function(x){
pth = paste0(
pathResults, '/',
dvs[i], '_fullSamp_gaussian_re_LstratMu_', x, 'interaction.rda')
load(pth) ; return(mods) })
names(stratMuIntMods) = paste0('Lag ', c(1,3,5))
simResults = lapply(1:length(stratMuIntMods), function(j){
mod = stratMuIntMods[[j]][[1]] ; var = names(fixef(mod))[2]
disVar = names(fixef(mod))[3]
# Create scenario matrix
stratQts = quantile(regData[,var], probs=c(.05,.95), na.rm=TRUE)
stratRange=with(data=regData, seq(stratQts[1], stratQts[2], .01) )
disRange=seq( min(regData[,disVar],na.rm=TRUE), 4, 2 )
scen = with(data=regData,
expand.grid(
1, stratRange, disRange,
median(colony,na.rm=TRUE), median(Lpolity2,na.rm=TRUE),
median(LlnGdpCap,na.rm=TRUE),
median(LlifeExpect,na.rm=TRUE),median(Lcivwar,na.rm=TRUE)
) )
# Add interaction term
scen = cbind( scen, scen[,2]*scen[,3] )
colnames(scen) = names(fixef(mod))
scen = data.matrix(scen)
pred = scen %*% mod@beta
draws = mvrnorm(10000, mod@beta, vcov(mod))
sysUncert = scen %*% t(draws)
sysInts95 = t(apply(sysUncert, 1, function(x){
quantile(x, c(0.025, 0.975), na.rm=TRUE) }))
sysInts90 = t(apply(sysUncert, 1, function(x){
quantile(x, c(0.05, 0.95), na.rm=TRUE) }))
# Combine for plotting
ggData=data.frame(
cbind(pred, sysInts95, sysInts90,
scen[,var], scen[,disVar])
)
names(ggData)=c('fit', 'sysLo95', 'sysHi95',
'sysLo90', 'sysHi90', var, disVar)
names(ggData)[6:7] = c('LstratMu','Lno_disasters')
# Plot rel at various cuts of disasters
	disRange=with(data=regData, seq(
		min(Lno_disasters, na.rm=TRUE), 4, 2) ) # na.rm added for consistency with the earlier disRange calculation
ggDataSmall = ggData[which(ggData$Lno_disasters %in% disRange),]
# change facet labels
ggDataSmall$Lno_disasters = paste(ggDataSmall$Lno_disasters, 'Disasters$_{r}$')
#
ggDataSmall$lagID = names(stratMuIntMods)[j]
return(ggDataSmall) })
# viz
ggData = do.call('rbind', simResults)
facet_labeller = function(string){ TeX(string) }
modTitle = dvNames[i]
tmp=ggplot(ggData, aes(x=LstratMu, y=fit)) +
geom_line() +
geom_ribbon(aes(ymin=sysLo90, ymax=sysHi90), alpha=.6) +
geom_ribbon(aes(ymin=sysLo95, ymax=sysHi95), alpha=.4) +
facet_grid(Lno_disasters~lagID, labeller=as_labeller(facet_labeller, default = label_parsed)) +
labs(
x=TeX('Strategic Distance$_{sr}$'),
y=TeX("Log(Aid)$_{t}$"),
title=modTitle
) +
theme(
axis.ticks=element_blank(),
panel.border = element_blank() )
simPlots[[i]] = tmp
}
ggsave(simPlots[[1]], file=paste0(pathGraphics, '/simComboPlot_lag_hAid.pdf'), width=7, height=4)
ggsave(simPlots[[2]], file=paste0(pathGraphics, '/simComboPlot_lag_dAid.pdf'), width=7, height=4)
ggsave(simPlots[[3]], file=paste0(pathGraphics, '/simComboPlot_lag_cAid.pdf'), width=7, height=4)
################################################################ |
ccf3e8558effc27bde615a6c1a459fc7958fc778 | 20fb140c414c9d20b12643f074f336f6d22d1432 | /man/NISTukThUn59FtOjoule.Rd | 0174be504540e89f51f7a367eb3095febeb32fdb | [] | no_license | cran/NISTunits | cb9dda97bafb8a1a6a198f41016eb36a30dda046 | 4a4f4fa5b39546f5af5dd123c09377d3053d27cf | refs/heads/master | 2021-03-13T00:01:12.221467 | 2016-08-11T13:47:23 | 2016-08-11T13:47:23 | 27,615,133 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 803 | rd | NISTukThUn59FtOjoule.Rd |
\name{NISTukThUn59FtOjoule}
\alias{NISTukThUn59FtOjoule}
\title{Convert British thermal unit to joule }
\usage{NISTukThUn59FtOjoule(ukThUn59F)}
\description{\code{NISTukThUn59FtOjoule} converts from British thermal unit (59 F) (Btu) to joule (J) }
\arguments{
\item{ukThUn59F}{British thermal unit (59 F) (Btu) }
}
\value{joule (J) }
\source{
National Institute of Standards and Technology (NIST), 2014
NIST Guide to SI Units
B.8 Factors for Units Listed Alphabetically
\url{http://physics.nist.gov/Pubs/SP811/appenB8.html}
}
\references{
National Institute of Standards and Technology (NIST), 2014
NIST Guide to SI Units
B.8 Factors for Units Listed Alphabetically
\url{http://physics.nist.gov/Pubs/SP811/appenB8.html}
}
\author{Jose Gama}
\examples{
NISTukThUn59FtOjoule(10)
}
\keyword{programming} |
7166efb22310186d32a8f2348d294de1b8bf67cb | 81fb985e71980705e5d354bf95ef8cf0756a9c32 | /spGenerateInteraction.R | 384f20cff7a8d8d22f0719c22d0e64ed7d4c7ce9 | [] | no_license | yongkai17/n-way-spFSR | f2b4912889863634d1b2a19575a6be0d3f76f434 | 58da62501cfcef13909e8696014c21afb6ef9981 | refs/heads/master | 2020-03-30T12:49:46.938854 | 2018-10-02T12:02:03 | 2018-10-02T12:02:03 | 151,242,953 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,856 | r | spGenerateInteraction.R |
source('modifiedMakeClassifTask.R')
source('modifiedMakeRegrTask.R')
# Select 'total_features' main-effect features with SP-FSR, then search for
# pairwise interactions among them (capped by 'max_interaction_threshold_percent'
# of all candidate pairs), refitting on the full data and recording performance,
# AIC/BIC, coefficients and the selected interaction edges at every step.
spGenerateInteraction <- function(task, wrapper,
                                  data, seed.number = NULL,
                                  total_features,
                                  max_interaction_threshold_percent,
                                  measure,
                                  ...){
set.seed(seed.number)
# Initialise the empty lists and data frame to store
results_importance <- list()
results_summary <- data.frame()
all_edges <- list()
covariance <- list()
est_coeff <- list()
# Initialise index for summary and result
k <- 1
cat('\nGetting main effect features....\n')
# Run SP-FSR to select specificied number of features
spsaMod <- spFeatureSelection( task = task,
wrapper = wrapper,
measure = measure,
num.features.selected = total_features,
...)
# Store the summary in the data frame
results_summary[k, 'p'] <- total_features
results_summary[k, 'mean'] <- spsaMod$best.value
results_summary[k, 'std'] <- spsaMod$best.std
results_summary[k, 'runtime'] <- spsaMod$run.time
# Store the importance result in the list
results_importance[[k]] <- getImportance(spsaMod)
all_edges[[k]] <- NULL
features.to.keep <- as.character(results_importance[[k]]$features)
target <- task$task.desc$target
# Refit with the full dataset and extract IC values
fittedTask <- makeClassifTask(data[, c(features.to.keep, target)],
target = target, id = 'subset')
fittedMod <- train(wrapper, fittedTask)
pred <- predict(fittedMod, fittedTask)
fittedMod <- fittedMod$learner.model
est_coeff[[k]] <- data.frame(coefficient = fittedMod$coefficients)
results_summary[k, 'AIC'] <- AIC(fittedMod)
results_summary[k, 'BIC'] <- BIC(fittedMod)
results_summary[k, 'measure'] <- performance(pred, measure)
# Create a task with pairwise interactions
cat('\nGetting interaction features....\n')
sub_task <- modifiedMakeClassifTask(data = data[, c(features.to.keep, target)],
target = target, order = 2L)
  # Specify the max number of interactions allowed: a share (rounded down, but
  # at least 1) of the total_features*(total_features-1)/2 candidate pairs;
  # e.g. 10 features give 45 pairs, so a 0.2 threshold allows at most 9
  max_interactions_number <- total_features*(total_features-1)/2
  max_interactions_number <- floor(max_interactions_number*max_interaction_threshold_percent)
  max_interactions_number <- max(max_interactions_number, 1)
# Run SP-FSR for each interaction number
for(j in 1:max_interactions_number){
k <- k + 1
sub_spsaMod <- spFeatureSelection( task = sub_task,
wrapper = wrapper,
measure = measure,
num.features.selected = j,
features.to.keep = features.to.keep,
...)
results_summary[k, 'p'] <- results_summary[k-1, 'p'] + 1
results_summary[k, 'mean'] <- sub_spsaMod$best.value
results_summary[k, 'std'] <- sub_spsaMod$best.std
results_summary[k, 'runtime'] <- sub_spsaMod$run.time
results_importance[[k]] <- getImportance(sub_spsaMod)
new_features <- as.character(results_importance[[k]]$features)
sub_fittedtask <- makeClassifTask(sub_task$env$data[, c(new_features, target)], target = target, id = 'subset')
sub_fittedMod <- train(wrapper, sub_fittedtask)
sub_pred <- predict(sub_fittedMod, sub_fittedtask)
sub_fittedMod <- sub_fittedMod$learner.model
covariance[[k]] <- sub_fittedMod$R
est_coeff[[k]] <- data.frame(coefficient = sub_fittedMod$coefficients)
results_summary[k, 'AIC'] <- AIC( sub_fittedMod )
results_summary[k, 'BIC'] <- BIC( sub_fittedMod )
results_summary[k, 'measure'] <- performance(sub_pred, measure)
# Extract the interaction terms
y <- sapply(new_features, FUN =function(x){strsplit(x, "\\.")})
edges <- data.frame()
m <- 0
for(z in 1:length(y)){
x <- y[[z]]
if( length(x) == 2){
m <- m + 1
edges[m, 'from'] = x[1]
edges[m, 'to'] = x[2]
edges[m, 'weight'] = 1
}
}
all_edges[[k]] <- edges
}
# Create a list to store all results
results <- list(summary = results_summary,
importance = results_importance,
nodes = data.frame(variable = features.to.keep),
edges = all_edges,
covariance = covariance,
est_coeff = est_coeff)
return(results)
} |
0bc6123d8cf803c312ac256608235a898be65af8 | a625bec26103f79c89ab475357effdae50f29fcb | /Mushrooms.R | 8fca62ebdb4b6ab1bdf177f80716bc93bcdc8cfb | [] | no_license | gelandr/capstone_CYO | fb85b51680c1756ae46943031e037e7fbf5285bb | 3ca80fece097b7f0178e7aaf4cd5f9222487b52e | refs/heads/master | 2020-09-22T12:37:38.603495 | 2019-12-26T22:03:33 | 2019-12-26T22:03:33 | 225,198,039 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 27,924 | r | Mushrooms.R |
if(!require(tidyverse)) install.packages("tidyverse", repos = "http://cran.us.r-project.org", dependencies = TRUE)
if(!require(caret)) install.packages("caret", repos = "http://cran.us.r-project.org", dependencies = TRUE)
if(!require(gridExtra)) install.packages("gridExtra", repos = "http://cran.us.r-project.org", dependencies = TRUE)
if(!require(ggplot2)) install.packages("ggplot2", repos = "http://cran.us.r-project.org", dependencies = TRUE)
# Mushrooms dataset:
# https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/
# https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data
# https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.names
#create directory for the data file if necessary
if (!dir.exists("mushrooms")){
dir.create("mushrooms")
}
#download the data file only if not done yet
if (!file.exists("./mushrooms/agaricus-lepiota.data")){
download.file("https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.data", "./mushrooms/agaricus-lepiota.data")
}
fl <- file("mushrooms/agaricus-lepiota.data")
mushroom_data <- str_split_fixed(readLines(fl), ",", 23)
close(fl)
colnames(mushroom_data) <- c("classes", "cap-shape", "cap-surface", "cap-color", "bruises?", "odor", "gill-attachment",
"gill-spacing", "gill-size", "gill-color", "stalk-shape", "stalk-root", "stalk-surface-above-ring",
"stalk-surface-below-ring", "stalk-color-above-ring", "stalk-color-below-ring", "veil-type", "veil-color",
"ring-number", "ring-type", "spore-print-color", "population", "habitat")
mushroom_data <- as.data.frame(mushroom_data)
#initialize random sequenz
set.seed(1, sample.kind = "Rounding")
#create index for train and test set
#20% of the data will be used for the test set
test_idx = createDataPartition(y = mushroom_data$classes, times=1, p=0.2, list=FALSE)
train_data = mushroom_data[-test_idx,]
test_data = mushroom_data[test_idx,]
rm(fl, mushroom_data, test_idx)
#function for calculating the entropy of one feature in the dataset
#the feature must be a factor!
#
#parameters:
# dataset: the data set containing data for calculation
# colX: the index of the feature column for calculating the entropy (indexing starts with 1)
#
#return:
# the entropy of the feature in the dataset as numeric value
entropy <- function(dataset, colX){
#the colX index must be between 1 and the count of the columns
#if not, than the entropy is not calculable and we return 0 as entropy
if (ncol(dataset) < colX | colX < 1){
return (0)
}
#initialize the return variable
ret <- 0
#calculate the count of the records for each value of the given feature
summarized_data <- dataset %>% group_by(.[,colX]) %>% summarise(n = n())
#the row count of the whole dataset
rowCount <- nrow(dataset)
#get all possible feture values
level_items <- levels(dataset[,colX])
#calculate for all feature values..
  for (item in level_items){
    #count of rows with the actual feature value (empty if the level is unused)
    n_item <- summarized_data %>% filter(.[,1] == item) %>% pull(n)
    #skip unused factor levels; in the limit p*log2(p) -> 0, so they contribute nothing
    if (length(n_item) == 0) next
    #probability of the actual feature value
    prob <- n_item / rowCount
    #add probability multiplied by the 2 base log of the probability to the overall sum
    ret <- ret + prob * log(prob,base = 2)
  }
#return the entropy value
return (-ret)
}
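# Quick sanity check for entropy() (toy data, not part of the analysis):
# a perfectly balanced binary factor has an entropy of exactly 1 bit.
if (FALSE) {
  toy <- data.frame(x = factor(c("a", "a", "b", "b")))
  entropy(toy, 1)   # -2 * (0.5 * log2(0.5)) = 1
}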
#function for calculating the conditional entropy between two features in the dataset
#Both features must be a factor! The feature indexes start with 1.
#
#parameters:
# dataset: the data set containing the data for calculation
# colX: the index of the feature column for calculating the conditinal entropy for
# colY: the index of the condiion feature column for calculating the conditional entropy
#
#return:
# the conditional entropy of the two features in the dataset as numeric value
cond_entropy <- function(dataset, colX, colY){
#the colX index must be between 1 and the count of the columns
#if not, than the condiitonal entropy is not calculable and we return 0 as entropy
if (ncol(dataset) < colX | colX < 1){
return (0)
}
#the colY index must be between 1 and the count of the columns
#if not, than the conditional entropy is not calculable and we return 0 as entropy
if (ncol(dataset) < colY | colY < 1){
return (0)
}
#initialize the return variable
ret <- 0
#the count of the data rows in the dataset
rowCount <- nrow(dataset)
#calculate the count of the records for each value of the condition feature
summarized_y <- dataset %>% group_by(.[,colY]) %>% summarise(n = n())
#rename the feature col name to Y (it's easier to handle in the further code)
names(summarized_y)[1] <- "Y"
#get all values for the condition feature
level_itemsy <- levels(summarized_y$Y)
#calculation loop for all possible condition values
for (itemy in level_itemsy){
#calculate the probability of the condition
proby <- summarized_y %>% filter(Y == itemy) %>% pull(n) / rowCount
#calculate the count of the compare feature values using the condition value as filter
summarized_x <- dataset %>% filter(.[,colY] == itemy) %>% group_by(.[,colX]) %>% summarise(n = n())
#rename the compare feature col name to X (it's easier to handle in the further code)
names(summarized_x)[1] <- "X"
#get all values for the compare feature
level_itemsx <- levels(summarized_x$X)
#calculation loop for all possible compare values
for (itemx in level_itemsx){
#calculate the count of the condition feature value
filtered <- summarized_x %>% filter(X == itemx)
#if there are values and the condition probability is not 0
if (nrow(filtered) > 0 && proby != 0){
#calculate the joined probabiliy P(X,Y)
probxy <- filtered$n / rowCount
#calculate the conditional probabilit p(X|Y)
probx_at_y <- probxy / proby
#add conditinal probability multiplied by the 2 base log
#of the conditional probability to the overall summ
ret <- ret + probxy * log(probx_at_y,base = 2)
}
}
}
#return the conditional entropy
return (-ret)
}
#function for calculating the uncertainty score between two features in the dataset
#Both features must be a factor! The feature indexes start with 1.
#
#parameters:
# dataset: the data set containing the data for calculation
# colX: the index of the feature column for calculating the conditinal entropy for
# colY: the index of the condiion feature column for calculating the conditional entropy
#
#return:
# the uncertainty score of the two features in the dataset as numeric value
uncertainty <- function(dataset, colX, colY){
#calculate the entropy for the compare feature
entr <- entropy(dataset,colX)
#if the enropy is 0, we return 0 as uncertainty
if (entr == 0){
return(0)
}
#calculate the conditional entropy for the two feature
cond_entr <- cond_entropy(dataset, colX, colY)
#return the uncertainty score for the features
return ( (entr - cond_entr) / entr)
}
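# Hypothetical usage of uncertainty(): how strongly does odor (column 6)
# determine edibility (column 1) in the training data?
# 1 means fully determined, 0 means independent.
if (FALSE) {
  uncertainty(train_data, 1, 6)
}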
#plot function for the uncertainty matrix for a given dataset
#All features must be factor
#
#parameters:
# dataset: the data set containing the data for the plot
#
uncertainty_plot <- function(dataset){
#initialize the dataframe for the uncertainty score matrix
uncertainty_df <- data_frame(X=character(), idx=numeric(), Y=character(), idy=numeric(), value=numeric())
#feature names for the plot
labels <- names(dataset)
#loop for all features (x coordinates)
for (i in 1:ncol(dataset))
{
#loop for all features (y coordinates)
for(j in 1:ncol(dataset))
{
#add the calculated uncertainty score to the result dataframe
			uncertainty_df <- bind_rows(uncertainty_df, data_frame(Y = labels[j], idy=j , X=labels[i], idx=i, value=uncertainty(dataset,i,j))) # use the dataset argument (was hard-coded to train_data)
}
}
#plot the uncertainty matrix with colors
uncertainty_df %>% ggplot(aes(x=reorder(X,idx), y=reorder(Y,-idy), fill=value)) + geom_tile() +
geom_text(aes(label=round(value,2)), color='white') +
theme_bw() + theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
xlab('X feature') + ylab('Y feature') + ggtitle('uncertainty score U(X|Y)')
}
#plot function for the value distribution for all feature values to edibility classification
#All features must be factor and the first feature must be the edibility classification
#
#parameters:
# dataset: the data set containing the data for the plot
#
feature_ditribution_plot <- function(data){
#initialize the plot list
plots <- list()
#loop for all features except edibility
for (i in 1:(ncol(data)-1))
{
#calculate the count of the feature values per edibility classification
summarized_data <- data %>% group_by(classes, .[,i+1]) %>% summarise(n = n())
#rename the feature column(it's easier to handle in the further code)
names(summarized_data)[2] <- "attr"
#create the plot for the current feature
plot <- summarized_data %>% ggplot(aes(attr , classes)) + geom_point(aes(size=n)) +
xlab(names(data)[i+1]) + ylab("Edibility")
#add the plot to the plot list
plots[[i]] <- plot
}
#remove the unnecessary variables
rm(summarized_data, i, plot)
#draw all the created plots in a grid with 3 columns
grid.arrange(grobs=plots,ncol=3)
}
#function for cross validation
#parameters:
# trainset: the train set to use for the cross validation
# cv_n: the count of the cross validation
# FUNC: the function to call for the actual cross validation train and test set (calculated from the param trainset)
# ...: additional parameter necessary for calling the provided function
#
#return:
# dataframe with the function result for the cross validations (the data frame has cv_n items)
cross_validation <- function(trainset, cv_n, FUNC,...){
#get the count of the data rows on the train set
data_count = nrow(trainset)
#initialize the data frame for the result
values_from_cv = data_frame()
#randomise the trainset.
#If the train set is ordered (not randomised, like the movielens dataset) the cross validation
#will not be independent and provide wrong result
trainset_randomised <- trainset[sample(nrow(trainset)),]
#create the train- and testset for the cross validation
  #we need cv_n runs, therefore we use a loop
for (i in c(1:cv_n)){
#evaulate the size of the test set. This will be the 1/cv_n part of the data
part_count = data_count / cv_n
#select the data from the parameter train set
#we get the part_count size elements from the parameter train set
idx = c( (trunc((i-1) * part_count) + 1) : trunc(i * part_count) )
    #test holds the new test set
    test = trainset_randomised[idx,]
    #train holds the new train set
train = trainset_randomised[-idx,]
#call the provided function to the actual train and test set.
akt_value <- FUNC(train, test,...)
#add the result to the data frame
#the column 'cv' contains the idx of the cross validation run
values_from_cv <- bind_rows(values_from_cv, akt_value %>% mutate(cv = i))
}
#return the results of each cross validation
return(values_from_cv)
}
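# The fold-index arithmetic used above can be checked in isolation (a minimal
# base-R sketch with made-up sizes, independent of the tidyverse helpers):

```r
# Reproduce the index calculation from cross_validation() for 10 rows, 5 folds
data_count <- 10
cv_n <- 5
part_count <- data_count / cv_n
folds <- lapply(1:cv_n, function(i) {
  c((trunc((i - 1) * part_count) + 1):trunc(i * part_count))
})
# Every row lands in exactly one test fold
sort(unlist(folds))
```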
feature_ditribution_plot(train_data)
odor_n <- train_data %>% filter(`odor` == 'n') %>% select(-`odor`)
feature_ditribution_plot(odor_n)
spore_print_color_w <- odor_n %>% filter(`spore-print-color` == 'w') %>% select(-`spore-print-color`)
feature_ditribution_plot(spore_print_color_w)
gill_color_w <- spore_print_color_w %>% filter(`gill-color` == 'w') %>% select(-`gill-color`)
feature_ditribution_plot(gill_color_w)
gill_size_n <- gill_color_w %>% filter(`gill-size` == 'n') %>% select(-`gill-size`)
feature_ditribution_plot(gill_size_n)
stalk_surface_below <- gill_size_n %>% filter(`stalk-surface-below-ring` == 's') %>% select(-`stalk-surface-below-ring`)
feature_ditribution_plot(stalk_surface_below)
#implementation of the naive decision tree model based on the data analysis observations
#This implementation uses a fixed, hand-built decision tree and is therefore not trained from data
#
#parameters:
# dataset: the data set containing the data for prediction
#
#return:
# the predicted classification list
predict_dectree_naive <- function(dataset){
#initialization of the result variable
predicted <- tibble(y = character())
#loop all the data in the dataset
for (i in 1 : nrow(dataset))
{
y <- tryCatch({
#get the feature values relevant for the fixed decision tree
odor <- dataset[i,]$odor
spore_print_color <- dataset[i,]$`spore-print-color`
gill_color <- dataset[i,]$`gill-color`
gill_size <- dataset[i,]$`gill-size`
stalk_surface <- dataset[i,]$`stalk-surface-below-ring`
ring_type <- dataset[i,]$`ring-type`
#go through the decision tree rules as an if-then chain
if (odor %in% c('c','f','m','p','s','y')){
'p'
}
else if (odor %in% c('a','l')){
'e'
}
else if (spore_print_color %in% c('e','g', 'n','o','p','y')){
'e'
}
else if (spore_print_color %in% c('r')){
'p'
}
else if (gill_color %in% c('y')){
'p'
}
else if (gill_color %in% c('e','g','p')){
'e'
}
else if (gill_size %in% c('b')){
'e'
}
else if (stalk_surface %in% c('f')){
'e'
}
else if (stalk_surface %in% c('y')){
'p'
}
else if (ring_type %in% c('e')){
'e'
}
#if nothing in the tree matched, we predict poisonous
else{
'p'
}
}, warning = function(w){
return('p')
}, error = function(e) {
return('p')
})
#add the current prediction to the result list
predicted <- bind_rows(predicted, data_frame(y = y))
}
#convert the result to a factor with two values
predicted <- predicted %>% mutate(y=factor(y, levels=c('e','p'))) %>% pull(y)
#return the prediction list
return(predicted)
}
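# Because the rules above are checked top-down, ordering matters: once an
# earlier feature matches, later rules are never reached. A tiny made-up
# illustration of that first-match behaviour:

```r
odor <- 'a'; spore_print_color <- 'r'
pred <- if (odor %in% c('c','f','m','p','s','y')) 'p' else
  if (odor %in% c('a','l')) 'e' else
  if (spore_print_color %in% c('r')) 'p' else 'p'
pred  # 'e' - the odor rule fires before the spore-print-color rule
```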
#predict with naive tree
predicted_naive <- predict_dectree_naive(test_data %>% select(-classes))
result_naive_tree_model <- confusionMatrix( predicted_naive, test_data$classes)
result_naive_tree_model
#Function for predict the edibility based on the given decision tree
#
#parameters:
# dataset: the data set containing the data for prediction
# tree: the decision tree for to use for the prediction
#
#return:
# the predicted classification list
predict_decision_tree <- function(dataset, tree){
#initialize the result list
ret <- list()
#loop through all data in the dataset
for(i in 1:nrow(dataset)){
#initialize the current classification
#we initialize it to 'p' (poisonous), so that any row the rules
#cannot classify defaults to poisonous
y <- 'p'
#the decision rule count in the tree
count_rules <- nrow(tree)
#loop through all the decision rules in the tree
for(j in 1:count_rules){
#get the feature name from the current decision rule
feature <- tree$feature[j]
if (length(tree$e_values[j]) > 0){
#get the e rules from the current decision rule
e_values <- unlist(str_split(tree$e_values[j], ','))
#if the current data matches one of the e rule values
if (dataset[i,feature] %in% e_values){
#then we predict edible
y <- 'e'
#the current row is classified, so we can stop checking rules
break;
}
}
if (length(tree$p_values[j]) > 0){
#get the p rules from the current decision rule
p_values <- unlist(str_split(tree$p_values[j], ','))
#if the current data matches one of the p rule values
if (dataset[i,feature] %in% p_values){
#we predict poisonous
y <- 'p'
#the current row is classified, so we can stop checking rules
break;
}
}
}
#add the predicted value to the result
ret <- c(ret, y)
}
#return the prediction list as factor
return(factor(ret, levels = c('e','p')))
}
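# The rule values travel as comma-separated strings (built with paste(collapse=","))
# and are unpacked with str_split(); base R's strsplit() behaves the same way here:

```r
e_values <- list("a", "l", "n")
packed <- paste(e_values, collapse = ",")   # "a,l,n"
unpacked <- unlist(strsplit(packed, ","))
"l" %in% unpacked  # TRUE - the membership test used by the predictor
```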
#Function to create a decision tree model based on the given dataset
#
#parameters:
# dataset: the data set containing the data for train the decision tree
#
#return:
# the trained decision tree. The tree is a tibble with one decision rule per row.
# the columns contain the feature name for the rule, the values that predict 'e' (edible)
# and the values that predict 'p' (poisonous)
train_decision_tree_model <- function(dataset){
#initialize the return tibble as empty
tree <- data_frame(feature=character(), decided_proz=numeric(), e_values=character(), p_values=character())
#call the recursive function to get the decision rules for the decision tree
#we start the recursion with the whole dataset and an empty tree
tree <- extend_decision_tree(dataset, tree)
#return the decision tree
return(tree)
}
#Recursive function to create the decision rules for the decision tree based on the given (remaining) dataset
#The function assumes, that the edibility classification is the first column in the dataset!
#All the features must be factors
#
#parameters:
# dataset: the data set containing the data for train the decision tree
# decision_tree: the decision tree, we extend recursive with new rules based on the given dataset
#
#return:
# the trained decision tree. The tree is a tibble with one decision rule per row.
# the columns contain the feature name for the rule, the values that predict 'e' (edible)
# and the values that predict 'p' (poisonous)
extend_decision_tree <- function(dataset, decision_tree){
#initialize the frame with the possible new rules
newRules <- data_frame(feature=character(),decided_proz=numeric(), e_values=character(), p_values=character())
#the count of all data in the dataset
count = nrow(dataset)
#we go through all the features except the edibility classification (col = 1)
for (i in 1:(ncol(dataset)-1))
{
#Get the current feature name
feature <- names(dataset)[i+1]
#calculate the count of the feature values
summarized_data <- dataset %>% group_by(classes, .[,i+1]) %>% summarise(n = n())
#rename the feature col name to attr (it's easier to handle in the further code)
names(summarized_data)[2] <- "attr"
#get the possible feature values
values <- levels(summarized_data$attr)
#initialize the edible values list
e_values <- list()
#initialize the poisonous values list
p_values <- list()
#initialize the counter for the feature performance score
decided_count = 0
#initialize the index for the list entries
e_pos <- 1
p_pos <- 1
#loop through all feature values
for(val in values){
#calculate the count of the edible entries for the feature value
e_count <- summarized_data %>% filter(classes=='e' & attr == val) %>% pull(n)
#if there are no edible entries, set the count to 0
e_count <- ifelse(is_empty(e_count),0,e_count)
#calculate the count of the poisonous entries for the feature value
p_count <- summarized_data %>% filter(classes=='p' & attr == val) %>% pull(n)
#if there are no poisonous entries, set the count to 0
p_count <- ifelse(is_empty(p_count),0,p_count)
#if the feature value has only p entries, then we can use it as a p rule
if (e_count == 0 && p_count > 0){
#add the value to the p rules
p_values[[p_pos]] <- val
#update the feature performance score
decided_count <- decided_count + p_count
#update the p rule list index
p_pos <- p_pos + 1
}
#if the feature value has only e entries, then we can use it as an e rule
if (e_count > 0 && p_count == 0){
#add the value to the e rules
e_values[[e_pos]] <- val
#update the feature performance score
decided_count <- decided_count + e_count
#update the e rule list index
e_pos <- e_pos + 1
}
}
#if the feature performance score is greater than 0 (the feature can be used as a decision rule)
if (decided_count > 0){
#convert the lists to comma separated string
e_values <- paste(e_values, collapse=",")
p_values <- paste(p_values,collapse=",")
#add the rule as a possible new decision rule to the tibble
#we record the proportion of the data that the current rule can classify
#we will use this score to select the best rule
newRules <- bind_rows(newRules, data_frame(feature=feature, decided_proz=decided_count / count, e_values = e_values,
p_values = p_values))
}
}
#if we found at least one decision rule for the given dataset
if (nrow(newRules) > 0){
#look for the decision rule with the highest proportion of classified data
idx <- which.max(newRules$decided_proz)
bestRule <- newRules[idx,]
#get the e rules from the best performing decision rule
e_values <- unlist(str_split(bestRule$e_values, ','))
#get the p rules from the best performing decision rule
p_values <- unlist(str_split(bestRule$p_values, ','))
#get the data from the dataset, that couldn't classify with the e and p rules
remaining_data <- dataset %>% filter( !(.[,bestRule$feature] %in% e_values) & !(.[,bestRule$feature] %in% p_values))
#add the best rule to the tree
decision_tree <-bind_rows(decision_tree, bestRule)
#if we have remaining data
if (nrow(remaining_data) > 0){
#then call the function recursively again with the remaining data and the extended tree
decision_tree <- extend_decision_tree(remaining_data, decision_tree)
}
}
#return the decision tree if we are ready with the recursion
return(decision_tree)
}
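# At each recursion step the best rule is simply the one that covers the
# largest share of the remaining data, picked with which.max (scores made up):

```r
newRules <- data.frame(feature = c("odor", "gill-size"),
                       decided_proz = c(0.97, 0.40),
                       stringsAsFactors = FALSE)
idx <- which.max(newRules$decided_proz)
newRules$feature[idx]  # "odor" - the rule classifying the largest share wins
```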
#train decision tree
predict_tree <- train_decision_tree_model(train_data)
predicted <- predict_decision_tree(test_data, predict_tree)
result_decision_tree_model <- confusionMatrix( predicted, test_data$classes)
result_decision_tree_model
uncertainty_plot(train_data)
odor_sporeprintcolor_values <- train_data %>% group_by(classes, odor, `spore-print-color`) %>% summarise()
odor_sporeprintcolor_values %>% print(n = nrow(odor_sporeprintcolor_values))
#The function trains the feature model with the given number of features. To select the most relevant
#features, the function uses the uncertainty score calculation
#All the features must be factors. The function assumes that the edibility classification is
#the first column in the dataset
#
#parameters:
# dataset: the data set containing the data for train the feature model
# feat_count: the number of features to use for the training
#
#return:
# the trained feature model. The result is a tibble containing the feature value combinations
# for which we predict 'e' (edible)
train_feature_model <- function(dataset, feat_count){
#initialize the tibble for the uncertainty scores
uncertinity_df <- data_frame(X=character(), idx=numeric(), value=numeric())
#get the feature name
labels <- names(dataset)
#loop all the features in the dataset
for (i in 1:ncol(dataset))
{
#calculate the uncertainty score between the current feature and the first column
#(the first column contains the edibility classification)
uncertinity_df <- bind_rows(uncertinity_df, data_frame(X=labels[i], idx=i, value=uncertainty(dataset,1,i)))
}
#order the data by the uncertainty score, descending
uncertinity_df <- uncertinity_df %>% arrange(-value)
#get the edible classification + the following feat_count features
features <- head(uncertinity_df$X,feat_count+1)
#select the unique data for all the features combination, where the classification is 'p'
p_values <- dataset[,features] %>% filter(classes=='p') %>% select(-classes) %>% unique()
#select the unique data for all the features combination, where the classification is 'e'
e_values <- dataset[,features] %>% filter(classes=='e') %>% select(-classes) %>% unique()
#filter out all the combinations from e_values that also occur in p_values
e_values <- e_values %>% anti_join(p_values, by=names(e_values))
#now we have the feature combinations that are definitely classified 'e'
#add this classification as a column to the result
e_values <- e_values %>% mutate(pred='e')
#return the result
return(e_values)
}
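# The anti_join step keeps only combinations never seen with a 'p' label; the
# same idea in base R, with made-up combination keys:

```r
e_combos <- c("a|w", "l|w", "n|r")  # combinations seen with label 'e'
p_combos <- c("n|r", "f|w")         # combinations seen with label 'p'
kept <- e_combos[!(e_combos %in% p_combos)]
kept  # "a|w" "l|w" - only unambiguously edible combinations survive
```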
#The function predict the edibility classification for the given dataset
#based on the given feature model
#All the features must be a factor.
#
#parameters:
# dataset: the data set containing the data for train the feature model
# e_values: the feature model (the e value combinations)
#
#return:
# the predicted classification list
predict_feature_model <- function(dataset, e_values){
#get the features containing in the feature model
feature_names = names(e_values %>% select(-pred))
#join the dataset to the feature model based on the selected features
#we use a left join, so there can be rows without a matching entry in the feature model
#if we find an entry in the feature model (e values), then we predict 'e'
#otherwise we predict 'p'
#So we predict 'e' only for combinations where we are sure the mushroom is edible
pred <- dataset %>% left_join(e_values, by=feature_names) %>%
mutate(y = factor(ifelse(is.na(pred),'p', pred), c('e','p'))) %>% pull(y)
#return the predicted values
return(pred)
}
#This function calculates the F1 score of the feature model for each given feature count,
#using the provided train and test sets. It can be passed to the n-fold cross validation function
#
#parameters:
# train: the train data set containing the data for train the feature model
# test: the test data set to predict the edibility classification
# features_counts: a vector of feature counts to evaluate
#
#return:
# a data frame with the F1 score for each evaluated feature count
calculate_F1_score <- function(train, test, features_counts){
F1_all = data_frame(feature_count = numeric(), F1 = numeric())
for(i in features_counts){
train_model <- train_feature_model(train, i)
predicted <- predict_feature_model(test, train_model)
result <- confusionMatrix(predicted, test$classes)
F1_score <- result$byClass[["F1"]]
F1_score <- ifelse(is.na(F1_score),0,F1_score)
F1_all <- bind_rows(F1_all, data_frame(feature_count = i, F1=F1_score))
}
return(F1_all)
}
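# For reference, the F1 score reported by confusionMatrix() is the harmonic
# mean of precision and recall; with made-up confusion counts:

```r
TP <- 90; FP <- 5; FN <- 10
precision <- TP / (TP + FP)
recall <- TP / (TP + FN)
F1 <- 2 * precision * recall / (precision + recall)
round(F1, 3)  # 0.923
```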
#train features model with cross validation
F1_scores <- cross_validation(train_data,5, calculate_F1_score, 1:22) %>% group_by(feature_count) %>%
summarise(F1=mean(F1))
#plot the F1 scores depending on the used feature count
F1_scores %>% ggplot(aes(feature_count, F1)) + geom_point()
#get the optimum feature count
opt_feature_count <- which.max(F1_scores$F1)
#calculate the result for our feature count based model
result_feature_model <- confusionMatrix(predict_feature_model(test_data,
train_feature_model(train_data, opt_feature_count)), test_data$classes)
train_data2 <- train_data %>% select(-`veil-type`)
train_knn <- train(classes~., method='knn', data=train_data2)
result_knn <- confusionMatrix(predict(train_knn, test_data, type="raw"), test_data$classes)
cat("The F1 value for the naiv decision tree: ", result_naive_tree_model$byClass[["F1"]])
cat("The F1 value for the trained decision tree: ", result_decision_tree_model$byClass[["F1"]])
cat("The F1 value for the feature model: ", result_feature_model$byClass[["F1"]])
cat("The F1 value for the knn method: ", result_knn$byClass[["F1"]])
# File: Grand-average-eriksen.R (repo: BartlettJE/PhD_EEG)
require(R.matlab) # to read MATLAB files - EEG data saved from MNE Python
require(tidyverse)
require(pracma) #function to allow the calculation of linear space for plotting
require(cowplot)
require(readbulk)
require(afex)
require(skimr)
# Load my packages
source("EEG_functions.R")
# prepare batch processing
# read a list of .csv (experiment data) to append later
csv.files <- list.files(path = "Raw_data/Behavioural/Eriksen/",
pattern = "*.csv",
full.names = F)
# read a list of .mat (EEG data) to append later
mat.files <- list.files(path = "Rdata/Eriksen/",
pattern = "*.mat",
full.names = F)
# Define which electrode I want to focus on out of the array of 33
electrode = "Cz"
# Define the linear space for the x axis of the graphs
x = linspace(-200,800,1025)
# Run a for loop to add the data to each matrix above
for (i in 1:length(csv.files)) {
# for each file, read in the .csv trial information and .mat EEG file
trial_info <- read.csv(paste("Raw_data/Behavioural/Eriksen/", csv.files[i], sep = ""))
dat <- readMat(paste("Rdata/Eriksen/", mat.files[i], sep = ""))
# Some defensive coding
# Make sure the csv and mat files match up - breaks loop if they do not
if (substr(csv.files[i], 0, 4) != substr(mat.files[i], 0, 4)) {
print(paste("The files of participant ", substr(csv.files[i], 0, 4), " do not match.", sep = ""))
break
}
else{
#if all is good, start processing the files
# apply functions from above to get erps for correct and incorrect trials
correct.erp <-
eriksen_erp(mat = dat,
csv = trial_info,
electrode = electrode,
correct = 1)
incorrect.erp <-
eriksen_erp(mat = dat,
csv = trial_info,
electrode = electrode,
correct = 0)
# append each new matrix row to the previous one
amplitude.dat <- data.frame(
"subject" = substr(csv.files[i], 0, 4),
"electrode" = electrode,
"response" = c(rep("correct", 1025), rep("incorrect", 1025)),
"amplitude" = c(correct.erp, incorrect.erp),
"time" = rep(x, 2)
)
write.csv(amplitude.dat, paste("processed_data/eriksen/",
substr(csv.files[i], 0, 4),
"-",
electrode,
"-eriksen.csv",
sep = ""))
# print out the progress and make sure the files match up.
# I could put in some defensive coding here.
print(paste("participant", substr(csv.files[i], 0, 4), "is complete."))
}
}
# Calculate how many trials were included for correct and incorrect responses
# Participants will only be included with at least 8 trials in both conditions
# Create empty matrix to append to
trial.n <- matrix(nrow = length(csv.files),
ncol = 3)
# Run a for loop to add the data to each matrix above
for (i in 1:length(csv.files)){
# for each file, read in the .csv trial information and .mat EEG file
trial_info <- read.csv(paste("Raw_data/Behavioural/Eriksen/", csv.files[i], sep = ""))
mat <- readMat(paste("Rdata/Eriksen/", mat.files[i], sep = ""))
# apply functions from above to get erps for correct and incorrect trials
trials_ns <- trial_N_Eriksen(mat = mat, csv = trial_info, electrode = electrode)
# append each new matrix row to the previous one
trial.n[i, ] <- trials_ns
# print out the progress and make sure the files match up.
# I could put in some defensive coding here.
print(paste("participant", substr(csv.files[i], 0, 4), "is complete."))
}
# Convert to data frame to be more informative
trial.n <- data.frame(trial.n)
colnames(trial.n) <- c("participant", "n_correct", "n_incorrect")
# Add eligible column: at least 8 trials in each condition
trial.n <- trial.n %>%
  mutate(included = case_when(n_correct > 7 & n_incorrect > 7 ~ 1,
                              TRUE ~ 0))
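# Note on the inclusion test: in R, `n_correct & n_incorrect > 7` parses as
# `n_correct & (n_incorrect > 7)`, so both counts must be compared explicitly:

```r
n_correct <- 5; n_incorrect <- 20
n_correct & n_incorrect > 7      # TRUE  - n_correct is only coerced to logical
n_correct > 7 & n_incorrect > 7  # FALSE - the intended "both at least 8" test
```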
#write.csv(trial.n, "Average_data/eriksen_trial_numbers.csv")
# Read in processed data from file
amplitude.dat <- read_bulk(directory = "processed_data/eriksen/",
extension = ".csv")
# append eligible or not
# Number trials included
trial.n <- read_csv("Average_data/eriksen_trial_numbers.csv")
amplitude.dat <- left_join(amplitude.dat, trial.n,
by = c("subject" = "participant"))
# Add smoking group
amplitude.dat <- amplitude.dat %>%
mutate(smoking_group = case_when(substr(subject, 0, 1) == 1 ~ "Non-Smoker",
substr(subject, 0, 1) == 2 ~ "Smoker")) %>%
  filter(included == 1) # remove any subjects with fewer than 8 trials in either condition
# Create difference wave: incorrect - correct trials
difference_wave <- amplitude.dat %>%
select(-X) %>%
spread(key = response, value = amplitude) %>%
mutate(difference = incorrect - correct)
# Create constant colour scheme for all plots
difference_wave$electrode <- factor(difference_wave$electrode,
levels = c("Cz", "Fz", "Pz"),
labels = c("Cz", "Fz", "Pz"))
group.cols <- c("#a6cee3", "#1f78b4", "#b2df8a")
names(group.cols) <- (levels(difference_wave$electrode))
colScale <- scale_color_manual(name = "Electrode", values = group.cols)
# Create difference wave plot
# Subset data for when I want to show individual data
# subject <- 2016
# difference_wave2 <- subset(difference_wave, subject == 2016)
(grand_difference <- difference_wave %>%
ggplot(aes(x = time, y = difference)) +
facet_grid(electrode~smoking_group) +
# stat_summary(aes(group = interaction(smoking_group, subject), color = smoking_group),
# fun.y = mean,
# geom = "line",
# size = 1,
# alpha = 0.2) +
stat_summary(aes(group = interaction(smoking_group, electrode), color = electrode),
fun.y = mean,
geom = "line",
size = 1,
alpha = 1) +
# stat_summary(data = difference_wave2, # optional label participant
# fun.y = mean,
# geom = "line",
# color = "black",
# size = 1,
# alpha = 1) +
scale_x_discrete(limits = seq(from = -200, to = 800, by = 200)) +
scale_y_continuous(limits = c(-15, 25),
breaks = seq(-15, 25, 5)) +
annotate("rect", xmin = 25, xmax = 75, ymin = -15, ymax = 25, alpha = 0.3) + #ERN
annotate("rect", xmin = 200, xmax = 400, ymin = -15, ymax = 25, alpha = 0.3) + #Pe
geom_hline(yintercept = 0, linetype = 2) +
geom_vline(xintercept = 0, linetype = 2) +
theme(legend.position="none") +
xlab("Time (ms)") +
ylab(expression("Mean amplitude"~(mu*"V"))) +
colScale)
# Save plot
# save_plot(filename = "ERP-plots/Grand_average_eriksen.pdf",
# plot = grand_difference,
# base_height = 10,
# base_width = 16)
# # Save participant plot
# save_plot(
# filename = paste("ERP-plots/participant_plots/",
# subject,
# "-Eriksen.pdf",
# sep = ""),
# plot = grand_difference,
# base_height = 10,
# base_width = 16
# )
### Number trials included
trial.n %>%
filter(included == 1) %>%
skim()
### Inferential stats
# ERN analysis
ERN <- difference_wave %>%
group_by(subject, smoking_group, electrode) %>%
filter(time >= 25 & time <= 75) %>%
summarise(mean_amp = mean(difference))
ERN_ANOVA <- aov_ez(id = "subject",
data = ERN,
dv = "mean_amp",
between = "smoking_group",
within = "electrode")
afex_plot(ERN_ANOVA, x = "smoking_group", trace = "electrode")
# Pe analysis
Pe <- difference_wave %>%
group_by(subject, smoking_group, electrode) %>%
filter(time >= 200 & time <= 400) %>%
summarise(mean_amp = mean(difference))
Pe_ANOVA <- aov_ez(id = "subject",
data = Pe,
dv = "mean_amp",
between = "smoking_group",
within = "electrode")
afex_plot(Pe_ANOVA, x = "smoking_group", trace = "electrode", error_ci = T)
# Descriptives
# ERN
difference_wave %>%
group_by(smoking_group, electrode) %>%
filter(time >= 25 & time <= 75) %>%
summarise(mean_correct = mean(correct),
sd_correct = sd(correct),
mean_incorrect = mean(incorrect),
sd_incorrect = sd(incorrect))
# Pe
difference_wave %>%
group_by(smoking_group, electrode) %>%
filter(time >= 200 & time <= 400) %>%
summarise(mean_correct = mean(correct),
sd_correct = sd(correct),
mean_incorrect = mean(incorrect),
sd_incorrect = sd(incorrect))
# Behavioural analysis
behav.dat <- read_bulk(directory = "Raw_data/Behavioural/Eriksen/",
extension = ".csv")
behav.dat <- behav.dat %>%
filter(Block != "Practice" & correct == 1 & response_time >= 200 & subject_nr %in% difference_wave$subject) %>%
group_by(subject_nr, Congruency) %>%
mutate(median_rt = median(response_time),
MAD_threshold = stats::mad(response_time)*2.5) %>%
filter(response_time > (median_rt - MAD_threshold) & response_time < (median_rt + MAD_threshold))
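# The outlier rule above keeps reaction times within median +/- 2.5 * MAD
# (mad() is SD-consistent by default); a standalone check with made-up RTs:

```r
rt <- c(300, 320, 340, 360, 380, 400, 1500)  # one obvious outlier
med <- median(rt)
thr <- stats::mad(rt) * 2.5
kept <- rt[rt > med - thr & rt < med + thr]
1500 %in% kept  # FALSE - the outlier is excluded
```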
perc_error <- behav.dat %>%
group_by(subject_nr, Congruency) %>%
summarise(perc_error = 100 - (sum(correct) / 200) * 100) %>%
mutate(smoking_group = case_when(substr(subject_nr, 0, 1) == 1 ~ "Non-Smoker",
substr(subject_nr, 0, 1) == 2 ~ "Smoker"))
error_ANOVA <- aov_ez(id = "subject_nr",
data = perc_error,
dv = "perc_error",
between = "smoking_group",
within = "Congruency")
afex_plot(error_ANOVA, x = "smoking_group", trace = "Congruency", error_ci = T)
perc_error %>%
group_by(Congruency) %>%
summarise(mean_error = mean(perc_error),
sd_error = sd(perc_error))
# File: 20160226 titanic logistic regression meeting 2.R (repo: WallaceTansg/Kaggletitanic)
#decision tree
install.packages("rpart")
library(rpart)
#cforest
install.packages('party')
library(party)
#randomForest
install.packages('randomForest')
library(randomForest)
#more efficiency
library(readr)
#for visualisation purpose of missing value pattern#
library(VIM)
#for impute missing value purpose
library(mice)
#for count frequency purpose
install.packages("plyr")
library(plyr)
options( contrasts = c( "contr.treatment",
"contr.poly" ))
set.seed(888)
# na.strings=c("","NA"), impute blank cell as NA
train<- read.csv("train.csv",sep=",", na.strings=c("","NA"))
test<- read.csv("test.csv",sep=",",na.strings=c("","NA"))
contrasts(train$Sex)
#data massage
test$Survived <- NA
combi <- rbind(train, test)
combi$Name <- as.character(combi$Name)
combi$Name[1]
strsplit(combi$Name[1], split='[,.]')
strsplit(combi$Name[1], split='[,.]')[[1]]
strsplit(combi$Name[1], split='[,.]')[[1]][2]
combi$Title <- sapply(combi$Name, FUN=function(x) {strsplit(x, split='[,.]')[[1]][2]})
combi$Title <- sub(' ', '', combi$Title)
table(combi$Title)
combi$Title[combi$Title %in% c('Mme', 'Mlle')] <- 'Mlle'
combi$Title[combi$Title %in% c('Capt', 'Don', 'Major', 'Sir')] <- 'Sir'
combi$Title[combi$Title %in% c('Dona', 'Lady', 'the Countess', 'Jonkheer')] <- 'Lady'
combi$Title <- factor(combi$Title)
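# The title extraction above can be checked on a single made-up name:

```r
name <- "Braund, Mr. Owen Harris"
title <- strsplit(name, split = '[,.]')[[1]][2]  # " Mr"
sub(' ', '', title)                              # "Mr"
```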
combi$FamilySize <- combi$SibSp + combi$Parch + 1
combi$Surname <- sapply(combi$Name, FUN=function(x) {strsplit(x, split='[,.]')[[1]][1]})
combi$FamilyID <- paste(as.character(combi$FamilySize), combi$Surname, sep="")
combi$FamilyID[combi$FamilySize <= 2] <- 'Small'
famIDs <- data.frame(table(combi$FamilyID))
famIDs <- famIDs[famIDs$Freq <= 2,]
combi$FamilyID[combi$FamilyID %in% famIDs$Var1] <- 'Small'
combi$FamilyID <- factor(combi$FamilyID)
combi$FamilyID2 <- combi$FamilyID
combi$FamilyID2 <- as.character(combi$FamilyID2)
combi$FamilyID2[combi$FamilySize <= 3] <- 'Small'
combi$FamilyID2 <- factor(combi$FamilyID2)
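# The family-ID collapsing logic (rare IDs become 'Small') in miniature, with
# made-up surnames:

```r
fam <- c("3Smith", "3Smith", "3Smith", "2Jones", "2Jones")
freq <- table(fam)
fam[fam %in% names(freq[freq <= 2])] <- "Small"
fam  # "3Smith" "3Smith" "3Smith" "Small" "Small"
```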
train <- combi[1:891,]
test <- combi[892:1309,]
#to get column index of variable
grep("Ticket", colnames(train))
grep("Name", colnames(train))
grep("Cabin", colnames(train))
grep("Surname", colnames(train))
grep("FamilyID", colnames(train))
#delete variables by negative index
train<-train[,-c(4,9,11,15,16)]
#way 1 of imputation of missing value
#plot pattern of missing value
train_aggr = aggr(train, col=mdc(1:2), numbers=TRUE, sortVars=TRUE,
labels=names(train), cex.axis=.7, gap=3,
ylab=c("Proportion of missingness","Missingness Pattern"))
?mice
train <- mice(train,m=8,maxit=25,meth='cart',seed=888,printFlag=TRUE)
train<- complete(train,1)
train$Sexchild<-ifelse (train$Age<=14 ,"Child",
ifelse(train$Sex=="female" & train$Age>14,"women","men" ))
#test
test_aggr = aggr(test, col=mdc(1:2), numbers=TRUE, sortVars=TRUE,
labels=names(test), cex.axis=.7, gap=3,
ylab=c("Proportion of missingness","Missingness Pattern"))
#to get column index of variable
grep("Survived", colnames(test))
grep("Ticket", colnames(test))
grep("Name", colnames(test))
grep("Cabin", colnames(test))
grep("Surname", colnames(test))
grep("FamilyID", colnames(test))
test<-test[,-c(2,4,7,11,15,16)]
test <- mice(test,m=8,maxit=3,meth='cart',seed=888)
test<- complete(test,1)
test$Sexchild<-ifelse (test$Age<=14 ,"Child",
ifelse(test$Sex=="female" & test$Age>14,"women","men" ))
test$Sexchild<-as.factor(test$Sexchild)
#way 2 of imputation of missing value
random.imp <- function (a){
missing <- is.na(a)
n.missing <- sum(missing)
a.obs <- a[!missing]
imputed <- a
imputed[missing] <- sample (a.obs, n.missing, replace=F)
return (imputed)
}
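# A quick self-contained check of the random-imputation idea (helper redefined
# here so the snippet runs on its own):

```r
random.imp <- function(a) {
  missing <- is.na(a)
  a[missing] <- sample(a[!missing], sum(missing), replace = FALSE)
  a
}
set.seed(1)
x <- c(1, NA, 3, NA, 5)
imputed <- random.imp(x)
sum(is.na(imputed))           # 0 - every NA replaced by an observed value
all(imputed %in% c(1, 3, 5))  # TRUE
```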
train$Age=random.imp(train$Age)
test$Age=random.imp(test$Age)
test$Fare=random.imp(test$Fare)
#model 1 rpart
fit <- rpart(Survived ~ Pclass + Sex + Age + Fare + Title+
FamilyID2,data=train, method="class")
library(rattle) # fancyRpartPlot() is provided by the rattle package
fancyRpartPlot(fit)
Prediction <- predict(fit, test, type = "class")
submit_rpart <- data.frame(PassengerId = test$PassengerId, Survived=Prediction)
count(submit_rpart,"Survived")
write.csv(submit_rpart, file = "submission11_rpart.csv", row.names = FALSE)
#model 2 randomforest
train$Survived<-as.factor(train$Survived)
fit2 <- randomForest(Survived ~Pclass + Sex + Age + Fare + Title+Pclass*Sex+ FamilyID2, data=train, importance=TRUE, ntree=1500)
varImpPlot(fit2)
Prediction2 <- predict(fit2, test)
submit_rforest<-data.frame(PassengerId = test$PassengerId, Survived=Prediction2)
count(submit_rforest,"Survived")
write.csv(submit_rforest, file = "submission12_rforest.csv", row.names = FALSE)
#model 3 cforest
set.seed(888)
train$Sexchild<-as.factor(train$Sexchild)
fit3<- cforest(as.factor(Survived) ~ Pclass+Age+FamilyID2+FamilySize+Title+Sexchild+Fare+Pclass*Fare+Pclass*Title+Title*Sexchild+Pclass*FamilyID2+Pclass*Sexchild+Pclass*Age,data = train, controls=cforest_unbiased(ntree=1000))
# standard importance
varimp(fit3)
# the same modulo random variation
varimp(fit3, pre1.0_0 = TRUE)
# conditional importance, may take a while...
varimp(fit3, conditional = TRUE)
#https://journal.r-project.org/archive/2009-2/RJournal_2009-2_Strobl~et~al.pdf
Prediction3 <- predict(fit3, test, OOB=TRUE, type = "response")
submit_cforest<-data.frame(PassengerId = test$PassengerId, Survived=Prediction3)
?cforest
count(submit_cforest,"Survived")
write.csv(submit_cforest, file = "submission29_cforest.csv", row.names = FALSE)
submit_cforestI<-data.frame(test, Survived=Prediction3)
write.csv(submit_cforestI, file = "submission23_cforestI.csv", row.names = FALSE)
#submission23_cforest:0.818.. corresponding to kaggle sub22
#submission22_cforest:0.81383.. corresponding to kaggle sub20
#model 4 Bagged CART
library(mlbench)
library(caret)
control <- trainControl(method="repeatedcv", number=10, repeats=3)
?trainControl
seed <- 7
metric <- "Accuracy"
fit.treebag <- train(as.factor(Survived) ~ Pclass + Sex + Age + Fare + Title+Pclass*Sex+
FamilyID2, data=train, method="treebag", metric=metric, trControl=control)
Prediction4=predict(fit.treebag, test)
submit_tbag<-data.frame(PassengerId = test$PassengerId, Survived=Prediction4)
count(submit_tbag,"Survived")
write.csv(submit_tbag, file = "submission14_tbag.csv", row.names = FALSE)
fit.gbm <- train(as.factor(Survived) ~ Pclass + Sex + Age + Fare + Title+Pclass*Sex+
FamilyID2, data=train, method="gbm", metric=metric, trControl=control, verbose=FALSE)
Prediction5=predict(fit.gbm, test)
submit_gbm<-data.frame(PassengerId = test$PassengerId, Survived=Prediction5)
count(submit_gbm,"Survived")
write.csv(submit_gbm, file = "submission15_gbm.csv", row.names = FALSE)
# File: est_pow.R (repo: VanAndelInstitute/bifurcatoR)
#' est_pow
#'
#' @param n sample size (number of observations to simulate)
#' @param alpha default significance level (0.05)
#' @param nsim number of simulations (20)
#' @param dist generating distribution
#' @param params parameters for the generating distribution
#' @param tests names of tests to run
#' @param nboot number of bootstraps for mclust and/or modetest
#'
#' @return a data.frame with columns N, Test, power and FP (false-positive rate) for each requested test
#'
#' @import mclust
#' @import diptest
#' @import mousetrap
#' @import LaplacesDemon
#'
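#' @examples
#' \dontrun{
#' ## Minimal usage sketch; the parameter names inside `params` follow the
#' ## "norm" branch of the function body and are illustrative assumptions:
#' params <- list(p = 0.5, mu1 = 0, mu2 = 3, sd1 = 1, sd2 = 1)
#' est_pow(n = 100, alpha = 0.05, nsim = 20, dist = "norm",
#'         params = params, tests = c("dip", "mt"), nboot = 100)
#' }
#'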
#' @export
est_pow = function(n,alpha,nsim,dist,params,tests,nboot){
if(dist=="norm"){
n1 = floor(params$p*n)
n2 = floor((1-params$p)*n)
n.dfs = lapply(1:nsim,function(x) c(rnorm(n1,params$mu1,params$sd1),rnorm(n2,params$mu2,params$sd2)))
a.dfs = lapply(1:nsim,function(x) c(rnorm(n,
(n1 * params$mu1 + n2 * params$mu2)/n,
sqrt(((n1-1)*params$sd1^2 + (n2-1)*params$sd2^2)/(n-2)))))
} else {
if(dist=="beta"){
n.dfs = lapply(1:nsim,function(x) rbeta(n,params$s1,params$s2))
a.dfs = lapply(1:nsim,function(x) rbeta(n,2,2))
} else {
if(dist=="weib"){
#print(params)
n1 = floor(params$p*n)
n2 = floor((1-params$p)*n)
n.dfs = lapply(1:nsim,function(x) c(rweibull(n1,shape=params$sp1,scale=params$sc1),rweibull(n2,shape=params$sp2,scale=params$sc2)))
a.dfs = lapply(1:nsim,function(x) c(rweibull(n,
shape = (n1 * params$sp1 + n2 * params$sp2)/n,
scale = (n1 * params$sc1 + n2 * params$sc2)/n)))
}
}
}
pwr.df = NULL
#Old version of dip
# if("dip" %in% tests){
# pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Hartigans' dip test",
# power = sum(sapply(n.dfs, function(s) I(diptest::dip.test(s)$p.value<alpha)))/nsim,
# FP = sum(sapply(a.dfs, function(s) I(diptest::dip.test(s)$p.value<alpha)))/nsim))
# }
#
if("mclust" %in% tests){
pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Mclust",
power = sum(sapply(n.dfs, function(s) I(mclust::mclustBootstrapLRT(as.data.frame(s),modelName="E",verbose=F,maxG=1,nboot=nboot)$p.value<alpha)))/nsim,
FP = sum(sapply(a.dfs, function(s) I(mclust::mclustBootstrapLRT(as.data.frame(s),modelName="E",verbose=F,maxG=1,nboot=nboot)$p.value<alpha)))/nsim ))
}
if("mt" %in% tests){
pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Bimodality Coefficient",
                                     power = sum(sapply(n.dfs, function(s) I(mousetrap::mt_check_bimodality(as.data.frame(s),method="BC")$BC > 0.555)))/nsim,
FP = sum(sapply(a.dfs, function(s) I(mousetrap::mt_check_bimodality(as.data.frame(s),method="BC")$BC > 0.555)))/nsim))
}
if("SI" %in% tests){
pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Silverman Bandwidth",
power = sum(sapply(n.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "SI",B=nboot)$p.value < alpha)))/nsim,
FP = sum(sapply(a.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "SI",B=nboot)$p.value < alpha)))/nsim))
}
if("dip" %in% tests){
pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Hartigan Dip Test",
power = sum(sapply(n.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "HH",B=nboot)$p.value < alpha)))/nsim,
FP = sum(sapply(a.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "HH",B=nboot)$p.value < alpha)))/nsim))
}
if("HY" %in% tests){
pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Hall and York Bandwidth",
power = sum(sapply(n.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "HY",B=nboot)$p.value < alpha)))/nsim,
FP = sum(sapply(a.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "HY",B=nboot)$p.value < alpha)))/nsim))
}
if("CH" %in% tests){
pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Cheng and Hall Excess Mass",
power = sum(sapply(n.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "CH",B=nboot)$p.value < alpha)))/nsim,
FP = sum(sapply(a.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "CH",B=nboot)$p.value < alpha)))/nsim))
}
if("ACR" %in% tests){
pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Ameijeiras-Alonso et al. Excess Mass",
power = sum(sapply(n.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "ACR",B=nboot)$p.value < alpha)))/nsim,
FP = sum(sapply(a.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "ACR",B=nboot)$p.value < alpha)))/nsim))
}
if("FM" %in% tests){
pwr.df = rbind(pwr.df,data.frame(N = n, Test = "Fisher and Marron Carmer-von Mises",
power = sum(sapply(n.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "FM",B=nboot)$p.value < alpha)))/nsim,
FP = sum(sapply(a.dfs, function(s) I(multimode::modetest(data = s,mod0 = 1,method = "FM",B=nboot)$p.value < alpha)))/nsim))
}
return(pwr.df)
}
|
3552297ffca0f26c4ba5d97fd8e7f4c752164ae3 266c2547061106cc5c12ce6aaf9f72d68683a828 /Skriptit/kartta.r b4a39a12e179997f7ae70e1c8e20255f6e331d29 [] no_license ficusvirens/ELS-laskuri 7ed502f9aeb4e74ab30db025508d2c6347762aef 7a5d9c7227cc35ae60e4120a94ea995cbd2f271d refs/heads/main 2023-03-18T11:53:02.695970 2021-03-04T11:31:08 2021-03-04T11:31:08 344,451,155 0 0 null null null null UTF-8 R false false 2,083 r kartta.r
# ELS-laskuri
# Sofia Airola 2016
# sofia.airola@hespartto.fi
library("tmap")
# FIXMAP: muokkaa kartan drawMap-funktiolle sopivaan muotoon
# map = shapefile-objekti, jossa vesistömuodostumien nimet ovat otsakkeen "Nimi" alla
fixMap <- function(map) {
# muutetaan vesistömuodostumien nimet character-muotoon
map@data$Nimi <- as.character(map@data$Nimi)
# järjestetään aakkosjärjestykseen
map <- map[order(map@data$Nimi),]
return (map)
}
# FIXMAPDATA: muokkaa karttadatan drawMap-funktiolle sopivaan muotoon
# mapData = taulukko, jossa vesistömuodostumat ovat nimellä "alue" ja ELS-arvo nimellä "ELS"
fixMapData <- function(mapData) {
# muutetaan vesistömuodostumien nimet character-muotoon
mapData$alue <- as.character(mapData$alue)
# järjestetään aakkosjärjestykseen
mapData <- mapData[order(mapData$alue),]
# muutetaan ELS-arvot ELS-luokiksi
mapData$ELS <- sapply(mapData$ELS, FUN = ELS_ToWords)
# muutetaan ELS-luokat faktoreiksi
levs = c("Erinomainen", "Hyva", "Tyydyttava", "Valttava", "Huono")
mapData$ELS <- factor(mapData$ELS, levels = levs, ordered = TRUE)
return (mapData)
}
# DRAWMAP: piirtää kartan, jossa vesistömuodostumat on värikoodattu ELS-luokan mukaan
# map = shapefile-objekti, jossa vesistömuodostumien nimet ovat otsakkeen "Nimi" alla
# mapData = taulukko, jossa vesistömuodostumat ovat nimellä "alue" ja ELS-luokka nimellä "ELS"
drawMap <- function(mapData, map, period) {
# tehdään uusi kartta
newMap <- append_data(map, mapData, key.shp = "Nimi", key.data = "alue")
mapTitle = as.character(period)
# tässä asetetaan kartan muotoilut.
finalMap <- tm_shape(newMap) +
tm_layout(title = mapTitle) +
tm_fill("ELS", title = "ELS", style = "cat", palette = "-RdYlGn", colorNA = "grey") +
tm_borders(alpha = .5)+
tm_text("alue", size = 0.8, col = "#000000", bg.color = "#FFFFFF")
# tallennetaan kartta jpg-muotoon
filename = paste("Output/ELS-kartat/ELS-kartta ", period, ".jpg", sep = "")
save_tmap(finalMap, file = filename)
}
|
fe4f3a979c98a2bd2621b488087351c7803432c8 29585dff702209dd446c0ab52ceea046c58e384e /koRpus/R/coleman.liau.R 8c3cf0511bc594b923200303c0f40422e417366f [] no_license ingted/R-Examples 825440ce468ce608c4d73e2af4c0a0213b81c0fe d0917dbaf698cb8bc0789db0c3ab07453016eab9 refs/heads/master 2020-04-14T12:29:22.336088 2016-07-21T14:01:14 2016-07-21T14:01:14 null 0 0 null null null null UTF-8 R false false 2,746 r coleman.liau.R
# Copyright 2010-2014 Meik Michalke <meik.michalke@hhu.de>
#
# This file is part of the R package koRpus.
#
# koRpus is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# koRpus is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with koRpus. If not, see <http://www.gnu.org/licenses/>.
#' Readability: Coleman-Liau Index
#'
#' This is just a convenient wrapper function for \code{\link[koRpus:readability]{readability}}.
#'
#' Calculates the Coleman-Liau index. In contrast to \code{\link[koRpus:readability]{readability}},
#' which by default calculates all possible indices, this function will only calculate the index value.
#'
#' This formula doesn't need syllable count.
#'
#' @param txt.file Either an object of class \code{\link[koRpus]{kRp.tagged-class}}, a character vector which must be be
#' a valid path to a file containing the text to be analyzed, or a list of text features. If the latter, calculation
#' is done by \code{\link[koRpus:readability.num]{readability.num}}.
#' @param ecp A numeric vector with named magic numbers, defining the relevant parameters for the cloze percentage estimate.
#' @param grade A numeric vector with named magic numbers, defining the relevant parameters to calculate the grade equivalent for ECP values.
#' @param short A numeric vector with named magic numbers, defining the relevant parameters for the short form of the formula.
#' @param ... Further valid options for the main function, see \code{\link[koRpus:readability]{readability}} for details.
#' @return An object of class \code{\link[koRpus]{kRp.readability-class}}.
# @author m.eik michalke \email{meik.michalke@@hhu.de}
#' @keywords readability
#' @export
#' @examples
#' \dontrun{
#' coleman.liau(tagged.text)
#' }
coleman.liau <- function(txt.file,
ecp=c(const=141.8401, char=0.21459, sntc=1.079812),
grade=c(ecp=-27.4004, const=23.06395),
short=c(awl=5.88, spw=29.6, const=15.8), ...){
# combine parameters
param.list <- list(ecp=ecp, grade=grade, short=short)
if(is.list(txt.file)){
results <- readability.num(txt.features=txt.file, index="Coleman.Liau", parameters=list(Coleman.Liau=param.list), ...)
} else {
results <- readability(txt.file=txt.file, index="Coleman.Liau", parameters=list(Coleman.Liau=param.list), ...)
}
return(results)
}
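# For reference, with the default parameters above the index corresponds to
# the published Coleman-Liau equations (a sketch of how the magic numbers are
# combined; the actual computation happens inside readability()):
#   ECP   = 141.8401 - 0.21459 * (characters per 100 words)
#           + 1.079812 * (sentences per 100 words)
#   grade = -27.4004 * ECP / 100 + 23.06395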
|
1790a3fe25190a97ba47ce092227877ad9b968b4 248f361e13e043c73cb7e5e2573852ddd9970db2 /inst/extdata/facial_distance.R 5cc09a3fdb3a388ea2cc233d69f47aa096d0b87e [] no_license kindlychung/collr2 ff82f38066386923268c598c145f912fa89f8215 6c7ebc59277de2dbd24a6413e994ba5b6eb82ca4 refs/heads/master 2021-01-23T13:22:22.327304 2015-01-08T15:57:03 2015-01-08T15:57:03 28,972,848 1 0 null null null null UTF-8 R false false 619 r facial_distance.R
require(collr)
setwd("/media/data1/kaiyin/RS123_1KG")
phenoNames = paste("dist", 1:36, sep="")
for(phenoName in phenoNames) {
argscoll = getPlinkParam(
allow_no_sex = "",
missing_phenotype = 9999,
pheno = "RS123.1kg.pheno/face.phenoAll.csv",
covar = "RS123.1kg.pheno/face.phenoAll.csv",
covar_name = "sex,age",
linear = "hide-covar",
pheno_name = phenoName
)
taskRoutine(
taskname = paste("face_", phenoName, "_s200", sep=""),
plinkParamList=argscoll,
nMaxShift=200,
initGwas = TRUE,
pvalThresh=1e-3
)
}
|
cfa8454733ff0bcf023aa568c5fba19b85cbb654 f3441308f90b0843b313b8690b0d2b6fb5b467cb /내가 공부한 것/Data/Q_1.R 4449615593fff336c8f2e0af63671fc3fbf74548 [] no_license qjvkdkel12/R 5a2cc5aca085b349e679e89ce660078674cb0ed5 1d54f276a44922fcebfab2fdfc92de597547a66e refs/heads/master 2020-07-08T13:18:53.790677 2019-09-04T03:41:16 2019-09-04T03:41:16 203,686,011 0 0 null null null null UTF-8 R false false 271 r Q_1.R
# Combine data.frame() and c() to turn the table's contents into a data frame and print it.
# use "=" (not "<-") inside data.frame() so the columns get the intended names
abc <- data.frame(제품 = c("사과","딸기","수박"),
                  가격 = c(1800,1500,3000),
                  판매량 = c(24,38,13))
abc
|
ba74fe9fb8ad6e57e259a1dcebef1e62021af512 df6dc09aca37d6cd616436fe4ee1021677cebd37 /man/cfba_moment.Rd c6d3a583231a57aebaa839275233bba1bf75eb1b [] no_license cran/sybilccFBA 9eadbbcc6bf0cd550dd4cc003a106b742b9764cd 4c7c379f9407323f9d6963b3315487146cb5321e refs/heads/master 2020-05-17T11:07:00.514539 2019-12-15T14:20:05 2019-12-15T14:20:05 18,368,868 1 2 null null null null UTF-8 R false false 6,972 rd cfba_moment.Rd
\name{cfba_moment}
\alias{cfba_moment}
\encoding{utf8}
\title{ Function: cfba_moment: implement MOMENT method}
\description{
This function uses GPR, kcat, and molecular weights to calculate fluxes
according to MOMENT method.
}
\usage{
cfba_moment(model,mod2=NULL, Kcat,MW=NULL,
selected_rxns=NULL,verboseMode=2,objVal=NULL,
RHS=NULL,solver=SYBIL_SETTINGS("SOLVER"),medval=NULL,
runFVA = FALSE, fvaRxn = NULL)
}
\arguments{
\item{model}{ An object of class \code{\link{modelorg}}.}
\item{mod2}{ An object of class \code{\link{modelorg}} with only irreversible reactions.
It can be sent to save time of recalculating it with each call.}
\item{Kcat}{ kcat values in unit 1/S. Contains three slots: reaction id,direction(dirxn),value(val)}
\item{MW}{ list of molecular weights of all genes, using function calc_MW, in units g/mol}
\item{selected_rxns}{optional parameter used to select a set of reactions not all, list of react_id}
\item{verboseMode}{
An integer value indicating the amount of output to stdout:
0: nothing, 1: status messages, 2: like 1 plus with more details,
3: generates files of the LP problem.\cr
Default: \code{2}.
}
\item{RHS}{ the budget C, for EColi 0.27}
\item{objVal}{when not null the problem will be to find the minimum budget that give the specified
objective value(biomass)}
\item{solver}{
Single character string giving the solver package to use. See
\code{\link{SYBIL_SETTINGS}} for possible values.\cr
Default: \code{SYBIL_SETTINGS("SOLVER")}.
}
\item{medval}{ median of Kcat values , used for missing values}
\item{runFVA}{ flag to choose to run flux variability default FALSE}
\item{fvaRxn}{ optional parameter to choose set of reaction ids to run FVA on them.
Ids are from the irreversible model default all reactions. Ignored when runFVA
is not set.}
}
\details{
Main steps:
\enumerate{
  \item Add variables for all genes
  \item For each selected reaction: parse its GPR rule
  \item Add the corresponding variables and constraints
  \item Add the solvent capacity constraint
}
}
\value{
returns a list containing slots:
\item{sol}{solution of the problem, instance of class \code{\link{optObj}}.}
\item{prob}{object of class \code{\link{sysBiolAlg}} that contains the linear problem,
this can be used for further processing like adding more constraints.
To save it, function \code{\link{writeProb}} can be used.}
\item{geneCol}{mapping of genes to variables in the problem.}
}
\author{Abdelmoneim Amer Desouki}
\references{
Adadi, R., Volkmer, B., Milo, R., Heinemann, M., & Shlomi, T. (2012).
Prediction of Microbial Growth Rate versus Biomass Yield by a Metabolic Network
with Kinetic Parameters. PLoS Computational Biology, 8(7). doi:10.1371/journal.pcbi.1002575

Gelius-Dietrich, G., Desouki, A. A., Fritzemeier, C. J., & Lercher, M. J. (2013).
sybil – Efficient constraint-based modelling in R. BMC Systems Biology, 7(1), 125.
}
\seealso{
\code{\link{modelorg}},
\code{\link{optimizeProb}}
}
\examples{
\dontrun{
library(sybilccFBA)
data(iAF1260)
model= iAF1260
data(mw)
data(kcat)
mod2=mod2irrev(model)
uppbnd(mod2)[react_id(mod2)=="R_EX_glc_e__b"]=1000
uppbnd(mod2)[react_id(mod2)=="R_EX_glyc_e__b"]=0
uppbnd(mod2)[react_id(mod2)=="R_EX_ac_e__b"]=0
uppbnd(mod2)[react_id(mod2)=="R_EX_o2_e__b"]=1000
lowbnd(mod2)[react_id(mod2)=="R_ATPM"]=0
sol=cfba_moment(model,mod2,kcat,MW=mw,verbose=2,RHS=0.27,solver="glpkAPI",medval=3600*22.6)
bm_rxn = which(obj_coef(mod2)!=0)
print(sprintf('biomass=\%f',sol$sol$fluxes[bm_rxn]))
# Enzyme concentrations:
}% end dontrun
data(Ec_core)
model=Ec_core
genedef=read.csv(paste0(path.package("sybilccFBA"), '/extdata/Ec_core_genedef.csv'),
stringsAsFactors=FALSE)
mw=data.frame(gene=genedef[,'gene'],mw=genedef[,'mw'],stringsAsFactors=FALSE)
mw[mw[,1]=='s0001','mw']=0.001 # spontaneous
##########
##Kcats
kl=read.csv(stringsAsFactors=FALSE,paste0(path.package("sybilccFBA"),
'/extdata/','allKcats_upd34_dd_h.csv'))
kl=kl[!is.na(kl[,'ijo_id']),]
kcat=data.frame(rxn_id=kl[,'ijo_id'],val=kl[,'kcat_max'],dirxn=kl[,'dirxn'],
src=kl[,'src'],stringsAsFactors=FALSE)
kcat=kcat[kcat[,'rxn_id']\%in\% react_id(model),]
kcat[(is.na(kcat[,'src'])),'src']='Max'
########## ----------------
mod2=mod2irrev(model)
uppbnd(mod2)[react_id(mod2)=="EX_o2(e)_b"]=1000
lowbnd(mod2)[react_id(mod2)=="ATPM"]=0
uppbnd(mod2)[react_id(mod2)=="ATPM"]=0
nr=react_num(mod2)
medianVal=median(kcat[,'val'])
# sum(is.na(genedef[,'mw']))
C_mr=0.13#as not all genes exist
sim_name=paste0('Ec_org_med',round(medianVal,2),'_C',100*C_mr)
cpx_stoich =read.csv(paste0(path.package("sybilccFBA"),
'/extdata/','cpx_stoich_me.csv'),stringsAsFactors=FALSE)
##-----------------
CSList=c("R_EX_glc_e__b","R_EX_glyc_e__b","R_EX_ac_e__b","R_EX_fru_e__b",
"R_EX_pyr_e__b","R_EX_gal_e__b",
"R_EX_lac_L_e__b","R_EX_malt_e__b","R_EX_mal_L_e__b","R_EX_fum_e__b","R_EX_xyl_D_e__b",
"R_EX_man_e__b","R_EX_tre_e__b",
"R_EX_mnl_e__b","R_EX_g6p_e__b","R_EX_succ_e__b","R_EX_gam_e__b","R_EX_sbt_D_e__b",
"R_EX_glcn_e__b",
"R_EX_rib_D_e__b","R_EX_gsn_e__b","R_EX_ala_L_e__b","R_EX_akg_e__b","R_EX_acgam_e__b")
msrd=c(0.66,0.47,0.29,0.54,0.41,0.24,0.41,0.52,0.55,0.47,0.51,0.35,0.48,0.61,
0.78,0.50,0.40,0.48,0.68,0.41,0.37,0.24,0.24,0.61)
CA=c(6,3,2,6,3,6,3,12,4,4,5,6,12,6,6,4,6,6,6,5,10,3,5,8)
CSList=substring(gsub('_e_','(e)',CSList),3)
react_name(mod2)[react_id(mod2) \%in\% CSList]
msrd=msrd[CSList \%in\% react_id(mod2) ]
CA=CA[CSList \%in\% react_id(mod2)]
CSList=CSList[CSList \%in\% react_id(mod2)]
uppbnd(mod2)[react_id(mod2) \%in\% CSList]=0
mod2R=mod2
##---------------------
bm_rxn=which(obj_coef(mod2)!=0)
all_flx=NULL
all_flx_MC=NULL
all_rg_MC=NULL
Kcatorg=kcat
solver='glpkAPI'
solverParm=NA
for(cs in 1:length(CSList)){
print(CSList[cs])
mod2=mod2R
uppbnd(mod2)[react_id(mod2) \%in\% CSList]=0
uppbnd(mod2)[react_id(mod2)==CSList[cs]]=1000
sol_org=cfba_moment(model,mod2,kcat,MW=mw,verbose=2,RHS=0.27,solver="glpkAPI",
medval=3600*medianVal)
### preparing output -------------------
all_flx=rbind(all_flx,data.frame(stringsAsFactors=FALSE,cs,csname=CSList[cs],
rxn_id=react_id(mod2),
flx=sol_org$sol$fluxes[1:nr], ub=uppbnd(mod2),ubR=uppbnd(mod2R)))
# print(paste0("nrow all_rg_MC=",nrow(all_rg_MC)))
}
upt=all_flx[all_flx[,'csname']==all_flx[,'rxn_id'],]
bm=all_flx[react_id(mod2)[obj_coef(mod2)!=0]==all_flx[,'rxn_id'],]
cor.test(bm[,'flx'],msrd,method='spearman')
}
% Add one or more standard keywords, see file 'KEYWORDS' in the
% R documentation directory.
\keyword{ FBA }
\keyword{ MOMENT }
\keyword{ cost constraint FBA }% __ONLY ONE__ keyword per line
|
2eff7b898ae3fcefdcdd2ddd5f107faa7a077171 0feedfcb9f76e63e15727486747d9693d4863e5a /稳健性检验/breadth/breadth_y.R d10c3823dbde5c9577524b570b17b5e8ffe5313d [] no_license jaynewton/paper_6 be06bd623707d87e0446f25eed0851d86738f561 331ce16dd031e9e3506fc91ac00bb7769cc2095f refs/heads/master 2020-04-02T02:33:06.048234 2018-11-11T11:24:59 2018-11-11T11:24:59 153,915,405 0 0 null null null null UTF-8 R false false 2,466 r breadth_y.R
#################################
load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_tsk_all_1.RData")
#load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_tsk_all_2.RData")
#load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_tsk_all_3.RData")
#load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_tsk_all_4.RData")
#load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_tsk_all_5.RData")
# Since we are faced with storage space limitation, don't use copy() here.
da_breadth_all <- da_tsk_all_1
#da_breadth_all <- da_tsk_all_2
#da_breadth_all <- da_tsk_all_3
#da_breadth_all <- da_tsk_all_4
#da_breadth_all <- da_tsk_all_5
now()
da_intermediate <- NULL
ym_index <- da_breadth_all[,sort(unique(ym))]
for (i in 1:length(ym_index)) {
da_sub <- da_breadth_all[ym==ym_index[i],]
selected_code <- da_sub[,.N,by=SecCode][N>=120,SecCode]
da_sub <- da_sub[SecCode %in% selected_code,]
da_intermediate <- rbind(da_intermediate,da_sub)
}
now()
da_breadth_all <- da_intermediate
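# Note: the month-by-month filtering loop above is equivalent to a single
# grouped data.table step (same rows kept; column order may differ):
# da_intermediate <- da_breadth_all[, if (.N >= 120) .SD, by = .(ym, SecCode)]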
da_breadth_y <- da_breadth_all[,.(breadth=mean(ret_e)-median(ret_e)),keyby=.(ym,SecCode)]
da_breadth_y_1 <- da_breadth_y
#da_breadth_y_2 <- da_breadth_y
#da_breadth_y_3 <- da_breadth_y
#da_breadth_y_4 <- da_breadth_y
#da_breadth_y_5 <- da_breadth_y
save(da_breadth_y_1,file="C:/Users/Ding/Desktop/da_breadth_y_1.RData")
#save(da_breadth_y_2,file="C:/Users/Ding/Desktop/da_breadth_y_2.RData")
#save(da_breadth_y_3,file="C:/Users/Ding/Desktop/da_breadth_y_3.RData")
#save(da_breadth_y_4,file="C:/Users/Ding/Desktop/da_breadth_y_4.RData")
#save(da_breadth_y_5,file="C:/Users/Ding/Desktop/da_breadth_y_5.RData")
rm(list=ls())
#################################
load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_breadth_y_1.RData")
load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_breadth_y_2.RData")
load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_breadth_y_3.RData")
load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_breadth_y_4.RData")
load("F:/我的论文/第五篇/主代码/individual investor preference/RData/da_breadth_y_5.RData")
da_breadth_y <- rbind(da_breadth_y_1,da_breadth_y_2,da_breadth_y_3,da_breadth_y_4,da_breadth_y_5)
save(da_breadth_y,file="C:/Users/Ding/Desktop/da_breadth_y.RData")
|
d136cc97e4610f54f1b176b07d142674af594712 fff9bbc9b5d874ca4bfe8dcd53ff94db2792c75f /tests/testthat/test_iotable_get.R c1327be64dbf42be7f694fd1d5ca455743c59418 [] no_license DrRoad/iotables c53be36e02752afc9cba5425aeb304ac34b19c77 0dcd94f5b21e059469dc424e2c06504cc2da9177 refs/heads/master 2020-03-27T04:25:53.715206 2018-08-21T22:28:57 2018-08-21T22:28:57 null 0 0 null null null null UTF-8 R false false 2,981 r test_iotable_get.R
library (testthat)
library (iotables)
context ("Creating an IO Table")
#iotable_get ( source = "naio_10_cp1620", geo = "CZ",
# stk_flow = "TOTAL", year = 2010,
# unit = "MIO_NAC", data_directory = 'data-raw')
#test <- iotable_get ( source = "naio_10_pyp1620", geo = "CZ",
# stk_flow = "TOTAL", year = 2010,
# unit = "MIO_NAC", data_directory = 'data-raw')
test_that("get_iotable errors ", {
expect_error(iotable_get(source = "germany_1990",
geo = 'DE', year = 1990, unit = "MIO_NAC")) #currency not found
expect_error(iotable_get(source = "germany_1990",
geo = 'DE', year = 1787, unit = "MIO_EUR")) #no data for this year
expect_error(iotable_get(source = "germany_1990",
geo = 'BE', year = 1990, unit = "MIO_EUR")) # no data for geographical unit
expect_warning(iotable_get(source = "germany_1990",
geo = 'de', year = 1990, unit = "MIO_EUR",
labelling = "short")) #warn for upper case
expect_error(iotable_get(source = "germany_1990",
geo = 'DE', year = 1990,
unit = "MIO_EUR", labelling = "biotables") ) # no such labelling
})
test_that("correct data is returned", {
expect_equal(iotable_get(source = "germany_1990",
geo = 'DE', year = 1990,
unit = "MIO_EUR", labelling = "iotables")[1,2], 1131)
expect_equal(as.character(iotable_get(source = "germany_1990",
geo = 'DE', year = 1990,
unit = "MIO_EUR", labelling = "short")[4,1]), "cpa_g_i")
expect_equal(as.numeric(iotable_get ( source = "croatia_2010_1800", geo = "HR",
year = 2010, unit = "T_NAC")[1,3]),
expected = 164159, tolerance = 0.6)
expect_equal(as.numeric(iotable_get ( source = "croatia_2010_1900", geo = "HR",
year = 2010, unit = "T_NAC")[2,5]),
expected = 1, tolerance = 0.5)
expect_equal(as.character(iotable_get ( source = "croatia_2010_1900", geo = "HR",
year = 2010, unit = "T_NAC",
labelling = "short")[[1]][2]),
expected = "CPA_A02")
expect_equal(as.character(iotable_get ( source = "croatia_2010_1900", geo = "HR",
year = 2010, unit = "T_NAC",
labelling = "iotables")[[1]][2]),
expected = "forestry")
})
#Slovakia A01, A01 should be 497.37
#test <- iotable_get ( source = "naio_10_cp1750", stk_flow = "TOTAL",
# geo = "CZ", unit = "MIO_NAC", year = 2010,
# data_directory = "data-raw", force_download = FALSE)
# A01, A01 should yield 10,161
|
bd1c3d5bad2254fce771d52c8d1be1240f3da872 6721d0fdd7b61e6f8687d50059a7c20a4a101f5a /analysis_scripts/pseudotime/4-fit_clusters_K36_and_K9m3_single_and_double_by_celltypes.permute.R d682c32cd3533ae9c771fc19d7794abb76f8ecc8 [] no_license jakeyeung/scChIX b2f9e274ff150b91c19be283336457d42e512d15 c038113b658ea19dd6489b8daef4516c4a9055a2 refs/heads/main 2023-04-30T17:44:02.136452 2023-04-25T13:32:52 2023-04-25T13:32:52 358,041,325 4 0 null null null null UTF-8 R false false 4,332 r 4-fit_clusters_K36_and_K9m3_single_and_double_by_celltypes.permute.R
# Jake Yeung
# Date of Creation: 2021-08-24
# File: ~/projects/scChIX/analysis_scripts/pseudotime/4-fit_clusters_K36_and_K9m3_single_and_double_by_celltypes.R
# Load meta after unmixing and run DE to find neighborhood structures
rm(list=ls())
library(dplyr)
library(tidyr)
library(ggplot2)
library(data.table)
library(Matrix)
library(scChIX)
# Load raw counts (50kb genomewide) ---------------------------------------------------------
jmarks <- c("K36", "K9m3"); names(jmarks) <- jmarks
inf.lda.lst <- lapply(jmarks, function(jmark){
print(jmark)
inf.lda.tmp <- paste0("/home/jyeung/hub_oudenaarden/jyeung/data/dblchic/gastrulation/LDA_scchix_outputs/from_pipeline_unmixed_singles_LDA_together/var_filtered_manual2nocenterfilt2_K36_K9m3_K36-K9m3/lda_outputs.scchix_inputs_clstr_by_celltype_K36-K9m3.removeNA_FALSE-merged_mat.", jmark, ".K-30.binarize.FALSE/ldaOut.scchix_inputs_clstr_by_celltype_K36-K9m3.removeNA_FALSE-merged_mat.", jmark, ".K-30.Robj")
assertthat::assert_that(file.exists(inf.lda.tmp))
return(inf.lda.tmp)
})
out.objs <- lapply(inf.lda.lst, function(inf.lda){
load(inf.lda, v=T)
return(list(out.lda = out.lda, count.mat = count.mat))
})
count.mat.lst <- lapply(out.objs, function(jout){
jout$count.mat
})
# Load meta ---------------------------------------------------------------
inf.meta <- "/home/jyeung/hub_oudenaarden/jyeung/data/dblchic/gastrulation/from_analysis/metadata/from_demux_cleaned_var_filtered_manual2nocenterfilt2_K36_K9m3_K36-K9m3/demux_cleaned_filtered_var_filtered_manual2nocenterfilt2_K36_K9m3_K36-K9m3.2021-08-24.filt2.spread_7.single_and_dbl.txt"
assertthat::assert_that(file.exists(inf.meta))
dat.meta <- fread(inf.meta)
cbPalette <- c("#696969", "#32CD32", "#56B4E9", "#FFB6C1", "#F0E442", "#0072B2", "#D55E00", "#CC79A7", "#006400", "#FFB6C1", "#32CD32", "#0b1b7f", "#ff9f7d", "#eb9d01", "#7fbedf")
ggplot(dat.meta, aes(x = umap1.shift, y = umap2.scale, color = cluster, group = cell)) +
geom_point() +
geom_path(alpha = 0.01) +
theme_bw() +
scale_color_manual(values = cbPalette) +
theme(aspect.ratio=0.5, panel.grid.major = element_blank(), panel.grid.minor = element_blank())
dat.ctype <- dat.meta
cells.keep <- dat.ctype$cell
count.mat.filt.lst <- lapply(count.mat.lst, function(jcount){
cols.keep <- colnames(jcount) %in% cells.keep
jcount[, cols.keep]
})
# ggplot(dat.ctype, aes(x = umap1, y = umap2, color = cluster)) +
# geom_point() +
# facet_wrap(~type) +
# theme_bw() +
# theme(aspect.ratio=1, panel.grid.major = element_blank(), panel.grid.minor = element_blank())
# make Epithelial the reference celltype (the "a" prefix sorts it first, so it
# becomes the baseline factor level in the GLM fits)
dat.ctype <- dat.ctype %>%
rowwise() %>%
mutate(cluster = ifelse(cluster == "Epithelial", "aEpithelial", cluster))
# Fit each gene ---------------------------------------------------------------
dat.annots.filt.mark <- dat.ctype
jname <- "manual2nocenterfilt2_K36_K9m3_K36-K9m3"
hubprefix <- "/home/jyeung/hub_oudenaarden"
ncores <- 16
jseed <- 123
outdir <- file.path(hubprefix, "jyeung/data/dblchic/gastrulation/from_analysis/glm_fits_outputs/by_clusters", jname)
dir.create(outdir)
for (jmark in jmarks){
outf <- file.path(outdir, paste0("glm_poisson_fits_output.clusters.", jname, ".", jmark, ".permute_seed_", jseed, ".RData"))
count.mat <- count.mat.lst[[jmark]]
set.seed(jseed)
indx <- seq_len(ncol(count.mat))
indx.permute <- sample(indx, size = length(indx), replace = FALSE)
cnames.orig <- colnames(count.mat)
cnames.permute <- cnames.orig[indx.permute]
count.mat.permute <- count.mat
colnames(count.mat.permute) <- cnames.permute
cnames <- colnames(count.mat.permute)
ncuts.cells.mark <- data.frame(cell = colnames(count.mat.permute), ncuts.total = colSums(count.mat.permute), stringsAsFactors = FALSE)
jrow.names <- rownames(count.mat.permute)
names(jrow.names) <- jrow.names
print("fitting genes... permuted")
system.time(
jfits.lst <- parallel::mclapply(jrow.names, function(jrow.name){
jrow <- count.mat.permute[jrow.name, ]
jout <- scChIX::FitGlmRowClusters.withse(jrow, cnames, dat.annots.filt.mark, ncuts.cells.mark, jbin = jrow.name, returnobj = FALSE, with.se = TRUE)
return(jout)
}, mc.cores = ncores)
)
save(jfits.lst, dat.annots.filt.mark, ncuts.cells.mark, count.mat.permute, dat.ctype, file = outf)
}
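# Downstream sketch (commented out; the restored object names come from the
# save() call above):
# load(outf)           # restores jfits.lst, dat.annots.filt.mark, ncuts.cells.mark, ...
# str(jfits.lst[[1]])  # inspect the fit returned for the first bin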
|
451f616e2c0fcecbd144bcba0f0437cf73fcda5a | 12f319b3df41097e858bab5b13536a350871c2cf | /R/BIO.R | 5de45f7f545ce3851861eb5a467041a386fe2ba2 | [
"CC-BY-4.0"
] | permissive | talbano/qti | bda76edd29b650e10bfe1136983756fd8b45a50a | 622f7e699d934455408e7e4766f398ccae78a826 | refs/heads/master | 2023-01-05T01:31:51.644564 | 2022-12-30T23:14:11 | 2022-12-30T23:14:11 | 111,174,639 | 2 | 1 | null | null | null | null | UTF-8 | R | false | false | 801 | r | BIO.R | #' Medical Biology Item Bank
#'
#' A list containing 611 medical biology items, each stored as an object of
#' class \link{qti_item}.
#'
#' Items include the following information, accessible via list elements:
#' \itemize{
#' \item id: unique identifier.
#' \item title: descriptive item title, in this case the learning outcome
#' that the item was written to assess.
#'   \item type: item type. All BIO items are multiple-choice.
#' \item prompt: text for the item prompt or stem.
#' \item options: vector of multiple-choice option text, one element per
#' option.
#' \item key: vector of scores per option.
#' \item href: xml file name, generated automatically if not supplied.
#' \item xml: item content in xml format.
#'}
#'
#' @format A list
#' @source \url{http://proola.org/}
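#'
#' @examples
#' \dontrun{
#' ## Minimal access sketch; the element names are taken from the list above:
#' item <- BIO[[1]]
#' item$title
#' item$options
#' }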
"BIO"
|
c02b74e53639c3dc261c76506050a070f5e68392 29585dff702209dd446c0ab52ceea046c58e384e /NNS/R/Uni_SD_Routines.R 720831d73b56b524f08236e37c9f86f4e83af32f [] no_license ingted/R-Examples 825440ce468ce608c4d73e2af4c0a0213b81c0fe d0917dbaf698cb8bc0789db0c3ab07453016eab9 refs/heads/master 2020-04-14T12:29:22.336088 2016-07-21T14:01:14 2016-07-21T14:01:14 null 0 0 null null null null UTF-8 R false false 4,272 r Uni_SD_Routines.R
#' FSD
#'
#' Uni-directional test of first degree stochastic dominance using lower partial moments; used in the SD Efficient Set routine.
#' @param x variable
#' @param y variable
#' @author Fred Viole, OVVO Financial Systems
#' @references Viole, F. and Nawrocki, D. (2016) "LPM Density Functions for the Computation of the SD Efficient Set." Journal of Mathematical Finance, 6, 105-126. \url{http://www.scirp.org/Journal/PaperInformation.aspx?PaperID=63817}.
#' @examples
#' set.seed(123)
#' x<-rnorm(100); y<-rnorm(100)
#' \dontrun{FSD(x,y)}
#' @export
FSD <- function(x,y){
x_sort <- sort(x, decreasing=FALSE)
y_sort <- sort(y, decreasing=FALSE)
Combined = c(x_sort,y_sort)
Combined_sort = sort(Combined, decreasing=FALSE)
LPM_x_sort = numeric(0)
LPM_y_sort = numeric(0)
output_x <- vector("numeric", length(x))
output_y <- vector("numeric", length(x))
if(min(y)>=min(x)) {return(0)} else {
for (i in 1:length(Combined)){
## Indicator function ***for all values of x and y*** as the CDF target
if(LPM(0,Combined_sort[i],y)-LPM(0,Combined_sort[i],x)>=0 )
{output_x[i]<-0} else { break } }
for (i in 1:length(Combined)){
if(LPM(0,Combined_sort[i],x)-LPM(0,Combined_sort[i],y)>=0 )
{output_y[i]<-0} else { break }
}
for (j in 1:length(Combined_sort)){
LPM_x_sort[j] = LPM(0,Combined_sort[j],x)
LPM_y_sort[j] = LPM(0,Combined_sort[j],y)
}
ifelse(length(output_x)==length(Combined) & min(x)>=min(y),return(1),return(0))
}
}
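# Quick sanity check (a sketch, not run as package code): shifting a sample
# upward should make it first-degree dominant over the original draw.
# set.seed(123); x <- rnorm(100)
# FSD(x + 1, x)   # 1: the shifted sample first-degree dominates the original
# FSD(x, x + 1)   # 0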
#' VN.SSD.uni
#'
#' Uni-directional test of second degree stochastic dominance using lower partial moments; used in the SD Efficient Set routine.
#' @param x variable
#' @param y variable
#' @author Fred Viole, OVVO Financial Systems
#' @examples
#' set.seed(123)
#' x<-rnorm(100); y<-rnorm(100)
#' \dontrun{VN.SSD.uni(x,y)}
#' @export
VN.SSD.uni <- function(x,y){
x_sort <- sort(x, decreasing=FALSE)
y_sort <- sort(y, decreasing=FALSE)
Combined = c(x_sort,y_sort)
Combined_sort = sort(Combined, decreasing=FALSE)
LPM_x_sort = numeric(0)
LPM_y_sort = numeric(0)
output_x <- vector("numeric", length(x))
output_y <- vector("numeric", length(x))
if(min(y)>=min(x)) {return(0)} else {
for (i in 1:length(Combined)){
## Indicator function ***for all values of x and y*** as the CDF target
if(LPM(1,Combined_sort[i],y)-LPM(1,Combined_sort[i],x)>=0 )
{output_x[i]<-0} else { break } }
for (i in 1:length(Combined)){
if(LPM(1,Combined_sort[i],x)-LPM(1,Combined_sort[i],y)>=0 )
{output_y[i]<-0} else { break }
}
for (j in 1:length(Combined_sort)){
LPM_x_sort[j] = LPM(1,Combined_sort[j],x)
LPM_y_sort[j] = LPM(1,Combined_sort[j],y)
}
ifelse(length(output_x)==length(Combined) & min(x)>=min(y),return(1),return(0))
}
}
#' TSD
#'
#' Uni-directional test of third degree stochastic dominance using lower partial moments used in SD Efficient Set routine.
#' @param x variable
#' @param y variable
#' @author Fred Viole, OVVO Financial Systems
#' @examples
#' set.seed(123)
#' x<-rnorm(100); y<-rnorm(100)
#' \dontrun{TSD(x,y)}
#' @export
TSD <- function(x,y){
x_sort <- sort(x, decreasing=FALSE)
y_sort <- sort(y, decreasing=FALSE)
Combined = c(x_sort,y_sort)
Combined_sort = sort(Combined, decreasing=FALSE)
LPM_x_sort = numeric(0)
LPM_y_sort = numeric(0)
output_x <- vector("numeric", length(x))
output_y <- vector("numeric", length(x))
if(min(y)>=min(x) | mean(y)>=mean(x)) {return(0)} else {
for (i in 1:length(Combined)){
## Indicator function ***for all values of x and y*** as the CDF target
if(LPM(2,Combined_sort[i],y)-LPM(2,Combined_sort[i],x)>=0 )
{output_x[i]<-0} else { break } }
for (i in 1:length(Combined)){
if(LPM(2,Combined_sort[i],x)-LPM(2,Combined_sort[i],y)>=0 )
{output_y[i]<-0} else { break }
}
for (j in 1:length(Combined_sort)){
LPM_x_sort[j] = LPM(2,Combined_sort[j],x)
LPM_y_sort[j] = LPM(2,Combined_sort[j],y)
}
ifelse(length(output_x)==length(Combined) & min(x)>=min(y),return(1),return(0))
}
}
|
094137426ccc42979b64d12bfe81b6e548cc9b52 | 57ca4315d1ca99e293df246c5abba38c9f212579 | /ConnRedis.R | cd5f5e19aba4549b646f16cb6cb62e864e42b0d6 | [] | no_license | mike3722/BigData | 6a91afdd31d99ca5cf877fbe8e0b1c6f0c8a0e2e | 74f6180a8bfbd49832ec9bb7d4623b569706174a | refs/heads/master | 2020-04-09T05:11:48.481917 | 2018-12-02T14:42:46 | 2018-12-02T14:42:46 | 160,055,139 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 100 | r | ConnRedis.R | #install package rredis
library(rredis)
redisConnect()
redisSet("x", rnorm(5))
redisGet("x")
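# A minimal round-trip sketch (assumes a local Redis server on the default
# port; redisSet serializes the R object and redisGet deserializes it, so the
# retrieved copy should match the original):
vals <- rnorm(5)
redisSet("vals", vals)
identical(redisGet("vals"), vals) # should be TRUE
redisDelete("vals")
redisClose()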
|
a8f6fcafec61cec3dfc94d7d7320c260fe42cf17 | 4476502e4fed662b9d761c83e352c4aed3f2a1c2 | /GIT_NOTE/06_R_Quant/01_크롤링/step16_ 종목정보 시각화.R | 7a7ec17229ba28e86ff4a4afc57e6195d4334a73 | [] | no_license | yeon4032/STUDY | 7772ef57ed7f1d5ccc13e0a679dbfab9589982f3 | d7ccfa509c68960f7b196705b172e267678ef593 | refs/heads/main | 2023-07-31T18:34:52.573979 | 2021-09-16T07:45:57 | 2021-09-16T07:45:57 | 407,009,836 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 5,902 | r | step16_ 종목정보 시각화.R | #step16_ 종목정보 시각화
library(ggplot2)
#x축은 ROE 열을 사용하고, y축은 PBR 열
#geom_point() 함수를 통해 산점도 그래프를 그려줍니다
ggplot(data_market, aes(x = ROE, y = PBR)) +
geom_point()
#to remove the effect of extreme values, specify xlim and ylim (the x- and y-axis ranges) directly inside coord_cartesian().
ggplot(data_market, aes(x = ROE, y = PBR)) +
geom_point() +
coord_cartesian(xlim = c(0, 0.30), ylim = c(0, 3))
#specifying color and shape in the aes() call of ggplot()
#displays KOSPI and KOSDAQ stocks with different colors and point shapes
ggplot(data_market, aes(x = ROE, y = PBR,
color = `시장구분`,
shape = `시장구분`)) +
#geom_smooth() adds a smoothing line;
#specifying method = 'lm' (linear model) draws a linear regression line
geom_point() +
geom_smooth(method = 'lm') +
coord_cartesian(xlim = c(0, 0.30), ylim = c(0, 3))
##geom_histogram(): drawing a histogram
ggplot(data_market, aes(x = PBR)) +
  geom_histogram(binwidth = 0.1) + # the binwidth argument sets the width of the bars
coord_cartesian(xlim = c(0, 10))
# let's draw the PBR histogram in more detail
ggplot(data_market, aes(x = PBR)) +
geom_histogram(aes(y = ..density..),
binwidth = 0.1,
color = 'sky blue', fill = 'sky blue') +
coord_cartesian(xlim = c(0, 10)) +
geom_density(color = 'red') +
geom_vline(aes(xintercept = median(PBR, na.rm = TRUE)),
color = 'blue') +
geom_text(aes(label = median(PBR, na.rm = TRUE),
x = median(PBR, na.rm = TRUE), y = 0.05),
col = 'black', size = 6, hjust = -0.5)
#explanation of the histogram above
#adding aes(y = ..density..) inside geom_histogram() turns the counts into a density.
#geom_density() overlays a density curve.
#geom_vline() draws a vertical line; xintercept sets its x-axis position to the median PBR.
#geom_text() prints text on the plot; pass the desired text to the label argument,
#then choose the x/y position, color, size, etc. at which the text is drawn.
##geom_boxplot(): drawing a box plot
#a good plot for spotting outliers
ggplot(data_market, aes(x = SEC_NM_KOR, y = PBR)) +
geom_boxplot() +
coord_flip()
#sector information goes on the x-axis and PBR on the y-axis.
#geom_boxplot() draws the box plot.
#coord_flip() swaps the x- and y-axes, so PBR appears on the x-axis
#and the sector information on the y-axis.
##chaining dplyr and ggplot
data_market %>%
filter(!is.na(SEC_NM_KOR)) %>%
group_by(SEC_NM_KOR) %>%
summarize(ROE_sector = median(ROE, na.rm = TRUE),
PBR_sector = median(PBR, na.rm = TRUE)) %>%
ggplot(aes(x = ROE_sector, y = PBR_sector,
color = SEC_NM_KOR, label = SEC_NM_KOR)) +
geom_point() +
geom_text(color = 'black', size = 3, vjust = 1.3) +
theme(legend.position = 'bottom',
legend.title = element_blank())
#as the data-analysis steps: filter() keeps stocks whose sector is not NA.
#group_by() groups them by sector.
#summarize() computes the median ROE and PBR for each sector.
#after setting the x- and y-axes, mapping color and label to sector draws a scatter plot with a different color per sector.
#geom_text() prints the sector names that were mapped to the label aesthetic.
#theme() sets various theme options: legend.position places the legend at the bottom, and legend.title removes the legend title.
##geom_bar(): drawing a bar chart
data_market %>%
group_by(SEC_NM_KOR) %>%
summarize(n = n()) %>%
ggplot(aes(x = SEC_NM_KOR, y = n)) +
geom_bar(stat = 'identity') +
theme_classic()
#group_by() groups the stocks by sector.
#n() inside summarize() counts the observations in each group.
#in ggplot(), SEC_NM_KOR goes on the x-axis and n on the y-axis.
#geom_bar() draws the bars; to use the n values directly as bar heights, set stat = 'identity'. theme_*() functions can change the background theme.
# changing the look of the bar chart
data_market %>%
filter(!is.na(SEC_NM_KOR)) %>%
group_by(SEC_NM_KOR) %>%
summarize(n = n()) %>%
ggplot(aes(x = reorder(SEC_NM_KOR, n), y = n, label = n)) +
geom_bar(stat = 'identity') +
geom_text(color = 'black', size = 4, hjust = -0.3) +
xlab(NULL) +
ylab(NULL) +
coord_flip() +
scale_y_continuous(expand = c(0, 0, 0.1, 0)) +
theme_classic()
#filter() drops stocks whose sector is NA, then the number of stocks per sector is counted.
#applying reorder() on the x-axis of ggplot() sorts SEC_NM_KOR by n.
#geom_bar() draws the bars, and geom_text() prints the stock counts as labels.
#passing NULL to xlab() and ylab() removes the axis labels.
#coord_flip() swaps the x- and y-axes.
#scale_y_continuous() widens the plot spacing slightly.
#theme_classic() changes the theme.
|
4d37b2063dfd16405b8c4823a2e136530e67689c | 56988ca3d2d1af30f611ae6b4df98f981f9b8bbe | /home_backup_20200305.R | 0037760448944f1395a1036abf0dfd60eaba1d66 | [] | no_license | gehami/test_general_map | d8cffd0ae0550bdf84683905648f2e72ef89a095 | 13cdbc0d18298a48b9fd96cffb535568e6bb89ef | refs/heads/master | 2020-09-11T22:25:42.920513 | 2020-05-18T22:23:10 | 2020-05-18T22:23:10 | 222,208,020 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 86,768 | r | home_backup_20200305.R | # home page
# county_list = toTitleCase(gsub("([^,]+)(,)([[:print:]]+)", "\\3 county, \\1", county.fips$polyname))
######## REading in codebook #########
#reading in codebook to translate the names of the acs vars to their name in the dataset
# if(!exists('codebook')){
codebook = read.csv('variable_mapping.csv', stringsAsFactors = FALSE)
# }
data_code_book = codebook[!duplicated(paste0(codebook$risk_factor_name, codebook$metric_category)),]
####### Constants #########
VIOLENCE_CHOICES = data_code_book$risk_factor_name[grep('at-risk', data_code_book$metric_category, ignore.case = TRUE)]
HEALTH_CHOICES = data_code_book$risk_factor_name[grep('health', data_code_book$metric_category, ignore.case = TRUE)]
ECONOMIC_CHOICES = data_code_book$risk_factor_name[grep('economic', data_code_book$metric_category, ignore.case = TRUE)]
QOL_CHOICES = data_code_book$risk_factor_name[grep('qol', data_code_book$metric_category, ignore.case = TRUE)]
city_tract_map = readRDS('data_tables/All tracts in all US cities - state WY.rds')
all_cities = unlist(hash::keys(city_tract_map))
health_risk_factors = ''
economic_factors = ''
qol_factors = ''
violence_risk_factors = ''
location = ''
#Understanding the year range that should be available in the app
#since cdc data only goes back to 2016, we are cutting the year range off at 2016 minimum
YEAR_RANGE = c(2016,2018)
#loading one cdc data to know what cities we have cdc data on
cdc_2018 = readRDS('data_tables/cdc_2018.rds')
cities_cdc = paste0(cdc_2018$placename[!duplicated(cdc_2018$placename)], ' ', cdc_2018$stateabbr[!duplicated(cdc_2018$placename)])
states_cdc = unique(cdc_2018$stateabbr)
##checking to make sure every city_state in cdc is also in county_tract_map... it does
# all(cities_cdc %in% all_cities) #confirmed
INITIAL_WEIGHTS = 1
#percent of a variable that is allowed to be NA for me to keep it in the predictors dataset
NA_TOL = .1
QUANTILE_BINS = 10
# 1/x_ij where x is the number of blocks between block i and j (starting at 1), 0 if more than MAX_LOC_DIST away
MAX_LOC_DIST = 1 #looking at neighbors directly next to tract
TRACT_PAL = 'RdYlBu'
TRACT_OPACITY = .65
SLIDER_MIN = 0
SLIDER_MAX = 10
INITIAL_SLIDER_VALUE = 1
MIN_SLIDER_STEP = 0.5
INFO_POPUP_TEXT = 'The overall score combines all of the metrics you chose into one number for each neighborhood. You can calculate it by taking the average score of all the metrics you chose (shown below in small font).
Typically the highest scoring neighborhoods show the highest needs in the city according to the data and selected metrics. Learn more from the <a href = "?home">FAQ on the bottom of the home page.</a>'
#economic
PRESET_1_DESC_TEXT = 'Income, wealth, and poverty'
#medical
PRESET_2_DESC_TEXT = 'Medical health stats from the CDC'
#high needs
PRESET_3_DESC_TEXT = 'General needs for services across many factors'
######## custom JS ######
# Allows you to move people to a new page via the server
redirect_jscode <- "Shiny.addCustomMessageHandler('mymessage', function(message) {window.location = '?map';});"
############ Globally used functions #############
#given a vector of numeric values, or something that can be coerced to numeric, returns a vector of the percentile each observation is within the vector.
#Example: if vec == c(1,2,3), then get_percentile(vec) == c(0.3333333, 0.6666667, 1.0000000)
get_percentile = function(vec, compare_vec = NULL){
if(is.null(compare_vec)){
return(ecdf(vec)(vec))
}else{
new_vec = rep(0, length(vec))
for(n in seq_along(vec)){
new_vec[n] = ecdf(c(vec[n], compare_vec))(c(vec[n], compare_vec))[1]
}
return(new_vec)
}
}
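#Hypothetical sanity check (kept commented): with compare_vec, each value is
#ranked against itself plus the comparison set rather than the input vector.
#get_percentile(2, compare_vec = c(1, 3)) # 2 of the 3 values are <= 2, so 0.6666667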
#given a vector of numeric values, and the number of bins you want to place values into, returns a vector of 'vec' length where each observation is the quantile of that observation.
#Example: if vec == c(1,2,3), then get_quantile(vec, quantile_bins = 2) = as.factor(c(0, 50, 50)).
#To have the top observation be marked as the 100%ile, set ret_100_ile == TRUE.
#To return a factor variable, ret_factor == TRUE, otherwise it will return a numeric vector.
get_quantile = function(vec, quantile_bins, ret_factor = TRUE, ret_100_ile = FALSE, compare_vec = NULL){
if(all(is.na(vec))) return(NA)
quantile_bins = round(min(max(quantile_bins, 2), 100)) #ensuring the quantile bins is an integer between 2 and 100
quant_val = floor(get_percentile(vec, compare_vec)*100 / (100/quantile_bins)) * (100/quantile_bins)
if(!ret_100_ile){
if(length(quant_val) == 1 & quant_val == 100){quant_val = (1 - 1/quantile_bins)*100}else{
quant_val[quant_val == 100] = unique(quant_val)[order(-unique(quant_val))][2]
}
}
if(ret_factor){return(factor(quant_val))}
return(quant_val)
}
#given a vector, min-max-scales it between 0 and 1
min_max_vec = function(vec, na.rm = TRUE, ...){
if(max(vec, na.rm = na.rm, ...) == min(vec, na.rm = na.rm, ...)){
return(rep(0, length(vec)))
}
return((vec - min(vec, na.rm = na.rm, ...))/(max(vec, na.rm = na.rm, ...)-min(vec, na.rm = na.rm, ...)))
}
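#Hypothetical sanity checks (kept commented):
#min_max_vec(c(2, 4, 6)) # c(0.0, 0.5, 1.0)
#min_max_vec(c(3, 3))    # c(0, 0) -- a constant vector maps to all zeros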
#like the %in% function, but retains the order of the vector that you're checking to see if the other vector is in it.
#EX: if x <- c(1,4,6) and y <- c(6,1,4), then in_match_order(x, y) = c(3,1,2) since 6 appears at x_ind 3, 1 appears at x_ind 1, and 4 appears at x_ind 2
in_match_order = function(vec_in, vec){
ret_inds = NULL
for(n in vec){
ret_inds = c(ret_inds,
try(which(n == vec_in)[1])
)
}
return(ret_inds[!is.na(ret_inds)])
}
############# map functions ########
#Given therisk vars, risk weights, spdf, and codebook, calculates the overall risk factor score
calculate_score = function(risk_vars, risk_weights, spdf, data_code_book, keep_nas = FALSE){
if(length(risk_vars) != length(risk_weights)){
warning("risk vars and risk weights are not the same length")
return(NULL)
}
if(!all(risk_vars %in% data_code_book$risk_factor_name)){
warning("some var names are not in the codebook")
return(NULL)
}
data_code_book = data_code_book[order(data_code_book$Name),]
risk_dataset = data.frame(risk_vars, risk_weights, stringsAsFactors = FALSE)[order(risk_vars),]
risk_dataset$var_code = data_code_book$Name[in_match_order(data_code_book$risk_factor_name, risk_dataset$risk_vars)]
#handles the edge case where only one variable is involved in the scoring.
if(length(risk_vars) < 2) {
score_var = as.numeric(spdf@data[,risk_dataset$var_code])
if(is.null(spdf$GEOID)) return(data.frame(score = min_max_vec(score_var, na.rm = TRUE), stringsAsFactors = FALSE))
return(data.frame(GEOID = spdf$GEOID, score = min_max_vec(score_var, na.rm = TRUE), stringsAsFactors = FALSE))
}
#get the vars from the spdf that are valuable here and order them in the same was as the risk_dataset
score_vars = tryCatch(spdf@data[,which(colnames(spdf@data) %in% risk_dataset$var_code)],
error = function(e) spdf[,which(colnames(spdf) %in% risk_dataset$var_code)])
score_vars = score_vars[,risk_dataset$var_code]
#converting each column in score_vars to numeric
for(n in seq_len(ncol(score_vars))){
score_vars[,n] = as.numeric(score_vars[,n])
}
#standardizing the score_vars between 0 and 1
for(n in seq_len(ncol(score_vars))){
score_vars[,n] = min_max_vec(score_vars[,n], na.rm = TRUE)
}
#Marking any entries with NAs as a large negative number to ensure the resulting value is negative, so we can mark NA scores.
if(keep_nas){
for(n in seq_len(ncol(score_vars))){
score_vars[is.na(score_vars[,n]),n] = -10000000
}
}else{
#Alternatively, we can deal with NAs by just taking the average score from the other variables. I like this better for now and will do as such
for(n in seq_len(ncol(score_vars))){
score_vars[is.na(score_vars[,n]),n] = rowSums(score_vars[is.na(score_vars[,n]),], na.rm = TRUE) / rowSums(!is.na(score_vars[is.na(score_vars[,n]),]))
}
}
score_mat = data.matrix(score_vars)
#multiplying those scores by the weights and summing them. works
score = score_mat %*% risk_dataset$risk_weights
score[score < 0] = NA
if(is.null(spdf$GEOID)) return(data.frame(score = min_max_vec(score, na.rm = TRUE), stringsAsFactors = FALSE))
return(data.frame(GEOID = spdf$GEOID, score = min_max_vec(score, na.rm = TRUE), stringsAsFactors = FALSE))
}
#making the label for the map from the risk_vars, spdf, codebook, and quantile bins
make_label_for_score = function(risk_vars, spdf, data_code_book, quantile_bins = 10, front_name = FALSE, info_popup_text = ""){
label_list = NULL
#cleaning up the risk_var names
risk_var_cats = unique(gsub('([[:alpha:]]+)(_[[:print:]]*)', '\\1', names(risk_vars)))
risk_var_cats_name_conversion = data.frame(cats = risk_var_cats, display_names = paste0(tools::toTitleCase(risk_var_cats), ' factors'), stringsAsFactors = FALSE)
risk_var_cats_name_conversion$display_names[risk_var_cats_name_conversion$cats == 'qol'] = "Quality of life factors"
if(length(risk_var_cats) > 1){
risk_cats_scores = data.frame(array(dim = c(nrow(spdf@data), length(risk_var_cats))))
colnames(risk_cats_scores) = risk_var_cats
for(risk_cat in risk_var_cats){
interest_vars = risk_vars[grep(risk_cat, names(risk_vars))]
interest_var_names = data_code_book$Name[in_match_order(data_code_book$risk_factor_name, interest_vars)] #works
min_max_vars = data.frame(spdf@data[,interest_var_names])
for(n in seq_len(ncol(min_max_vars))){
min_max_vars[,n] = min_max_vec(min_max_vars[,n], na.rm = TRUE)
}
risk_cats_scores[,risk_cat] = rowSums(min_max_vars)
}
risk_cats_quantiles = risk_cats_scores
for(n in seq_len(ncol(risk_cats_quantiles))){
risk_cats_quantiles[,n] = suppressWarnings(get_quantile(risk_cats_scores[,n], quantile_bins = quantile_bins))
}
}
for(row_ind in 1:nrow(spdf@data)){
label_string = NULL
if(length(risk_var_cats) < 2){
for(n in seq_along(risk_vars)){
if(front_name){
        label_string = c(label_string, paste0('<small class = "phone_popup">', data_code_book$front_name[data_code_book$risk_factor_name == risk_vars[n]][1], ': ',
                                              round(spdf@data[row_ind,data_code_book$Name[data_code_book$risk_factor_name == risk_vars[n]]]), '% (',
suppressWarnings(get_quantile(spdf@data[,data_code_book$Name[data_code_book$risk_factor_name == risk_vars[n]]], quantile_bins = quantile_bins))[row_ind], '%ile)</small>'))
}else{
label_string = c(label_string, paste0('<small class = "phone_popup">', round(spdf@data[row_ind,data_code_book$Name[data_code_book$risk_factor_name == risk_vars[n]]]), '% ',
data_code_book$back_name[data_code_book$risk_factor_name == risk_vars[n]][1], ' (',
suppressWarnings(get_quantile(spdf@data[,data_code_book$Name[data_code_book$risk_factor_name == risk_vars[n]]], quantile_bins = quantile_bins))[row_ind], '%ile)</small>'))
}
}
full_label = paste0('<div class = "top-line-popup"><b>Neighborhood zipcodes: ', gsub('(^[0-9]+\\, [0-9]+\\, [0-9]+)(\\, [[:print:]]+)', '\\1', spdf$zipcodes[row_ind]), '</b></div>',
'<div class = "top-line-popup" onclick = "popupFunction()"><b>Overall ', tolower(risk_var_cats_name_conversion$display_names[1]), " metric: ", suppressWarnings(get_quantile(spdf@data$score[row_ind], quantile_bins = quantile_bins, compare_vec = spdf@data$score)),
"%ile</b>",
HTML('<div class = "info-popup">',
'<i class="fa fa-info-circle"></i>',
'</div>'), '</div>',
'<div class = "info-popuptext" id = "myInfoPopup" onclick = "popupFunction()">', info_popup_text, '</div>', paste(label_string, collapse = '<br>'))
label_list = c(label_list, full_label)
# HTML('<div class = "info-popup" onclick = "popupFunction()">',
# '<i class="fa fa-info-circle"></i>',
# '<span class = "info-popuptext" id = "myInfoPopup">', info_popup_text, '</span>',
# '</div>')), label_string)
# label_list = c(label_list, paste(full_label, collapse = '<br>'))
}else{
for(risk_cat in risk_var_cats){
cat_score = risk_cats_quantiles[row_ind,risk_cat]
label_string = c(label_string, paste0('<i>', risk_var_cats_name_conversion$display_names[risk_var_cats_name_conversion$cats == risk_cat],
': ', as.character(cat_score),
'%ile</i>'
))
interest_vars = risk_vars[grep(risk_cat, names(risk_vars))]
if(front_name){
display_var_names = data_code_book$front_name[in_match_order(data_code_book$risk_factor_name, interest_vars)]
}else{
display_var_names = data_code_book$back_name[in_match_order(data_code_book$risk_factor_name, interest_vars)]
}
interest_var_names = data_code_book$Name[in_match_order(data_code_book$risk_factor_name, interest_vars)] #works
for(sub_vars_ind in seq_len(length(interest_vars))){
if(front_name){
label_string = c(label_string,
paste0(#'<small class = "no_small_screen">',
'<small class = "phone_popup">',
display_var_names[sub_vars_ind], ': ',
round(spdf@data[row_ind,interest_var_names[sub_vars_ind]]), '% (',
suppressWarnings(get_quantile(spdf@data[,interest_var_names[sub_vars_ind]], quantile_bins = quantile_bins))[row_ind], '%ile)', '</small>')
)
}else{
label_string = c(label_string,
paste0(#'<small class = "no_small_screen">',
'<small class = "phone_popup">',
round(spdf@data[row_ind,interest_var_names[sub_vars_ind]]), '% ', display_var_names[sub_vars_ind], ' (',
suppressWarnings(get_quantile(spdf@data[,interest_var_names[sub_vars_ind]], quantile_bins = quantile_bins))[row_ind], '%ile)', '</small>')
)
}
}
}
#this should work now
full_label = paste0('<div class = "top-line-popup"><b>Neighborhood zipcodes: ', gsub('(^[0-9]+\\, [0-9]+\\, [0-9]+)(\\, [[:print:]]+)', '\\1', spdf$zipcodes[row_ind]), '</b></div>',
'<div class = "top-line-popup" onclick = "popupFunction()"><b>Overall risk metric: ',
suppressWarnings(get_quantile(spdf@data$score[row_ind], quantile_bins = quantile_bins, compare_vec = spdf@data$score)), "%ile</b>",#,
#'<br class = "no_big_screen">'
HTML('<div class = "info-popup">',
'<i class="fa fa-info-circle"></i>',
'</div>'), '</div>',
'<div class = "info-popuptext" id = "myInfoPopup" onclick = "popupFunction()">', info_popup_text, '</div>', paste(label_string, collapse = '<br>'))
label_list = c(label_list, full_label)
}
}
return(label_list)
}
#given an spdf, codebook, risk_vars, risk_weights, and quantiles, returns the updated spdf with the metric score and full label as new vars
make_full_spdf = function(spdf, data_code_book, risk_vars, risk_weights, quantile_bins, info_popup_text = ""){
#making sure the variables of interest are numeric
for(n in risk_vars){
spdf@data[,colnames(spdf@data) == data_code_book$Name[data_code_book$risk_factor_name == n]] =
as.numeric(spdf@data[,colnames(spdf@data) == data_code_book$Name[data_code_book$risk_factor_name == n]])
}
spdf@data = merge(spdf@data, calculate_score(risk_vars, risk_weights, spdf, data_code_book), by = 'GEOID')
spdf@data$label = make_label_for_score(risk_vars, spdf, data_code_book, quantile_bins, front_name = FALSE, info_popup_text = info_popup_text)
return(spdf)
}
#given the spdf, returns the loc_dist_matrix
get_loc_dist_matrix = function(spdf, MAX_LOC_DIST = 1){
#initializing the matrix
loc_dist_matrix = matrix(0, nrow = nrow(spdf@data), ncol = nrow(spdf@data))
loc_matrix = rgeos::gTouches(spdf, byid = TRUE)
#iterates through all blocks of 1 - MAX_BLOCK_DIST away, identifies which iteration it was picked up, and marks that number into matrix
#this will likely take hours (lol, takes 1 second).
for(loc_it_count in 1 : ncol(loc_dist_matrix)){
layer_locs = loc_it_count
marked_locs = loc_it_count
for(its in 1 : MAX_LOC_DIST){
if(length(layer_locs) > 1){
layer_locs_vec = which(rowSums(loc_matrix[,layer_locs])>0)
}else{
layer_locs_vec = which(loc_matrix[,layer_locs])
}
layer_locs_vec = layer_locs_vec[which(!(layer_locs_vec %in% marked_locs))]
loc_dist_matrix[layer_locs_vec,loc_it_count] = its
layer_locs = layer_locs_vec
marked_locs = c(marked_locs, layer_locs)
}
if(loc_it_count %% 50 == 0) print(loc_it_count)
}
colnames(loc_dist_matrix) = spdf@data$GEOID
rownames(loc_dist_matrix) = spdf@data$GEOID
loc_dist_matrix = 1/loc_dist_matrix
loc_dist_matrix[loc_dist_matrix > 1] = 0
return(loc_dist_matrix)
}
#given a vector of length n and the n by n neighbor matrix, returns a vector of n length of the averaged value for each GEOID's neibs on that var
get_neib_average_vec = function(vec, loc_dist_matrix, na_neibs_count = 0){
return((vec %*% loc_dist_matrix)/(rowSums(loc_dist_matrix) - na_neibs_count))
}#checked and works
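#Commented sanity check on a hypothetical 3-tract toy adjacency (tract 1 touches
#tracts 2 and 3; tracts 2 and 3 only touch tract 1), matching the 0/1 matrix
#produced by get_loc_dist_matrix when MAX_LOC_DIST = 1:
#toy_adj <- rbind(c(0, 1, 1), c(1, 0, 0), c(1, 0, 0))
#get_neib_average_vec(c(10, 20, 40), toy_adj) # 30 10 10 -- tract 1 averages its two neighbors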
#given a vec of n values and a n by n neighbor matrix, returns the number of neighbors with an NA value for each row in the vec
count_na_neibs = function(vec, loc_dist_matrix){
vec[!is.na(vec)] = 0
vec[is.na(vec)] = 1
na_neibs_count = vec %*% loc_dist_matrix
return(na_neibs_count)
}#works
#given a vector of cn length (where c is an integer) and the n by n neighbor matrix, returns a vector of cn length of the averaged value for each row's neibs on that var
get_full_neib_average_vec = function(vec, loc_dist_matrix, na.rm = TRUE){
ret_vec = rep(0, length(vec))
next_start_vec_ind = 1
n = nrow(loc_dist_matrix)
if(na.rm){
na_neibs_count = count_na_neibs(vec, loc_dist_matrix)
vec[is.na(vec)] = 0
}
for(c in seq_len(length(vec)/n)){
focus_inds = next_start_vec_ind:(next_start_vec_ind+n-1)
ret_vec[focus_inds] = get_neib_average_vec(vec[focus_inds], loc_dist_matrix, na_neibs_count)
next_start_vec_ind = next_start_vec_ind + n
}
return(ret_vec)
} #checked and works (loose checking, but yeah seems to work)
#given the x_vars, ids, and the loc_dist_matrix, returns the table of calculated weighted average neighbor score for each x_var
neib_avg_scores = function(x_vars, ids, loc_dist_matrix, na.rm = TRUE){
neib_matrix = data.frame(array(0, dim = c(nrow(x_vars), (length(x_vars) + 1))), stringsAsFactors = FALSE)
colnames(neib_matrix) = c('GEOID', colnames(x_vars))
neib_matrix$GEOID = ids
loc_dist_matrix = loc_dist_matrix[rownames(loc_dist_matrix) %in% neib_matrix$GEOID, colnames(loc_dist_matrix) %in% neib_matrix$GEOID]
for(x_var in colnames(x_vars)){
neib_matrix[,x_var] = get_full_neib_average_vec(x_vars[,x_var], loc_dist_matrix, na.rm)
}
colnames(neib_matrix)[2:ncol(neib_matrix)] = paste0('neib_avg_', colnames(x_vars))
if(!identical(neib_matrix$GEOID, ids)) message('not identical ids, something is wrong')
return(neib_matrix)
}
#given the spdf, risk_vars, and the codebook, returns the independent vars for predicting
get_ind_vars_for_model = function(spdf, risk_vars, data_code_book, MAX_LOC_DIST = 1){
x_vars = spdf@data[data_code_book$Name[data_code_book$risk_factor_name %in% risk_vars]]
#making sure all of the columns are numeric
for(n in seq_len(ncol(x_vars))) x_vars[,n] = as.numeric(x_vars[,n])
loc_dist_matrix = get_loc_dist_matrix(spdf, MAX_LOC_DIST)
ids = spdf@data$GEOID
ind_vars = data.frame(GEOID = ids, x_vars, stringsAsFactors = FALSE) #all the ind vars and GEOID id tag
neib_matrix = neib_avg_scores(x_vars, ids, loc_dist_matrix, na.rm = TRUE)
big_ind_dat = merge(ind_vars, neib_matrix, by = 'GEOID')
return(big_ind_dat)
}
#given the full spdf hash, inputs list, risk_vars, and codebook, returns the 1) raw predicted scores, 2) pred score quantiles, and 3) labels for the pred map
get_predicted_scores_and_labels = function(city_all_spdf_hash, inputs, risk_vars, risk_weights, data_code_book, quantile_bins = 10, MAX_LOC_DIST = 1,
info_popup_text = ""){
ind_vars = get_ind_vars_for_model(city_all_spdf_hash[[as.character(inputs$year_range[1])]], risk_vars, data_code_book, MAX_LOC_DIST)
dep_dat = calculate_score(risk_vars, risk_weights, city_all_spdf_hash[[as.character(inputs$year_range[2])]], data_code_book)
#min_max_scaling vars
ind_vars[,2:ncol(ind_vars)] = sapply(ind_vars[,2:ncol(ind_vars)], min_max_vec)
dep_dat[,2] = min_max_vec(vec = dep_dat[,2])
if(identical(ind_vars[,1], dep_dat[,1])){
model_dat = data.frame(score = dep_dat[,2], ind_vars[,-1])
}else{warning("ids don't match up between dep_dat and ind_vars")}
score.lm = lm(score ~ ., data = model_dat)
predict_dat = get_ind_vars_for_model(city_all_spdf_hash[[as.character(inputs$year_range[2])]],
risk_vars, data_code_book)[,-1]
predict_dat[,1:ncol(predict_dat)] = sapply(predict_dat[,1:ncol(predict_dat)], min_max_vec)
pred_score = predict(score.lm, newdata = predict_dat)
last_real_score = dep_dat[,2]
#to have the overall risk metrics match up with the existing year's risk metrics, we scale the numbers such that they, on average, match up to the existing overall metrics
division_factor = sum(pred_score, na.rm = TRUE)/sum(last_real_score, na.rm = TRUE)
pred_score_fixed = pred_score/division_factor
pred_score_fixed[pred_score_fixed < 0] = 0
pred_score_quantile = suppressWarnings(get_quantile(pred_score_fixed, quantile_bins = quantile_bins))
pred_score_label = paste0('<div class = "top-line-popup"><b>Neighborhood zipcodes: ', gsub('(^[0-9]+\\, [0-9]+\\, [0-9]+)([[:print:]]+)', '\\1', city_all_spdf_hash[[as.character(inputs$year_range[2])]]$zipcodes), '</b></div>',
'<div class = "top-line-popup" onclick = "popupFunction()">',"<b>Predicted overall risk metric 2020: ", pred_score_quantile, "%ile</b>",
HTML('<div class = "info-popup">',
'<i class="fa fa-info-circle"></i>',
'</div>'), '</div>',
'<div class = "info-popuptext" id = "myInfoPopup" onclick = "popupFunction()">', info_popup_text, '</div>',
'</div>','<br/>To avoid inaccurate predictions, we only display the predicted overall score from the metrics you chose.',
'<span class = "no_small_screen"> This does take into account any weights adjustments you made above.',
'For example, if you adjusted the weights to 0 for all but one metric, then the predicted score will reflect the predicted value for just that one metric.',
'</span>')
#checking the absolute error rate. since the scores are between 0 and 1, this shows the %error of the scores
print(summary(abs(last_real_score[!is.na(last_real_score)] - predict(score.lm, newdata = ind_vars[!is.na(last_real_score),]))))
return(list(raw_score = pred_score_fixed, score_quantile = pred_score_quantile, label = pred_score_label))
}
#given present_spdf with future predictions & labels, past_spdf, inputs, pallette info for tracts, and quantile bins, returns a leaflet map
make_map = function(present_spdf, past_spdf, inputs, TRACT_PAL = 'RdYlGn', TRACT_OPACITY = 0.7, quantile_bins = 10){
lon_med = mean(present_spdf@bbox[1,])
lat_med = mean(present_spdf@bbox[2,])
map <- leaflet(options = leafletOptions(minZoom = 8, zoomControl = FALSE)) %>%
# add ocean basemap
# addProviderTiles(providers$Esri.OceanBasemap) %>%
# add another layer with place names
addProviderTiles(providers$Hydda.Full) %>%
# focus map in a certain area / zoom level
setView(lng = lon_med, lat = lat_med, zoom = 12)
# TRACT_PAL = 'RdYlGn'
# TRACT_OPACITY = .7
tract_color_vals = suppressWarnings(get_quantile(present_spdf@data$score, quantile_bins = quantile_bins))
past_tract_color_vals = suppressWarnings(get_quantile(past_spdf@data$score, quantile_bins = quantile_bins))
future_tract_color_vals = suppressWarnings(get_quantile(present_spdf@data$pred_score, quantile_bins = quantile_bins))
tract_pal = colorFactor(
palette = TRACT_PAL,
domain = tract_color_vals,
reverse = TRUE
)
u_tract_color_vals = unique(tract_color_vals[!is.na(tract_color_vals)])
legend_val = u_tract_color_vals[order(u_tract_color_vals)][c(1,length(u_tract_color_vals))]
map_all = map %>% addMarkers(group = 'Clear', lng = 10, lat = 10) %>%
addMapPane('risk_tiles', zIndex = 410) %>%
addPolygons(data = present_spdf, fillColor = ~tract_pal(tract_color_vals), popup = present_spdf@data$label, stroke = T,
                fillOpacity = TRACT_OPACITY, weight = 1, opacity = 1, color = 'white', dashArray = '3',
highlightOptions = highlightOptions(color = 'white', weight = 2,
bringToFront = FALSE, dashArray = FALSE),
group = as.character(inputs$year_range[2]),options = pathOptions(pane = "risk_tiles")) %>%
addPolygons(data = past_spdf, fillColor = ~tract_pal(past_tract_color_vals), popup = past_spdf@data$label, stroke = T,
                fillOpacity = TRACT_OPACITY, weight = 1, opacity = 1, color = 'white', dashArray = '3',
highlightOptions = highlightOptions(color = 'white', weight = 2,
bringToFront = FALSE, dashArray = FALSE),
group = as.character(inputs$year_range[1]), options = pathOptions(pane = "risk_tiles")) %>%
addPolygons(data = present_spdf, fillColor = ~tract_pal(future_tract_color_vals), popup = present_spdf@data$pred_label, stroke = T,
                fillOpacity = TRACT_OPACITY, weight = 1, opacity = 1, color = 'white', dashArray = '3',
highlightOptions = highlightOptions(color = 'white', weight = 2,
bringToFront = FALSE, dashArray = FALSE),
group = as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1])), options = pathOptions(pane = "risk_tiles")) %>%
addLegend(colors = tract_pal(legend_val[length(legend_val):1]), opacity = 0.7, position = 'bottomright',
title = 'Risk factors level', labels = c('High (90%ile)', 'Low (0%ile)')) %>%
# addLayersControl(baseGroups = c('Clear', as.character(inputs$year_range[1]), as.character(inputs$year_range[2]),
# as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1])))) %>%
showGroup(as.character(inputs$year_range[2])) %>% hideGroup('Clear') %>%
hideGroup(as.character(inputs$year_range[1])) %>% #hiding the first year layer
hideGroup(as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1]))) #hiding the future year layer
return(map_all)
}
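# `tract_pal` above is expected to be a leaflet palette function (passed in as
# TRACT_PAL, which is defined elsewhere in this app). As an untested sketch only,
# a compatible green-to-red palette over the 0-90%ile scores could look like:
# tract_pal <- leaflet::colorNumeric(palette = c('green', 'yellow', 'red'),
#                                    domain = c(0, 90))
# tract_pal(c(0, 45, 90)) #one hex color per score, low (green) to high (red)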
# ######## Debugging setup ###########
# #libraries
# library(shiny)
# library(shinyWidgets)
# # library(maps)
# library(tools)
# # library(tidyverse)
# # library(tidycensus)
# library(hash)
# library(leaflet)
# library(magrittr)
# library(shinyBS)
# library(sp)
# library(rgeos)
# library(shinyjs)
#
#
# inputs = readRDS('inputs_outputs/debug_inputs.rds')
# # inputs$cities = ''
#
# #reading in the cdc data
# cdc_hash = hash()
# years = seq(inputs$year_range[1], inputs$year_range[2])
# for(year in years){
# if(!(year %in% keys(cdc_hash))){
# cdc_data = readRDS(paste0('data_tables/cdc_', as.character(year), '.rds'))
# colnames(cdc_data)[colnames(cdc_data) == 'tractfips'] = 'GEOID'
# cdc_hash[[as.character(year)]] = cdc_data
# }
# }
# #acs data
# acs_hash = readRDS('data_tables/acs_dat_hash.rds')
# trimmed_tracts = readRDS('data_tables/trimmed_tract_data.rds')
# tract_city_dictionary = readRDS('data_tables/tract_city_dictionary.rds')
#
# #get just the tracts from the cities that we care about
# city_tracts = tract_city_dictionary[inputs$cities] %>% values() %>% unlist() %>% as.character()
#
# #identifying which tracts to use
# tracts_map = trimmed_tracts[trimmed_tracts$GEOID %in% city_tracts,]
#
#
# city_all_dat_hash = hash::hash()
# for(year in inputs$year_range[1]:inputs$year_range[2]){
# acs_year = acs_hash[[as.character(year)]]
# acs_year = acs_year[acs_year$GEOID %in% city_tracts,]
# cdc_year = cdc_hash[[as.character(year)]]
# cdc_year = cdc_year[cdc_year$GEOID %in% city_tracts,]
# city_all_dat_hash[[as.character(year)]] = merge(cdc_year[!duplicated(cdc_year$GEOID),], acs_year[!duplicated(acs_year$GEOID),], by = 'GEOID')
# }
#
# city_all_spdf_hash = hash::hash()
# for(year in inputs$year_range[1]:inputs$year_range[2]){
# city_data = merge(tracts_map@data, city_all_dat_hash[[as.character(year)]], by = 'GEOID')
# city_spdf = tracts_map[tracts_map$GEOID %in% city_data$GEOID,]
# city_spdf = city_spdf[order(city_spdf$GEOID),]
# city_data = city_data[order(city_data$GEOID),]
# city_spdf@data = city_data
# city_all_spdf_hash[[as.character(year)]] = city_spdf
# }
#
# #setting constants
# param_hash = hash::copy(inputs)
# hash::delete(c('cities', 'year_range'), param_hash)
# data_factors = param_hash %>% values() %>% unlist()
# if(length(dim(data_factors)) > 0){
# data_factors = as.character(data_factors)
# names(data_factors) = rep(keys(param_hash), length(data_factors))
# }
#
#
#
# #creating the scores
# risk_vars = data_factors[!duplicated(as.character(data_factors))]
# risk_weights = rep(INITIAL_WEIGHTS, length(risk_vars))
# spdf = city_all_spdf_hash[['2018']]
# quantile_bins = QUANTILE_BINS
#
# #additional constants I need to set up
# info_popup_text = INFO_POPUP_TEXT
# front_name = FALSE
# quantile_bins = QUANTILE_BINS
#
#
# past_spdf = make_full_spdf(city_all_spdf_hash[[as.character(inputs$year_range[1])]], data_code_book, risk_vars, risk_weights, QUANTILE_BINS, info_popup_text = INFO_POPUP_TEXT)
#
# present_spdf = make_full_spdf(city_all_spdf_hash[[as.character(inputs$year_range[2])]], data_code_book, risk_vars, risk_weights, QUANTILE_BINS, info_popup_text = INFO_POPUP_TEXT)
#
# pred_list = get_predicted_scores_and_labels(city_all_spdf_hash, inputs, risk_vars, risk_weights, data_code_book, QUANTILE_BINS, MAX_LOC_DIST, info_popup_text = INFO_POPUP_TEXT)
# present_spdf@data$pred_score = pred_list$raw_score
# present_spdf@data$pred_quantile = pred_list$score_quantile
# present_spdf@data$pred_label = pred_list$label
#
# initial_map = make_map(present_spdf, past_spdf, inputs, TRACT_PAL, TRACT_OPACITY, QUANTILE_BINS)
#
#
#
#
#
#
#
###### reactive-vals / Pass-through parameters #########
#these need to be converted into a list and saved as an rds file to be maintained throughout the journey
inputs = hash()
inputs[['cities']] <- NULL
inputs[['year_range']] <- YEAR_RANGE
inputs[['medical_factors']] <- NULL
inputs[['economics_factors']] <- NULL
inputs[['at-risk_factors']] <- NULL
inputs[['qol_factors']] <- NULL
#declaring reactive values
inputs_react = reactiveVal(inputs)
city_all_spdf_hash_react <- reactiveVal()
data_code_book_react <- reactiveVal()
risk_vars_react <- reactiveVal()
data_factors_react <- reactiveVal()
initial_map_react <- reactiveVal()
example_past_spdf_react <- reactiveVal()
# print(inputs)
##### UI ##########
output$pageStub <- renderUI(tagList(
tags$head(tags$script(redirect_jscode)),
    tags$head(tags$link(rel = "stylesheet", type = 'text/css', href = 'https://fonts.googleapis.com/css?family=Montserrat|Open+Sans|Raleway|Roboto|Roboto+Condensed&display=swap')),
includeCSS('www/sreen_size.css'),
useShinyjs(),
uiOutput('current_page')
))
######## home page ui #######
output$current_page <- renderUI({
fluidPage(
div(class = 'home-page-div',
########### splash up front ###########
div(class = 'center_wrapper',
div(class = 'splash_front',
h1('City Equity Map', class = "splash_text"),
            HTML('<h4 class = "splash_text smaller_header">Inequity across neighborhoods has become top of mind for city planners. Some face higher poverty rates while others struggle more with medical issues.</h4>',
                 '<h4 class = "splash_text smaller_header">Use this tool to <strong>map out economic, health and other risk factors across neighborhoods in your city</strong></h4>',
                 '<h4 class = "splash_text smaller_header">City planners can <strong>improve service allocation</strong></h4>',
                 '<h4 class = "splash_text smaller_header">Citizens can better <strong>understand the broader community</strong></h4>',
'<h4 class = splash_text smaller_header>Get to know the numbers behind your neighborhoods</h4>')
)
),
###### Inputs and descriptions ##########
div(class = 'center_wrapper',
fluidRow(class = 'splash_front',
div(class = "on_same_row",
selectizeInput('state', HTML('Filter by state and search for your city below'), choices = c(states_cdc[order(states_cdc)]),
options = list(
onInitialize = I(paste0('function() { this.setValue(""); }'))
)),
#added as a workaround to prevent the chrome autofill
HTML('<input style="display:none" type="text" name="address"/>'),
uiOutput('select_city')
)
#put code for video tutorial here
# actionButton()
),
fluidRow(column(12,
h3('Select a preset group of metrics'),
uiOutput('preset_buttons'),
h3('Or customize your metrics of interest'),
div(id = 'factor_selectors',
div(class = "factor_selector",
dropdownButton(
checkboxInput('all_health_factors', "Select all"),
checkboxGroupInput(
'health_factors', 'Health factors',
choices = HEALTH_CHOICES,
selected = health_risk_factors
),
label = 'Health factors',
circle = FALSE
)),
div(class = "factor_selector",
dropdownButton(
checkboxInput('all_economic_factors', "Select all"),
checkboxGroupInput(
'economic_factors', 'Economic factors',
choices = ECONOMIC_CHOICES,
selected = economic_factors
),
label = 'Economic factors',
circle = FALSE
)),
div(class = "factor_selector",
dropdownButton(
checkboxInput('all_violence_factors', "Select all", value = FALSE),
checkboxGroupInput(
'violence_factors', 'At-risk factors',
choices = VIOLENCE_CHOICES, selected = violence_risk_factors
),
label = 'At-risk factors',
circle = FALSE
)),
div(class = "factor_selector",
dropdownButton(
checkboxInput('all_qol_factors', "Select all"),
checkboxGroupInput(
'qol_factors', 'Other quality-of-life factors',
choices = QOL_CHOICES,
selected = qol_factors
),
label = 'Quality-of-life factors',
circle = FALSE
))
# sliderInput('year_range', 'Which years should we look at?',
# YEAR_RANGE[1], YEAR_RANGE[2], value = year_range),
),
actionBttn('map_it', 'Map custom metrics', size = 'sm'),
uiOutput('input_warning'),
uiOutput('loading_sign')
)
)
)
###### end of home-page-div ########
),
###### FAQ ###########
div(class = 'splash_text smaller_header', id = 'FAQ',
HTML('<h3>Frequently asked questions</h3>',
'<h4><strong>Q: </strong>How do I navigate this page?</h4>',
         '<h5><strong>A: </strong><ol><li>Select your city from the city drop-down. You may type in your city or select the state and then the city. Due to privacy concerns, only the 500 US cities with the largest population are available. Smaller city maps can be made on request.</li>
<li>Select the metrics you are interested in studying. These can be economic factors such as unemployment or medical factors such as obesity. For a simple selection of metrics, click on one of the three presets. For a fully-custom map, select your metrics individually below the presets.</li>
         <li>Click the blue button to map the metrics and wait about 20-30 seconds for delivery. If it has finished loading but the screen is blank, wait an additional 20 seconds before refreshing, as the map may take some time to render.</li>
</ol></h5>',
# '<h4><strong>Q: </strong>Why does it take so long to load / why is my screen blank for so long?</h4>',
# '<h5><strong>A: </strong>Running this app on a faster server would be expensive so the mapping process may take 10-20 seconds to show even after loading.</h5>',
'<h4><strong>Q: </strong>What does this app help me with?</h4>',
'<h5><strong>A: </strong>You\'ve got resources & services to allocate in your city. Where should you put those resources to hit the populations in most need of those services? This map shows where the data would point those resources. <i>The data can\'t capture everything about a neighborhood,</i> but it can help inform decisions on where to put resources.</h5>',
'<h4><strong>Q: </strong>What does the map show?</h4>',
'<h5><strong>A: </strong>It shows each census tract (loosely each neighborhood in a city) ranked from 0%ile to 90%ile based on the metrics you choose. It shows what neighborhoods seem to be better off and which need more assistance. Metrics include unemployment rates, obesity rates, and low education rates (% of adults with no diploma).</h5>',
'<h4><strong>Q: </strong>What does "0%ile" or "90%ile" mean?</h4>',
'<h5><strong>A: </strong>"0%ile" means that 90% or more of the other neighborhoods in the city have a higher score. "90%ile" means less than 10% of the other neighborhoods have a higher score.</h5>',
'<h4><strong>Q: </strong>What is being scored?</h4>',
'<h5><strong>A: </strong>All the metrics you chose. For example, if you looked at the metric "unemployment rate" then the neighborhoods in the 90%ile would have the highest unemployment rates in the city and the neighborhoods in the 0%ile would have the lowest unemployment rates.</h5>',
'<h4><strong>Q: </strong>How do I see these "scores"?</h4>',
'<h5><strong>A: </strong>Go through the first page to make your map, and then click / tap on any of the neighborhoods (each will be their own color) to see its overall score and the score breakdown for each metric you chose. The colors also display the overall score, with the highest scoring neighborhoods (90%ile) in red and the lowest scoring neighborhoods (0%ile) in green.</h5>',
'<h4><strong>Q: </strong>What is the overall score and how is it calculated?</h4>',
         '<h5><strong>A: </strong>The overall score combines all of the metrics you chose into one number for each neighborhood. You can calculate it by taking the average score for each metric you chose. For example, say you chose three metrics to look at. One neighborhood scores in the 70%ile for unemployment, 80%ile for obesity, and 60%ile in low education, so the neighborhood\'s overall score would be the 70%ile. In general, a higher overall score (90%ile) means the neighborhood has worse conditions (based on the metrics you chose) than a neighborhood with a lower overall score (0%ile).</h5>',
'<h4><strong>Q: </strong>So 90%ile = neighborhood needs help and 0%ile = neighborhood is doing well?</h4>',
         '<h5><strong>A: </strong><i>Based on the metrics you chose to study and compared to other neighborhoods in the city, yes.</i> Again, this data cannot capture the full circumstances a neighborhood experiences, but it can act as another piece of useful information in guiding where to put resources.</h5>',
'<h4><strong>Q: </strong>I don\'t think every metric I choose should be counted equally in the overall score. How can I fix that?</h4>',
         '<h5><strong>A: </strong>If you are on a computer or a large screen you will be able to adjust the metrics to have some count towards the overall score more than others (i.e., re-weight the metrics). See the tutorial on the next page for where you can do this (computers and other large screens only).</h5>',
'<h4><strong>Q: </strong>Where does the data come from?</h4>',
'<h5><strong>A: </strong><a href = "https://www.census.gov/programs-surveys/acs">US Census American Community Survey</a> provides information on income, education, and other demographics. <a href = "https://www.cdc.gov/500cities/index.htm">Center for Disease Control</a> provides information on physical and mental health.</h5>',
'<h4><strong>Q: </strong>Why should I trust this map? Who are you and why do you have this data?</h4>',
'<h5><strong>A: </strong>All of this data is publicly available and you can find it yourself (see the links in the last question). I worked as a data scientist for the City of San Jose to help answer these questions around where to allocate resources. Most of the data I used was publicly available, so I wanted to enable other cities to study these questions through a data lens.</h5>',
'<h4><strong>Q: </strong>I like what you\'ve done here. I have some ideas I think you could help with. How do I get in contact?</h4>',
'<h5><strong>A: </strong>You can reach me at my email address (<a href = "mailto: gehami@alumni.stanford.edu">gehami@alumni.stanford.edu</a>) for any custom mapping requests or for partnership on a city-specific project.</h5>'
),
HTML('<h5 class = "splash_text smaller_header">Data from the <a href = "https://www.census.gov/programs-surveys/acs"> US Census American Community Survey </a> and the <a href = "https://www.cdc.gov/500cities/index.htm">CDC\'s 500 Cities Project</a>. All metrics are scored from low-issue (0%ile) to high-issue (90%ile).</h5>',
'<h5 class = "splash_text smaller_header">For comments, questions, and custom-mapping requests, contact Albert Gehami at
<a href = "mailto: gehami@alumni.stanford.edu">gehami@alumni.stanford.edu</a></h5>')
)
)
})
###### begin server code #########
######### Allowing to filter by state ###########
output$select_city <- renderUI({
selectizeInput(
      'city', HTML('Search for your city<br><small>(Largest 500 US cities only)</small>'), choices =
c(cities_cdc[grep(paste0(input$state, '$'), cities_cdc)])[order(c(cities_cdc[grep(paste0(input$state, '$'), cities_cdc)]))], multiple = FALSE,
options = list(
# placeholder = 'Enter City name',
onInitialize = I(paste0('function() { this.setValue("',paste(location, collapse = ','),'"); }')),
maxOptions = 1000
)
)
})
######### Setting up the preset metrics buttons ###########
preset_options = gsub('\\.', ' ', gsub('^Preset_[0-9]+_', '',
grep('^Preset', colnames(data_code_book), value = TRUE, ignore.case = TRUE)))
metrics_selected_1 = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_1_', colnames(data_code_book), ignore.case = TRUE)] %in% 1]
metrics_selected_2 = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_2_', colnames(data_code_book), ignore.case = TRUE)] %in% 1]
metrics_selected_3 = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_3_', colnames(data_code_book), ignore.case = TRUE)] %in% 1]
preset_options_list = list(list(preset_options[1], metrics_selected_1, PRESET_1_DESC_TEXT, 1),
list(preset_options[2], metrics_selected_2, PRESET_2_DESC_TEXT, 2),
list(preset_options[3], metrics_selected_3, PRESET_3_DESC_TEXT, 3))
#shout out to the real ones at w3schools for this popup function: https://www.w3schools.com/howto/howto_js_popup.asp
output$preset_buttons <- renderUI(lapply(preset_options_list, function(i){
# div(class = 'preset-buttons-dropdown', dropdownButton(
# HTML('<h5><strong><i>', i[[3]], '</i></strong></h5>', '<h5>Metrics included:</h5>', paste(c(1:length(i[[2]])), ')', i[[2]], collapse = "<br>")),
# actionBttn(inputId = i[[1]], label = "Map it", size = 'sm'),
# label = i[[1]],
# circle = FALSE
# ))
div(class = 'preset-buttons-dropdown',
actionBttn(inputId = i[[1]], label = i[[1]], size = 'sm'),
# HTML(paste0('<div class="info-popup" onclick = "popupFunction', i[[4]],'()"><p class = "preset-button-desc"><i>'),
# i[[3]],paste0('<div class="info-popuptext" id="myInfoPopup',i[[4]],'">Metrics included'),
# paste(c(1:length(i[[2]])), ') ', i[[2]], collapse = "<br>", sep = ''),'</div>
# </div>', sep = '')
HTML('<p class = "preset-button-desc"><i>', i[[3]], '</i></p>')
)
}))
# HTML('<div class = "info-popup">',
# '<i class="fa fa-info-circle"></i>',
# '</div>'), '</div>',
# '<div class = "info-popuptext" id = "myInfoPopup" onclick = "popupFunction()">', info_popup_text, '</div>', paste(label_string, collapse = '<br>'))
# output$preset_buttons <- renderUI(lapply(preset_options, function(i){
# actionBttn(inputId = i, label = i, size = 'sm')
# }))
clicked_preset <- reactiveVal(FALSE)
#if the first preset is clicked
observeEvent(input[[preset_options[1]]], {
clicked_preset(TRUE)
violence_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_1_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'at-risk']
health_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_1_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'health']
economic_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_1_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'economic']
qol_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_1_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'qol']
updateCheckboxGroupInput(session, 'violence_factors', selected = violence_selected)
updateCheckboxGroupInput(session, 'health_factors', selected = health_selected)
updateCheckboxGroupInput(session, 'economic_factors', selected = economic_selected)
updateCheckboxGroupInput(session, 'qol_factors', selected = qol_selected)
inputs_local = inputs_react()
inputs_local[['medical_factors']] <- health_selected
inputs_local[['economics_factors']] <- economic_selected
inputs_local[['at-risk_factors']] <- violence_selected
inputs_local[['qol_factors']] <- qol_selected
inputs_react(inputs_local)
shinyjs::click('map_it')
})
#if the second preset is clicked
observeEvent(input[[preset_options[2]]], {
clicked_preset(TRUE)
violence_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_2_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'at-risk']
health_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_2_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'health']
economic_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_2_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'economic']
qol_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_2_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'qol']
updateCheckboxGroupInput(session, 'violence_factors', selected = violence_selected)
updateCheckboxGroupInput(session, 'health_factors', selected = health_selected)
updateCheckboxGroupInput(session, 'economic_factors', selected = economic_selected)
updateCheckboxGroupInput(session, 'qol_factors', selected = qol_selected)
inputs_local = inputs_react()
inputs_local[['medical_factors']] <- health_selected
inputs_local[['economics_factors']] <- economic_selected
inputs_local[['at-risk_factors']] <- violence_selected
inputs_local[['qol_factors']] <- qol_selected
inputs_react(inputs_local)
shinyjs::click('map_it')
})
#if the third preset is clicked
observeEvent(input[[preset_options[3]]], {
clicked_preset(TRUE)
violence_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_3_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'at-risk']
health_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_3_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'health']
economic_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_3_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'economic']
qol_selected = data_code_book$risk_factor_name[data_code_book[,grep('^Preset_3_', colnames(data_code_book), ignore.case = TRUE)] %in% 1 &
data_code_book$metric_category == 'qol']
updateCheckboxGroupInput(session, 'violence_factors', selected = violence_selected)
updateCheckboxGroupInput(session, 'health_factors', selected = health_selected)
updateCheckboxGroupInput(session, 'economic_factors', selected = economic_selected)
updateCheckboxGroupInput(session, 'qol_factors', selected = qol_selected)
inputs_local = inputs_react()
inputs_local[['medical_factors']] <- health_selected
inputs_local[['economics_factors']] <- economic_selected
inputs_local[['at-risk_factors']] <- violence_selected
inputs_local[['qol_factors']] <- qol_selected
inputs_react(inputs_local)
shinyjs::click('map_it')
})
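# The three preset observers above repeat the same body with only the preset index
# changing. An untested sketch of a shared helper (hypothetical name
# `register_preset_observer`) that could replace all three:
# register_preset_observer <- function(idx){
#   observeEvent(input[[preset_options[idx]]], {
#     clicked_preset(TRUE)
#     preset_col = grep(paste0('^Preset_', idx, '_'), colnames(data_code_book), ignore.case = TRUE)
#     picked = function(cat) data_code_book$risk_factor_name[
#       data_code_book[, preset_col] %in% 1 & data_code_book$metric_category == cat]
#     updateCheckboxGroupInput(session, 'violence_factors', selected = picked('at-risk'))
#     updateCheckboxGroupInput(session, 'health_factors',   selected = picked('health'))
#     updateCheckboxGroupInput(session, 'economic_factors', selected = picked('economic'))
#     updateCheckboxGroupInput(session, 'qol_factors',      selected = picked('qol'))
#     inputs_local = inputs_react()
#     inputs_local[['medical_factors']]   <- picked('health')
#     inputs_local[['economics_factors']] <- picked('economic')
#     inputs_local[['at-risk_factors']]   <- picked('at-risk')
#     inputs_local[['qol_factors']]       <- picked('qol')
#     inputs_react(inputs_local)
#     shinyjs::click('map_it')
#   })
# }
# for(i in 1:3) register_preset_observer(i)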
######### Tracking checkboxgroups to update inputs_react() ############
observeEvent(input$health_factors, {
inputs_local <- inputs_react()
inputs_local[['medical_factors']] <- input$health_factors
inputs_react(inputs_local)
})
observeEvent(input$economic_factors, {
inputs_local <- inputs_react()
inputs_local[['economics_factors']] <- input$economic_factors
inputs_react(inputs_local)
})
observeEvent(input$violence_factors, {
inputs_local <- inputs_react()
inputs_local[['at-risk_factors']] <- input$violence_factors
inputs_react(inputs_local)
})
observeEvent(input$qol_factors, {
inputs_local <- inputs_react()
inputs_local[['qol_factors']] <- input$qol_factors
inputs_react(inputs_local)
})
######### Setting up the select all checkboxes and their server code ##########
#checkbox server code to make sure only one "mental health diagnoses" is checked at a time
#since we have mental health in two places, we are making sure that if you check one then you uncheck the other one
  observeEvent(input$violence_factors, {
    if(length(input$health_factors) > 0 &&
       any(grepl('Mental health diagnoses', input$health_factors)) &&
       any(grepl('Mental health diagnoses', input$violence_factors))){
      updateCheckboxGroupInput(session, 'health_factors', selected = input$health_factors[grep('Mental health diagnoses', input$health_factors, invert = TRUE)])
    }
  })
  observeEvent(input$health_factors, {
    if(length(input$violence_factors) > 0 &&
       any(grepl('Mental health diagnoses', input$health_factors)) &&
       any(grepl('Mental health diagnoses', input$violence_factors))){
      updateCheckboxGroupInput(session, 'violence_factors', selected = input$violence_factors[grep('Mental health diagnoses', input$violence_factors, invert = TRUE)])
    }
  })
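# The pair of observers above keep 'Mental health diagnoses' checked in at most one
# group at a time. An untested sketch of a symmetric helper (hypothetical name
# `keep_exclusive`) that expresses the same rule in one place:
# keep_exclusive <- function(keep_id, drop_id, label = 'Mental health diagnoses'){
#   observeEvent(input[[keep_id]], {
#     if(any(grepl(label, input[[keep_id]])) & any(grepl(label, input[[drop_id]]))){
#       updateCheckboxGroupInput(session, drop_id,
#         selected = input[[drop_id]][grep(label, input[[drop_id]], invert = TRUE)])
#     }
#   })
# }
# keep_exclusive('violence_factors', 'health_factors')
# keep_exclusive('health_factors', 'violence_factors')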
#this just tracks if they have actually pressed the "Select all button" at all.
select_all_tracker = reactiveVal(FALSE)
#violence
observeEvent(input$all_violence_factors, {
if(input$all_violence_factors){
updateCheckboxGroupInput(session, 'violence_factors', selected = VIOLENCE_CHOICES)
select_all_tracker(TRUE)
}
if(!input$all_violence_factors){
if(select_all_tracker()){
updateCheckboxGroupInput(session, 'violence_factors', selected = character(0))
inputs_local <- inputs_react()
inputs_local[['at-risk_factors']] <- character(0)
inputs_react(inputs_local)
}
}
}, ignoreInit = TRUE, ignoreNULL = TRUE)
#health
observeEvent(input$all_health_factors, {
if(input$all_health_factors){
updateCheckboxGroupInput(session, 'health_factors', selected = HEALTH_CHOICES)
select_all_tracker(TRUE)
}
if(!input$all_health_factors){
if(select_all_tracker()){
updateCheckboxGroupInput(session, 'health_factors', selected = character(0))
inputs_local <- inputs_react()
inputs_local[['medical_factors']] <- character(0)
inputs_react(inputs_local)
}
}
}, ignoreInit = TRUE, ignoreNULL = TRUE)
#economic
observeEvent(input$all_economic_factors, {
if(input$all_economic_factors){
updateCheckboxGroupInput(session, 'economic_factors', selected = ECONOMIC_CHOICES)
select_all_tracker(TRUE)
}
if(!input$all_economic_factors){
if(select_all_tracker()){
updateCheckboxGroupInput(session, 'economic_factors', selected = character(0))
inputs_local <- inputs_react()
        inputs_local[['economics_factors']] <- character(0)
inputs_react(inputs_local)
}
}
}, ignoreInit = TRUE, ignoreNULL = TRUE)
#qol
observeEvent(input$all_qol_factors, {
if(input$all_qol_factors){
updateCheckboxGroupInput(session, 'qol_factors', selected = QOL_CHOICES)
select_all_tracker(TRUE)
}
if(!input$all_qol_factors){
if(select_all_tracker()){
updateCheckboxGroupInput(session, 'qol_factors', selected = character(0))
inputs_local <- inputs_react()
inputs_local[['qol_factors']] <- character(0)
inputs_react(inputs_local)
}
}
}, ignoreInit = TRUE, ignoreNULL = TRUE)
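# The four "select all" observers above share the same shape; an untested sketch of
# a factory (hypothetical name `register_select_all`) covering all of them:
# register_select_all <- function(all_id, group_id, choices, inputs_key){
#   observeEvent(input[[all_id]], {
#     if(input[[all_id]]){
#       updateCheckboxGroupInput(session, group_id, selected = choices)
#       select_all_tracker(TRUE)
#     }else if(select_all_tracker()){
#       updateCheckboxGroupInput(session, group_id, selected = character(0))
#       inputs_local <- inputs_react()
#       inputs_local[[inputs_key]] <- character(0)
#       inputs_react(inputs_local)
#     }
#   }, ignoreInit = TRUE, ignoreNULL = TRUE)
# }
# register_select_all('all_violence_factors', 'violence_factors', VIOLENCE_CHOICES, 'at-risk_factors')
# register_select_all('all_health_factors',   'health_factors',   HEALTH_CHOICES,   'medical_factors')
# register_select_all('all_economic_factors', 'economic_factors', ECONOMIC_CHOICES, 'economics_factors')
# register_select_all('all_qol_factors',      'qol_factors',      QOL_CHOICES,      'qol_factors')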
##### Reaction to save inputs and begin building the map ##########
output$loading_sign = NULL
output$input_warning <- renderUI(HTML("<h5>This process may take up to 60 seconds</h5>"))
observeEvent(input$map_it,{
if(is.null(c(input$violence_factors, input$health_factors, input$economic_factors, input$qol_factors)) & !clicked_preset()){
print("no factors present")
output$input_warning <- renderUI(h4("Please select at least 1 risk factor from the 4 drop-down menus above or select one of the three presets above", class = "warning_text"))
    }else if(is.null(input$city) || input$city == ''){
output$input_warning <- renderUI(h4("Please select a city", class = "warning_text"))
}else{
shinyjs::disable('map_it')
###### Initial set-up #######
progress <- shiny::Progress$new()
on.exit(progress$close())
progress$set(message = "Recording inputs", value = 0)
# output$warn = NULL
inputs <- inputs_react()
inputs[['cities']] <- input$city
# inputs[['year_range']] <- input$year_range
inputs[['year_range']] <- YEAR_RANGE
# inputs[['medical_factors']] <- input$health_factors
# inputs[['economics_factors']] <- input$economic_factors
# inputs[['at-risk_factors']] <- input$violence_factors
# inputs[['qol_factors']] <- input$qol_factors
# print(inputs)
#saving inputs for debugging
# saveRDS(inputs, 'inputs_outputs/debug_inputs.rds')
###### opening files and doing the things ######
####### Reading in data ########
#reading in the cdc data
progress$set(message = "Loading CDC data", value = .05)
if(!exists("cdc_hash")){cdc_hash = hash()}
years = seq(inputs$year_range[1], inputs$year_range[2])
for(year in years){
if(!(year %in% keys(cdc_hash))){
cdc_data = readRDS(paste0('data_tables/cdc_', as.character(year), '.rds'))
colnames(cdc_data)[colnames(cdc_data) == 'tractfips'] = 'GEOID'
cdc_hash[[as.character(year)]] = cdc_data
}
}
      #reading in the acs data. Note that the year of the data is actually the year before the key
      #(i.e. acs_hash[['2018']] actually stores 2017 acs data), because the acs data is one year behind the cdc data.
progress$set(message = "Loading Census data", value = .1)
if(!exists("acs_hash")){acs_hash = readRDS('data_tables/acs_dat_hash.rds')}
#reading in the spatial data
progress$set(message = "Loading maps", value = .15)
if(!exists('trimmed_tracts')){trimmed_tracts = readRDS('data_tables/trimmed_tract_data.rds')}
#reading in the tract_city_database
if(!exists('tract_city_dictionary')){tract_city_dictionary = readRDS('data_tables/tract_city_dictionary.rds')}
progress$set(message = "Loading maps", value = .20)
#reading in codebook to translate the names of the acs vars to their name in the dataset
# if(!exists('codebook')){
# codebook = read.csv('variable_mapping.csv', stringsAsFactors = FALSE)
# }
#get just the tracts from the cities that we care about
city_tracts = tract_city_dictionary[inputs$cities] %>% values() %>% unlist() %>% as.character()
#identifying which tracts to use
tracts_map = trimmed_tracts[trimmed_tracts$GEOID %in% city_tracts,]
      ####### Constants #########
param_hash = hash::copy(inputs)
hash::delete(c('cities', 'year_range'), param_hash)
data_factors = param_hash %>% values() %>% unlist()
if(length(dim(data_factors)) > 0){
data_factors = as.character(data_factors)
names(data_factors) = rep(keys(param_hash), length(data_factors))
}
######## Creating the initial map #########
progress$set(message = "Cleaning data", value = .30)
city_all_dat_hash = hash::hash()
for(year in inputs$year_range[1]:inputs$year_range[2]){
acs_year = acs_hash[[as.character(year)]]
acs_year = acs_year[acs_year$GEOID %in% city_tracts,]
cdc_year = cdc_hash[[as.character(year)]]
cdc_year = cdc_year[cdc_year$GEOID %in% city_tracts,]
city_all_dat_hash[[as.character(year)]] = merge(cdc_year[!duplicated(cdc_year$GEOID),], acs_year[!duplicated(acs_year$GEOID),], by = 'GEOID')
}
city_all_spdf_hash = hash::hash()
for(year in inputs$year_range[1]:inputs$year_range[2]){
city_data = merge(tracts_map@data, city_all_dat_hash[[as.character(year)]], by = 'GEOID')
city_spdf = tracts_map[tracts_map$GEOID %in% city_data$GEOID,]
city_spdf = city_spdf[order(city_spdf$GEOID),]
city_data = city_data[order(city_data$GEOID),]
city_spdf@data = city_data
city_all_spdf_hash[[as.character(year)]] = city_spdf
}
progress$set(message = "Developing metric scores", value = .35)
#creating the scores
risk_vars = data_factors[!duplicated(as.character(data_factors))]
risk_weights = rep(INITIAL_WEIGHTS, length(risk_vars))
# spdf = city_all_spdf_hash[['2018']]
# #data_code_book = codebook[!duplicated(codebook$risk_factor_name),]
quantile_bins = QUANTILE_BINS
progress$set(message = paste0("Designing map of ", inputs$year_range[1]), value = .40)
past_spdf = make_full_spdf(city_all_spdf_hash[[as.character(inputs$year_range[1])]], data_code_book, risk_vars, risk_weights, QUANTILE_BINS, info_popup_text = INFO_POPUP_TEXT)
progress$set(message = paste0("Designing map of ", inputs$year_range[2]), value = .50)
present_spdf = make_full_spdf(city_all_spdf_hash[[as.character(inputs$year_range[2])]], data_code_book, risk_vars, risk_weights, QUANTILE_BINS, info_popup_text = INFO_POPUP_TEXT)
progress$set(message = paste0("Predicting map of ", inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1])), value = .60)
pred_list = get_predicted_scores_and_labels(city_all_spdf_hash, inputs, risk_vars, risk_weights, data_code_book, QUANTILE_BINS, MAX_LOC_DIST, info_popup_text = INFO_POPUP_TEXT)
present_spdf@data$pred_score = pred_list$raw_score
present_spdf@data$pred_quantile = pred_list$score_quantile
present_spdf@data$pred_label = pred_list$label
progress$set(message = "Rendering maps", value = .70)
initial_map = make_map(present_spdf, past_spdf, inputs, TRACT_PAL, TRACT_OPACITY, QUANTILE_BINS)
######### Saving the needed files for next page ########
progress$set(message = "Finalizing information", value = .80)
city_all_spdf_hash_react(city_all_spdf_hash)
data_code_book_react(data_code_book)
risk_vars_react(risk_vars)
data_factors_react(data_factors)
initial_map_react(initial_map)
example_past_spdf_react(past_spdf[1,])
#### Moving to next page #######
progress$set(message = "Moving to results page", value = .90)
shinyjs::enable('map_it')
output$pageStub <- renderUI(tagList(
tags$head(tags$script(redirect_jscode)),
        tags$head(tags$link(rel = "stylesheet", type = 'text/css', href = 'https://fonts.googleapis.com/css?family=Montserrat|Open+Sans|Raleway|Roboto|Roboto+Condensed&display=swap')),
includeCSS('www/sreen_size.css'),
useShinyjs(),
fluidPage(
div(class = "no_small_screen",
bsCollapse(id = "sliders",
                         bsCollapsePanel(HTML('<div style = "width:100%;">Click here to edit weight of metrics</div>'), value = 'Click here to edit weight of metrics',
fluidRow(
column(10, h4("Increase/decrease the amount each metric goes into the overall risk metric. To recalculate overall risk, click 'Submit'"),
h5("For example, boosting one metric to 2 will make it twice as important in calculating the overall risk")),
column(2, actionBttn('recalculate_weights', 'Submit'),
actionBttn('reset_weight', 'Reset weights'))
# column(2, actionButton('recalculate_weights', 'Submit'))
),
fluidRow(
column(4, uiOutput('sliders_1')),
column(4, uiOutput('sliders_2')),
column(4, uiOutput('sliders_3'))
), style = "info"
)
)
),
fluidRow(
div(id = "map_container",
leaflet::leafletOutput('map', height = 'auto'),
div(id = 'initial_popup', class = "popup",
HTML('<h3, class = "popup_text">Would you like the tutorial?</h3></br>'),
actionLink('close_help', label = HTML('<p class="close">×</p>')),
actionBttn("walkthrough", HTML("<p>Yes</p>"), style = 'unite', size = 'sm'),
actionBttn("no_walkthrough", HTML("<p>No</p>"), style = 'unite', size = 'sm')
# actionButton("walkthrough", HTML("<p>Yes</p>")),
# actionButton("no_walkthrough", HTML("<p>No</p>"))
),
uiOutput('tutorial'),
div(id = 'home_button', class = 'no_small_screen', tags$a(href = '?home', icon('home', class = 'fa-3x'))),
div(id = 'select_year_div', class = 'no_small_screen', pickerInput('select_year',
choices = c('Clear', as.character(inputs$year_range[1]), as.character(inputs$year_range[2]),
as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1]))),
multiple = FALSE, selected = as.character(inputs$year_range[2]), width = '71px')),
div(id = 'home_and_year', class = 'no_big_screen',
div(id = 'select_year_div', pickerInput('select_year_small',
choices = c('Clear', as.character(inputs$year_range[1]), as.character(inputs$year_range[2]),
as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1]))),
multiple = FALSE, selected = as.character(inputs$year_range[2]), width = '71px')),
div(id = 'home_button', tags$a(href = '?home', icon('home', class = 'fa-3x')))
)
)
),id = 'results_page'
)
))
# session$sendCustomMessage("mymessage", "mymessage")
# output$current_page <- uiOutput({
# })
}
})
######### Server back-end #########
  ####### Outputting initial map ########
output$map <- renderLeaflet(initial_map_react())
####### Tutorial #########
#General map movement
observeEvent(input$walkthrough,{
shinyjs::hide(id = 'initial_popup')
output$tutorial <- renderUI({
div(class = "popup",
HTML('<h5, class = "popup_text">Here is the map of your city based on the metrics you chose.</h5></br>',
'<h5, class = "popup_text">The map is divided into <a href = "https://simplyanalytics.zendesk.com/hc/en-us/articles/204848916-What-is-a-census-tract-" target="_blank">census tracts</a>, which are similar to neighborhoods</h5></br>',
'<h5, class = "popup_text">Click-and-drag map to move around (or swipe on mobile).</h5></br>',
'<h5, class = "popup_text">Scroll to zoom in and out (or pinch on mobile)</h5></br>'),
actionLink('close_help_popups', label = HTML('<p class="close">×</p>')),
actionBttn("walkthrough_map_nav", HTML("<p>Next</p>"), style = 'unite', size = 'sm')
# actionButton("walkthrough_map_nav", HTML("<p>Next</p>"))
)
})
})
#map tiles
observeEvent(input$walkthrough_map_nav,{
# shinyjs::hide(id = 'initial_popup')
output$tutorial <- renderUI({
div(id = 'map_tile_popup', class = "popup",
HTML('<h5, class = "popup_text">Clicking or tapping on a neighborhood tile (the colored blocks on the map) will display a pop-up with the neighborhood\'s overall "risk factor level" score and a breakdown for each metric chosen. A score below the 50%ile means that the risk factors you chose are less present in this neighborhood than others in the city, while a score above the 50%ile means the risk factors are more present than in the average neighborhood in the city. Learn more about these scores from the <a href = "?home">FAQ on the bottom of the home page</a></h5></br></br>'),
# '<h5, class = "popup_text">This is where you will see information such as a neighborhood\'s unemployment rate, obesity rate, or any other metrics you chose to look at.
# Each metric is scored relative to the other neighborhoods, with lowest-scoring neighborhoods in the 0%ile and highest-scoring neighborhoods in the 90%ile. For example, if a neighborhood had one of the highest obesity rates in the city, that neighborhood would score in the 90%ile in obesity.</br>
# The overall score combines all of the metrics you chose into one number for each neighborhood, and reflects the color of the neighborhood\'s tile</br>
# <i>Additional details for the extra curious: </i>You can calculate the overall score by taking the average score of all the metrics you chose (shown in small font). Typically the highest scoring neighborhoods show the highest needs in the city according to the data and selected metrics. Learn more from the <a href = "?home">FAQ on the bottom of the home page</a></br>
# If you selected metrics from multiple categories (e.g., both medical and economic factors), the metrics are segmented by category, and each category has an "overall category score" as well. For example you may see an overall score (all metrics selected), an overall economic score (things such as unemployment or poverty rates), and an overall medical score (things such as obesity and diabetes rates).</h5></br>'),
actionLink('close_help_popups', label = HTML('<p class="close">×</p>')),
actionBttn("walkthrough_map_tile", HTML("<p>Next</p>"), style = 'unite', size = 'sm')
# actionButton("walkthrough_map_tile", HTML("<p>Next</p>"))
)
})
# leafletProxy('map') %>% clearPopups() %>% addPopups(lat = as.numeric(example_past_spdf_react()$INTPTLAT), lng = as.numeric(example_past_spdf_react()$INTPTLON),
# popup = HTML(example_past_spdf_react()$label))
# shinyjs::addCssClass(class = 'highlight-border', selector = '.leaflet-popup-content')
})
#legend
observeEvent(input$walkthrough_map_tile,{
# shinyjs::hide(id = 'initial_popup')
# shinyjs::removeCssClass(class = 'highlight-border', selector = '.leaflet-popup-content')
output$tutorial <- renderUI({
div(id = 'legend_popup', class = "popup",
HTML('<h5, class = "popup_text">Below is the map\'s legend. Each neighborhood is colored based on its overall
score from the metrics you chose. High-issue neighborhoods will be shown in red (90%ile) while low-issue
neighborhoods will be shown in blue (0%ile)</h5></br>'),
actionLink('close_help_popups', label = HTML('<p class="close">×</p>')),
actionBttn("walkthrough_legend", HTML("<p>Next</p>"), style = 'unite', size = 'sm')
# actionButton("walkthrough_legend", HTML("<p>Next</p>"))
)
})
leafletProxy('map') %>% clearPopups()
shinyjs::addCssClass(class = 'highlight-border', selector = '.legend')
})
#layer controls
observeEvent(input$walkthrough_legend,{
# shinyjs::hide(id = 'initial_popup')
shinyjs::removeCssClass(class = 'highlight-border', selector = '.legend')
output$tutorial <- renderUI({
div(id = 'layer_and_metrics_popup', class = "popup",
HTML('<h5, class = "popup_text"></h5>Change the year by clicking the drop-down <span class = "no_small_screen">to the right</span>.
You can see the metrics for the past and present, as well as the predict overall metric for the future.</br>'),
actionLink('close_help_popups', label = HTML('<p class="close">×</p>')),
div(class = 'no_big_screen',actionBttn("walkthrough_to_home", HTML("<p no_big_screen>Next</p>"), style = 'unite', size = 'sm')),
div(class = 'no_small_screen',actionBttn("walkthrough_layers", HTML("<p no_small_screen>Next</p>"), style = 'unite', size = 'sm'))
# div(class = 'no_big_screen', actionButton("walkthrough_to_home", HTML("<p no_big_screen>Explore map</p>"))),
# div(class = 'no_small_screen', actionButton("walkthrough_layers", HTML("<p no_small_screen>Next</p>")))
)
})
shinyjs::addCssClass(id = 'select_year_div', class = 'highlight-border')
shinyjs::addCssClass(id = 'home_and_year', class = 'highlight-border')
})
#weight adjustment
observeEvent(input$walkthrough_layers,{
# shinyjs::hide(id = 'initial_popup')
shinyjs::removeCssClass(id = 'select_year_div', class = 'highlight-border')
output$tutorial <- renderUI({
div(id = 'layer_and_metrics_popup', class = "popup",
HTML('<h5, class = "popup_text">Currently, all of the metrics you chose to look at are weighted equally.
If you feel like some should be more important than others in determining your overall metric,
you can adjust their weights above.</br></h5>'),
actionLink('close_help_popups', label = HTML('<p class="close">×</p>')),
div(actionBttn("walkthrough_to_home", HTML("<p>Next</p>"), style = 'unite', size = 'sm'))
# div(actionButton("walkthrough_to_home", HTML("<p>Next</p>")))
)
})
updateCollapse(session, "sliders", open = 'Click here to edit weight of metrics')
shinyjs::addCssClass(class = 'highlight-border', selector = '.panel.panel-info')
})
#home button and final mentions
observeEvent(input$walkthrough_to_home,{
shinyjs::removeCssClass(class = 'highlight-border', selector = '.panel.panel-info')
updateCollapse(session, "sliders", close = 'Click here to edit weight of metrics')
shinyjs::removeCssClass(id = 'select_year_div', class = 'highlight-border')
output$tutorial <- renderUI({
div(id = 'home_popup', class = "popup",
HTML('<h5, class = "popup_text">Return to the home screen by clicking on the home icon.',
'<span class = "no_big_screen">For additional features, access this site on a larger screen.</span>',
'For comments, questions and custom mapping requests, contact Albert at <a href = "mailto: gehami@alumni.stanford.edu">gehami@alumni.stanford.edu</a>',
'</h5></br>'),
actionLink('close_help_popups', label = HTML('<p class="close">×</p>')),
div(actionBttn("end_walkthrough", HTML("<p>Explore map</p>"), style = 'unite', size = 'sm'))
# div(actionButton("end_walkthrough", HTML("<p>Explore map</p>")))
)
})
shinyjs::addCssClass(id = 'home_button', class = 'highlight-border')
shinyjs::addCssClass(id = 'home_and_year', class = 'highlight-border')
})
#close "x" button from first screen
observeEvent(input$close_help,{
output$tutorial <- renderUI({
div(id = 'return_help_popup', class = 'help_popup',
actionLink('open_help', HTML('<p class = "re_open">?</p>'))
)
})
shinyjs::hide(id = 'initial_popup')
})
#close "x" button from non-first screen
observeEvent(input$close_help_popups,{
print("This should close the help popup")
leafletProxy('map') %>% clearPopups()
output$tutorial <- renderUI({
div(id = 'return_help_popup', class = 'help_popup',
actionLink('open_help', HTML('<p class = "re_open">?</p>'))
)
})
shinyBS::updateCollapse(session, "sliders", close = 'Click here to edit weight of metrics')
shinyjs::removeCssClass(class = 'highlight-border', selector = '.panel.panel-info')
shinyjs::removeCssClass(id = 'select_year_div', class = 'highlight-border')
shinyjs::removeCssClass(class = 'highlight-border', selector = '.legend')
shinyjs::removeCssClass(id = 'home_button', class = 'highlight-border')
shinyjs::removeCssClass(id = 'home_and_year', class = 'highlight-border')
})
#end walkthrough button
observeEvent(input$end_walkthrough,{
shinyjs::removeCssClass(id = 'home_button', class = 'highlight-border')
shinyjs::removeCssClass(id = 'home_and_year', class = 'highlight-border')
output$tutorial <- renderUI({
div(id = 'return_help_popup', class = 'help_popup',
actionLink('open_help', HTML('<p class = "re_open">?</p>'))
)
})
})
#no walkthrough button
observeEvent(input$no_walkthrough,{
shinyjs::hide(id = 'initial_popup')
output$tutorial <- renderUI({
div(id = 'return_help_popup', class = 'help_popup',
actionLink('open_help', HTML('<p class = "re_open">?</p>'))
)
})
})
#re-open help
observeEvent(input$open_help,{
output$tutorial <- renderUI({
div(class = "popup",
HTML('<h5, class = "popup_text">Here is the map of your city based on the metrics you chose.</h5></br>',
'<h5, class = "popup_text">The map is divided into <a href = "https://simplyanalytics.zendesk.com/hc/en-us/articles/204848916-What-is-a-census-tract-" target="_blank">census tracts</a>, which are similar to neighborhoods</h5></br>',
'<h5, class = "popup_text">Click-and-drag map to move around (or swipe on mobile).</h5></br>',
'<h5, class = "popup_text">Scroll to zoom in and out (or pinch on mobile)</h5></br>'),
actionLink('close_help_popups', label = HTML('<p class="close">×</p>')),
actionBttn("walkthrough_map_nav", HTML("<p>Next</p>"), style = 'unite', size = 'sm')
# actionButton("walkthrough_map_nav", HTML("<p>Next</p>"))
)
})
})
######### Sliders/numeric inputs ##########
output$sliders_1 <- renderUI({
lapply(data_factors_react()[seq(1, length(data_factors_react()), by = 3)], function(i){
numericInput(inputId = i, label = i, value = INITIAL_SLIDER_VALUE)
# sliderInput(inputId = i, label = i, min = SLIDER_MIN, max = SLIDER_MAX, value = INITIAL_SLIDER_VALUE, step = MIN_SLIDER_STEP)
})
})
output$sliders_2 <- renderUI({
if(length(data_factors_react()) > 1){
lapply(data_factors_react()[seq(2, length(data_factors_react()), by = 3)], function(i){
numericInput(inputId = i, label = i, value = INITIAL_SLIDER_VALUE)
# sliderInput(inputId = i, label = i, min = SLIDER_MIN, max = SLIDER_MAX, value = INITIAL_SLIDER_VALUE, step = MIN_SLIDER_STEP)
})
}
})
output$sliders_3 <- renderUI({
if(length(data_factors_react()) > 2){
lapply(data_factors_react()[seq(3, length(data_factors_react()), by = 3)], function(i){
numericInput(inputId = i, label = i, value = INITIAL_SLIDER_VALUE)
# sliderInput(inputId = i, label = i, min = SLIDER_MIN, max = SLIDER_MAX, value = INITIAL_SLIDER_VALUE, step = MIN_SLIDER_STEP)
})
}
})
# if(length(data_factors) > 1){
# output$sliders_2 <- renderUI({
# lapply(data_factors[seq(2, length(data_factors), by = 3)], function(i){
# numericInput(inputId = i, label = i, value = INITIAL_SLIDER_VALUE)
# # sliderInput(inputId = i, label = i, min = SLIDER_MIN, max = SLIDER_MAX, value = INITIAL_SLIDER_VALUE, step = MIN_SLIDER_STEP)
# })
# })
# }else{output$sliders_2 = NULL}
#
# if(length(data_factors) > 2){
# output$sliders_3 <- renderUI({
# lapply(data_factors[seq(3, length(data_factors), by = 3)], function(i){
# numericInput(inputId = i, label = i, value = INITIAL_SLIDER_VALUE)
# # sliderInput(inputId = i, label = i, min = SLIDER_MIN, max = SLIDER_MAX, value = INITIAL_SLIDER_VALUE, step = MIN_SLIDER_STEP)
# })
# })
# }else{output$sliders_3 = NULL}
#
######### Updating map year layer #######
observeEvent(input$select_year,{
leafletProxy('map') %>% hideGroup(c('Clear', as.character(inputs$year_range[1]), as.character(inputs$year_range[2]),
as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1])))) %>%
showGroup(as.character(input$select_year))
})
observeEvent(input$select_year_small,{
leafletProxy('map') %>% hideGroup(c('Clear', as.character(inputs$year_range[1]), as.character(inputs$year_range[2]),
as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1])))) %>%
showGroup(as.character(input$select_year_small))
})
  ############# Updating map with updated metrics and resetting weights ##############
observeEvent(input$recalculate_weights,{
shinyjs::disable("recalculate_weights")
progress <- shiny::Progress$new()
on.exit(progress$close())
progress$set(message = "Recording inputs", value = 0)
#getting the new weights
new_weights = rep(0, length(data_factors_react()))
for(n in seq_along(data_factors_react())) new_weights[n] = input[[data_factors_react()[n]]]
#creating the scores
risk_vars = data_factors_react()
risk_weights = new_weights
print(risk_vars)
print(risk_weights)
# spdf = city_all_spdf_hash[['2018']]
data_code_book = data_code_book_react()
quantile_bins = QUANTILE_BINS
progress$set(message = "Redefining 2016 metrics", value = .10)
past_spdf = make_full_spdf(city_all_spdf_hash_react()[[as.character(inputs$year_range[1])]], data_code_book, risk_vars, risk_weights, QUANTILE_BINS, info_popup_text = INFO_POPUP_TEXT)
progress$set(message = "Redefining 2018 metrics", value = .20)
present_spdf = make_full_spdf(city_all_spdf_hash_react()[[as.character(inputs$year_range[2])]], data_code_book, risk_vars, risk_weights, QUANTILE_BINS, info_popup_text = INFO_POPUP_TEXT)
progress$set(message = "Building predictive model", value = .30)
pred_list = get_predicted_scores_and_labels(city_all_spdf_hash_react(), inputs, risk_vars, risk_weights, data_code_book, QUANTILE_BINS, MAX_LOC_DIST, info_popup_text = INFO_POPUP_TEXT)
present_spdf@data$pred_score = pred_list$raw_score
present_spdf@data$pred_quantile = pred_list$score_quantile
present_spdf@data$pred_label = pred_list$label
progress$set(message = "Updating map", value = .60)
new_map = make_map(present_spdf, past_spdf, inputs, TRACT_PAL, TRACT_OPACITY, QUANTILE_BINS)
progress$set(message = "Rendering map", value = .90)
output$map = renderLeaflet(new_map)
updatePickerInput(session, 'select_year',
choices = c('Clear', as.character(inputs$year_range[1]), as.character(inputs$year_range[2]),
as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1]))),
selected = as.character(inputs$year_range[2]))
updatePickerInput(session, 'select_year_small',
choices = c('Clear', as.character(inputs$year_range[1]), as.character(inputs$year_range[2]),
as.character(inputs$year_range[2] + (inputs$year_range[2] - inputs$year_range[1]))),
selected = as.character(inputs$year_range[2]))
shinyBS::updateCollapse(session, "sliders", close = 'Click here to edit weight of metrics')
# progress$close()
shinyjs::enable("recalculate_weights")
})
observeEvent(input$reset_weight,{
shinyjs::disable("reset_weight")
output$sliders_1 <- renderUI({
lapply(data_factors_react()[seq(1, length(data_factors_react()), by = 3)], function(i){
numericInput(inputId = i, label = i, value = INITIAL_SLIDER_VALUE)
# sliderInput(inputId = i, label = i, min = SLIDER_MIN, max = SLIDER_MAX, value = INITIAL_SLIDER_VALUE, step = MIN_SLIDER_STEP)
})
})
if(length(data_factors_react()) > 1){
output$sliders_2 <- renderUI({
lapply(data_factors_react()[seq(2, length(data_factors_react()), by = 3)], function(i){
numericInput(inputId = i, label = i, value = INITIAL_SLIDER_VALUE)
# sliderInput(inputId = i, label = i, min = SLIDER_MIN, max = SLIDER_MAX, value = INITIAL_SLIDER_VALUE, step = MIN_SLIDER_STEP)
})
})
}else{output$sliders_2 = NULL}
if(length(data_factors_react()) > 2){
output$sliders_3 <- renderUI({
lapply(data_factors_react()[seq(3, length(data_factors_react()), by = 3)], function(i){
numericInput(inputId = i, label = i, value = INITIAL_SLIDER_VALUE)
# sliderInput(inputId = i, label = i, min = SLIDER_MIN, max = SLIDER_MAX, value = INITIAL_SLIDER_VALUE, step = MIN_SLIDER_STEP)
})
})
}else{output$sliders_3 = NULL}
shinyjs::enable("reset_weight")
})
|
e153b15743288f2ba5b8b04650042268eee5e6a9 | b7be985e023d4ff52407f2271b1967b5973c7bfd | /R/link_function.R | 9ee5898a0c335d0d933d955ea5480aa83f39a6e0 | [] | no_license | YuqiTian35/multipledls | a257430f4a30e480318fc9e6705c55b26c27d142 | b6206edcbd7d867e874b8db2a491e673782b4013 | refs/heads/master | 2023-04-13T12:20:21.583399 | 2021-05-01T21:41:41 | 2021-05-01T21:41:41 | 363,510,310 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 1,787 | r | link_function.R | #' Link functions number
#'
#' This function facilitates the Stan code
#'
#' @param link the link function
#' @return An integer representing the corresponding link function
#' @importFrom dplyr case_when
func_link_num <- function(link){
return(case_when(link == "logit" ~ 1,
link == "probit" ~ 2,
link == "loglog" ~ 3,
link == "cloglog" ~ 4))
}
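#A quick sanity check of func_link_num (note that case_when comes from the
#dplyr package, so this assumes dplyr is attached):
# func_link_num("logit")   #returns 1
# func_link_num("cloglog") #returns 4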
#' Link functions
#'
#' This function returns the cumulative probability, inverse, and derivative
#' functions associated with a given link
#'
#' @param link the link function ("logit", "probit", "loglog", or "cloglog")
#' @return A list of functions corresponding to the link function
#' @export
func_link <- function(link){
families <-
list(logit =
list(cumprob=function(x) 1 / (1 + exp(-x)),
inverse=function(x) log(x / (1 - x)),
deriv =function(x, f) f * (1 - f),
deriv2 =function(x, f, deriv) f * (1 - 3*f + 2*f*f) ),
probit =
list(cumprob=pnorm,
inverse=qnorm,
deriv =function(x, ...) dnorm(x),
deriv2 =function(x, f, deriv) - deriv * x),
loglog =
list(cumprob=function(x) exp(-exp(-x)),
inverse=function(x) -log(-log(x )),
deriv =function(x, ...) exp(-x - exp(-x)),
deriv2 =function(x, ...) ifelse(abs(x) > 200, 0,
exp(-x - exp(-x)) * (-1 + exp(-x)))),
cloglog =
list(cumprob=function(x) 1 - exp(-exp(x)),
inverse=function(x) log(-log(1 - x)),
deriv =function(x, ...) exp( x - exp( x)),
deriv2 =function(x, f, deriv) ifelse(abs(x) > 200, 0,
deriv * ( 1 - exp( x)))))
return(families[[link]])
}
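#A quick sanity check of func_link: each family's inverse should undo its
#cumprob. For example, with the logit link:
# fam <- func_link("logit")
# fam$inverse(fam$cumprob(0.5)) #returns 0.5 (up to floating-point error)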
|
57fdf8b20b02acb146ce2e3bbcc5e159b31d87a4 | 3a6d384573453ec998d342f09c5ad0bc3d070ddc | /IntroR_2.3.4_FOR_GRADING_Aspringer.R | 8ef66f21f7e199f3b5f22365cd9ffa84b7f414cc | [] | no_license | pearselab/r-intro-aspri951 | 53c1fe6734a06e500508fa233e7be6aa2f71008c | b29ad99db5b0b57943784fe6bde32ef576a62dd4 | refs/heads/master | 2021-01-12T18:26:00.822052 | 2017-02-10T23:58:07 | 2017-02-10T23:58:07 | 71,375,834 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 26,900 | r | IntroR_2.3.4_FOR_GRADING_Aspringer.R | #LESSON 2:
#1: Write loop that prints 20-->10
for(i in 20:10){
print(i)
}
#2: Write loop that prints 20-->10, evens only
for (i in 20:10){
if (i %% 2 == 0){print(i)}
}
#3: Write a function that calculates whether a number is prime
prime <- function(n){
  if (n < 1){
    stop("Number must be a positive integer!")
  }
  if (n == 1){
    return(FALSE) #by convention, 1 is not prime
  }
  if (n == 2){
    return(TRUE)
  }
  for (i in 2:(n-1)){
    if (n %% i == 0){
      return(FALSE)
    }
  }
  return(TRUE) #no divisor found, so n is prime
}
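#Quick checks for prime() (n = 1 is skipped here, since its primality is a
#matter of convention):
stopifnot(prime(2), prime(7), !prime(9), !prime(20))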
#4: Write loop printing out numbers 1:20, print "Good: NUMBER" if divisible by 5, "Job: NUMBER" if prime, nothing otherwise
for(i in 1:20){
if(i %% 5 == 0){
cat("Good:", i, "\n")
}
if(prime(i) == TRUE)
cat("Job:", i, "\n")
}
#5: Gompertz curve is y(t) = a*e^(-b*e^(-c*t)); create function calculating y (pop size) given any parameters
# exp(x) = e^x in R
Gompertz.population <- function(t, a, b, c){
y = a*exp(-b*exp(-c*t))
return(y)
}
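#Quick check with made-up parameters: the population should be near zero at
#t = 0 and approach the asymptote a = 1000 for large t:
stopifnot(Gompertz.population(0, 1000, 8, 0.15) < 1,
          Gompertz.population(80, 1000, 8, 0.15) > 999)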
#6: Function to plot Gompertz curve over time
Gompertz.plot <- function(ti, tf, a, b, c){
x.coordinates = seq(from = ti, to = tf, by = 0.1)
y.coordinates = a*exp(-b*exp(-c*x.coordinates))
plot(x.coordinates, y.coordinates, type = "l")
}
#Test:
Gompertz.plot(1, 80, 1000, 8, 0.15)
#7: Plot line as red if y>a, plot line as blue if y>b
Gompertz.plot <- function(ti, tf, a, b, c){
x.coordinates = seq(from = ti, to = tf, by = 0.1)
y.coordinates = a*exp(-b*exp(-c*x.coordinates))
plot(x.coordinates[1:max(which(y.coordinates < b))], y.coordinates[1: max(which(y.coordinates < b))], type = "l")
lines(x.coordinates[max(which(y.coordinates < b)):max(which(y.coordinates < a))], y.coordinates[max(which(y.coordinates < b)):max(which(y.coordinates < a))], col = "red")
lines(x.coordinates[max(which(y.coordinates < a)):length(x.coordinates)], y.coordinates[max(which(y.coordinates < a)):length(y.coordinates)], col = "blue")
}
#9: write a function that draws boxes out of *** given a width and height
box <- function(w, h){
star <- "*"
space <- ""
new.line <- "\n"
lid.of.box <- paste(rep(star, w), collapse = "")
inside.of.box <- rep(space, w-3)
side.of.box <- c(star, inside.of.box, star, new.line)
box <- cat("", lid.of.box, "\n", rep(side.of.box, h-2), lid.of.box)
}
#10: Modify box to put text centered in box:
box <- function(w, h, word){
star <- "*"
space <- ""
new.line <- "\n"
lid.of.box <- paste(rep(star, w), collapse = "")
inside.of.box <- rep(space, (w-3))
side.of.box <- c(star, inside.of.box, star, new.line)
if(h %% 2 == 0){
stop("Word inside box cannot be centered. Height of box needs to be an odd integer.")
}
if(w %% 2 != 0 & nchar(word) %% 2 == 0 | w %% 2 == 0 & nchar(word) %% 2 != 0){
stop("Word inside box and width of box must BOTH be even or BOTH be odd in order to center word in box.")
}
if(nchar(word) > (w-2)){
stop("Word inside of box cannot be greater than the width of the box minus two!")
}
if(nchar(word) == (w-2)){
big.word.vector <- c(star, word, star)
big.word.in.box <- c(paste(big.word.vector, collapse = ""), new.line)
box <- cat("", lid.of.box, "\n", rep(side.of.box, (h-3)/2), big.word.in.box, rep(side.of.box, (h-3)/2), lid.of.box)
} else {
small.word.in.box <- c(star, rep(space, (w-nchar(word)-4)/2), word, rep(space, (w-nchar(word)-4)/2), star, new.line)
box <- cat("", lid.of.box, "\n", rep(side.of.box, (h-3)/2), small.word.in.box, rep(side.of.box, (h-3)/2), lid.of.box)
}
}
#11: Modify box function to build boxes out of arbitrary text:
box <- function(line.type, w, h, word){
space <- ""
new.line <- "\n"
lid.of.box <- paste(rep(unlist(strsplit(line.type, "")), length.out = w), collapse = "")
inside.of.box <- rep(space, (w-3))
edge.of.box <- unlist(strsplit(line.type, ""))[1]
side.of.box <- c(edge.of.box, inside.of.box, edge.of.box, new.line)
if(h %% 2 == 0){
stop("Word inside box cannot be centered. Height of box needs to be an odd integer.")
}
if(w %% 2 != 0 & nchar(word) %% 2 == 0 | w %% 2 == 0 & nchar(word) %% 2 != 0){
stop("Word inside box and width of box must BOTH be even or BOTH be odd in order to center word in box.")
}
if(nchar(word) > (w-2)){
stop("Word inside of box cannot be greater than the width of the box minus two!")
}
if(nchar(word) == (w-2)){
big.word.vector <- c(edge.of.box, word, edge.of.box)
big.word.in.box <- c(paste(big.word.vector, collapse = ""), new.line)
box <- cat("", lid.of.box, "\n", rep(side.of.box, (h-3)/2), big.word.in.box, rep(side.of.box, (h-3)/2), lid.of.box)
} else {
small.word.in.box <- c(edge.of.box, rep(space, (w-nchar(word)-4)/2), word, rep(space, (w-nchar(word)-4)/2), edge.of.box, new.line)
box <- cat("", lid.of.box, "\n", rep(side.of.box, (h-3)/2), small.word.in.box, rep(side.of.box, (h-3)/2), lid.of.box)
}
}
#Test:
box("hell yeah ", 39, 17, "What hath science wrought")
#12: Hurdle models first decide if a species is present (yes/no), and if so, decide their abundance level (how many)
#Write a function that models if a species (ONE species) is present at any of n sites, and if so, how many are there
#use bernoulli determine presence (0,1, user chosen probability of 1 (like coin flip)), and poisson for abundance (user-chosen lambda)
#Want output of abundance at each site (so, prob of presence multiplied by abundance value?)
hurdle.model <- function(numb.sites, prob.presence, expected.abundance){
if(prob.presence > 1 | prob.presence < 0){
stop("Probabilities must have a value between 0 and 1")
}
presence <- rbinom(numb.sites, 1, prob.presence)
abundance <- rpois(numb.sites, expected.abundance)
adjusted.abundance <- (presence * abundance)
return(adjusted.abundance)
}
#Test:
hurdle.model(10, 0.5, 50)
hurdle.model(10, 12, 50)
#13: Write a hurdle model that simulates lots of of species with their own p and lambda on n sites
#return results in a matrix with species as columns, sites as rows
#My function takes vectors of length n for species, prob.presence, and expected.abundance
hurdle.model.expanded <- function(numb.sites, species, prob.presence, expected.abundance){
  if(any(prob.presence > 1) | any(prob.presence < 0)){
    stop("Probabilities must have a value between 0 and 1")
  }
if(length(species) != length(prob.presence) | length(species) != length(expected.abundance)){
stop("There must be a probability of presence and expected abundance for each species")
}
abundance.matrix <- matrix(nrow = numb.sites, ncol = length(species), dimnames = list(1:numb.sites, species))
for(i in 1:length(species)){
presence <- rbinom(numb.sites, 1, prob.presence[i])
abundance <- rpois(numb.sites, expected.abundance[i])
adjusted.abundance <- (presence * abundance)
abundance.matrix[, i] <- adjusted.abundance
}
return(abundance.matrix)
}
#Test:
hurdle.model.expanded(4, c("boba", "jil", "yef"), c(0.5, 0.9, 0.1), c(10, 50, 100))
#14: Progressing through time, a professor moves a random, normally-distributed distance N-S and E-W every five minutes.
#Simulate process 100 times and plot.
plot(0, 0, xlim = c(-40, 40), ylim = c(-40, 40), type = "n") #empty canvas for the walk
start.point <- c(0,0)
for(i in 1:100){
start.point.prior <- start.point
start.point <- start.point + c(rnorm(1, 0, 2), rnorm(1, 0, 2))
lines(c(start.point.prior[1], start.point[1]), c(start.point.prior[2], start.point[2]))
}
#15: Run a simulation to see how long, on average, until the faculty member falls off a cliff (approx 5 miles away in all directions)
#Need a scale: Assume the person walks 4 mi/h. This means the person walks 0.33 miles in 5 min. Thus, set SD = 0.33 for a more accurate model
#Assume 1 on the plot = 1 mile
#Need the distance formula:
time.to.death <- function(n){
  timestep.to.death <- numeric(n)
  for (j in 1:n){
    start.point <- c(0, 0) #reset the walk for each simulation
    for (i in 1:10000){
      start.point <- start.point + c(rnorm(1, 0, 0.33), rnorm(1, 0, 0.33))
      distance.from.origin <- (start.point[1]^2 + start.point[2]^2)^(1/2)
      if (distance.from.origin > 5){
        timestep.to.death[j] <- i
        break
      }
    }
  }
  minutes.to.death <- 5 * timestep.to.death #each timestep is five minutes
  print(minutes.to.death)
  cat("Average time to death (in minutes) with a sample size of", n, ":")
  return(mean(minutes.to.death))
}
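#Example run (results are stochastic, so set a seed for reproducibility):
# set.seed(1)
# time.to.death(10)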
#LESSON 3:
#1: Implement cat class with arbitrary slots, methods = print and race
# Constructor function:
new.cat <- function(weight, color, hair_length, polydactyl){
object <- list(weight = weight, color = color, hair_length = hair_length, polydactyl = polydactyl)
class(object) <- "cat"
return(object)
}
# Some cats:
Milo <- new.cat(12, "flamepoint", "short", FALSE)
Remy <- new.cat(4, "grey tabby", "long", TRUE)
Remus <- new.cat(8, "black", "short", FALSE)
Darwin <- new.cat(11, "blue", "short", FALSE)
# Print method:
print.cat <- function(cat.name, ...){
if(!inherits(cat.name, "cat")){
    stop("This creature is not a cat, and is grievously insulted that you should insinuate otherwise")
} else if(cat.name$polydactyl == TRUE){
cat("This creature is a", cat.name$color, "cat with", cat.name$hair_length, "hair", "and far too many toes.")
} else if(cat.name$polydactyl == FALSE){
cat("This creature is a", cat.name$color, "cat with", cat.name$hair_length, "hair", "and a normal number of toes.")
}
}
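#Test: because print.cat is an S3 method, print() dispatches on the "cat" class
print(Remy)   #polydactyl, so reports "far too many toes"
print(Darwin) #reports a normal number of toes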
# Race method:
#Suppose that if cat A weighs less than cat B, cat A wins.
#Suppose that if cat A and cat B weigh the same, then the cat with more toes is faster and wins.
#Suppose that if the two cats are the same weight and have the same number of toes, it's a tie.
race <- function(first, second){
if(!inherits(first, "cat") | !inherits(second, "cat")){
stop("At least one of these creatures is not a cat, thus the two cannot race.")
} else if(first$weight < second$weight){
print("First cat won the race.")
} else if(first$weight > second$weight){
print("Second cat won the race.")
} else if(first$weight == second$weight){
if(first$polydactyl == TRUE && second$polydactyl == FALSE){
print("First cat won the race.")
} else if(first$polydactyl == FALSE && second$polydactyl == TRUE){
print("Second cat won the race.")
} else if(first$polydactyl == second$polydactyl){
print("These cats are the same weight and have the same number of toes. It's a tie.")
}
}
}
#Test:
fat.black.polydactyl <- new.cat(15, "black", "short", TRUE)
skinny.polydactyl <- new.cat(5, "grey", "long", TRUE)
fat.normal <- new.cat(15, "calico", "long", FALSE)
skinny.normal <- new.cat(5, "sealpoint", "rex", FALSE)
fat.calico.polydactyl <- new.cat(15, "calico", "long", TRUE)
race(fat.black.polydactyl, fat.calico.polydactyl)
race(fat.calico.polydactyl, fat.black.polydactyl)
race(fat.calico.polydactyl, fat.normal)
race(fat.normal, skinny.normal)
#2: Implement a point class, holds a coordinate pair (x,y)
new.point <- function(x,y){
point <- c(x,y)
class(point) <- "point"
return(point)
}
#Test:
point.a <- new.point(3,4)
class(point.a)
#3: Write a distance method calculating distance between two points
distance.point <- function(first.point, second.point){
if(!inherits(first.point, "point") | !inherits(second.point, "point")){
stop("At least one of the objects is not a point.")
} else{
distance <- ((second.point[1] - first.point[1])^2 + (second.point[2] - first.point[2])^2)^(1/2)
return(distance)
}
}
#Test:
point.a <- new.point(3,4)
point.b <- new.point(10,10)
distance.point(point.a, point.b)
#4: Make a line class: takes two point objects, makes line between them
new.line <- function(first.point, second.point){
  if(!inherits(first.point, "point") || !inherits(second.point, "point")){
stop("At least one object is not a point.")
}
line <- list(first.point, second.point)
class(line) <- "line"
return(line)
}
#5: Make a polygon class that stores polygon from point objects
new.polygon <- function(first.point, second.point, ...){
polygon <- list(first.point, second.point, ..., first.point)
class.check <- sapply(polygon, class)
if(any((class.check) != "point")){
stop("At least one object is not a point.")
} else {
class(polygon) <- "polygon"
return(polygon)
}
}
#Test:
point.a <- new.point(3,4)
point.b <- new.point(10,10)
point.c <- new.point(0,0)
polygon.a <- new.polygon(point.a, point.b, point.c)
new.polygon(point.a, point.b, c(4,5), point.c)
#6: Write plot methods for point and line objects
#Plot point objects:
plot.point <- function(point){
if(!inherits(point, "point")){
stop("Object must be of class 'point'!")
} else {
plot(point[1], point[2])
}
}
#Plot line objects:
plot.line <- function(line){
if(!inherits(line, "line")){
stop("Object must be of class 'line'!")
} else {
first.point <- unlist(line[1])
second.point <- unlist(line[2])
x.coordinates <- c(first.point[1], second.point[1])
y.coordinates <- c(first.point[2], second.point[2])
plot(x.coordinates, y.coordinates, type = "l")
}
}
#7: Plot methods for polygon objects
plot.polygon <- function(polygon){
if(!inherits(polygon, "polygon")){
stop("Object must be of class 'polygon'!")
} else {
x.coordinates <- numeric(length(polygon))
y.coordinates <- numeric(length(polygon))
for(i in 1:length(polygon)){
point <- unlist(polygon[i])
x.coordinates[i] <- point[1]
y.coordinates[i] <- point[2]
}
plot(x.coordinates, y.coordinates, type = "l")
}
}
#Test:
point.a <- new.point(3,4)
point.b <- new.point(10,10)
point.c <- new.point(0,0)
point.d <- new.point(-1, 4)
polygon.b <- new.polygon(point.a, point.b, point.c, point.d)
plot.polygon(polygon.b)
#9:
#Circle object: takes a POINT object and a number (radius)
new.circle <- function(point, radius){
if(!inherits(point, "point")){
stop("Point object must be of class 'point'!")
} else {
circle <- list(point, radius)
class(circle) <- "circle"
return(circle)
}
}
#Plot method for circle objects:
plot.circle <- function(circle){
if(!inherits(circle, "circle")){
stop("Object must be of class 'circle'!")
} else {
point <- unlist(circle[1])
radius <- unlist(circle[2])
pos.x.coordinates = seq((point[1] - radius), (point[1] + radius), by = 0.1)
neg.x.coordinates = seq((point[1] + radius), (point[1] - radius), by = -0.1)
total.x.coordinates = c(pos.x.coordinates, neg.x.coordinates)
pos.y.coordinates = c(point[2] + (radius^2 - (pos.x.coordinates - point[1])^2)^(1/2))
    neg.y.coordinates = c(point[2] - (radius^2 - (neg.x.coordinates - point[1])^2)^(1/2))
total.y.coordinates = c(pos.y.coordinates, neg.y.coordinates)
plot(total.x.coordinates, total.y.coordinates, type = "l", asp = 1)
}
}
#Test:
point.a <- new.point(3,4)
circle.test <- new.circle(point.a, 5)
plot.circle(circle.test)
#8: create a canvas object that the "add" function can add point, line, circle, and polygon objects to.
#Create plot and print methods for this class.
#Canvas object:
new.canvas <- function(object, ...){
canvas <- list(object, ...)
class.check <- sapply(canvas, class)
  if(any(!(class.check %in% c("point", "line", "circle", "polygon")))){
    stop("All canvas objects must be of class point, line, circle, or polygon!")
  }
class(canvas) <- "canvas"
return(canvas)
}
#Test:
point.a <- new.point(3,4)
point.b <- new.point(10,10)
point.c <- new.point(0,0)
point.d <- new.point(-1, 4)
circle.test <- new.circle(point.a, 5)
polygon.a <- new.polygon(point.a, point.b, point.c)
new.canvas(point.a, point.b, point.c, point.d, circle.test, polygon.a)
new.canvas(point.a, point.b, circle.test, c(1, 5))
#Add function(add more objects to canvas):
add.to.canvas <- function(canvas, new.object, ...){
if(class(canvas) != "canvas"){
stop("Canvas object must be of class canvas!")
}
new.objects <- list(new.object, ...)
class.check <- sapply(new.objects, class)
  if(any(!(class.check %in% c("point", "line", "circle", "polygon")))){
    stop("All canvas objects must be of class point, line, circle, or polygon!")
  }
  n.existing <- length(canvas)
  for (i in 1:length(new.objects)){
    canvas[n.existing + i] <- new.objects[i]
  }
  return(canvas)
}
#Test:
test.canvas <- new.canvas(point.a, point.b, circle.test)
add.to.canvas(test.canvas, point.c)
#Plot methods:
plot.canvas <- function(canvas){
plot(c(0, 0, 15, 15, 0), c(0, 15, 15, 0, 0), type = "l", asp = 1, col = "white")
for(i in 1:length(canvas)){
if(class(canvas[[i]]) == "point"){
point <- canvas[[i]]
points(point[1], point[2])
} else if(class(canvas[[i]]) == "circle"){
circle <- canvas[[i]]
point <- unlist(circle[1])
radius <- unlist(circle[2])
pos.x.coordinates = seq((point[1] - radius), (point[1] + radius), by = 0.1)
neg.x.coordinates = seq((point[1] + radius), (point[1] - radius), by = -0.1)
total.x.coordinates = c(pos.x.coordinates, neg.x.coordinates)
pos.y.coordinates = c(point[2] + (radius^2 - (pos.x.coordinates - point[1])^2)^(1/2))
      neg.y.coordinates = c(point[2] - (radius^2 - (neg.x.coordinates - point[1])^2)^(1/2))
total.y.coordinates = c(pos.y.coordinates, neg.y.coordinates)
lines(total.x.coordinates, total.y.coordinates)
    } else if (class(canvas[[i]]) == "polygon"){
      polygon <- canvas[[i]]
      x.coordinates <- numeric(length(polygon))
      y.coordinates <- numeric(length(polygon))
      for(j in 1:length(polygon)){
        point <- unlist(polygon[j])
        x.coordinates[j] <- point[1]
        y.coordinates[j] <- point[2]
      }
      lines(x.coordinates, y.coordinates)
} else if(class(canvas[[i]]) == "line"){
line <- canvas[[i]]
first.point <- unlist(line[1])
second.point <- unlist(line[2])
x.coordinates <- c(first.point[1], second.point[1])
y.coordinates <- c(first.point[2], second.point[2])
lines(x.coordinates, y.coordinates, type = "l")
}
}
}
#Test:
test.canvas <- new.canvas(point.a, point.b, circle.test)
plot.canvas(test.canvas)
#Finally, print methods for canvas objects:
print.canvas <- function(canvas){
if(class(canvas) != "canvas"){
stop("Object must be of class 'canvas'!")
} else {
cat("There are", length(canvas), "objects on this canvas.", "\n")
for(i in 1:length(canvas)){
cat("Object", paste("#", i, collapse = ""), "is a", paste(class(canvas[[i]]), ".", collapse = ""), "\n")
}
}
}
#13: Add OPTIONAL color support to canvas plot
plot.canvas <- function(canvas, point.color = "black", line.color = "black", circle.color = "black", polygon.color = "black"){
plot(c(0, 0, 15, 15, 0), c(0, 15, 15, 0, 0), type = "l", asp = 1, col = "white")
for(i in 1:length(canvas)){
if(class(canvas[[i]]) == "point"){
point <- canvas[[i]]
points(point[1], point[2], col = point.color)
} else if(class(canvas[[i]]) == "circle"){
circle <- canvas[[i]]
point <- unlist(circle[1])
radius <- unlist(circle[2])
pos.x.coordinates = seq((point[1] - radius), (point[1] + radius), by = 0.1)
neg.x.coordinates = seq((point[1] + radius), (point[1] - radius), by = -0.1)
total.x.coordinates = c(pos.x.coordinates, neg.x.coordinates)
pos.y.coordinates = c(point[2] + (radius^2 - (pos.x.coordinates - point[1])^2)^(1/2))
      neg.y.coordinates = c(point[2] - (radius^2 - (neg.x.coordinates - point[1])^2)^(1/2))
total.y.coordinates = c(pos.y.coordinates, neg.y.coordinates)
lines(total.x.coordinates, total.y.coordinates, col = circle.color)
    } else if (class(canvas[[i]]) == "polygon"){
      polygon <- canvas[[i]]
      x.coordinates <- numeric(length(polygon))
      y.coordinates <- numeric(length(polygon))
      for(j in 1:length(polygon)){
        point <- unlist(polygon[j])
        x.coordinates[j] <- point[1]
        y.coordinates[j] <- point[2]
      }
      lines(x.coordinates, y.coordinates, col = polygon.color)
} else if(class(canvas[[i]]) == "line"){
line <- canvas[[i]]
first.point <- unlist(line[1])
second.point <- unlist(line[2])
x.coordinates <- c(first.point[1], second.point[1])
y.coordinates <- c(first.point[2], second.point[2])
lines(x.coordinates, y.coordinates, type = "l", col = line.color)
}
}
}
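A quick usage sketch for the optional color arguments, reusing objects from the earlier tests (any argument left out falls back to "black"):

```r
# Sketch: all four color arguments are optional and default to "black"
test.canvas <- new.canvas(point.a, point.b, circle.test)
plot.canvas(test.canvas, point.color = "red", circle.color = "blue")
```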
#LESSON 4:
#1: Create a dataset of 10 variables, each drawn from normal distributions with DIFF MEANS AND SDs!
replicate(10, rnorm(1, mean = runif(1, -5, 5), sd = runif(1, 0, 10)))
#2: Make version of summary function for continuous datasets
summary.continuous <- function(data.vector){
if(is.numeric(data.vector) == FALSE){
stop("Data are not numeric.")
}
mean.data <- mean(data.vector)
sd.data <- sd(data.vector)
max.data <- max(data.vector)
min.data <- min(data.vector)
cat("", "Mean = ", mean.data, "\n", "Standard deviation =", sd.data, "\n", "Min. value =", min.data, "\n", "Max. value =", max.data)
}
#Test:
bunch.of.numbers <- replicate(20, rnorm(1, mean = runif(1, -5, 5), sd = runif(1, 0, 10)))
summary.continuous(bunch.of.numbers)
#3: Write summary function that summarizes only categorical data (NOT is.numeric, aka !is.numeric)
#Percentages, total count make sense for categorical data?
#In other words, you've got a vector of people who are male, female, etc. Can't take mean etc., but can say %male
summary.categorical <- function(data.vector){
if(is.numeric(data.vector) == TRUE){
stop("Data are not categorical.")
}
cat("N =", length(data.vector), "\n \n")
instances.of.element.one <- length(which(data.vector == data.vector[1]))
percent.element.one <- round(100*(instances.of.element.one/length(data.vector)), 2)
cat("Percent", data.vector[1], "=", paste(c(percent.element.one, "%"), collapse = ""), "\n")
for(i in 2:length(data.vector)){
if(any(data.vector[i] == data.vector[1:(i-1)]) == FALSE){
instances.of.i <- length(which(data.vector == data.vector[i]))
percent.element.i <- round(100*(instances.of.i/length(data.vector)), 2)
cat("Percent", data.vector[i], "=", paste(c(percent.element.i, "%"), collapse = ""), "\n")
}
}
}
#Test:
horses <- c("arab", "arab", "Morgan", "saddlebred", "Friesian", "Haflinger", "paint", "appaloosa", "Friesian", "Peruvian Paso", "arab", "arab", "Welsh cob", "Knabstrupper", "arab")
summary.categorical(horses)
#4) summary function capable of both kinds of data
summary.general <- function(data.vector){
if(is.numeric(data.vector) == TRUE){
summary.continuous(data.vector)
} else {
summary.categorical(data.vector)
}
}
#Test:
tf <- c(TRUE, TRUE, FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE)
summary.general(horses)
#(character)
summary.general(tf)
#(logical)
summary.general(bunch.of.numbers)
#(numeric)
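One caveat worth noting: functions named `summary.continuous` and `summary.categorical` are only picked up by the `summary()` generic when an object actually carries that class; a plain numeric vector still dispatches to `summary.default`. A small sketch:

```r
# summary(bunch.of.numbers) would call summary.default(); tagging the class changes that
labeled <- bunch.of.numbers
class(labeled) <- "continuous"
summary(labeled)  # now dispatches to summary.continuous()
```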
#5: write a function that will take an arbitrary input sequence of DNA and output the translated sequence
#First, lookup table construction:
bases <- c("A", "G", "T", "C")
codons <- expand.grid(bases, bases, bases)
rearranged.codons <- codons[c(3, 2, 1)]
amino.acids <- c("Lys", "Lys", "Asn", "Asn", "Arg", "Arg", "Ser", "Ser", "Ile", "Met", "Ile", "Ile", "Thr", "Thr", "Thr", "Thr", "Glu", "Glu", "Asp", "Asp", "Gly", "Gly", "Gly", "Gly", "Val", "Val", "Val", "Val", "Ala", "Ala", "Ala", "Ala", "stop", "stop", "Tyr", "Tyr", "stop", "Trp", "Cys", "Cys", "Leu", "Leu", "Phe", "Phe", "Ser", "Ser", "Ser", "Ser", "Gln", "Gln", "His", "His", "Arg", "Arg", "Arg", "Arg", "Leu", "Leu", "Leu", "Leu", "Pro", "Pro", "Pro", "Pro")
DNA.translation.lookup.table <- cbind(rearranged.codons, amino.acids)
merged.codons <- character(nrow(DNA.translation.lookup.table))
for(i in 1:nrow(DNA.translation.lookup.table)){
merged.codons[i] <- paste(DNA.translation.lookup.table[i, 1], DNA.translation.lookup.table[i, 2], DNA.translation.lookup.table[i, 3], collapse = "")
}
merged.codons.final <- gsub(" ", "", merged.codons)
DNA.translation.lookup.table[,5] <- merged.codons.final
#Now I have a data frame with amino acids in col 4 and codons in col 5
#Second, split string into character vector with nchar == 3 for each element
#I tried using strsplit and substring and other stuff... no luck.
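For the record, base R's vectorized `substring()` can produce the three-character split directly — a sketch alongside the loop-based approach below:

```r
# Vectorized codon split: substring() recycles over the start/stop vectors
dna <- "GATTTCCCCAAACTGAAGCTA"
starts <- seq(1, nchar(dna), by = 3)
substring(dna, starts, starts + 2)
# "GAT" "TTC" "CCC" "AAA" "CTG" "AAG" "CTA"
```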
#DNA splitting function:
DNA.codons <- function(DNA.seq){
single.nucleotides <- unlist(strsplit(DNA.seq, ""))
if((length(single.nucleotides) %% 3) != 0){
stop("DNA sequence must have a length that is a multiple of three!")
}
first.nuc.seq <- seq(1, length(single.nucleotides), by = 3)
second.nuc.seq <- first.nuc.seq + 1
third.nuc.seq <- first.nuc.seq + 2
first.nucleotide <- single.nucleotides[first.nuc.seq]
second.nucleotide <- single.nucleotides[second.nuc.seq]
third.nucleotide <- single.nucleotides[third.nuc.seq]
codons <- character(length(first.nuc.seq))
for(i in 1:length(first.nuc.seq)){
codons[i] <- paste(first.nucleotide[i], second.nucleotide[i], third.nucleotide[i], collapse = "")
}
codons.final <- gsub(" ", "", codons)
return(codons.final)
}
#Finally, translate codons into amino acids:
DNA.translate <- function(DNA.seq){
single.nucleotides <- unlist(strsplit(DNA.seq, ""))
if((length(single.nucleotides) %% 3) != 0){
stop("DNA sequence must have a length that is a multiple of three!")
}
first.nuc.seq <- seq(1, length(single.nucleotides), by = 3)
second.nuc.seq <- first.nuc.seq + 1
third.nuc.seq <- first.nuc.seq + 2
first.nucleotide <- single.nucleotides[first.nuc.seq]
second.nucleotide <- single.nucleotides[second.nuc.seq]
third.nucleotide <- single.nucleotides[third.nuc.seq]
codons <- character(length(first.nuc.seq))
for(i in 1:length(first.nuc.seq)){
codons[i] <- paste(first.nucleotide[i], second.nucleotide[i], third.nucleotide[i], collapse = "")
}
codons.final <- gsub(" ", "", codons)
codon.ref <- unlist(DNA.translation.lookup.table[5])
names(codon.ref) <- unlist(DNA.translation.lookup.table[4])
protein <- character(length(codons.final))
for(i in 1:length(codons.final)){
protein[i] <- names(codon.ref[match(codons.final[i], codon.ref)])
}
return(protein)
}
#Test:
DNA.real.test <- "GATTTCCCCAAACTGAAGCTA"
DNA.translate(DNA.real.test)
#6: write a function that will take multiple sequences, translate, and flag where they match up
DNA.seq.overlap <- function(DNA.seq.1, DNA.seq.2, ...){
DNA.list <- list(DNA.seq.1, DNA.seq.2, ...)
  translated.seqs <- lapply(DNA.list, DNA.translate)
  # positions where consecutive translated sequences share the same amino acid
  matches <- vector("list", length(translated.seqs) - 1)
  for(i in 1:(length(translated.seqs) - 1)){
    matches[[i]] <- which(translated.seqs[[i]] == translated.seqs[[i + 1]])
  }
  return(matches)
}
#...
|
bf06caeed522b7581235b9ad386bce2ca6b6f875 | 3c994c347b5cc6655cc5f2c75c17558c5024dbb8 | /test.R | 97ee40d47f3f0ee72652ff4bf78d195b687fe0d6 | [] | no_license | by3362/RClass | 973dfaf490df213004dc64ae444047ef7c571754 | 135d08f2665dda579960d2a68cc410bf211ad32f | refs/heads/master | 2021-01-13T17:01:37.321408 | 2016-12-25T05:20:07 | 2016-12-25T05:20:07 | 76,712,709 | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 4,759 | r | test.R | name <- c("Eric","Jack","Tom")
name
age <- c("28", "26", "34")
gender <- c("Male","Male","Female")
data <- data.frame(name, age, gender, stringsAsFactors = FALSE)
data
data[1,]
y <- 0
for (x in 1:10) {
y <- x + y
print(y)
}
data1 <- c("Is that apple pie I smell?",
"Julie never missed a ball, a promenade, or a play.",
"Did the cat get your tongue at the table?")
data1
data2 <- gsub("a", "A", data1)
data2
strsplit("TATAT","A") # split the string "TATAT" using "A" as the delimiter
nchar(data1)
x <- "Is that apple pie I smell?" # replace two or more consecutive spaces with a single space
x
x1 <- gsub(" {2, }", " ", x)
x1
x <- gsub("[[:punct:]]","", "T,E!X$T") # remove punctuation
x
x <- c("company","companies")
x
x1 <- grepl("company",x)
x1
x2 <- grep("compan(y|ies)",x)
x2
wiki.apple1 <- "Apple Inc. is an American multinational technology company headquartered in Cupertino, California, that designs, develops, and sells consumer electronics, computer software, and online services. Its hardware products include the iPhone smartphone, the iPad tablet computer, the Mac personal computer, the iPod portable media player, and the Apple Watch smartwatch. Apple's consumer software includes the OS X and iOS operating systems, the iTunes media player, the Safari web browser, and the iLife and iWork creativity and productivity suites. Its online services include the iTunes Store, the iOS App Store and Mac App Store, and iCloud."
wiki.apple2 <- "Apple was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne on April 1, 1976, to develop and sell personal computers.[5] It was incorporated as Apple Computer, Inc. on January 3, 1977, and was renamed as Apple Inc. on January 9, 2007, to reflect its shifted focus toward consumer electronics. Apple (NASDAQ: AAPL) joined the Dow Jones Industrial Average on March 19, 2015.[6]"
wiki.apple3 <- "Apple is the world's largest information technology company by revenue, the world's largest technology company by total assets,[7] and the world's second-largest mobile phone manufacturer.[8] On November 25, 2014, in addition to being the largest publicly traded corporation in the world by market capitalization, Apple became the first U.S. company to be valued at over US$700 billion.[9] The company employs 115,000 permanent full-time employees as of July 2015[4] and maintains 453 retail stores in sixteen countries as of March 2015;[1] it operates the online Apple Store and iTunes Store, the latter of which is the world's largest music retailer."
wiki.apple4 <- "Apple's worldwide annual revenue totaled $233 billion for the fiscal year ending in September 2015.[3] The company enjoys a high level of brand loyalty and, according to the 2014 edition of the Interbrand Best Global Brands report, is the world's most valuable brand with a valuation of $118.9 billion.[10] By the end of 2014, the corporation continued to receive significant criticism regarding the labor practices of its contractors and its environmental and business practices, including the origins of source materials."
wiki.apple1 <- gsub("[[:punct:]]","",wiki.apple1) # remove punctuation
wiki.apple1
wiki.apple2 <- gsub("[[:punct:]]","",wiki.apple2)
wiki.apple3 <- gsub("[[:punct:]]","",wiki.apple3)
wiki.apple4 <- gsub("[[:punct:]]","",wiki.apple4)
wiki.apple1 <- gsub("[0-9]","",wiki.apple1) # remove digits
wiki.apple1
wiki.apple2 <- gsub("[0-9]","",wiki.apple2)
wiki.apple3 <- gsub("[0-9]","",wiki.apple3)
wiki.apple4 <- gsub("[0-9]","",wiki.apple4)
wiki.apple4
wiki.apple.vec <- c(wiki.apple1,
wiki.apple2,
wiki.apple3,
                    wiki.apple4) # put them into a single vector
wiki.apple.vec
wiki.apple.vec <- gsub(" {2,}"," ",wiki.apple.vec) # collapse runs of two or more spaces into one
wiki.apple.vec
wiki.apple.vec <- tolower(wiki.apple.vec) # convert everything to lowercase
wiki.apple.vec
wiki.apple.list <- strsplit(wiki.apple.vec, " ") # split on spaces
wiki.apple.list
library(ngram)
ng <- ngram(wiki.apple1, n=2)
ng
get.ngrams(ng)
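Beyond the raw bigram list, current versions of the ngram package also expose a frequency table accessor — a short sketch (hedged: availability depends on the installed package version):

```r
# Frequency/proportion table for the same bigram model
head(get.phrasetable(ng))
```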
news.zhTW <- "美國最新飲食指南首度訂出健康成人咖啡攝取量,建議有喝咖啡習慣的人可喝黑咖啡,或在咖啡中添加低脂牛奶,但不建議額外添加糖或奶精。至於平日沒有喝咖啡習慣的人,是否需要現在開始喝咖啡?衛福部國健署昨天表示「不建議」。"
news.zhCN <- "凤凰科技讯 北京时间2月19日消息,据《今日美国》网络版报道,苹果刚刚推出了新的以旧换新计划,以便于部分用户使用他们的老款iPhone置换新机型。而且,苹果目前接受的以旧换新手机包括Android手机、Windows Phone手机以及iPhone。"
library(jiebaR)
mixseg = worker()
segment(news.zhTW, mixseg)
segment(news.zhCN, mixseg)
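jiebaR also ships a keyword-extraction worker alongside the segmentation worker — a hedged sketch (behavior depends on the installed version and its default TF-IDF dictionaries):

```r
# Keyword extraction: topn controls how many keywords are returned
keyword.worker = worker("keywords", topn = 3)
keywords(news.zhCN, keyword.worker)
```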
|
dd3c541ecc8e53722383cdb5f9c2e53d113a0313 | 35ce753be66bb1756a68b1de5a2567ea8a922dd2 | /graph_of_causes_effects.R | 1bc021b9aa4e5677af9cb35715ae47faba453a03 | [] | no_license | adsieg/Cause-Consequences-relationships-NLP | 33a98beb4b1d14bf86d38a7051c69fea4cf93852 | 0ee9806dc3d513a41133abb92c61a29b9cbd2514 | refs/heads/master | 2020-05-17T03:15:32.920131 | 2019-04-29T13:36:20 | 2019-04-29T13:36:20 | 183,473,671 | 6 | 2 | null | null | null | null | UTF-8 | R | false | false | 1,026 | r | graph_of_causes_effects.R | # Loading
library("readxl")
setwd("C:/Users/adsieg/Desktop/Cause_Effect/app/dataset")
# xls files
edges <- read_excel("word.xlsx")
edges<- edges[,c("from","to")]
nodes <- read_excel("nature.xlsx")
library(dplyr)
nodes %>%
select('id', 'group', 'which_sentence') %>%
filter(which_sentence == "sentence_id_0")
library(visNetwork)
# default, on group
visNetwork(nodes, edges, main = "Cause-Effect", height = "500px", width = "100%") %>%
visEdges(arrows = "to")%>%
visOptions(highlightNearest = TRUE)
nodes <- nodes %>%
select('id', 'group', 'which_sentence') %>%
filter(which_sentence == "sentence_id_0")
sentences <-c()
for(item in 1:nrow(nodes)){
if (nodes$group[item]=="neutral") {
print(nodes$id[item])
sentences <- c(sentences, nodes$id[item])
} else {
print(paste('<span style="background-color: #e6ffe6"> ',nodes$id[item], ' </span>'))
sentences <- c(sentences, paste('<span style="background-color: #e6ffe6"> ',nodes$id[item], ' </span>'))
}
}
paste(sentences, collapse=" ")
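The loop above can also be written without explicit iteration; `ifelse()` builds the same highlighted strings in one vectorized step:

```r
# Vectorized equivalent of the highlighting loop
sentences2 <- ifelse(nodes$group == "neutral",
                     nodes$id,
                     paste('<span style="background-color: #e6ffe6"> ', nodes$id, ' </span>'))
paste(sentences2, collapse = " ")
```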
|
2caef9266dfcaf506dfd7a4e13aaf4e03fce4016 | 9d0546fb67bf2d6800e37513cb1612e554f6d6ad | /man/output_result.Rd | eb14d6ff956e8300b04d19d26d6c066fe910daf1 | [
"MIT"
] | permissive | jaspershen/lipidflow | 15f3bac0f1370b2bdf6ea1a984055a07eb2f51be | e06c3c0b794eae3764ca1296485df36e3a8a7887 | refs/heads/main | 2023-03-13T02:55:38.405431 | 2021-03-04T06:53:36 | 2021-03-04T06:53:36 | 336,693,231 | 2 | 0 | null | null | null | null | UTF-8 | R | false | true | 402 | rd | output_result.Rd | % Generated by roxygen2: do not edit by hand
% Please edit documentation in R/output_result.R
\name{output_result}
\alias{output_result}
\title{output_result}
\usage{
output_result(path = ".", match_item_pos, match_item_neg)
}
\arguments{
\item{path}{work directory.}
\item{match_item_pos}{match_item_pos}
\item{match_item_neg}{match_item_neg}
}
\description{
output_result
}
\author{
Xiaotao Shen
}
|
d30633fcc8a2b53c22927d1744155760ef2203f5 | ffdea92d4315e4363dd4ae673a1a6adf82a761b5 | /data/genthat_extracted_code/TSdist/examples/LBKeoghDistance.Rd.R | 652f4dd714c83b83a8873a057a3b927cfc83fbe1 | [] | no_license | surayaaramli/typeRrh | d257ac8905c49123f4ccd4e377ee3dfc84d1636c | 66e6996f31961bc8b9aafe1a6a6098327b66bf71 | refs/heads/master | 2023-05-05T04:05:31.617869 | 2019-04-25T22:10:06 | 2019-04-25T22:10:06 | null | 0 | 0 | null | null | null | null | UTF-8 | R | false | false | 570 | r | LBKeoghDistance.Rd.R | library(TSdist)
### Name: LBKeoghDistance
### Title: LB_Keogh for DTW.
### Aliases: LBKeoghDistance
### ** Examples
# The objects example.series1 and example.series2 are two
# numeric series of length 100 contained in the TSdist package.
data(example.series1)
data(example.series2)
# For information on their generation and shape see help
# page of example.series.
help(example.series)
# Calculate the LB_Keogh distance measure for these two series
# with a window of band of width 11:
LBKeoghDistance(example.series1, example.series2, window.size=11)
|
42bf4cf2dab760bfd1a670492eaaa5836b4dd49f | 1b473280443b277ef942c21ed2f786da252dea35 | /R/opasnet.R | cc34281335067793c7b2dfc2cc4897d356600886 | [] | no_license | jtuomist/OpasnetUtils | d52def972ceb5daa0c3076ced4672fe577478863 | 7bbc38d71188f6bab3bfd969584817cca28f2a2d | refs/heads/master | 2021-07-09T22:38:10.228825 | 2020-07-09T16:13:37 | 2020-07-09T16:13:37 | 150,073,444 | 0 | 1 | null | 2019-04-27T13:48:32 | 2018-09-24T08:24:19 | R | UTF-8 | R | false | false | 5,128 | r | opasnet.R | # Get data from Opasnet
#
# filename - Name of the file
# wiki - Source Wiki: opasnet_en (default), opasnet_fi, heande (.htaccess protected)
# unzip - File name in package (if compressed)
#
# Returns file contents (loaded using curl)
opasnet.data <- function(filename,wiki='', unzip='') {
now <- Sys.time()
file <- opasnet.file_url(filename, wiki)
if (unzip != '')
{
f <- tempfile(pattern = 'opasnet.data.', fileext = '.zip')
bin <- getBinaryURL(file)
con <- file(f, open = "wb")
writeBin(bin, con)
close(con)
con <- unz(f, unzip)
return(paste(readLines(con),collapse="\n"))
}
else
{
return(getURL(file))
}
}
# Get table data (e.g. csv) from Opasnet
#
# filename - Name of the file
# wiki - Source Wiki: opasnet_en (default), opasnet_fi, heande (.htaccess protected)
# unzip - File name in package (if compressed)
#
# Returns file contents in table (loaded using curl)
opasnet.csv <- function(filename, wiki='', unzip = '', ...) {
now <- Sys.time()
file <- opasnet.file_url(filename, wiki)
if (unzip != '')
{
f <- tempfile(pattern = 'opasnet.csv.', fileext = '.zip')
bin <- getBinaryURL(file)
con <- file(f, open = "wb")
writeBin(bin, con)
close(con)
return(read.table(unz(f, unzip), ...))
}
else
{
csv <- getURL(file)
return(read.table(file = textConnection(csv), ...))
}
}
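A usage sketch for `opasnet.csv` — the file path below is purely illustrative (not a real upload), and the trailing `...` arguments are forwarded to `read.table()`:

```r
# Hypothetical file path; header/sep/etc. pass through to read.table()
dat <- opasnet.csv("a/a0/example_data.csv", wiki = "opasnet_en",
                   header = TRUE, sep = ",", stringsAsFactors = FALSE)
```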
# Get R data from Opasnet
#
# filename - Name of the file
# wiki - Source Wiki: opasnet_en (default), opasnet_fi, heande (.htaccess protected)
# unzip - File name in package (if compressed)
#
# Loads file contents to .GlobalEnv
#opasnet.R <- function(filename,wiki='', unzip='') {
#
# now <- Sys.time()
#
# file <- opbase.file_url(filename, wiki)
#
# if (unzip != '')
# {
# f <- tempfile(pattern = 'opasnet.R.', fileext = '.zip')
# bin <- getBinaryURL(file)
# con <- file(f, open = "wb")
# writeBin(bin, con)
# close(con)
# con <- unz(f, unzip)
# load(con, .GlobalEnv)
# #return(paste(readLines(con),collapse="\n"))
# }
# else
# {
# load(getURL(file), .GlobalEnv)
# #return(getURL(file))
# }
#}
# Private function to get file url for given wiki
opasnet.file_url <- function(filename, wiki)
{
# Parse arguments
targs <- strsplit(commandArgs(trailingOnly = TRUE),",")
args = list()
if (length(targs) > 0)
for(i in targs[[1]])
{
tmp = strsplit(i,"=")
key <- tmp[[1]][1]
value <- tmp[[1]][2]
args[[key]] <- value
}
if (wiki == '')
{
if (is.null(args$user)) stop('Wiki cannot be resolved!')
wiki <- args$user
}
if (wiki == 'opasnet_en' || wiki == 'op_en')
{
file <- paste("http://en.opasnet.org/en-opwiki/images/",filename,sep='')
}
if (wiki == 'opasnet_fi' || wiki == 'op_fi')
{
file <- paste("http://fi.opasnet.org/fi_wiki/images/",filename,sep='')
}
if (wiki == 'heande')
{
file <- paste("http://",args$ht_username,":",args$ht_password,"@heande.opasnet.org/heande/images/",filename,sep='')
}
return(file)
}
# OPASNET.DATA #####################################
## opasnet.data downloads a file from Finnish Opasnet wiki, English Opasnet wiki, or Opasnet File.
## Parameters: filename is the URL without the first part (see below), wiki is "opasnet_en", "opasnet_fi", or "M-files".
## If table is TRUE then a table file for read.table function is assumed; all other parameters are for this read.table function.
#
#opasnet.data <- function(filename, wiki = "opasnet_en", table = FALSE, ...)
#{
#if (wiki == "opasnet_en") {
#file <- paste("http://en.opasnet.org/en-opwiki/images/", filename, sep = "")
#}
#if (wiki == "opasnet_fi") {
#file <- paste("http://fi.opasnet.org/fi_wiki/images/", filename, sep = "")
#}
#if (wiki == "M-files") {
#file <- paste("http://http://fi.opasnet.org/fi_wiki/extensions/mfiles/", filename, sep = "")
#}
#
#if(table == TRUE) {
#file <- re#ad.table(file, header = FALSE, sep = "", quote = "\"'",
# dec = ".", row.names, col.names,
# as.is = !stringsAsFactors,
# na.strings = "NA", colClasses = NA, nrows = -1,
# skip = 0, check.names = TRUE, fill = !blank.lines.skip,
# strip.white = FALSE, blank.lines.skip = TRUE,
# comment.char = "#",
# allowEscapes = FALSE, flush = FALSE,
# stringsAsFactors = default.stringsAsFactors(),
# fileEncoding = "", encoding = "unknown")
#return(file)
#}
#else {return(ge#tURL(file))}
#}
opasnet.page <- function(pagename, wiki = "") {
  # Parse command line arguments (same convention as in opasnet.file_url)
  targs <- strsplit(commandArgs(trailingOnly = TRUE), ",")
  args <- list()
  if (length(targs) > 0)
    for(i in targs[[1]])
    {
      tmp <- strsplit(i, "=")
      args[[tmp[[1]][1]]] <- tmp[[1]][2]
    }
  if (wiki == '')
  {
    if (is.null(args$user)) stop('Wiki cannot be resolved!')
    wiki <- args$user
  }
if (wiki == "opasnet_en" | wiki == "op_en")
{
url <- paste("http://en.opasnet.org/en-opwiki/index.php?title=", pagename, sep = "")
}
if (wiki == "opasnet_fi" | wiki == "op_fi")
{
url <- paste("http://fi.opasnet.org/fi_wiki/index.php?title=", pagename, sep = "")
}
if (wiki == 'heande')
{
url <- paste("http://",args$ht_username,":",args$ht_password,"@heande.opasnet.org/heande/index.php?title=", pagename, sep = "")
}
return(getURL(url))
} |
632fc0d1ef703e329cb2671ef777b044a21d6059 | 13fd537c59bf51ebc44b384d2b5a5d4d8b4e41da | /R/tests/testdir_autoGen/runit_simpleFilterTest_lowbwt_131.R | 5ea722b3ea88b56f39b7938ca8453f31ec3ad288 | [
"Apache-2.0"
] | permissive | hardikk/h2o | 8bd76994a77a27a84eb222a29fd2c1d1c3f37735 | 10810480518d43dd720690e729d2f3b9a0f8eba7 | refs/heads/master | 2020-12-25T23:56:29.463807 | 2013-11-28T19:14:17 | 2013-11-28T19:14:17 | 14,797,021 | 0 | 1 | null | null | null | null | UTF-8 | R | false | false | 12,162 | r | runit_simpleFilterTest_lowbwt_131.R | ##
# Author: Autogenerated on 2013-11-27 18:13:59
# gitHash: c4ad841105ba82f4a3979e4cf1ae7e20a5905e59
# SEED: 4663640625336856642
##
source('./findNSourceUtils.R')
Log.info("======================== Begin Test ===========================")
simpleFilterTest_lowbwt_131 <- function(conn) {
Log.info("A munge-task R unit test on data <lowbwt> testing the functional unit <!=> ")
Log.info("Uploading lowbwt")
hex <- h2o.uploadFile(conn, locate("../../smalldata/logreg/umass_statdata/lowbwt.dat"), "rlowbwt.hex")
Log.info("Filtering out rows by != from dataset lowbwt and column \"UI\" using value 0.800628718635")
filterHex <- hex[hex[,c("UI")] != 0.800628718635,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"UI" != 0.800628718635,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"ID\" using value 186.733072033")
filterHex <- hex[hex[,c("ID")] != 186.733072033,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"ID" != 186.733072033,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"PTL\" using value 2.87328202707")
filterHex <- hex[hex[,c("PTL")] != 2.87328202707,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"PTL" != 2.87328202707,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"ID\" using value 220.471597258")
filterHex <- hex[hex[,c("ID")] != 220.471597258,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"ID" != 220.471597258,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"UI\" using value 0.0868400200671")
filterHex <- hex[hex[,c("UI")] != 0.0868400200671,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"UI" != 0.0868400200671,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"LWT\" using value 212.326943742")
filterHex <- hex[hex[,c("LWT")] != 212.326943742,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"LWT" != 212.326943742,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"BWT\" using value 1200.03928132")
filterHex <- hex[hex[,c("BWT")] != 1200.03928132,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"BWT" != 1200.03928132,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"RACE\" using value 1.10394629558")
filterHex <- hex[hex[,c("RACE")] != 1.10394629558,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"RACE" != 1.10394629558,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"RACE\" using value 2.68886239178")
filterHex <- hex[hex[,c("RACE")] != 2.68886239178,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"RACE" != 2.68886239178,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"FTV\" using value 0.306831265317")
filterHex <- hex[hex[,c("FTV")] != 0.306831265317,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"FTV" != 0.306831265317,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"HT\" using value 0.609021543785")
filterHex <- hex[hex[,c("HT")] != 0.609021543785,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"HT" != 0.609021543785,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"LOW\" using value 0.595558767249")
filterHex <- hex[hex[,c("LOW")] != 0.595558767249,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"LOW" != 0.595558767249,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"BWT\" using value 3007.80726986")
filterHex <- hex[hex[,c("BWT")] != 3007.80726986,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"BWT" != 3007.80726986,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"RACE\" using value 1.29978232645")
filterHex <- hex[hex[,c("RACE")] != 1.29978232645,]
Log.info("Perform filtering with the '$' sign also")
filterHex <- hex[hex$"RACE" != 1.29978232645,]
Log.info("Filtering out rows by != from dataset lowbwt and column \"PTL\" using value 2.25196682697, and also subsetting columns.")
filterHex <- hex[hex[,c("PTL")] != 2.25196682697, c("PTL")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("PTL")] != 2.25196682697, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"RACE\" using value 1.78862389648, and also subsetting columns.")
filterHex <- hex[hex[,c("RACE")] != 1.78862389648, c("RACE")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("RACE")] != 1.78862389648, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"LWT\" using value 88.693446287, and also subsetting columns.")
filterHex <- hex[hex[,c("LWT")] != 88.693446287, c("LWT")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("LWT")] != 88.693446287, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"PTL\" using value 2.42460498649, and also subsetting columns.")
filterHex <- hex[hex[,c("PTL")] != 2.42460498649, c("PTL")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("PTL")] != 2.42460498649, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"LOW\" using value 0.84235460034, and also subsetting columns.")
filterHex <- hex[hex[,c("LOW")] != 0.84235460034, c("LOW")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("LOW")] != 0.84235460034, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"UI\" using value 0.997945897788, and also subsetting columns.")
filterHex <- hex[hex[,c("UI")] != 0.997945897788, c("UI")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("UI")] != 0.997945897788, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"LOW\" using value 0.907373398659, and also subsetting columns.")
filterHex <- hex[hex[,c("LOW")] != 0.907373398659, c("LOW")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("LOW")] != 0.907373398659, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"LWT\" using value 127.80428964, and also subsetting columns.")
filterHex <- hex[hex[,c("LWT")] != 127.80428964, c("LWT")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("LWT")] != 127.80428964, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"RACE\" using value 2.82161410894, and also subsetting columns.")
filterHex <- hex[hex[,c("RACE")] != 2.82161410894, c("RACE")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("RACE")] != 2.82161410894, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"SMOKE\" using value 0.011108635166, and also subsetting columns.")
filterHex <- hex[hex[,c("SMOKE")] != 0.011108635166, c("SMOKE")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("SMOKE")] != 0.011108635166, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"UI\" using value 0.0187419616686, and also subsetting columns.")
filterHex <- hex[hex[,c("UI")] != 0.0187419616686, c("UI")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("UI")] != 0.0187419616686, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"HT\" using value 0.893341868051, and also subsetting columns.")
filterHex <- hex[hex[,c("HT")] != 0.893341868051, c("HT")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("HT")] != 0.893341868051, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"BWT\" using value 746.867023376, and also subsetting columns.")
filterHex <- hex[hex[,c("BWT")] != 746.867023376, c("BWT")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("BWT")] != 746.867023376, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
Log.info("Filtering out rows by != from dataset lowbwt and column \"BWT\" using value 4205.33243635, and also subsetting columns.")
filterHex <- hex[hex[,c("BWT")] != 4205.33243635, c("BWT")]
Log.info("Now do the same filter & subset, but select complement of columns.")
filterHex <- hex[hex[,c("BWT")] != 4205.33243635, c("BWT","LWT","LOW","PTL","ID","UI","FTV","RACE","HT","SMOKE","AGE")]
}
conn = new("H2OClient", ip=myIP, port=myPort)
tryCatch(test_that("simpleFilterTest_ on data lowbwt", simpleFilterTest_lowbwt_131(conn)), warning = function(w) WARN(w), error = function(e) FAIL(e))
PASS()
|
3d17f96a74ab77da8685f8e1ba59f16911f8d9ae | 4e7372d1bd37ce2a112bde0db2dae68f6ce74001 | /server.R | d6fd1aa6f65926f239dcb213e3c4b2fc21ee82c7 | [] | no_license | vc2004/server_malfunc_prediction | 3f60d97a43450405ff3c31cfc3931757bdadfc2b | 4032e7f93079dc6b23875842afbdeb98a342531d | refs/heads/master | 2021-01-10T02:11:10.283339 | 2016-02-14T14:19:06 | 2016-02-14T14:19:06 | 51,649,699 | 1 | 0 | null | null | null | null | UTF-8 | R | false | false | 634 | r | server.R | library(shiny)
HOURS <- 24
MIN <- 60
SEC <- 60
err <- 8
all_keepalive <- 22204800
lambda <- err/all_keepalive * (HOURS * MIN * SEC/10)
shinyServer(function(input, output) {
  # P(X = 0): probability of zero malfunctions for the selected servers/days.
  # ppois(1, m) - dpois(1, m) = P(X <= 1) - P(X = 1) = P(X = 0).
  output$zero_rate <- renderPrint({
    zero_rate <- ppois(1, lambda*input$server*input$day) -
      dpois(1, lambda*input$server*input$day)
    zero_rate
  })
  # P(X = 1): probability of exactly one malfunction.
  output$one_rate <- renderPrint({
    one_rate <- dpois(1, lambda*input$server*input$day)
    one_rate
  })
  output$text <- renderText({
    text <- paste("for ", input$server, "servers in ", input$day, " days")
    text
  })
})
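# A quick sanity check of the Poisson model above, runnable outside Shiny.
# The 10-server / 30-day inputs are illustrative assumptions, not values
# taken from the app:

```r
# Same constants as in server.R above
err <- 8                   # observed malfunctions
all_keepalive <- 22204800  # total 10-second keepalive intervals observed
lambda <- err / all_keepalive * (24 * 60 * 60 / 10)  # malfunctions per server-day

m <- lambda * 10 * 30                    # assumed: 10 servers over 30 days
zero_rate <- ppois(1, m) - dpois(1, m)   # P(X <= 1) - P(X = 1) = P(X = 0)
one_rate  <- dpois(1, m)                 # P(X = 1)
```

# Note that ppois(1, m) - dpois(1, m) is a roundabout dpois(0, m);
# both equal exp(-m).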
|
3317f954f1cf98543cd29f5b05e2e2449dbf63cb | fb9ee3cde7a557cbdef0eebc9a66356c1b2223ab | /cachematrix.R | d1f9f0c7efae48bf71721f6f4fdaa8578064a8eb | [] | no_license | samantha0214/ProgrammingAssignment2 | 659a85c86f28fb3abec4a81fa028f6c9ab917c27 | 0957ec6cf2666968b7c90c606526f7d4f1492565 | refs/heads/master | 2021-01-16T21:29:34.831412 | 2015-08-05T08:43:46 | 2015-08-05T08:43:46 | 39,734,598 | 0 | 0 | null | 2015-07-26T17:31:07 | 2015-07-26T17:31:06 | null | UTF-8 | R | false | false | 1,319 | r | cachematrix.R | #The following two function are used to find the inverse of a matrix.
#makeMatrix creates a list containing a function to :
#1.set the value of the matrix
#2.get the value of the matrix
#3.set the value of inverse of the matrix
#4.get the value of inverse of the matrix
makeCacheMatrix <- function(x = matrix()) {
  i <- NULL
  set <- function(y) {
    x <<- y
    i <<- NULL
  }
  get <- function() x
  setinv <- function(inverse) i <<- inverse
  getinv <- function() i
  list(set = set, get = get, setinverse = setinv, getinverse = getinv)
}
# The following function returns the inverse of the matrix.
# If the inverse has already been computed, it retrieves the cached
# result and skips the computation.
cacheSolve <- function(x, ...) {
  i <- x$getinverse()
  if(!is.null(i)) {
    message("getting cached data.")
    return(i)
  }
  data <- x$get()
  i <- solve(data)   # solve() on a square matrix returns its inverse
  x$setinverse(i)
  i
}
##
# Sample:
# x = cbind(c(1, 1/3), c(1/3, 1))
# m = makeCacheMatrix(x)
# m$get()
# [,1] [,2]
# [1,] 1.0000000 0.3333333
# [2,] 0.3333333 1.0000000
#
# No cache in the first run
# cacheSolve(m)
# [,1] [,2]
# [1,] 1.125 -0.375
# [2,] -0.375 1.125
#
# Second run
# cacheSolve(m)
# getting cached data.
# [,1] [,2]
# [1,] 1.125 -0.375
# [2,] -0.375 1.125
|