| column | dtype | range / values |
|---|---|---|
| blob_id | string | length 40 |
| directory_id | string | length 40 |
| path | string | length 2–327 |
| content_id | string | length 40 |
| detected_licenses | list | length 0–91 |
| license_type | string | 2 classes |
| repo_name | string | length 5–134 |
| snapshot_id | string | length 40 |
| revision_id | string | length 40 |
| branch_name | string | 46 classes |
| visit_date | timestamp[us] | 2016-08-02 22:44:29 – 2023-09-06 08:39:28 |
| revision_date | timestamp[us] | 1977-08-08 00:00:00 – 2023-09-05 12:13:49 |
| committer_date | timestamp[us] | 1977-08-08 00:00:00 – 2023-09-05 12:13:49 |
| github_id | int64 | 19.4k – 671M (nullable) |
| star_events_count | int64 | 0 – 40k |
| fork_events_count | int64 | 0 – 32.4k |
| gha_license_id | string | 14 classes |
| gha_event_created_at | timestamp[us] | 2012-06-21 16:39:19 – 2023-09-14 21:52:42 (nullable) |
| gha_created_at | timestamp[us] | 2008-05-25 01:21:32 – 2023-06-28 13:19:12 (nullable) |
| gha_language | string | 60 classes |
| src_encoding | string | 24 classes |
| language | string | 1 value |
| is_vendor | bool | 2 classes |
| is_generated | bool | 2 classes |
| length_bytes | int64 | 7 – 9.18M |
| extension | string | 20 classes |
| filename | string | length 1–141 |
| content | string | length 7 – 9.18M |
64a5c171340d2432277c711c584783977f10d102
|
f30911417d39d5e539579ffb8c17e9e17e069a8f
|
/scripts/6 código para gerar o design.R
|
23e0596618b21b713aa32f96dae83b3fc9d0d247
|
[
"MIT"
] |
permissive
|
LucasLBrandao/Efeitos-redistributivos-dos-impostos-estaduais-MG
|
d06760ac813dd77a1ff3960a1b0d5b3e0b0838b1
|
bcb20cfb03d1e823ff9661ddcb2da5dd0dc776b7
|
refs/heads/main
| 2023-09-02T04:44:07.668461
| 2021-11-18T01:07:27
| 2021-11-18T01:07:27
| 400,895,240
| 1
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 147
|
r
|
6 código para gerar o design.R
|
files <- paste0("./scripts/",list.files("./scripts"))
for (i in 1:9) {
source(files[i], encoding = "UTF-8")
}
rm(files,lista)
|
266bb4287a30919a79e564c09786d4a934f407db
|
de49df10f28de69ff2220746a00c459ef80ec62b
|
/Phenotypic_Stats.R
|
c4aaf1e6bc481724b5061bfe783f562568262962
|
[] |
no_license
|
ars26/Phenotypic-Selection
|
64cf124a39348538494c056d6fd7a65c5dedcfe1
|
32290a41d7a65150371ae77313c4913a286a1a8a
|
refs/heads/main
| 2023-02-27T14:24:15.507023
| 2021-02-01T04:19:02
| 2021-02-01T04:19:02
| 334,529,046
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 28,042
|
r
|
Phenotypic_Stats.R
|
---
title: "BIO395 Lab Notebook on Phenotypic Selection"
author: "Ars Ghani"
date: "9/11/2019"
output: pdf_document
---
# __BIO395 Lab Notebook__
# *Arranged chronologically from 9/6/19 to the end of semester.*
#_________________________________________________________________________________
# Friday, 6 September 2019
# *One-on-one meeting with Nancy and Michaela*
# Discussed expectations for credit-hour commitments, final paper, and the project.
# The paper will be peer-reviewed, about 6 pages long, and will have incremental writing steps punctuated by review from Michaela and Nancy. The final paper will be due (by e-mail) to Dr. Larracuente when the semester ends to ensure BIO395 has been completed properly.
* Outline & Annotated Bibliography: Friday, November 1
* First Draft: Monday, November 25
* Peer Review: Wednesday, December 2
* Final draft: Friday, December 13
# Saturday, 7 September 2019
Read phenotypic selection papers including the famous Lande & Arnold (1983) and Kingsolver & Pfennig (2007). These papers will guide this research project as they focus on the inheritance and patterns of phenotypic selection. There were certain parts of these papers that piqued my interest:
>"Phenotypic selection in nature is common and can be measured in the field in real time. In particular, directional selection is often sufficiently strong to cause substantial evolutionary change in a relatively short period" (Kingsolver and Pfennig, 2007).
This forms the basis of our project, measuring phenotypic changes in populations of the FLSJ: hopefully any phenotypic selection in the jays will be evident in the data collected on a given phenotype. We can then regress fitness as a function of an individual phenotype, which will give us valuable insight into this phenomenon.
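To make the regression idea concrete, here is a minimal sketch (not part of the original notebook) of a Lande & Arnold style fit of relative fitness against a standardized phenotype; the data frame and its columns are simulated placeholders, not the FLSJ demography data.
```{r}
# Hedged sketch: relative fitness regressed on a standardized phenotype.
# pheno is simulated placeholder data, not the FLSJ demography data.
set.seed(1)
pheno <- data.frame(Weight  = rnorm(100, mean = 77, sd = 4),  # toy body weights (g)
                    Fledged = rpois(100, lambda = 2))          # toy fitness measure
pheno$rel_fitness <- pheno$Fledged / mean(pheno$Fledged)       # relative fitness
sel_grad <- lm(rel_fitness ~ scale(Weight), data = pheno)      # slope ~ directional selection gradient
summary(sel_grad)$coefficients
```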
>"Directional selection favors larger body size. This pattern contrasts with the pattern for other morphological traits, which tend to experience positive and negative directional selection with equal frequency. Moreover, bigger organisms are generally fitter, regardless of whether larger body size enhances survival, fecundity, or mating success. In fact, directional selection favoring larger body size is sufficiently strong to explain Cope's rule, the widespread tendency for lineages to evolve toward larger body size" (Kingsolver and Pfennig, 2007).
This is actually very exciting and fascinating to me; I would never have thought that a larger size would translate into being fitter, even though it seems very plausible. This is one of the reasons I would like to focus on *body weight* and another numerical trait that can correlate with larger size, such as *head length.*
# Thursday, 12 September 2019
Started and finished making a spreadsheet which would be the stepping-stone to digitizing the metadata. It contains a separate sheet for each of the lists in the FLSJ demography data, with its corresponding data frame and columns. The best part about this whole endeavor is that it will serve as a dictionary and reference point for any future analyses, because it breaks down each and every data marker of the demography data and also gives a small description of what it is and why it is included.
The spreadsheet can be found here: [Link](REDACTED)
# Friday, 13 September 2019
*One-on-one meeting with Nancy and Michaela*
Got more clarity on the project and approval for the spreadsheet. The spreadsheet is now more detailed and will be filled in within a week. It would be pertinent to choose the phenotypes for analysis during this time since it is the first time we are really diving into the demography data.
```{r}
#Using this for figuring out the type of each data point:
#str()
#typeof()
#This usually gives a type such as integer, date, or character.
```
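As a quick, hedged illustration on a built-in toy dataset (not the demography data), these helpers report the structure and storage type of a column:
```{r}
# Hedged example on R's built-in iris data, not the FLSJ tables.
str(iris$Species)          # factor with 3 levels
typeof(iris$Sepal.Length)  # "double"
class(Sys.Date())          # "Date"
```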
# Monday, 16 to Thursday, 19 September 2019
Around 10 hours were spent accessing the myriad data contained in the FLSJ demography data and digitizing it. It gave me a lot of time to look at each data entry personally, explore the various nooks of the different tables, and decide on phenotypes for analysis by looking at the data available for them. Since this is a collaborative effort, certain parts of our spreadsheet remain unfilled, but for the most part it seems to be done.
Next steps:
*Getting the list of phenotypes confirmed by Michaela and Nancy,
*Data filtering step for each phenotype,
*Knowing how PLE digitization will tie in with this project.
# Thursday, 19 to Friday, 27 September 2019
This was a very productive week especially due to the clear expectations set in the last week's meeting. Right after last week's meeting, the following goals for this week were made:
* Complete the metadata sheet and highlight description for everything we are unsure about,
* Complete lit search: 20 papers and 4 books reference minimum on phenotypes, fitness, and other things our project is focusing on,
* Make a list of phenotypes, split it between the group members, and also put in where it could be found with code,
* Put in why we think this phenotype is important and how it will act in the population.
The metadata sheet was completed. Next group meeting, we can go over description of stuff we were confused about. The metadata sheet can be found here: [Link](https://docs.google.com/spreadsheets/d/1NuCdvMbRx2HeiNA05KI6vO2Q-7aFpiGuhCE3KXMhNtk/edit?usp=sharing)
A lit search was also completed, and in doing so I think I realized what I really want to look into specifically: Cope's Rule in the FLSJ. I have looked thoroughly into Cope's Rule and I am sure it will make for a great and interesting dive into the FLSJ phenotypes. The lit search can be found here: [link](https://docs.google.com/document/d/1E3GPUdilLpO63dB6E7dV0KiJbrmSFlZF0OGpRGG9oXs/edit?usp=sharing)
The phenotypes were also divided:
### Phenotypes that I am doing:
* Weight (access$metrics)
* HeadLength (access$metrics)
* BillDepth (access$metrics)
* BillWidth (access$metrics)
* TailLength (access$metrics)
* HeadBreadth (access$metrics)
* Sex (access$metrics)
* Culmen (access$metrics)
* Nares (access$metrics)
### Phenotypes that Michaela is doing:
* Parasite (access$metrics)
* ClutchNum (access$Nests)
* PMolt (access$metrics)
* Fat (access$metrics)
* Manus (access$metrics)
* Tarsus (access$metrics)
* Primary7 (access$metrics)
* WingCord (access$metrics)
The week ended with me choosing the discussion paper for the lab meeting, and with Michaela and me pondering how each chosen phenotype might act in the population, to form rough mental models and early hypotheses.
# October and Fall Break:
Filled with more lit searches and more importantly finalization of phenotypes.
```{r}
metadata <- load("/cloud/project/all_tables_v2.rdata")
library(ggplot2)
```
# Filtering Each Phenotype Alone.
The phenotypes I chose were Weight, HeadLength, BillDepth, BillWidth, TailLength, HeadBreadth, Culmen, Nares.
My general protocol for filtering these phenotypes is:
1) Remove all data entries that do not have a date of birth and/or a date of measurement, because then it is impossible to determine the age of the bird,
2) Only Demo tract data was used,
3) Only data from after 1990 was used,
4) Keep the ID of the bird. This will allow us to merge the tables together later when calculating correlations between phenotypes.
5) Remove very obviously erroneous data points.
6) The ages were also categorised into brackets (a compact cut()-based sketch is shown after this list):
* day 0 to 11: nestling,
* >day 11 post hatch: fledglings,
* >day 70 post hatch: juveniles,
* >365 post hatch: yearlings (with multiples)
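As a hedged illustration of this bracketing (toy ages only, not the demography data), `cut()` can assign all of the categories in one step:
```{r}
# Hedged sketch: age brackets via cut(); age_days is a toy vector, not real data.
age_days  <- c(5, 30, 120, 400, 800)
age_class <- cut(age_days,
                 breaks = c(-Inf, 11, 70, 365, Inf),
                 labels = c("Nestling", "Fledgling", "Juvenile", "Yearling+"))
table(age_class)
```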
Filtering Weight Alone:
```{r}
WeightAge <- (access$metrics[, c('Weight', 'AgeMeas', 'MeasDate', "ID", 'est.hatch')]) # Storing Weight and Age when these phenotypes were measured from the Access List to a new variable. It is important to remember that the age is in days. I also included date of measurement and estimated date of hatch.
#View(WeightAge) # Viewing the variable
WeightAge_Cleaned <- WeightAge[which(!is.na(WeightAge$Weight) & WeightAge$AgeMeas != ""), ] # Removing any instances where data is either blank or "NA".
#View(WeightAge_Cleaned) # Viewing cleaned data
plot(WeightAge_Cleaned$Weight~WeightAge_Cleaned$AgeMeas) # Just a quick scatterplot to see where we stand at the moment. I notice that there is an erroneous data point which is around 170 grams. Let's remove it.
boxplot(WeightAge_Cleaned$Weight)
outliersweight<-boxplot(WeightAge_Cleaned$Weight, plot=FALSE)$out # This gives me the exact value of the erroneous data point
print(outliersweight)
WeightAge_Cleaned[which(WeightAge_Cleaned$Weight %in% outliersweight),] # First, find which rows the outliers are in and the values they will be impacting.
WeightAge.Cleaned<-WeightAge_Cleaned[!(WeightAge_Cleaned$Weight %in% c(178)), ] # The outlier is in the 1643rd row!
#View(WeightAge.Cleaned)
boxplot(WeightAge.Cleaned$Weight)
ggplot(data = WeightAge.Cleaned, aes(x = AgeMeas , y = Weight)) + geom_point() + ggtitle("Weight vs Age")
# There are 11,750 entries.
```
Filtering Headlength Alone:
```{r}
HeadlengthAge <- (access$metrics[, c('HeadLength', 'AgeMeas', 'MeasDate', "ID", 'est.hatch')]) # Storing HeadLength and Age when these phenotypes were measured from the Access List to a new variable. It is important to remember that the age is in days. I also included date of measurement and estimated date of hatch.
#View(HeadlengthAge) # Viewing the variable
HeadlengthAge_Cleaned <- HeadlengthAge[which(!is.na(HeadlengthAge$HeadLength) & HeadlengthAge$AgeMeas != ""), ] # Removing any instances where data is either blank or "NA".
#View(HeadlengthAge_Cleaned) # Viewing cleaned data
plot(HeadlengthAge_Cleaned$HeadLength~HeadlengthAge_Cleaned$AgeMeas) # Just a quick scatterplot to see where we stand at the moment.
boxplot(HeadlengthAge_Cleaned$HeadLength) # No erroneous data points are obvious.
ggplot(data = HeadlengthAge_Cleaned, aes(x = AgeMeas , y = HeadLength)) + geom_point() + ggtitle("Head length vs Age")
# There are 2,711 entries.
```
Filtering BillDepth Alone:
```{r}
BillDepthAge <- (access$metrics[, c('BillDepth', 'AgeMeas', 'MeasDate', "ID", 'est.hatch')]) # Storing BillDepth and Age when these phenotypes were measured from the Access List to a new variable. It is important to remember that the age is in days. I also included date of measurement and estimated date of hatch.
#View(BillDepthAge) # Viewing the variable
BillDepthAge_Cleaned <- BillDepthAge[which(!is.na(BillDepthAge$BillDepth) & BillDepthAge$AgeMeas != ""), ] # Removing any instances where data is either blank or "NA".
#View(BillDepthAge_Cleaned) # Viewing cleaned data
plot(BillDepthAge_Cleaned$BillDepth~BillDepthAge_Cleaned$AgeMeas) # Just a quick scatterplot to see where we stand at the moment.
boxplot(BillDepthAge_Cleaned$BillDepth) # A huge outlier noticed at ~80. Let's remove it.
outliersbilldepth<-boxplot(BillDepthAge_Cleaned$BillDepth, plot=FALSE)$out # This gives me the exact value of the erroneous data point
print(outliersbilldepth) # We can see it as 82.
BillDepthAge.Cleaned<-BillDepthAge_Cleaned[!(BillDepthAge_Cleaned$BillDepth %in% c(82)), ]
boxplot(BillDepthAge.Cleaned$BillDepth)
ggplot(data = BillDepthAge.Cleaned, aes(x = AgeMeas , y = BillDepth)) + geom_point() + ggtitle("BillDepth vs Age")
# There are 2,760 entries.
```
Filtering BillWidth Alone:
```{r}
BillWidthAge <- (access$metrics[, c('BillWidth', 'AgeMeas', 'MeasDate', "ID", 'est.hatch')]) # Storing BillWidth and Age when these phenotypes were measured from the Access List to a new variable. It is important to remember that the age is in days. I also included date of measurement and estimated date of hatch.
#View(BillWidthAge) # Viewing the variable
BillWidthAge_Cleaned <- BillWidthAge[which(!is.na(BillWidthAge$BillWidth) & BillWidthAge$AgeMeas != ""), ] # Removing any instances where data is either blank or "NA".
#View(BillWidthAge_Cleaned) # Viewing cleaned data
plot(BillWidthAge_Cleaned$BillWidth~BillWidthAge_Cleaned$AgeMeas) # Just a quick scatterplot to see where we stand at the moment.
boxplot(BillWidthAge_Cleaned$BillWidth) # No erroneous data detected.
ggplot(data = BillWidthAge_Cleaned, aes(x = AgeMeas , y = BillWidth)) + geom_point() + ggtitle("Bill Width vs Age")
# There are 2,758 entries.
```
Filtering TailLength Alone:
```{r}
TailLengthAge <- (access$metrics[, c('TailLength', 'AgeMeas', 'MeasDate', "ID", 'est.hatch')]) # Storing TailLength and Age when these phenotypes were measured from the Access List to a new variable. It is important to remember that the age is in days. I also included date of measurement and estimated date of hatch.
#View(TailLengthAge) # Viewing the variable
TailLengthAge_Cleaned <- TailLengthAge[which(!is.na(TailLengthAge$TailLength) & TailLengthAge$AgeMeas != ""), ] # Removing any instances where data is either blank or "NA".
#View(TailLengthAge_Cleaned) # Viewing cleaned data
plot(TailLengthAge_Cleaned$TailLength~TailLengthAge_Cleaned$AgeMeas) # Just a quick scatterplot to see where we stand at the moment.
boxplot(TailLengthAge_Cleaned$TailLength)
ggplot(data = TailLengthAge_Cleaned, aes(x = AgeMeas , y = TailLength)) + geom_point() + ggtitle("Tail Length vs Age")
# There are 4,429 entries.
```
Filtering HeadBreadth Alone:
```{r}
HeadBreadthAge <- (access$metrics[, c('HeadBreadth', 'AgeMeas', 'MeasDate', "ID", 'est.hatch')]) # Storing HeadBreadth and Age when these phenotypes were measured from the Access List to a new variable. It is important to remember that the age is in days. I also included date of measurement and estimated date of hatch.
#View(HeadBreadthAge) # Viewing the variable
HeadBreadthAge_Cleaned <- HeadBreadthAge[which(!is.na(HeadBreadthAge$HeadBreadth) & HeadBreadthAge$AgeMeas != ""), ] # Removing any instances where data is either blank or "NA".
#View(HeadBreadthAge_Cleaned) # Viewing cleaned data
plot(HeadBreadthAge_Cleaned$HeadBreadth~HeadBreadthAge_Cleaned$AgeMeas) # Just a quick scatterplot to see where we stand at the moment.
boxplot(HeadBreadthAge_Cleaned$HeadBreadth) #Two outliers/errors can be seen.
outliersheadbreadth<-boxplot(HeadBreadthAge_Cleaned$HeadBreadth, plot=FALSE)$out # This gives me the exact value of the erroneous data point
print(outliersheadbreadth) # read them off as 252 and 247.
HeadBreadthAge.Cleaned<-HeadBreadthAge_Cleaned[!(HeadBreadthAge_Cleaned$HeadBreadth %in% c(247,252)), ]
boxplot(HeadBreadthAge.Cleaned$HeadBreadth)
#View(HeadBreadthAge.Cleaned)
ggplot(data = HeadBreadthAge.Cleaned, aes(x = AgeMeas , y = HeadBreadth)) + geom_point() + ggtitle("Headbreadth vs Age")
# There are 694 entries.
```
Filtering Culmen Alone:
```{r}
CulmenAge <- (access$metrics[, c('Culmen', 'AgeMeas', 'MeasDate', "ID", 'est.hatch')]) # Storing Culmen and Age when these phenotypes were measured from the Access List to a new variable. It is important to remember that the age is in days. I also included date of measurement and estimated date of hatch.
#View(CulmenAge) # Viewing the variable
Culmen_Cleaned <- CulmenAge[which(!is.na(CulmenAge$Culmen) & CulmenAge$AgeMeas != ""), ] # Removing any instances where data is either blank or "NA".
#View(Culmen_Cleaned) # Viewing cleaned data
plot(Culmen_Cleaned$Culmen~Culmen_Cleaned$AgeMeas) # Just a quick scatterplot to see where we stand at the moment.
ggplot(data = Culmen_Cleaned, aes(x = AgeMeas , y = Culmen)) + geom_point() + ggtitle("Culmen vs Age")
# There are 2,731 entries.
```
# Filtering Each Phenotype in Pairs
Filtering Weight and HeadLength:
```{r}
WeightHL <- merge(WeightAge.Cleaned, HeadlengthAge_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(WeightHL)
ggplot(data = WeightHL, aes(x = Weight, y = HeadLength)) + geom_point() + ggtitle("Weight & Head Length")
cor(WeightHL$Weight,WeightHL$HeadLength)
# There are 2,693 entries.
```
Filtering Weight and BillDepth:
```{r}
WeightBD <- merge(WeightAge.Cleaned, BillDepthAge.Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(WeightBD)
ggplot(data = WeightBD, aes(x = Weight, y = BillDepth)) + geom_point() + ggtitle("Weight & Bill Depth")
cor(WeightBD$Weight,WeightBD$BillDepth)
# There are 2,743 entries.
```
Filtering Weight and Bill Width:
```{r}
WeightBW <- merge(WeightAge.Cleaned, BillWidthAge_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(WeightBW)
ggplot(data = WeightBW, aes(x = Weight, y = BillWidth)) + geom_point() + ggtitle("Weight & Bill Width")
cor(WeightBW$Weight,WeightBW$BillWidth)
# There are 2,742 entries.
```
Filtering Weight and Tail Length:
```{r}
WeightTL <- merge(WeightAge.Cleaned, TailLengthAge_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(WeightTL)
ggplot(data = WeightTL, aes(x = Weight, y = TailLength)) + geom_point() + ggtitle("Weight & Tail Length")
cor(WeightTL$Weight,WeightTL$TailLength)
# There are 4,415 entries.
```
Filtering Weight and Head Breadth:
```{r}
WeightHB <- merge(WeightAge.Cleaned, HeadBreadthAge.Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(WeightHB)
ggplot(data = WeightHB, aes(x = Weight, y = HeadBreadth)) + geom_point() + ggtitle("Weight & Head Breadth")
cor(WeightHB$Weight,WeightHB$HeadBreadth)
# There are 691 entries.
```
Filtering Weight and Culmen:
```{r}
WeightCulmen <- merge(WeightAge.Cleaned, Culmen_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(WeightCulmen)
ggplot(data = WeightCulmen, aes(x = Weight, y = Culmen)) + geom_point() + ggtitle("Weight & Culmen")
cor(WeightCulmen$Weight,WeightCulmen$Culmen)
# There are 2,714 entries.
```
Filtering HeadLength and BillDepth:
```{r}
HLBD <- merge(HeadlengthAge_Cleaned, BillDepthAge.Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(HLBD)
ggplot(data = HLBD, aes(x = HeadLength, y = BillDepth)) + geom_point() + ggtitle("Head Length & Bill Depth")
cor(HLBD$HeadLength,HLBD$BillDepth)
# There are 2,624 entries.
```
Filtering HeadLength and BillWidth:
```{r}
HLBW <- merge(HeadlengthAge_Cleaned, BillWidthAge_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(HLBW)
ggplot(data = HLBW, aes(x = HeadLength, y = BillWidth)) + geom_point() + ggtitle("Head Length & Bill Width")
cor(HLBW$HeadLength,HLBW$BillWidth)
# There are 2,623 entries.
```
Filtering HeadLength and TailLength:
```{r}
HLTL <- merge(HeadlengthAge_Cleaned, TailLengthAge_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(HLTL)
ggplot(data = HLTL, aes(x = HeadLength, y = TailLength)) + geom_point() + ggtitle("Head Length & TailLength")
cor(HLTL$HeadLength,HLTL$TailLength)
# There are 2,218 entries.
```
Filtering HeadLength and Head Breadth:
```{r}
HLHB <- merge(HeadlengthAge_Cleaned, HeadBreadthAge.Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(HLHB)
ggplot(data = HLHB, aes(x = HeadLength, y = HeadBreadth)) + geom_point() + ggtitle("Head Length & Head Breadth")
cor(HLHB$HeadLength,HLHB$HeadBreadth)
# There are 694 entries.
```
Filtering Head Length and Culmen:
```{r}
HLCulmen <- merge(HeadlengthAge_Cleaned, Culmen_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(HLCulmen)
ggplot(data = HLCulmen, aes(x = HeadLength, y = Culmen)) + geom_point() + ggtitle("Head Length & Culmen")
cor(HLCulmen$HeadLength,HLCulmen$Culmen)
# There are 2,597 entries.
```
Filtering Bill Depth and Bill Width:
```{r}
BDBW <- merge(BillDepthAge.Cleaned, BillWidthAge_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(BDBW)
ggplot(data = BDBW, aes(x = BillDepth, y = BillWidth)) + geom_point() + ggtitle("Bill Depth & Bill Width")
cor(BDBW$BillDepth,BDBW$BillWidth)
# There are 2,757 entries.
```
Filtering Bill Depth and Tail Length:
```{r}
BDTL <- merge(BillDepthAge.Cleaned, TailLengthAge_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(BDTL)
ggplot(data = BDTL, aes(x = BillDepth, y = TailLength)) + geom_point() + ggtitle("Bill Depth & Tail Length")
cor(BDTL$BillDepth,BDTL$TailLength)
# There are 2,171 entries.
```
Filtering Bill Depth and Head Breadth:
```{r}
BDHB <- merge(BillDepthAge.Cleaned, HeadBreadthAge.Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(BDHB)
ggplot(data = BDHB, aes(x = BillDepth, y = HeadBreadth)) + geom_point() + ggtitle("Bill Depth & Head Breadth")
cor(BDHB$BillDepth,BDHB$HeadBreadth)
# There are 690 entries.
```
Filtering Bill Depth and Culmen:
```{r}
BDCulmen <- merge(BillDepthAge.Cleaned, Culmen_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(BDCulmen)
ggplot(data = BDCulmen, aes(x = BillDepth, y = Culmen)) + geom_point() + ggtitle("Bill Depth & Culmen")
cor(BDCulmen$BillDepth,BDCulmen$Culmen)
# There are 2,698 entries.
```
Filtering Bill Width and Tail Length:
```{r}
BWTL <- merge(BillWidthAge_Cleaned, TailLengthAge_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(BWTL)
ggplot(data = BWTL, aes(x = BillWidth, y = TailLength)) + geom_point() + ggtitle("Bill Width & Tail Length")
cor(BWTL$BillWidth,BWTL$TailLength)
# There are 2,170 entries.
```
Filtering Bill Width and Culmen:
```{r}
BWCulmen <- merge(BillWidthAge_Cleaned, Culmen_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(BWCulmen)
ggplot(data = BWCulmen, aes(x = BillWidth, y = Culmen)) + geom_point() + ggtitle("Bill Width & Culmen")
cor(BWCulmen$BillWidth,BWCulmen$Culmen)
# There are 2,697 entries.
```
Filtering Tail Length and Head Breadth:
```{r}
TLHB <- merge(TailLengthAge_Cleaned, HeadBreadthAge.Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(TLHB)
ggplot(data = TLHB, aes(x = TailLength, y = HeadBreadth)) + geom_point() + ggtitle("Tail Length & Head Breadth")
cor(TLHB$TailLength,TLHB$HeadBreadth)
# There are 624 entries.
```
Filtering Tail Length and Culmen:
```{r}
TLCulmen <- merge(TailLengthAge_Cleaned, Culmen_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(TLCulmen)
ggplot(data = TLCulmen, aes(x = TailLength, y = Culmen)) + geom_point() + ggtitle("Tail Length & Culmen")
cor(TLCulmen$TailLength,TLCulmen$Culmen)
# There are 2,149 entries.
```
Filtering Head Breadth and Culmen:
```{r}
HBCulmen <- merge(HeadBreadthAge.Cleaned, Culmen_Cleaned, by = "ID") # Since every Jay has a unique ID in the DEMO dataset, we can merge the two data lists horizontally together.
#View(HBCulmen)
ggplot(data = HBCulmen, aes(x = HeadBreadth, y = Culmen)) + geom_point() + ggtitle("Head Breadth & Culmen")
cor(HBCulmen$HeadBreadth,HBCulmen$Culmen)
# There are 693 entries.
```
The procedure to ensure that we are keeping only birds in the Demo tract and not in the peripheral territories is as follows (in pseudocode):
Go to the list all.breeders,
Ensure that the following columns have loaded properly: ID, TerrYr.
This is to make sure that the ID of the Jay can be matched with the territory and year, and thus the data can be properly filtered. The last two digits of TerrYr are the year the data was collected in.
TLDR: from all.breeders, keep the ID and TerrYr columns, then use merge() to join this new list with each phenotype table, using ID as the key.
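A toy, hedged sketch of this TLDR (stand-in data frames, not the real all.breeders or phenotype tables):
```{r}
# Hedged sketch of the pseudocode above with toy stand-ins for all.breeders and a phenotype table.
toy_breeders <- data.frame(ID = c("A1", "B2", "C3"), TerrYr = c("SAND95", "BEAR03", "SAND96"))
toy_weight   <- data.frame(ID = c("A1", "C3"), Weight = c(77.5, 80.1))
breeder_ids  <- toy_breeders[, c("ID", "TerrYr")]  # keep only ID and TerrYr
merge(breeder_ids, toy_weight, by = "ID")          # attach the territory-year info to each phenotype record
```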
```{r}
PerTer<-all.breeders[, c('TerrYr', 'ID', 'Tract', 'Year', 'Terr')] #Make a new list of just these columns
View(PerTer) # This has some "South" in it. Let's remove that. Also there are no PLEs in this.
PerTer_Cleaned<-na.omit(PerTer) # Removes all NA
View(PerTer_Cleaned)
PerTer_Cleaned1<-PerTer_Cleaned[PerTer_Cleaned$Tract!= "South",] # Removes all South tracts (filtering the NA-free table from the previous step).
View(PerTer_Cleaned1)
```
# Now to ensure that we are controlling for the correct years, because the Demo study tract stabilised around 1990.
```{r}
PerTer_Year<-PerTer_Cleaned1[PerTer_Cleaned1$Year >= "1990",]
View(PerTer_Year)
```
# Importing a list of peripheral territories to be removed. This list will be checked against our PerTer_Year, and any matches in the Terr column will be removed.
```{r}
#toremovePT<-read.csv("/cloud/project/PTNancy1.csv")
#View(toremovePT)
remove <- c("ATNT", "BEAR", "CHGR", "CHIK", "CIRB", "CIRC" ,"COTG" ,"EAGB", "EAGR", "FTZP", "GRVR", "GRVW", "HARB", "HILD" ,"HTOP", "KAJO", "LARB", "LARY", "MIDG", "NARY", "NENE" ,"NNAR", "OSEE", "PINN", "PITA", "PMPN", "RISE", "RTHA", "TOAK" ,"TP11", "WARY", "WCHK", "WICK" ,"WISB", "WISE", "SEGR", "WCIT", "PINE", "PINS", "SWCT", "CRIS", "FELD", "JESS", "KELV", "TNGL", "WINO" ,"WLAK") #This makes a variable "remove" that contains all the peripheral territories that need to be removed.
finalphenotype<-PerTer_Year[!PerTer_Year$Terr %in% remove,] # Removal successful!
View(finalphenotype)
```
Now hopefully this can be merged with our phenotypes, and although our numbers will decrease, we will get controlled data.
```{r}
# Trying to merge weight with the new periphery-controlled data, while marking the age of the Jays
# as nestlings, fledglings, juveniles, and yearlings. This will represent the final step of data
# filtering, and we can then move on to filtering the fitness measures and, ultimately, to results.
# This will also include a discussion of and control for covariates, along with potential fitness
# variables to regress against. Final files will be named systematically!
```
```{r}
#File name: WeightAge.Cleaned
#11,750 entries
WeightMerge<-merge(WeightAge.Cleaned, finalphenotype, by = "ID")
View(WeightMerge)
WeightMerge$AgeCategory<-99 #adding a new column to prepare for categorical ages such as juveniles, yearlings etc.
WeightMerge$AgeCategory[WeightMerge$AgeMeas<=11]<-"Nestling"
WeightMerge$AgeCategory[11<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=70]<-"Fledgling"
WeightMerge$AgeCategory[70<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=365]<-"Juveniles"
WeightMerge$AgeCategory[365<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=730]<-"Yearling"
WeightMerge$AgeCategory[730<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=1095]<-"Second Yearling"
WeightMerge$AgeCategory[1095<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=1460]<-"Third Yearling"
WeightMerge$AgeCategory[1460<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=1825]<-"Fourth Yearling"
WeightMerge$AgeCategory[1825<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=2190]<-"Fifth Yearling"
WeightMerge$AgeCategory[2190<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=2555]<-"Sixth Yearling"
WeightMerge$AgeCategory[2555<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=2920]<-"Seventh Yearling"
WeightMerge$AgeCategory[2920<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=3285]<-"Eighth Yearling"
WeightMerge$AgeCategory[3285<WeightMerge$AgeMeas & WeightMerge$AgeMeas<=3650]<-"Ninth Yearling"
#The number of entries dropped to 1,524.
```
|
dd3afb8b9b1f947793e9816c15e6583585ec99ec
|
722b1b22e1c2cf99d79d23dbe1181c103236b3b2
|
/Scripts/Data/ForStan/create_data_issue.R
|
1d08b94212d01952d1f07ae3c42fdd6e2bba171f
|
[] |
no_license
|
GFJudd33/UNHRCprocedures
|
4bd15c1a2410f64cf0a08386499260965b885859
|
8af19f845c4460535476cf579fa3b3eba71825ae
|
refs/heads/master
| 2020-12-24T16:07:33.194168
| 2016-03-10T13:49:28
| 2016-03-10T13:49:28
| 40,205,597
| 0
| 0
| null | 2015-08-04T19:57:55
| 2015-08-04T19:45:42
| null |
UTF-8
|
R
| false
| false
| 2,286
|
r
|
create_data_issue.R
|
################################################################################
###
###
### Code to create data object for use in Stan code.
### Authors: Gleason Judd, Brad Smith
###
### Created: 11/12/2014
### Last modified: 11/12/2014
###
###
### Purpose: This script creates a data object used to estimate the model with
### issue-specific alphas.
###
###
###
################################################################################
rm(list=ls())
library(foreign)
setwd("~/Google Drive/Research/IO_latent.engage")
data <- read.dta("Data/IOfull.dta")
cyear <- read.dta("Data/CtryYearIO.dta")
####### Build Dataset ########
# Generate issue-area variable
data$issue <- as.numeric(as.factor(data$ResponseScore))
# Subset full data to include only needed Variables in engagement dataset
eng <- as.data.frame(cbind(data[,c("Country",
"Year",
"COWid",
"QualCode",
"issue")]))
# Generate country year ID, sort data by this
eng$ID <- eng$Year*1000+eng$COWid
order.eng <- eng[order(eng$ID),]
# Pull out identifier for the loop
EngID <- sort(unique(eng$ID))
X <- matrix(0,length(EngID),max(cyear$CompCount))
Y <- matrix(0, length(EngID), max(cyear$CompCount))
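# Added note: X and Y have one row per country-year (EngID) and one column per possible
# complaint (max CompCount); X[i, j] holds the QualCode of the j-th response for
# country-year i and Y[i, j] holds its issue code. Unused cells remain 0.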
for (i in 1:length(EngID)){
# Subset original matrix for the given ID variable
sub <- eng[eng$ID==EngID[i],]
# Fill in rows of X object with corresponding Quality Scores
for(j in 1:length(sub$QualCode)){
X[i,j] <- sub$QualCode[j]
Y[i,j] <- sub$issue[j]
}
}
# Now generate a count of responses for each country year
A <- rep(0,nrow(X))
for (i in 1:nrow(X)){
A[i] <- sum(as.numeric(as.logical(X[i,]>0)))
}
# Now generate complaint counts for each country year
# This is for use in the poisson, not needed for stan_v4
cyear$ID <- cyear$Year*1000+cyear$COWid
Z <- as.data.frame(cyear[,c("ID",
"CompCount")])
CountID <- Z$ID
Z <- Z$CompCount
# Clear Workspace, save data object
rm(list=setdiff(ls(),c("X","Y","Z","EngID", "CountID","A")))
# Save this as object engagement.R
save.image("~/Google Drive/Research/IO_latent.engage/Data/engagement_issue.Rdata")
|
26ea42eb12c25ab9cc1483ed61ba63fe8b2244ab
|
42887bb4d93c96125f492852176283469dcb771e
|
/R/BCG.Metric.Membership.R
|
61860b8d2bbba977233f0024ce002ea3ee6baa9e
|
[
"MIT"
] |
permissive
|
leppott/BCGcalc
|
a9fb1aa7d686fc53b9833dc1e3b12c6751e5d3ba
|
e4e226581c79c3603f7e65a7b523c61a6b48cbca
|
refs/heads/main
| 2023-09-01T20:52:27.555241
| 2023-08-29T20:29:44
| 2023-08-29T20:29:44
| 121,774,105
| 4
| 0
|
MIT
| 2021-12-01T21:54:21
| 2018-02-16T16:36:58
|
R
|
UTF-8
|
R
| false
| false
| 9,039
|
r
|
BCG.Metric.Membership.R
|
#' @title BCG Metric Membership
#'
#' @description Biological Condition Gradient fuzzy membership for metrics.
#'
#' @details Converts metric values into BCG membership values.
#' Uses a rules table to define the metrics, scoring range, and direction for
#' each named index.
#'
#' Deprecated col_SITE_TYPE for col_INDEX_CLASS in v2.0.0.9001.
#'
#' @param df.metrics Wide data frame with metric values to be evaluated.
#' @param df.rules Data frame of metric thresholds to check.
#' @param input.shape Shape of df.metrics; wide or long. Default is wide.
#' @param col_SAMPLEID Column name for sample id. Default = "SAMPLEID"
#' @param col_INDEX_NAME Column name for index name. Default = "INDEX_NAME"
#' @param col_INDEX_CLASS Column name for index class. Default = "INDEX_CLASS"
#' @param col_LEVEL Column name for level. Default = "LEVEL"
#' @param col_METRIC_NAME Column name for metric name. Default = "METRIC_NAME"
#' @param col_RULE_TYPE Column name for rule type (e.g., Rule0).
#' Default = "RULE_TYPE"
#' @param col_LOWER Column name for lower limit. Default = "LOWER"
#' @param col_UPPER Column name for upper limit. Default = "UPPER"
#' @param col_METRIC_VALUE Column name for metric value.
#' Default = "METRIC_VALUE"
#' @param col_INCREASE Column name for if the metric increases.
#' Default = "INCREASE"
#' @param ... Arguments passed to `BCG.MetricMembership` used internally
#'
#' @return Returns a data frame of results in the long format.
#'
#' @examples
#' # library(readxl)
#' # library(BioMonTools)
#'
#' # Calculate Metrics
#' df_samps_bugs <- readxl::read_excel(
#' system.file("extdata/Data_BCG_PugLowWilVal.xlsx"
#' , package = "BCGcalc")
#' , guess_max = 10^6)
#' myDF <- df_samps_bugs
#' myCols <- c("Area_mi2", "SurfaceArea", "Density_m2", "Density_ft2")
#' # populate missing columns prior to metric calculation
#' col_missing <- c("INFRAORDER", "HABITAT", "ELEVATION_ATTR", "GRADIENT_ATTR"
#' , "WSAREA_ATTR", "HABSTRUCT", "UFC")
#' myDF[, col_missing] <- NA
#' df_met_val_bugs <- BioMonTools::metric.values(myDF
#' , "bugs"
#' , fun.cols2keep = myCols)
#'
#' # Import Rules
#' df_rules <- readxl::read_excel(system.file("extdata/Rules.xlsx"
#' , package = "BCGcalc")
#' , sheet="Rules")
#'
#' # Run function
#' df_met_memb <- BCG.Metric.Membership(df_met_val_bugs, df_rules)
#'
#' # Show Results
#' #View(df_met_memb)
#'
#' # Save Results
#' write.table(df_met_memb
#' , file.path(tempdir(), "Metric_Membership.tsv")
#' , row.names = FALSE
#' , col.names = TRUE
#' , sep = "\t")
#~~~~~~~~~~~~~~~~~~~~~~~~~~
# QC
# df.metrics <- df_met_val_bugs
# df.rules <- df_rules
# input.shape <- "wide"
# scores <- BCG.Metric.Membership(df.metrics, df.rules, "wide")
# col_SAMPLEID = "SAMPLEID"
# col_INDEX_NAME = "INDEX_NAME"
# col_INDEX_CLASS = "INDEX_CLASS"
# col_LEVEL = "LEVEL"
# col_METRIC_NAME = "METRIC_NAME"
# col_RULE_TYPE = "RULE_TYPE"
# col_LOWER = "LOWER"
# col_UPPER = "UPPER"
# col_METRIC_VALUE = "METRIC_VALUE"
# col_INCREASE = "INCREASE"
#~~~~~~~~~~~~~~~~~~~~~~~~~~
#' @export
BCG.Metric.Membership <- function(df.metrics
, df.rules
, input.shape = "wide"
, col_SAMPLEID = "SAMPLEID"
, col_INDEX_NAME = "INDEX_NAME"
, col_INDEX_CLASS = "INDEX_CLASS"
, col_LEVEL = "LEVEL"
, col_METRIC_NAME = "METRIC_NAME"
, col_RULE_TYPE = "RULE_TYPE"
, col_LOWER = "LOWER"
, col_UPPER = "UPPER"
, col_METRIC_VALUE = "METRIC_VALUE"
, col_INCREASE = "INCREASE"
, ...) {
# QC
# DEPRECATE SITE_TYPE
if (exists("col_SITE_TYPE")) {
col_INDEX_CLASS <- col_SITE_TYPE
msg <- "The parameter 'col_SITE_TYPE' was deprecated in v2.0.0.9001. \n
Use 'col_INDEX_CLASS' instead."
message(msg)
} ## IF ~ col_SITE_TYPE
# scrub off "Tibble" as it throws off other data operations below
df.metrics <- as.data.frame(df.metrics)
df.rules <- as.data.frame(df.rules)
# QC, Column names
## use inputs
#
# Metrics to long
if (input.shape == "wide") {##IF.input.shape.START
df.long <- reshape2::melt(df.metrics
, id.vars = c(col_SAMPLEID
, col_INDEX_NAME
, col_INDEX_CLASS)
, variable.name = col_METRIC_NAME
, value.name = col_METRIC_VALUE)
} else {
df.long <- df.metrics
}##IF.input.shape.END
#
# ColNames to Upper Case
## has to be df.long; if df.metrics were upper-cased, the metric names would become upper case
names(df.long) <- toupper(names(df.long))
names(df.rules) <- toupper(names(df.rules))
#
# INDEX_CLASS to lowercase
df.long[, col_INDEX_CLASS] <- tolower(df.long[, col_INDEX_CLASS])
df.rules[, col_INDEX_CLASS] <- tolower(df.rules[, col_INDEX_CLASS])
#
# Extra columns may have text (convert to numeric)
suppressWarnings(df.long[, col_METRIC_VALUE] <- as.numeric(df.long[
, col_METRIC_VALUE]))
#
# Check for Missing Metrics (only for index provided in metric df)
## ignore site type for checking
### added back 20220214, for when run a single index region
# & rules has more than one
### and metrics are not the same in each region
index.data <- unique(df.long[, col_INDEX_NAME])
index.data.region <- unique(df.long[, col_INDEX_CLASS])
rules.metrics.names <- unique(df.rules[(df.rules[, col_INDEX_NAME] %in%
index.data
& df.rules[, col_INDEX_CLASS] %in%
index.data.region)
, col_METRIC_NAME])
rules.metrics.TF <- rules.metrics.names %in% unique(df.long[
, col_METRIC_NAME])
rules.metrics.len <- length(rules.metrics.names)
#
if (sum(rules.metrics.TF) != rules.metrics.len) {##IF.RulesCount.START
Msg <- paste0("Data provided does not include all metrics in rules table. "
, "The following metrics are missing: "
, paste(rules.metrics.names[!rules.metrics.TF]
, collapse = ", "))
stop(Msg)
}##IF.RulesCount.END
# merge metrics and checks
df.merge <- merge(df.long, df.rules
, by.x = c(col_INDEX_NAME, col_INDEX_CLASS, col_METRIC_NAME)
, by.y = c(col_INDEX_NAME, col_INDEX_CLASS, col_METRIC_NAME))
#
# The above only returns a single match, not all.
# dplyr version
#df.merge2 <- df.long %>% left_join(df.rules)
#
# Excel FuzzyMembership function is much simpler than the Access code
# need to apply only to select rows
df.merge[, "MEMBERSHIP"] <- NA
#
boo.score.0 <- df.merge[, col_METRIC_VALUE] <= df.merge[, col_LOWER]
#
boo.score.1 <- df.merge[, col_METRIC_VALUE] >= df.merge[, col_UPPER]
#
# Use ifelse() to avoid errors with NA
df.merge[, "MEMBERSHIP"] <- ifelse(boo.score.0
, 0
, ifelse(boo.score.1
, 1
, NA)
)
#
boo.score.calc <- is.na(df.merge[,"MEMBERSHIP"])
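# Added note: for metric values strictly between LOWER and UPPER, membership is a linear
# interpolation, (value - LOWER) / (UPPER - LOWER); the expression below splits that
# ratio into two terms.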
df.merge[boo.score.calc, "MEMBERSHIP"] <- df.merge[boo.score.calc
, col_METRIC_VALUE] /
(df.merge[boo.score.calc, col_UPPER] - df.merge[boo.score.calc
, col_LOWER]) -
df.merge[boo.score.calc, col_LOWER] /
(df.merge[boo.score.calc, col_UPPER] - df.merge[boo.score.calc, col_LOWER])
# direction
boo.direction <- df.merge[, col_INCREASE]
df.merge[!boo.direction, "MEMBERSHIP"] <- 1 - df.merge[!boo.direction
, "MEMBERSHIP"]
# can mess up 0 and 1
# Access uses 2 different formulas
# wide name
df.merge[,"NAME_WIDE"] <- paste0(df.merge[, col_INDEX_CLASS]
, "_L"
, df.merge[, col_LEVEL]
, "_"
, df.merge[, col_RULE_TYPE]
, "_"
, df.merge[, col_METRIC_NAME])
#
# create output
return(df.merge)
#
}##FUNCTION.END
|
df2912fd783ee283f10fe1e8cef55db9e3251710
|
a474308e2677232a30a60c2604627aa9b7a23f7a
|
/R/loom.R
|
7d0c7a2184d8b19b75fcfbe7216c79976eed6099
|
[] |
no_license
|
antortjim/sleepapp
|
3dcea8ddf1a8bbc65fc3419bf9ecc0a32368d744
|
1ad9ac5e5d5d6034c2f49b3a4b7fd6c8cbbc27f6
|
refs/heads/master
| 2023-02-04T14:50:27.497147
| 2020-12-23T19:16:37
| 2020-12-23T19:16:37
| 316,051,219
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 4,614
|
r
|
loom.R
|
#' Load a Scope loomfile into R
#'
#' Open a Scope loomfile and load its data
#' into R as a SingleCellExperiment object
#' @param path Path to a Scope loomfile
#' @import rhdf5
#' @import HDF5Array
#' @import SingleCellExperiment
#' @details A scope loomfile is an HDF5 file with the following structure:
#' \itemize{
#' \item \strong{matrix}: counts matrix
#' \item \strong{col_attrs}: cell metadata. Cell identifiers (barcodes) are available under the CellID field
#' \item \strong{row_attrs}: gene metadata. Gene identifiers are available under the Gene field
#' \item Extra layers may be available
#' }
#' @export
loom2sce <- function(path) {
message(paste0("Loading loomfile -> ", path))
sce <- tryCatch({
# Extracting the count matrix.
mat <- HDF5Array::HDF5Array(path, "matrix")
mat <- t(mat)
# Extracting the row and column metadata.
col.attrs <- rhdf5::h5read(path, "col_attrs")
if (length(col.attrs)) {
col.df <- data.frame(col.attrs)
row.names(col.df) <- col.attrs$CellID
} else {
col.df <- DataFrame(matrix(0, nrow(mat), 0))
}
row.attrs <- rhdf5::h5read(path, "row_attrs")
if (length(row.attrs)) {
row.df <- data.frame(row.attrs)
row.names(row.df) <- row.attrs$Gene
} else {
row.df <- NULL
}
# Extracting layers (if there are any)
optional <- rhdf5::h5ls(path)
is.layer <- optional$group=="/layer"
if (any(is.layer)) {
layer.names <- optional$name[is.layer]
other.layers <- vector("list", length(layer.names))
names(other.layers) <- layer.names
for (layer in layer.names) {
current <- HDF5Array::HDF5Array(path, file.path("/layer", layer))
other.layers[[layer]] <- t(current)
}
} else {
other.layers <- list()
}
# Returning SingleCellExperiment object.
sce <- SingleCellExperiment::SingleCellExperiment(c(counts=mat, other.layers), rowData=row.df, colData=col.df)
}, error = function(e) {
message("Loomfile unavailable. Trying again in 60 seconds")
Sys.sleep(60)
loom2sce(path)
})
return(sce)
}
#' Export a SingleCellExperiment as a loomfile
#'
#' @param sce SingleCellExperiment instance
#' @param output_loomfile Path to newly created loomfile. It must not exist
#'# @param clean If TRUE, all columns containing ClusterMarkers in gene metadata are dropped
#' @import SingleCellExperiment
#' @import loomR
#' @importFrom dplyr select
#' @export
sce2loom <- function(sce, output_loomfile) {
features <- colnames(rowData(sce))
features <- features[features != "Gene"]
if (length(features) == 0) {
features_metadata <- NULL
} else {
features_metadata <- as.list(rowData(sce)[, features, drop=FALSE])
names(features_metadata) <- features
}
# Cell metadata
cell_metadata <- as.list(as.data.frame(colData(sce)) %>% dplyr::select(-CellID))
if (length(cell_metadata) == 0) cell_metadata <- NULL
names(features_metadata)
# Open loom connection and fill it with data
loomcon <- loomR::create(
filename = output_loomfile,
data = counts(sce),
cell.attrs = cell_metadata,
feature.attrs = features_metadata,
transpose = TRUE # because counts(sce) returns features as rows
)
# Close the connection!!!!
loomcon$close_all()
}
#' Keep cells whose barcode is included in the barcodes_file
#'
#' The cell barcode must be available
#' in the CellID column of the colData dataframe in the sce object
#' @param sce A SingleCellExperiment object
#' @param barcodes_file A txt file leading to cell barcodes, one per line
#' @export
filter_sce <- function(sce, barcodes_file, verbose=0) {
barcodes <- read.table(barcodes_file)[, 1]
keep_cells <- (colData(sce)[,"CellID"] %in% barcodes)
if (verbose) print(table(keep_cells))
stopifnot(nrow(colData(sce)) != 0)
return(sce[,keep_cells])
}
#' @importFrom data.table fwrite
#' @import SingleCellExperiment
#' @export
sce2csv <- function(sce, output, assay) {
if (is.null(names(assays(sce))))
sce_counts <- assays(sce)
else
sce_counts <- assays(sce)[[assay]]
data.table::fwrite(x = as.data.frame(as.matrix(sce_counts)), file = output[1], row.names = F, col.names = F)
data.table::fwrite(x = as.data.frame(colData(sce)), file = output[2])
data.table::fwrite(x = as.data.frame(rowData(sce)), file = output[3])
}
|
57b08ab79da65126158c3a7c96fab71d95d9e7c4
|
ee0689132c92cf0ea3e82c65b20f85a2d6127bb8
|
/06-DS/15a-objects.R
|
924d23ca3e27c8e789439fc006630f4823dcf0ed
|
[] |
no_license
|
DUanalytics/rAnalytics
|
f98d34d324e1611c8c0924fbd499a5fdac0e0911
|
07242250a702631c0d6a31d3ad8568daf9256099
|
refs/heads/master
| 2023-08-08T14:48:13.210501
| 2023-07-30T12:27:26
| 2023-07-30T12:27:26
| 201,704,509
| 203
| 29
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,018
|
r
|
15a-objects.R
|
# Objects
#
m1 = matrix(c(10:1, rep(5,10), rep(c(5,6),5),seq_len(length.out=10)), byrow=F, ncol =4)
colnames(m1) = c('sub1','sub2','sub3','sub4')
rownames(m1) = paste('R',1:10,sep='')
a1= array(1:24, dim=c(4,3,2), dimnames = list( c(paste('c',1:4,sep='')), c('d1','d2','d3'),c('s1','s2')) )
a1
df1 = data.frame(sub1=10:1, sub2=5, sub3=rep(c(5,6),5), sub4=seq_len(length.out=10))
df1
# Lists
list1 = list(sub1=10:1, sub2=rep(5,3), sub3=rep(c(5,6),4),sub4=seq_len(length.out=10))
list1
list2 = list(num=1:10, vec=c(1:5, 4:5, 6:8, NA, 9, 12, 17), lg=log(1:5))
list2
#Data Frame df3
newnum = c(2:5, 4:5, 6:8, 9,17)
fac1 = factor(c(rep("A", 3), rep("B", 3), rep("C", 3), rep("D",2)))
fac2 = gl(n=2, k=1, length=11, labels = month.abb[1:2])
newnum
fac2
fac1
df3 = data.frame(response = newnum, pred1 = fac1, pred2 = fac2)
df3
#rm(list=ls())
student1 <- readRDS("student1.rds")
# Dataframe student
str(student1)
s1 = student1[,c('br', 'city','java','dbms', 'dwm','vlsi', 'cpp', 'cbnst')]
str(s1)
student1[,c(15:22)]
|
01b491b796e1532253e1c4a4e013625c9adaf59f
|
fad0b084f2b9bd157e27cb642f137fad3d14d8b8
|
/man/get_grades_f2.Rd
|
4fcb594f236d53af9c5b1456e445c07d2d933da2
|
[] |
no_license
|
thelayc/laycReportCards
|
7414a5754dd44cf8789cff6313809388a347b772
|
ee1ed75d13a41eed0a06df581ac4451efd62d46a
|
refs/heads/master
| 2016-09-06T12:41:09.314066
| 2016-03-09T03:07:48
| 2016-03-09T03:07:48
| 32,465,167
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 613
|
rd
|
get_grades_f2.Rd
|
% Generated by roxygen2 (4.1.0): do not edit by hand
% Please edit documentation in R/11_get_grades_f2.R
\name{get_grades_f2}
\alias{get_grades_f2}
\title{get_grades_f2()}
\usage{
get_grades_f2(student_rcard)
}
\arguments{
\item{student_rcard}{A vector containing a student's report card information}
}
\description{
This function takes a student's report card as input (vector), and extract the student's grades and courses.
}
\examples{
parsed_pdf <- read_pdf('my_pdf_file.pdf')
students_list <- split_students(parsed_pdf)
student_rcard <- students_list[[1]]
get_grades(student_rcard)
}
\keyword{student_rcard}
|
6fc8fbd1b29fcd42c14fdbe73f38cf892644175e
|
9a4518c0ac57cfaffd4069a39fcdccd6a8173949
|
/R/geom-spatial.R
|
8c4223249e1f19d9ca713fcfb496a9a9122cfdc6
|
[] |
no_license
|
paleolimbot/ggspatial
|
78c1047e344ec658d092851ce6fb2a83d10c5db3
|
5c4c903a0785702d83acfe6d9753294882ed676c
|
refs/heads/master
| 2023-08-19T22:53:31.102638
| 2023-08-18T00:27:19
| 2023-08-18T00:27:19
| 63,102,201
| 357
| 37
| null | 2023-08-18T00:27:20
| 2016-07-11T21:06:12
|
R
|
UTF-8
|
R
| false
| false
| 6,748
|
r
|
geom-spatial.R
|
#' Spatial-aware ggplot2 layers
#'
#' These layers are much like their counterparts, [stat_identity][ggplot2::stat_identity],
#' [geom_point][ggplot2::geom_point], [geom_path][ggplot2::geom_path],
#' and [geom_polygon][ggplot2::geom_polygon], except they have a `crs` argument that
#' ensures they are projected when using [coord_sf][ggplot2::coord_sf]. Stats are applied to the x and y coordinates
#' that have been transformed.
#'
#' @param mapping An aesthetic mapping created with [ggplot2::aes()].
#' @param data A data frame or other object, coerced to a data.frame by [ggplot2::fortify()].
#' @param crs The crs of the x and y aesthetics, or NULL to use default lon/lat
#' crs (with a message).
#' @param geom The geometry to use.
#' @param position The position to use.
#' @param ... Passed to the combined stat/geom as parameters or fixed aesthetics.
#' @param show.legend,inherit.aes See [ggplot2::layer()].
#'
#' @return A [ggplot2::layer()].
#' @export
#'
#' @examples
#' cities <- data.frame(
#' x = c(-63.58595, 116.41214, 0),
#' y = c(44.64862, 40.19063, 89.9),
#' city = c("Halifax", "Beijing", "North Pole")
#' )
#'
#' library(ggrepel)
#' ggplot(cities, aes(x, y)) +
#' geom_spatial_point(crs = 4326) +
#' stat_spatial_identity(aes(label = city), geom = "label_repel") +
#' coord_sf(crs = 3857)
#'
stat_spatial_identity <- function(
mapping = NULL, data = NULL, crs = NULL, geom = "point",
position = "identity", ..., show.legend = NA, inherit.aes = TRUE
) {
ggplot2::layer(
data = data, mapping = mapping, stat = StatSpatialIdentity,
geom = geom, position = position, show.legend = show.legend,
inherit.aes = inherit.aes, params = list(na.rm = FALSE, crs = crs, ...)
)
}
#' @rdname stat_spatial_identity
#' @export
geom_spatial_point <- function(mapping = NULL, data = NULL, crs = NULL, ...) {
ggplot2::geom_point(mapping = mapping, data = data, stat = StatSpatialIdentity, crs = crs, ...)
}
#' @rdname stat_spatial_identity
#' @export
geom_spatial_path <- function(mapping = NULL, data = NULL, crs = NULL, ...) {
ggplot2::geom_path(mapping = mapping, data = data, stat = StatSpatialIdentity, crs = crs, ...)
}
#' @rdname stat_spatial_identity
#' @export
geom_spatial_polygon <- function(mapping = NULL, data = NULL, crs = NULL, ...) {
geom_polypath(mapping = mapping, data = data, stat = StatSpatialIdentity, crs = crs, ...)
}
#' @rdname stat_spatial_identity
#' @export
geom_spatial_text <- function(mapping = NULL, data = NULL, crs = NULL, ...) {
ggplot2::geom_text(mapping = mapping, data = data, stat = StatSpatialIdentity, crs = crs, ...)
}
#' @rdname stat_spatial_identity
#' @export
geom_spatial_label <- function(mapping = NULL, data = NULL, crs = NULL, ...) {
ggplot2::geom_label(mapping = mapping, data = data, stat = StatSpatialIdentity, crs = crs, ...)
}
#' @rdname stat_spatial_identity
#' @export
geom_spatial_text_repel <- function(mapping = NULL, data = NULL, crs = NULL, ...) {
ggrepel::geom_text_repel(mapping = mapping, data = data, stat = StatSpatialIdentity, crs = crs, ...)
}
#' @rdname stat_spatial_identity
#' @export
geom_spatial_label_repel <- function(mapping = NULL, data = NULL, crs = NULL, ...) {
ggrepel::geom_label_repel(mapping = mapping, data = data, stat = StatSpatialIdentity, crs = crs, ...)
}
#' Coordinate transform
#'
#' Coordinate transform, propagating non-finite cases.
#'
#' @param x The x coordinate
#' @param y The y coordinate
#' @param from From CRS
#' @param to To CRS
#' @param na.rm If FALSE, warn about non-finite cases.
#'
#' @return A data.frame with x and y components.
#' @export
#'
#' @examples
#' xy_transform(c(1, 2, 3), c(1, 2, 3), to = 3857)
#' xy_transform(c(1, 2, 3), c(NA, NA, NA), to = 3857)
#' xy_transform(c(1, 2, 3), c(NA, 2, 3), to = 3857)
#' xy_transform(c(1, 2, 3), c(1, 2, NA), to = 3857)
#'
xy_transform <- function(x, y, from = 4326, to = 4326, na.rm = FALSE) {
from <- sf::st_crs(from)
to <- sf::st_crs(to)
finite <- is.finite(x) & is.finite(y)
if(!all(finite) && !na.rm) warning(sum(!finite), " non-finite points removed by xy_transform()")
# if none are finite, return none
if(!any(finite)) {
return(data.frame(x = rep(NA_real_, length(x)), y = rep(NA_real_, length(y))))
}
# no transform necessary if CRS is equal
if(from == to) return(data.frame(x = x, y = y))
# create coordinates for finite, infinite cases
df_finite <- data.frame(id = which(finite), X = x[finite], Y = y[finite])
if(any(!finite)) {
df_non_finite <- data.frame(id = which(!finite), X = NA_real_, Y = NA_real_)
} else {
df_non_finite <- data.frame(id = numeric(0), X = numeric(0), Y = numeric(0))
}
sf_finite <- sf::st_as_sf(df_finite, coords = c("X", "Y"), crs = from)
# finite points get transformed
sf_finite_trans <- sf::st_transform(sf_finite, crs = to)
df_finite_trans <- as.data.frame(sf::st_coordinates(sf_finite_trans))
df_finite_trans$id <- which(finite)
# non-finite points get rbinded
df_trans <- rbind(
df_finite_trans,
df_non_finite
)
# return arranged by id, without id column
df_trans <- df_trans[order(df_trans$id), c("X", "Y")]
names(df_trans) <- c("x", "y")
rownames(df_trans) <- NULL
df_trans
}
#' Create spatial-aware stat transformations
#'
#' @param ParentStat The parent Stat
#' @param class_name The class name
#'
#' @return A ggproto Stat subclass
#' @noRd
#'
create_spatial_stat_class <- function(ParentStat, class_name) {
ggplot2::ggproto(
class_name,
ParentStat,
extra_params = c(ParentStat$extra_params, "crs"),
required_aes = unique(c("x", "y", ParentStat$required_aes)),
compute_layer = function(self, data, params, layout) {
if(is.null(params$crs)) {
message("Assuming `crs = 4326` in ", class_name, "()")
from_crs <- sf::st_crs(4326)
} else {
from_crs <- sf::st_crs(params$crs)
}
if(!is.null(layout$coord_params$crs)) {
# project data XY coordinates
if(!all(c("x", "y") %in% colnames(data))) {
stop("Missing required aesthetics x, y in ", class_name, "()")
}
# project `x` and `y`
data[c("x", "y")] <- xy_transform(
data$x, data$y,
from = from_crs,
to = layout$coord_params$crs
)
} else {
warning(
"Ignoring transformation in ", class_name, "(). Use coord_sf() with a crs to project this layer.",
call. = FALSE
)
}
# do whatever the parent geom was going to do with it
ggplot2::ggproto_parent(ParentStat, self)$compute_layer(data, params, layout)
}
)
}
# the workhorses of the above functions
StatSpatialIdentity <- create_spatial_stat_class(ggplot2::StatIdentity, "stat_spatial_identity")
|
f865633ebb5b228c41ef158770d500a0b43fae9c
|
cb53f7e3d95513a4ef7ac086b5b4c34ba62ed1b3
|
/man/options_locales.Rd
|
fa6138f5a5454638f680fd20eb9672e99c09bc7e
|
[] |
no_license
|
24p11/dimRactivite
|
c381a4b273dc7b9694169612a45c08bd1d0841f8
|
619bd12fa334cfac4fd9ea32b04b9fd777a34f83
|
refs/heads/master
| 2021-06-29T20:48:44.231817
| 2021-02-15T08:06:39
| 2021-02-15T08:06:39
| 198,663,596
| 3
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 961
|
rd
|
options_locales.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/fonctions_complementaires.R
\name{options_locales}
\alias{options_locales}
\title{Applying local options for counting stays in the activity data}
\usage{
options_locales(DF, val = NULL, niveau = NULL)
}
\arguments{
\item{val}{character string, the geographic variable chosen for the selection (e.g. service, pole, ...).}
\item{niveau}{character string, the geographic variable chosen for the selection (e.g. service, pole, ...).}
\item{df}{a rum/rsa-type tibble, used among other things as input to \code{\link{get_data}}}
}
\value{
a rum/rsa-type tibble: the local tibble to which the duplicate-flag variable has been added.
}
\description{
The function adds a duplicate flag variable (TRUE / FALSE) to a rum/rsa-type object,
which is used in the dashboards to deduplicate stays when they are counted
}
\examples{
}
|
554440fc92a37456d8b73caf80cb819f5a17e366
|
e7d40077078eae86b06770e95474d245b33472a1
|
/tests/testthat.R
|
806430a816b241c631e11b1f278da0eb9d78b541
|
[
"MIT"
] |
permissive
|
lpantano/DEGreport
|
1f90ac81886da7b96c024dfc8dbfe4831cf20469
|
0e961bfc129aab8b70e50892cb017f6668002e1a
|
refs/heads/main
| 2023-01-31T23:33:51.568775
| 2022-11-22T14:40:17
| 2022-11-22T14:40:17
| 17,710,312
| 20
| 14
|
MIT
| 2023-01-20T13:55:22
| 2014-03-13T13:06:49
|
R
|
UTF-8
|
R
| false
| false
| 93
|
r
|
testthat.R
|
library(testthat)
library(edgeR)
library(DESeq2)
library(DEGreport)
test_check("DEGreport")
|
1d625a638e3d8fb1e0a912dc6e501ee6453c9c2c
|
affee151ef20940e52eea1473635c8f4e35b65de
|
/man/allen.string.from.result.Rd
|
1ef6dab94c8c7b0dc20f41dcbbb0d8834b5dfc13
|
[] |
no_license
|
tsdye/allen.archaeology
|
d433c346b6ae93935cb369a8dd917e267aee2cb0
|
ae1e3806df684ffa27fbf2ec1645f178ba101e18
|
refs/heads/master
| 2023-04-11T14:12:16.751804
| 2023-03-25T12:59:26
| 2023-03-25T12:59:26
| 245,044,592
| 2
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 503
|
rd
|
allen.string.from.result.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/set.R
\name{allen.string.from.result}
\alias{allen.string.from.result}
\title{String representation of an Allen set from a result vector}
\usage{
allen.string.from.result(result.vector)
}
\arguments{
\item{result.vector}{A result vector}
}
\value{
A string representing an Allen set
}
\description{
Given a result vector, return a string corresponding to the non-zero
elements of the result vector
}
\author{
Thomas S. Dye
}
|
294acf2091659ea25deea120858c57834e322e95
|
256c40c02a738a59541a8699491b08b21f458991
|
/Code/mapping.R
|
d8bbce9057ebdd696355648e572558a57e314d77
|
[] |
no_license
|
dmorison/GIS-in-R-spatial-crime-data-analysis
|
5d874c22926476b4e5d0f64a3555bb1cd6a7ca9e
|
60963d56b7d0de5be9f90f3a96fcba2e6e71fccf
|
refs/heads/master
| 2021-01-22T17:49:08.045108
| 2017-04-27T07:22:22
| 2017-04-27T07:22:22
| 85,037,504
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 3,384
|
r
|
mapping.R
|
libs <- c("ggplot2", "rgeos", "rgdal", "maps", "mapdata", "mapproj", "maptools", "sp", "ggmap")
lapply(libs, library, character.only = TRUE)
### Variables to keep
retain <- c("Month", "Longitude", "Latitude", "LSOA.code", "LSOA.name", "Crime.type")
### Import the City of London dataset
city.init <- read.csv("Data/city-of-london-police-coord-data/2016-12/2016-12-city-of-london-street.csv")
city <- city.init[which(complete.cases(city.init$Longitude)), ]
city <- city[retain]
### Import the Met dataset
met.init <- read.csv("Data/met-police-coord-data/2016-12/2016-12-metropolitan-street.csv")
met <- met.init[which(complete.cases(met.init$Longitude)), ]
met <- met[retain]
all <- rbind(city, met)
all <- all[which(all$Latitude > 51.275), ]
all <- all[which(all$Latitude < 51.7), ]
all <- all[which(all$Longitude > -0.53), ]
all <- all[which(all$Longitude < 0.35), ]
df <- all
bike <- all[all$Crime.type == "Bicycle theft", ]
###
# map("world2Hires", "UK")
# points(mapproject(x = city.init$Longitude, y = city.init$Latitude))
###
dir_1 <- "References/Creating-maps-in-R-master/Creating-maps-in-R-master/data/"
dir_2 <- "Data/statistical-gis-boundaries-london/ESRI/"
ldn1 <- readOGR(file.path(dir_1), layer = "london_sport")
proj4string(ldn1) <- CRS("+init=epsg:27700")
ldn1.wgs84 <- spTransform(ldn1, CRS("+init=epsg:4326"))
ggplot(ldn1.wgs84) + geom_polygon(aes(x = long, y = lat, group = group), fill = "white", colour = "black") +
geom_point(data = bike, aes(x = Longitude, y = Latitude), colour = "red") +
theme(axis.title = element_blank(), text = element_text(size = 14, face = "bold")) +
labs(title = "Bicycle theft in Greater London - December 2016")
# plot the area codes
ggplot(ldn1.wgs84) + geom_polygon(aes(x = long, y = lat, group = group)) +
geom_point(data = all, aes(x = Longitude, y = Latitude, colour = LSOA.name)) +
theme(axis.title = element_blank(), text = element_text(size = 14, face = "bold"), legend.position = "none") +
scale_colour_manual(values = rainbow(4991)) +
labs(title = "Boroughs of London distinguished by crime incidences",
subtitle = "Shading of colours within each borough represent the localised areas")
### wrong projections
map1 <- ggplot(ldn1)
map1 <- map1 + geom_polygon(aes(x = long, y = lat, group = group))
map1 + geom_point(data = df, aes(x = Longitude, y = Latitude), colour = "red")
### transforming coordinates ###
class(df)
coordinates(df) <- ~Longitude+Latitude
class(df)
proj4string(df) <- CRS("+init=epsg:4326")
df <- spTransform(df, CRS(proj4string(ldn1)))
identical(proj4string(ldn1), proj4string(df))
df.t <- data.frame(df)
###
plot(ldn1)
points(df.t$Longitude, df.t$Latitude)
###
map2 <- ggplot()
map2 + geom_polygon(data = ldn1, aes(x = long, y = lat, group = group)) +
geom_point(data = df.t, aes(x = Longitude, y = Latitude), colour = "red")
##########################################################
# proj4string(ldn1) <- CRS("+init=epsg:27700")
# ldn1.wgs84 <- spTransform(ldn1, CRS("+init=epsg:4326"))
ldn1.f <- fortify(ldn1, region = "ons_label")
ldn1.f <- merge(ldn1.f, ldn1@data, by.x = "id", by.y = "ons_label")
map_ldn <- ggplot(ldn1.f, aes(x = long, y = lat, group = group, fill = Partic_Per))
map_ldn + geom_polygon() +
coord_equal()
########################################################
ldn2 <- readOGR(file.path(dir_2), layer = "OA_2011_London_gen_MHW")
plot(ldn2)
|
527d3db23c331adb80d7c12c9c4184fc342021d1
|
006666eece54ebcbfc1f29b56f8146f808c781a1
|
/method/0_simulateonelargePLINK.R
|
475b9f1be250ce6140599e9e1bfc574989385265
|
[] |
no_license
|
MoisesExpositoAlonso/nap
|
d56d278bcc4c7e49fb3a8d4497df962a0ed94004
|
8fcbe7aa89f97bcf615943318fd1daa931e7a34f
|
refs/heads/master
| 2020-03-28T09:11:16.076766
| 2019-05-13T00:16:13
| 2019-05-13T00:16:13
| 148,019,561
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,935
|
r
|
0_simulateonelargePLINK.R
|
library(bigmemory)
library(data.table)
library(devtools)
library(dplyr)
library(moiR)
load_all('.')
set.seed(1)
################################################################################
#### SIMULATE GENOME MATRIX ####
totsnps=1e4
totinds=1.5e3
# x<-XBMsimulate(n=1.5e3,m=1e4,force=T)
# MAPwrite(x,path="../databig/example")
# RAW012write(x,path="../databig/example")
# PEDwrite(x,path="../databig/example")
# BEDmake(path="../databig/example")
#### Reatach
x<-attach.big.matrix("../databig/example.desc")
################################################################################
#### SIMULATE ARRAY OF GENOTYPES ####
as<-c(0.01,0.5)
bs<-c(0.01,0.5)
ps<-c(0,0.2)
mu=1
svars=c(0.01,0.1)
ss=0
epi<-c(0.8,1,1.2)
FITs<-c(1,2)
replicates=1 # assuming one replicate
#### grid of simulation
mysim<-expand.grid(bs,as,ps,svars,epi,FITs)
colnames(mysim)<-c("b","a","p","svar","epi","mod")
head(mysim)
dim(mysim)
#### effect SNPs
## generate some distributions, that I can compare later
# s_svar01<- c(exp(rnorm(500,0,0.01))-1 , rep(0,1e4 -500))
# hist(s_svar01[s_svar01!=0])
# ssaveC(s_svar01,"databig/s_svar01.txt") # all simulations will have same S but scaled to a Svar
#### Run simulations of phenotypes and create FAM
d<-matrix(ncol=nrow(mysim),nrow=totinds) %>% data.frame # one column per simulation setting, one row per individual
i=1
h2s<-c()
for( i in 1:nrow(mysim)){
l=mysim[i,]
s = ssimC(1:(totsnps),fn(l["svar"]));
# all(wC(x[],s,1,1,1) == wCBM(x@address,s,1:totsnps-1,1:totinds-1,1,1,1))
w=wCBM(x@address,
s,
mycols = 1:totsnps,myrows= 1:totinds,
mode=fn(l["mod"]),
epi=fn(l["epi"])
)
y=sampleWC(w,
b=fn(l["b"]),
a=fn(l["a"]),
p=fn(l["p"]),
rep=1)
d[,i]<-y
h2<-format(var(w,na.rm = T) / var(y,na.rm = T),digits=2)
h2s[i]<-h2
colnames(d)[i]<- paste0( collapse="_",
c(paste0( colnames(mysim), (mysim[i,])),
paste0("h2",h2)
)
)
}
head(d)
d[is.na(d)]<-0
#### write simulations array (and add h2)
mysim$h2<-h2s
write.csv(file = "../databig/simulationgrid.tsv",mysim)
#### write general fam
structure<-data.frame(FID=1:totinds,IID=1:totinds,PAT=0,MAT=0,SEX=0)
dfam<-cbind(structure,d)
dim(dfam)
head(dfam[,1:6])
write.table(row.names = F,col.names = T, quote = F,
file="../databig/simexample.fam",
dfam
)
#### write fams per folder
for( i in 1:ncol(d)){
message(colnames(d)[i])
system(paste0("mkdir ../databig/",colnames(d)[i]))
write.table(row.names = F,col.names = F, quote = F,
file=paste0("../databig/",colnames(d)[i],"/example.fam"),
cbind(structure,data.frame(PHENO=d[,i]))
)
system(paste0("ln ../databig/example.bed ../databig/",colnames(d)[i],"/example.bed"))
system(paste0("ln ../databig/example.bim ../databig/",colnames(d)[i],"/example.bim"))
}
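# Quick sanity check of the per-folder outputs written above (illustrative;
# uses only objects and paths created in this script):
# fam_check <- read.table(paste0("../databig/", colnames(d)[1], "/example.fam"))
# dim(fam_check)   # expect totinds rows and 6 columns (FID IID PAT MAT SEX PHENO)
# head(fam_check)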
|
227929db0f9452b303db8a1aac76cccc21f134c0
|
84dd0562ebd14dab4913a27313c3e82b792e3d76
|
/R/year_survRate.R
|
46fd5febb609e858c68b45283598a66adbb546b9
|
[
"Apache-2.0"
] |
permissive
|
yikeshu0611/TCGAimmunelncRNA
|
daaca3623943b63b83ad98ef357c179ad6048c2f
|
9c86b20caa6674a976aca6ea3111755fd34bb5b5
|
refs/heads/main
| 2023-06-11T09:01:11.701450
| 2021-07-03T07:10:31
| 2021-07-03T07:10:31
| 381,676,631
| 1
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 326
|
r
|
year_survRate.R
|
year_survRate <- function(survfit,times){
x <- summary(survfit, times = times)
data.frame(model = deparse(substitute(survfit)),
strata = x$strata,
times = x$time,
surv = x$surv,
lower = x$lower,
upper = x$upper,
n.risk = x$n.risk)
}
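# Illustrative usage (assumes the 'survival' package; dataset and time points
# are examples only):
# library(survival)
# fit <- survfit(Surv(time, status) ~ sex, data = lung)
# year_survRate(fit, times = c(365, 730))   # 1- and 2-year survival by stratum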
|
4edb976b5d80c5ee2628c2f6aa4833173cb852ac
|
3c76c3b1e9a91043a0d872cf2f5ad1e07538f6e0
|
/plot4.R
|
72e032c8f8fae2b07773b559fa5812c28e98e6ef
|
[] |
no_license
|
dpeka/Exploratory_Graphs_Coursera
|
b095b3f9b66f75526bf3aebca0ad598854c40c6f
|
b00e37f76fcb91cd696d07a29d4adbf238a1d83f
|
refs/heads/master
| 2020-04-13T03:15:26.646360
| 2018-12-23T21:58:54
| 2018-12-23T21:58:54
| 162,926,213
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,016
|
r
|
plot4.R
|
# plot4.R: draws a 2x2 panel of four time-series plots (global active power,
# energy sub-metering, voltage, global reactive power) for 2007-02-01/02
library(lubridate)
library(dplyr)
energy <- read.csv("household_power_consumption.txt", sep=";", header=T)
# concatenating Data and Time into a new field, which I then process
# with lubridate to convert into R date format. I am using
# timezone of Paris as per the UCI website:
energy <- energy %>% mutate(date.time = paste(Date, Time, sep = " ")) %>%
mutate(date.time = dmy_hms(date.time, tz = "Europe/Paris"))
energy$Date <- as.Date(energy$Date, format="%d/%m/%Y")
energy.feb <- energy %>%
subset(Date == "2007-02-01" | Date == "2007-02-02")
rm(energy)
energy.feb$Global_active_power <- as.numeric(energy.feb$Global_active_power)
par(mfcol=c(2,2))
# plotting time series
plot(energy.feb$date.time, energy.feb$Global_active_power, type="n",
ylab = "Global Active Power (kilowatts)", xlab = NA)
lines(energy.feb$date.time, energy.feb$Global_active_power)
# plot all subMetering readings against time. Add legend.
plot(energy.feb$date.time, energy.feb$Sub_metering_1, type="n",
ylab = "Energy sub metering", xlab=NA)
lines(energy.feb$date.time, energy.feb$Sub_metering_1)
lines(energy.feb$date.time, energy.feb$Sub_metering_2, col="red")
lines(energy.feb$date.time, energy.feb$Sub_metering_3, col="blue")
legend("topright", col = c("black", "red", "blue"),
legend = c("Sub_metering_1", "Sub_metering_2", "Sub_metering_3"),
lty=c(1,1,1), bty="n")
# plot of Voltage over time
with(energy.feb, plot(date.time, Voltage, type="n",
ylab="Voltage", xlab="datetime"))
with(energy.feb, lines(date.time, Voltage))
# plot of Global reactive power over time
with(energy.feb, plot(date.time, Global_reactive_power, type="n",
ylab="Global_reactive_power", xlab="datetime"))
with(energy.feb, lines(date.time, Global_reactive_power))
# now creating the PNG file
dev.copy(png,'plot4.png', width = 480, height = 480)
dev.off()
|
f6119afe9757673fb14c51bc6a45d542c4aa3c02
|
5c67b449677b90e98bca928c2948ae15b4f34a49
|
/www/Internationallandings/2017/RectangleLand - 2016.R
|
a8035d480e547a4549a5726389a0c7af2710e32b
|
[] |
no_license
|
isabelle08/digital-stockbook
|
9dc6e86d859b1e264bed2b801ad44c0f5438524d
|
2c1b91ac4a3184a23d62a1a41b6b976ca2564686
|
refs/heads/master
| 2021-12-13T06:46:57.941478
| 2021-12-09T16:49:37
| 2021-12-09T16:49:37
| 218,807,888
| 1
| 0
| null | 2021-09-01T15:24:17
| 2019-10-31T16:15:34
|
R
|
WINDOWS-1252
|
R
| false
| false
| 3,357
|
r
|
RectangleLand - 2016.R
|
library(mapplots)
library(shapefiles)
library(RColorBrewer)
library(splancs)
library(rgdal)
coast <- read.shapefile('//galwayfs03/FishData/Mapping/Shapefiles/Europe')
#eez <- read.csv('F:\\Mapping\\Shapefiles\\Maritime Boundaries\\eez.csv')
eezWorld <- readOGR("//Galwayfs03/FishData/StockBooks/_StockBook2017/Maps/VMS/PropSpecies","eez_boundaries") # marineregions v9
eezWorld1 <- subset(eezWorld,Line_type%in%c('Treaty','200 NM','Connection line','Median line','Unilateral claim (undisputed)'))
legend.grid <-
function (x, y = NULL, breaks, col, digits = 2, suffix = "",
type = 1, pch = 15, pt.cex = 2.5, bg = "lightblue", ...)
{
ncol <- length(breaks) - 1
if (missing(col))
col = c("white", heat.colors(ncol - 1)[(ncol - 1):1])
tempfun <- function(x) format(x, digits = digits)
min <- sapply(breaks[(ncol):1], tempfun)
mid <- sapply((breaks[(ncol):1] + breaks[(ncol + 1):2])/2,
tempfun)
max <- sapply(breaks[(ncol + 1):2], tempfun)
if (type == 1)
legend <- paste(mid, suffix, sep = "")
if (type == 2)
legend <- paste(min, " - ", max, suffix, sep = "")
legend(x, y, legend = legend, col = col[ncol:1], pch = pch,
pt.cex = pt.cex, bg = bg, ...)
}
setwd('H:/Stockbook/shiny/WIP/www/Internationallandings/2017')
sp <- sort(c('MAC','JAX','HER','WHB','NEP','WHG','HAD','ANF','CRE','LEZ','SPR','RAJ'
,'HKE','COD','SCE','LIN','ALB','BFT','POL','POK','PLE','SOL','WIT','JOD','BOC','LEM','SWO','PIL'))
# https://stecf.jrc.ec.europa.eu/data-reports, see email steve holmes in folder: F:\StockBooks\_Stockbook2016\Maps\STECF
stecf <- read.csv('stecf_2016.csv')
stecf$Species <- ifelse(stecf$species %in% c('JAD','RJG','RAJ','RJY','SRX'),'RAJ',as.character(stecf$species))
stecf$Species <- ifelse(stecf$Species %in% c('BOR','BOF','BOC'),'BOC',as.character(stecf$Species))
stecf$Species <- factor(stecf$Species)
stecf$area <- 1.0*cos(stecf$lat*pi/180)*60*1.852 * 0.5*60*1.852
xlim <- c(-30.5,15)
ylim <- c(30.25,70)
grd <- with(stecf, make.multigrid(lon,lat,1000*landings/area,Species,1,0.5,xlim,ylim) )
col <- colorRampPalette(c("lightyellow","yellow","orange","red", "brown4"))(8)
xlim <- c(-16.5,5)
ylim <- c(45.25,63)
#grd[['RAJ']] <- NULL
for(s in sp){
# S <- SP[match(s,sp)]
png(paste0('Rect',s,'.png'),2.75,3.25,'in',6,res=600)
# pdf(paste0('./PlotsLand/Rect',s,'.pdf'),2.75,3.25,pointsize=6)
par(mar=c(1,1,.1,.1),lwd=0.5)
basemap(xlim=xlim,ylim=ylim,xaxt='n',yaxt='n',ann=F,bg='lightcyan')
if(s%in%c('ALB','BFT')) basemap(xlim=xlim-5,ylim=ylim-5,xaxt='n',yaxt='n',ann=F,bg='lightcyan')
axis(1,-90:90,labels=c(paste0(90:1,'°W'),0,paste0(1:90,'°E')),lwd=0.5,tcl=-0.15,padj=-2,cex.axis=0.8)
axis(2,0:90,labels=paste0(0:90,'°N'),lwd=0.5,tcl=-0.15,padj=1.7,cex.axis=0.8)
if(!is.null(grd[[s]])){
breaks <- breaks.grid(grd[[s]],ncol=8,zero=F)
draw.grid(grd[[s]],breaks,col=col)
}
plot(eezWorld1,lwd=0.25,add=T,col='darkblue')
draw.shape(coast,col='darkolivegreen1',lwd=0.25) #cornsilk1
title <- expression(paste('kg/',km^2))
if(!is.null(grd[[s]])) legend.grid('bottomright',NULL,breaks,col,2,'',1,bg='lightcyan',inset=0.02,title=title) else
text(-10,60,'No data',cex=2)
box()
dev.off()
}
|
480dd7ea470eb2fdd2e84c1b4c6ce22517fd7db2
|
4248965331d139de1baa1039333a1147c1bb483e
|
/data-raw/add_internal_data.R
|
d133122f034c453c0c89e544e224d567a99a96fe
|
[] |
no_license
|
tonyfujs/eextoddh
|
7de9b3678577e9a4a779cb65ad9eb3f747fc9bd8
|
a08daad5cf6c498d13edbaa9d88989ca33ba1dbc
|
refs/heads/master
| 2021-06-19T17:08:26.820856
| 2017-07-20T14:09:15
| 2017-07-20T14:09:15
| 97,842,799
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 6,547
|
r
|
add_internal_data.R
|
library(dplyr)
library(jsonlite)
root_url <- ddhconnect:::stg_root_url
# CHECK ASSUMPTIONS -------------------------------------------------------
# taxonomy_stg <- ddhconnect::get_lovs(root_url = ddhconnect:::stg_root_url)%>%
# rename(ddh_machine_name = machine_name, field_lovs = list_value_name)
# names(taxonomy_stg) <- paste0('stg_', names(taxonomy_stg))
# taxonomy_prod <- ddhconnect::get_lovs(root_url = ddhconnect:::production_root_url)%>%
# rename(ddh_machine_name = machine_name, field_lovs = list_value_name)
# names(taxonomy_prod) <- paste0('prod_', names(taxonomy_prod))
#
# diff_taxonomy <- full_join(taxonomy_prod, taxonomy_stg, by = c("prod_ddh_machine_name"="stg_ddh_machine_name",
# "prod_field_lovs"="stg_field_lovs"))
# diff_taxonomy <- diff_taxonomy %>%
# mutate(
# same_vocabulary_name = prod_vocabulary_name == stg_vocabulary_name,
# same_tid = prod_tid == stg_tid
# ) %>%
# select(prod_ddh_machine_name, prod_vocabulary_name, stg_vocabulary_name, prod_field_lovs, prod_tid, stg_tid, same_vocabulary_name, same_tid)
#
# readr::write_csv(diff_taxonomy, path = 'diff_taxonomy.csv', na = '')
# STEP 1: Get data --------------------------------------------------------
# Matadata master
httr::set_config(httr::config(ssl_verifypeer = 0L))
googlesheets::gs_ls("ddh_metadata_master")
ddh_master_key <- googlesheets::gs_title("ddh_metadata_master")
lookup <- googlesheets::gs_read(ddh_master_key)
mdlib_api_mapping <- readr::read_csv('./data-raw/ddh_microdata_mapping.csv') %>%
filter(!is.na(ddh_fields))
taxonomy <- ddhconnect::get_lovs(root_url = root_url)%>%
rename(ddh_machine_name = machine_name, field_lovs = list_value_name)
#taxonomy <- readr::read_csv('./data-raw/taxonomy_cache.csv')
fields <- ddhconnect::get_fields(root_url = root_url) %>%
filter(data_type == 'microdata') %>%
rename(ddh_machine_name = machine_name)
# TO BE REMOVED
fields$ddh_machine_name[fields$ddh_machine_name == "field__wbddh_depositor_notes"] <- "field_wbddh_depositor_notes"
# Clean lookup table ------------------------------------------------------
# Format lookup
lookup <- lookup %>%
filter(form %in% c('Microdata', 'Basic')) %>%
full_join(mdlib_api_mapping, by = c('field_key'='ddh_fields')) %>%
select(field_label:microdata_library,
mdlib_section = microdatalib_section,
mdlib_field = microdatalib_field,
mdlib_json_field = json_fields)
lookup$field_lovs[lookup$field_lovs == 'PeopleSoft'] <- NA
lookup <- lookup %>% filter(!field_key == 'granularity')
# Format taxonomy
vocab_names <- sort(unique(taxonomy$vocabulary_name))
vocab_names <- vocab_names[vocab_names %in% unique(lookup$pretty_name)]
taxonomy <- taxonomy %>%
filter(vocabulary_name %in% vocab_names) %>%
rename(pretty_name = vocabulary_name) %>%
left_join(lookup[, c('pretty_name', 'ddh_machine_name')]) %>%
select(-pretty_name) %>%
filter(!is.na(ddh_machine_name)) %>%
distinct()
# Temporary: TO BE REMOVED ONCE THE ENCODING ISSUES ARE RESOLVED
taxonomy$field_lovs[taxonomy$field_lovs == 'Côte d'Ivoire'] <- "Côte d'Ivoire"
taxonomy$field_lovs[taxonomy$field_lovs == 'Europe & Central Asia'] <- "Europe and Central Asia"
taxonomy$field_lovs[taxonomy$field_lovs == 'East Asia & Pacific'] <- "East Asia and Pacific"
taxonomy$field_lovs[taxonomy$field_lovs == 'Korea, Dem. People's Rep.'] <- "Korea, Dem. People's Rep."
taxonomy$field_lovs[taxonomy$field_lovs == 'Latin America & Caribbean'] <- "Latin America and Caribbean"
taxonomy$field_lovs[taxonomy$field_lovs == 'Middle East & North Africa'] <- "Middle East and North Africa"
# join taxonomy
lookup <- lookup %>%
dplyr::left_join(taxonomy, by = c('ddh_machine_name', 'field_lovs'))
# CHECK matching tid issues
check_tids <- lookup %>%
filter(ddh_machine_name %in% unique(taxonomy$ddh_machine_name)) %>%
filter(!is.na(field_lovs) & is.na(tid))
check_tids
# Generate microdata placeholder for DDH
machine_names <- unique(fields$ddh_machine_name)
machine_names <- sort(machine_names)
md_placeholder <- vector(mode = 'list', length = length(machine_names))
names(md_placeholder) <- machine_names
# Generate a lkup table to map Microdata values to DDH LOVs ---------------
field_to_machine <- create_lkup_vector(lookup, vector_keys = 'field_key', vector_values = 'ddh_machine_name')
field_to_machine_no_na <- field_to_machine[!is.na(field_to_machine)]
my_sheets <- readxl::excel_sheets('./data-raw/control_vocab_mapping.xlsx')
md_ddh_lovs <- purrr::map_df(my_sheets, function(x) {
temp <- readxl::read_excel('./data-raw/control_vocab_mapping.xlsx', sheet = x)
temp$ddh_machine_name <- field_to_machine_no_na[x]
return(temp)
})
md_ddh_lovs <- bind_rows(md_ddh_lovs)
md_ddh_lovs <- md_ddh_lovs %>%
select(-ddh_category_multiple, field_lovs = ddh_category)
md_ddh_names <- sort(unique(md_ddh_lovs$ddh_machine_name))
md_ddh_lovs <- purrr::map(md_ddh_names, function(x){
temp <- md_ddh_lovs[md_ddh_lovs$ddh_machine_name == x, ]
out <- create_lkup_vector(temp, vector_keys = 'microdata_category' , vector_values = 'field_lovs')
return(out)
})
names(md_ddh_lovs) <- md_ddh_names
# Generate a lookup table to map DDH LOVs to tids ---------------------------
ddh_tid_lovs <- lookup %>%
select(ddh_machine_name, field_lovs, tid) %>%
filter(!is.na(tid),
ddh_machine_name != 'field_topic')
ddh_tid_names <- sort(unique(ddh_tid_lovs$ddh_machine_name))
ddh_tid_lovs <- purrr::map(ddh_tid_names, function(x){
temp <- ddh_tid_lovs[ddh_tid_lovs$ddh_machine_name == x, ]
out <- create_lkup_vector(temp, vector_keys = 'field_lovs' , vector_values = 'tid')
return(out)
})
names(ddh_tid_lovs) <- ddh_tid_names
ddh_tid_lovs <- ddh_tid_lovs[purrr::map_int(ddh_tid_lovs, length) > 0]
# Add JSON templates ------------------------------------------------------
json_template_dataset <- fromJSON('./data-raw/ddh_schema_microdata_dataset.json')
json_template_resource <- fromJSON('./data-raw/ddh_schema_microdata_resource.json')
json_template_attach <- fromJSON('./data-raw/ddh_schema_microdata_resource_attach.json')
# Save lookup table -------------------------------------------------------
lookup <- as.data.frame(lookup)
devtools::use_data(lookup,
md_placeholder,
md_ddh_lovs,
ddh_tid_lovs,
json_template_dataset,
json_template_resource,
json_template_attach,
overwrite = TRUE)
|
e37c25632e3ee7ff0a5dba2a5d30ea71107d7872
|
ad522819f54aa659c951ff39fff1dda0fff0f89f
|
/man/functional__find_max_per_frame.Rd
|
63c13b536d761318c8ca6cd5c2189880ed00b765
|
[
"MIT"
] |
permissive
|
davidbrae/torchaudio
|
4dbc4e12067b14dedd8fa785a6b753719e39b0d3
|
d20ccc237a8eff58e77bb8e3f08ef24150a4fc4e
|
refs/heads/master
| 2023-07-20T16:06:59.791249
| 2021-08-29T19:16:50
| 2021-08-29T19:16:50
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 786
|
rd
|
functional__find_max_per_frame.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/functional.R
\name{functional__find_max_per_frame}
\alias{functional__find_max_per_frame}
\title{Find Max Per Frame (functional)}
\usage{
functional__find_max_per_frame(nccf, sample_rate, freq_high)
}
\arguments{
\item{nccf}{(tensor): Usually a tensor returned by \link{functional__compute_nccf}}
\item{sample_rate}{(int): sampling rate of the waveform, e.g. 44100 (Hz)}
\item{freq_high}{(int): Highest frequency that can be detected (Hz)
Note: If the max among all the lags is very close
to the first half of lags, then the latter is taken.}
}
\value{
\code{tensor} with indices
}
\description{
For each frame, take the highest value of NCCF,
apply centered median smoothing, and convert to frequency.
}
|
f6df1a65773845d2de6d186a6b56998d40dcd7b0
|
9bbde9df5f4fe193f234512cf019fbb244f3814e
|
/RCode/02_DataManipulation.R
|
788a0318d01a4f70a27562352ee69276ad08e411
|
[] |
no_license
|
alanchalk/SL_Intro_1_TODO
|
fb9681e99da4ff8de4b32ffe3f15d14d305334e0
|
3477bdd93b7f15d7f0661be2ec21ba1c3bb53558
|
refs/heads/master
| 2021-05-11T19:55:42.526197
| 2018-01-14T12:36:37
| 2018-01-14T12:36:37
| 117,426,877
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,419
|
r
|
02_DataManipulation.R
|
# 02_ManipulateData.R
# Author: Alan Chalk
# Date: 10 Jan 2018
# Purpose: Carry out any data manipulation needed
# Contents
# _. Load data
# 1. Add an exposure variable
# 2. Add folds
# Notes
#--------------------------------------------------------------------------------
# _. Load data
load(file.path(dirRData, '01_tbl_all.RData'))
#--------------------------------------------------------------------------------
# 1. Add an exposure variable
## TODO
# Add an variable called ex to the data. It should always take the value of 1
# or using dplyr::mutate
tbl_all <-
tbl_all %>%
#--------------------------------------------------------------------------------
# 2. Add folds
set.seed(2018)
# Note: The function sample() creates random draws, with or without replacement
# for example:
sample(x = 1:1000, size = 10, replace = FALSE)
## TODO
# Create a variable called n_examples, which contains the number of examples
# in our dataset
# Note: Consider the output of dim(tbl_all)
n_examples <-
## TODO
# Add a variable called fold to the data, it should take values 1- 10
# fold should be sampled from the numbers 1:10
fold <-
hist(fold)
tbl_all$fold <-
rm(fold, n_examples)
#--------------------------------------------------------------------------------
# End. Save data. rm. gc
save(tbl_all,
file = file.path(dirRData, '02_tbl_all.RData'))
rm(tbl_all); gc()
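# One possible completion of the TODOs above (illustrative only, not an
# official solution; assumes dplyr is attached):
# tbl_all <- tbl_all %>% mutate(ex = 1)
# n_examples <- nrow(tbl_all)
# fold <- sample(x = 1:10, size = n_examples, replace = TRUE)
# tbl_all$fold <- fold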
|
89d093f293c840294c11f641eb0a58e04799ed3c
|
616e8ba5e7356a3b493062cd8095fa98455d12f1
|
/SensitivityAnalysis/utils/sensitivity.to.model.parameters.plots.R
|
06f1447dc5e422a95c58fe4cc440514942108de7
|
[] |
no_license
|
Breakend/RIBSS_tax_evasion_ABM
|
0813ecc2ac964d0d68a12fb5b4559f26d25b502d
|
f64c4b8ab1e02a95fa3e032fbcb3b37647eeb017
|
refs/heads/master
| 2022-02-21T14:40:09.271146
| 2019-09-03T20:39:18
| 2019-09-03T20:39:18
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 6,484
|
r
|
sensitivity.to.model.parameters.plots.R
|
#This function returns the plots of sensitivity of field of interest to model parameters
sensitivity.to.model.parameters.plots <- function(field, dataset, full.name = NULL, params=NULL)
{
all.names <- get.full.names()
if(is.null(full.name)) full.name <- all.names[field]
o.title <- paste("Overall Sensitivity of", full.name)
sp.title <- paste("Sensitivity of", full.name, "(various metrics)")
fname <- gsub("[.]", "_", field)
browser()
formula <- paste(field, "~", paste(model.option.fields, collapse=" + "))
formula <-eval(parse(text=formula))
CART.fit <- rpart(formula, dataset, method="anova")
RF.fit <- randomForest(formula,data = dataset,importance = TRUE)
#rpart.plot(CART.fit)
#fit.branches <- get.CART.branches(CART.fit)
plots <- list()
#plots[['cart.tree']] <- cart.plot
overall.sensi.data <- NULL
leaf.scores <- NULL
total.obs <- nrow(dataset)
wt <- NULL
ln.conditions <- conditions.to.reach.leaf.nodes(CART.fit)
for(con in ln.conditions)
{
#Gather indices of those observations that satisfy the condition
if(con == "")
{
leaf.data <- dataset
} else {
indices <- which(with(dataset, eval(parse(text = con))))
leaf.data <- dataset[indices, ]
#Tokenize the conditions using "&" as the delimiter
tokenized.con <- strsplit(con, split = " & ") %>% unlist()
n.con <- length(tokenized.con)
last.con <- tokenized.con[n.con]
tokenized.con <- tokenized.con[-n.con]
new.n.con <- n.con - 1
if(new.n.con > 1)
{
leaf.but.1.con <- paste(tokenized.con, collapse = " & ")
} else {
if(new.n.con == 1)
leaf.but.1.con <- tokenized.con[1]
else
leaf.but.1.con <- last.con
}
#print(paste(field, leaf.but.1.con))
indices.lb1 <- which(with(dataset, eval(parse(text = leaf.but.1.con))))
leaf.but.1.data <- dataset[indices.lb1, ]
mod.op <- NULL
#Splitting it further based on < or >=
tokenized.con <- strsplit(last.con, "<") %>% unlist()
if(length(tokenized.con) > 1)
mod.op <- tokenized.con[1]
else {
tokenized.con <- strsplit(last.con, ">") %>% unlist()
if(length(tokenized.con) > 1)
mod.op <- tokenized.con[1]
}
leaf.but.1.data[, mod.op] <- as.factor(leaf.but.1.data[ , mod.op])
ln.title <- gsub("< 0.5","=OFF", leaf.but.1.con)
ln.title <- gsub(">=0.5","=ON", ln.title)
ln.title <- gsub("&","AND", ln.title)
#Plot the histogram
leaf.node.hist <- ggplot(leaf.but.1.data) +
geom_histogram(aes(x=leaf.but.1.data[, field],
y=..count../sum(..count..),
fill=eval(parse(text=mod.op))),
bins = 20, color="black") +
theme_bw() +
ylab("Proportion of runs")+ xlab(full.name) +
scale_fill_discrete(mod.op,
labels=c("Off", "On"))+
ggtitle(paste(ln.title, sep=""))
mod_op <- gsub("[.]", "_", mod.op)
save.plots(leaf.node.hist, paste0(fname, "_leafnode_", mod_op), width = 10)
}
leaf.obs <- nrow(leaf.data)
wt <- c(wt, leaf.obs/total.obs)
mod.con <- gsub("< 0.5"," OFF", con)
mod.con <- gsub(">=0.5"," ON", mod.con)
mod.con <- gsub("&"," AND", mod.con)
#print(paste("Percent weight: ", leaf.obs, "/", total.obs, leaf.obs/total.obs))
#Calculate the importance of each of the parameters
if(is.null(params)) params <- names(dataset)[12:40]
formula<- paste(field,"~",paste(params,collapse="+"),sep="")
formula <-eval(parse(text=formula))
CART.fit <- rpart(formula, leaf.data, method="anova") ## , method="class" didn't work well: histograms seemed wrong.
RF.fit <- randomForest(formula,data = leaf.data,importance = TRUE)
sensi.dat <- get.sensitivity.table(CART.fit,RF.fit, Normalize = F)
sensi.dat.melt <-melt(sensi.dat, id="option" ,value.name = "importance")
sensi.dat.melt <- sensi.dat.melt[!is.na(sensi.dat.melt[,"importance"]),]
normd.sensi.dat <- get.sensitivity.table(CART.fit,RF.fit)
sensi.dat[, "overall"] <- apply(abs(normd.sensi.dat[, 1:3]), 1, mean, na.rm = T)
sensi.dat <- sensi.dat[order(sensi.dat$overall, decreasing = T), ]
#Reordering the factors for an ordered plot. Reordering occurs accoding to 'overall'
sensi.dat$option <- ordered(sensi.dat$option, levels=sensi.dat$option)
x.axis.labs <- as.character(levels(sensi.dat$option))
score.plot <- ggplot()+
geom_bar(data= sensi.dat,aes(x=option,y=overall), fill= "steelblue3",
stat="identity", color="black", position="dodge") +
theme_bw() +
ylab("Overall Importance Score")+ xlab("Model Inputs") +
ggtitle(o.title) +
scale_x_discrete(labels = str_wrap(all.names[x.axis.labs], width = 10))
save.plots(score.plot, paste(fname,"to", "inputs", sep="_"), width = 10)
#Saving the scores for the purposes of a heatmap
leaf.sensi.data <- sensi.dat[, c("option", "overall")]
#Overwrite the column name 'overall' to field name
names(leaf.sensi.data) <- c("model.inputs", mod.con)
row.names(leaf.sensi.data) <- NULL
if(is.null(overall.sensi.data)) {
overall.sensi.data <- leaf.sensi.data
} else {
overall.sensi.data <- merge(overall.sensi.data, leaf.sensi.data, by="model.inputs")
}
}
if(ncol(overall.sensi.data) > 2)
overall.sensi.data[, 'wtd.sum'] <- rowSums(overall.sensi.data[, -1]*wt)
else
overall.sensi.data[, 'wtd.sum'] <- overall.sensi.data[, -1]*wt
overall.sensi.data[, 'overall'] <- with(overall.sensi.data, wtd.sum/max(wtd.sum))
overall.sensi.data <- overall.sensi.data[order(overall.sensi.data$overall, decreasing=T), ]
#But save the data without factorizing it for future use. Factorization screws up the order.
heat.map.data <- overall.sensi.data[, c("model.inputs", "overall")]
names(heat.map.data) <- c("model.inputs", field)
overall.sensi.data$model.inputs <- with(overall.sensi.data, ordered(model.inputs, levels = model.inputs))
overall.plot <- ggplot() +
geom_bar(data=overall.sensi.data, aes(x = model.inputs, y=overall),
stat="identity" , fill="steelblue3") + theme_bw()
return(list(overall.plot=overall.plot, score.plot = score.plot, heat.map.data = heat.map.data))
}
|
6f99db92bc0020f14bcb4626134d82cbfccb8260
|
cd82731e5755625d0f65151430b47d8d86737530
|
/man/slopeshatprime.Rd
|
f714aabf17ac748a0740109896e334dccf1bc97c
|
[
"MIT"
] |
permissive
|
ArefinMizan/jeksterslabRlinreg
|
31a2d8f9201bf084b385a52e8788b7d7a5225307
|
21b2ed9dcae3b6c275b573b4a71438558c35d08d
|
refs/heads/master
| 2023-03-19T10:08:30.303897
| 2020-12-30T22:31:36
| 2020-12-30T22:31:36
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 2,495
|
rd
|
slopeshatprime.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/betahat_matrix.R
\name{slopeshatprime}
\alias{slopeshatprime}
\title{Estimates of Regression Standardized Slopes \eqn{\boldsymbol{\hat{\beta}}_{2, \cdots, k}^{\prime}}}
\usage{
slopeshatprime(X, y)
}
\arguments{
\item{X}{\code{n} by \code{k} numeric matrix.
The data matrix \eqn{\mathbf{X}}
(also known as design matrix, model matrix or regressor matrix)
is an \eqn{n \times k} matrix of \eqn{n} observations of \eqn{k} regressors,
which includes a regressor whose value is 1 for each observation on the first column.}
\item{y}{Numeric vector of length \code{n} or \code{n} by \code{1} matrix.
The vector \eqn{\mathbf{y}} is an \eqn{n \times 1} vector of observations
on the regressand variable.}
}
\value{
Returns the estimated standardized slopes
\eqn{\boldsymbol{\hat{\beta}}_{2, \cdots, k}^{\prime}}
of a linear regression model derived from the estimated correlation matrix.
}
\description{
Estimates of Regression Standardized Slopes \eqn{\boldsymbol{\hat{\beta}}_{2, \cdots, k}^{\prime}}
}
\details{
Estimates of the linear regression standardized slopes are calculated using
\deqn{
\boldsymbol{\hat{\beta}}_{2, \cdots, k}^{\prime} =
\mathbf{\hat{R}}_{\mathbf{X}}^{-1} \mathbf{\hat{r}}_{\mathbf{y}, \mathbf{X}}
}
where
\itemize{
\item \eqn{\mathbf{\hat{R}}_{\mathbf{X}}}
is the \eqn{p \times p} estimated correlation matrix of the regressor variables \eqn{X_2, X_3, \cdots, X_k} and
\item \eqn{\mathbf{\hat{r}}_{\mathbf{y}, \mathbf{X}}}
is the \eqn{p \times 1} column vector
of the estimated correlations between the regressand \eqn{y} variable
and regressor variables \eqn{X_2, X_3, \cdots, X_k}
}
}
\examples{
# Simple regression------------------------------------------------
X <- jeksterslabRdatarepo::wages.matrix[["X"]]
X <- X[, c(1, ncol(X))]
y <- jeksterslabRdatarepo::wages.matrix[["y"]]
slopeshatprime(X = X, y = y)
# Multiple regression----------------------------------------------
X <- jeksterslabRdatarepo::wages.matrix[["X"]]
# age is removed
X <- X[, -ncol(X)]
slopeshatprime(X = X, y = y)
}
\seealso{
Other beta-hat functions:
\code{\link{.betahatnorm}()},
\code{\link{.betahatqr}()},
\code{\link{.betahatsvd}()},
\code{\link{.intercepthat}()},
\code{\link{.slopeshatprime}()},
\code{\link{.slopeshat}()},
\code{\link{betahat}()},
\code{\link{intercepthat}()},
\code{\link{slopeshat}()}
}
\author{
Ivan Jacob Agaloos Pesigan
}
\concept{beta-hat functions}
\keyword{beta-hat-ols}
|
6e41a7c1a591316fe3d27720c1861199988c623a
|
0a906cf8b1b7da2aea87de958e3662870df49727
|
/grattan/inst/testfiles/anyOutside/libFuzzer_anyOutside/anyOutside_valgrind_files/1610388068-test.R
|
6469c5ed85e9cee5a1299d729938902a0e7c9af4
|
[] |
no_license
|
akhikolla/updated-only-Issues
|
a85c887f0e1aae8a8dc358717d55b21678d04660
|
7d74489dfc7ddfec3955ae7891f15e920cad2e0c
|
refs/heads/master
| 2023-04-13T08:22:15.699449
| 2021-04-21T16:25:35
| 2021-04-21T16:25:35
| 360,232,775
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 864
|
r
|
1610388068-test.R
|
testlist <- list(a = -1694498817L, b = -128L, x = c(-7010049L, -1694498817L, -27L, -218L, -6625793L, -43415L, 1702308136L, 1397053520L, 543387502L, 1936993379L, 1869509492L, 536870912L, 0L, 197504L, 0L, 402915079L, 0L, 0L, 771L, NA, 1573632L, NA, 4200451L, 8388608L, 16392L, -1140130048L, 65490L, 741134714L, -738260992L, 150950366L, -1974337537L, -248L, 0L, 1376258748L, 688392448L, 327448L, -16187640L, -1414812757L, 1373539004L, -450455032L, 134744252L, 170786815L, -218L, -16187393L, 1852731068L, -419559416L, 1678402604L, -768856879L, -741081336L, 5373951L, -16777216L, 771L, 0L, -2130718164L, 751948755L, -248L, 524296L, 524296L, -16187393L, -1L, -1139931126L, 1680658988L, 751948755L, -15269884L, 1078001416L, -16259802L, -1710749084L, 751971372L, -774646852L, -450454785L, -1L, -66L))
result <- do.call(grattan:::anyOutside,testlist)
str(result)
|
3937ff048378a413144976a2000822adf28b2e10
|
c1fba56a73eea1ed8ff817b0a86e57a67ba4ad44
|
/app.R
|
e40d707778c1164d75e1df9fd75738fddbfe02de
|
[] |
no_license
|
mjclemen/Water_Features_Dashboard
|
f38dedec28fe40f41e8895087a77af99905ad7df
|
0bb0518db9ca98b2598d299fac36653ebdc527f2
|
refs/heads/master
| 2020-11-29T16:49:15.715137
| 2019-12-26T01:12:53
| 2019-12-26T01:12:53
| 230,172,007
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 13,790
|
r
|
app.R
|
library(shiny)
library(shinydashboard)
library(reshape2)
library(dplyr)
library(plotly)
library(shinythemes)
library(DT)
library(stringr)
library(tools)
library(rlist)
library(scales)
library(data.table)
library(rgdal)
library(httr)
library(jsonlite)
library(leaflet.extras)
# FIRST API CALL: Grabbing Pittsburgh's council data to add polygons as a layer on leaflet map
council <- readOGR('https://services1.arcgis.com/YZCmUqbcsUpOKfj7/ArcGIS/rest/services/Council_Districts/FeatureServer/0/query?where=1%3D1&objectIds=&time=&geometry=&geometryType=esriGeometryEnvelope&inSR=&spatialRel=esriSpatialRelIntersects&resultType=none&distance=0.0&units=esriSRUnit_Meter&returnGeodetic=false&outFields=*&returnGeometry=true&returnCentroid=false&featureEncoding=esriDefault&multipatchOption=xyFootprint&maxAllowableOffset=&geometryPrecision=&outSR=&datumTransformation=&applyVCSProjection=false&returnIdsOnly=false&returnUniqueIdsOnly=false&returnCountOnly=false&returnExtentOnly=false&returnQueryGeometry=false&returnDistinctValues=false&cacheHint=false&orderByFields=&groupByFieldsForStatistics=&outStatistics=&having=&resultOffset=&resultRecordCount=&returnZ=false&returnM=false&returnExceededLimitFeatures=true&quantizationParameters=&sqlFormat=none&f=pgeojson&token=')
# SECOND API CALL: Grabbing Pittsburgh's water features data to display to users in a map, a data table, and two plots
get.water.features <- GET("https://data.wprdc.org/api/3/action/datastore_search_sql?sql=SELECT%20*%0AFROM%20%22513290a6-2bac-4e41-8029-354cbda6a7b7%22")
water.features <- fromJSON(content(get.water.features, "text"))$result$records
# Clean the data --------------------------------------------------------------------------
# Convert column titles to title case, remove "_", fill in blank cells, remove unnecessary
# columns, and convert some columns to factors to recognize categories --------------------
names(water.features) <- gsub(x = names(water.features), pattern = "_", replacement = " ")
names(water.features) <- str_to_title(names(water.features))
water.features$Make[is.na(water.features$Make)] <- "Unknown"
water.features$`Control Type`[is.na(water.features$`Control Type`)] <- "N/A"
water.features$Inactive[is.na(water.features$Inactive)] <- "Inactive"
water.features$Make <- as.factor(water.features$Make)
water.features$`Control Type` <- as.factor(water.features$`Control Type`)
water.features$Ward <- as.factor(water.features$Ward)
water.features$Inactive <- as.factor(water.features$Inactive)
levels(water.features$Inactive)[levels(water.features$Inactive) == "FALSE"] <- "Active"
colnames(water.features)[colnames(water.features) == "Inactive"] <- "Status"
water.features <- select(water.features, -c(" Full Text", " Id"))
# Make icons to appear as markers on leaflet map. Will show different images based on user's
# selected water feature type
icons <- awesomeIconList(
Decorative = makeAwesomeIcon(icon = "fire", library = "glyphicon", markerColor = "white", iconColor = "steelblue"),
`Drinking Fountain` = makeAwesomeIcon(icon = "coffee", library = "fa", markerColor = 'white', iconColor = "steelblue"),
Spray = makeAwesomeIcon(icon = 'tint', library = 'fa', markerColor = "white", iconColor = "steelblue")
)
# Place application title in header of dashboard ------------------------------------------
app.header <- dashboardHeader(
title = "Pittsburgh Water Features", titleWidth = 300
)
# Place user inputs and tab options in a sidebar to be displayed in dashboard
app.sidebar <- dashboardSidebar(
# Change sidebar width to match the title width -----------------------------------------
width = 300,
# Create four tab options to place the datatable, map, and 2 plots
# Also place user input controls below the tab options ----------------------------------
sidebarMenu(id = "tabs",
menuItem("Map of Water Features", tabName = "water_map", icon = icon("map-marker", lib = "glyphicon")),
menuItem("Water Features Info", tabName = "datatable", icon = icon("table", lib = "font-awesome")),
menuItem("Neighborhood", tabName = "neighborhood_count", icon = icon("home", lib = "glyphicon")),
menuItem("Control Type by Ward", tabName = "controls_by_ward", icon = icon("gamepad", lib= "font-awesome")),
# Select the Makes of the water features to view -----------------------------
checkboxGroupInput(inputId = "selected.make",
label = "Select which Make(s) of Water Features you would like to view:",
choices = sort(unique(water.features$Make)),
selected = c("Regular Fountain", "Murdock")),
# Select what Council District to view ---------------------------------------------------
selectInput(inputId = "selected.council",
label = "Select which Council District you would like to view:",
choices = sort(unique(water.features$`Council District`)),
selected = "5"),
# Select what Feature Types to view -------------------------------------------
radioButtons(inputId = "selected.feature.type",
label = "Select which Water Feature Type(s) you would like to view:",
choices = sort(unique(water.features$`Feature Type`)),
selected = c("Drinking Fountain")),
downloadButton("downloadWaterFeatures", "Download Filtered 'Water Features' Data", class = "download"),
# Changing color of download button to better show up against background
tags$head(tags$style(".download{color: black !important;}"))
)
)
# Display 4 tabs: 1 containing the datatable, one a map, and the other 2 each containing a plot
app.body <- dashboardBody(
theme = shinytheme("readable"),
tabItems(
tabItem(tabName = "datatable",
fluidRow(
# Show data table filtered based on user input --------------------------------
box(title = "Selected Water Features Data",
dataTableOutput(outputId = "watertable"),
width = 12)
)
),
tabItem(tabName = "water_map",
fluidRow(
column(12,
leafletOutput("water.leaflet")
)
)
),
tabItem(tabName = "neighborhood_count",
fluidRow(
column(12,
verbatimTextOutput(outputId = "printMessage")),
tags$head(tags$style("#printMessage{color: red; font-size: 15px; font-style: italic; text-align: center;}"))
),
fluidRow(
column(12,
plotlyOutput(outputId = "barplot.neighborhoods")
)
)
),
tabItem(tabName = "controls_by_ward",
fluidRow(
column(12,
verbatimTextOutput(outputId = "printMessage2"),
tags$head(tags$style("#printMessage2{color: red; font-size: 15px; font-style: italic; text-align: center;}"))
)
),
fluidRow(
column(12,
plotlyOutput(outputId = "control.types.per.ward"))
)
)
)
)
# Define UI for application that creates a dashboard on water features in Pittsburgh
ui <- dashboardPage(
header = app.header,
sidebar = app.sidebar,
body = app.body,
skin = "black"
)
# Define server logic required to draw 2 charts, datatable, and map
server <- function(input, output) {
# Create subset of water features to account for user input. Specifically, the make, council
# district, and features of the water dataset ----------------------------------------------
waterSubset <- reactive({
water.features <- subset(water.features,
Make %in% input$selected.make) %>%
filter(`Council District` == input$selected.council) %>%
filter(`Feature Type` == input$selected.feature.type)
})
# Perform updated API call, based on user's selected council district
councilUpdate <- reactive({
# Build API Query with proper encodes (provided by Insomnia)
newUrl <- paste0("https://services1.arcgis.com/YZCmUqbcsUpOKfj7/ArcGIS/rest/services/Council_Districts/FeatureServer/0/query?where=Council%20=%20", input$selected.council, "&objectIds=&time=&geometry=&geometryType=esriGeometryEnvelope&inSR=&spatialRel=esriSpatialRelIntersects&resultType=none&distance=0.0&units=esriSRUnit_Meter&returnGeodetic=false&outFields=*&returnGeometry=true&returnCentroid=false&featureEncoding=esriDefault&multipatchOption=xyFootprint&maxAllowableOffset=&geometryPrecision=&outSR=&datumTransformation=&applyVCSProjection=false&returnIdsOnly=false&returnUniqueIdsOnly=false&returnCountOnly=false&returnExtentOnly=false&returnQueryGeometry=false&returnDistinctValues=false&cacheHint=false&orderByFields=&groupByFieldsForStatistics=&outStatistics=&having=&resultOffset=&resultRecordCount=&returnZ=false&returnM=false&returnExceededLimitFeatures=true&quantizationParameters=&sqlFormat=none&f=pgeojson&token=")
# Change projection after doing new API call, based on user's selection of council district
council <- readOGR(newUrl)
council <- council %>%
spTransform(CRS('+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs'))
return(council)
})
# Display a data table that shows all of water features in Pittsburgh
output$watertable <- renderDataTable({
datatable(data = waterSubset(), options = list(orderClasses = TRUE, autoWidth = FALSE, scrollX = TRUE,
pageLength = 5),
class = 'cell-border stripe', rownames = FALSE)
})
# Basic Map -- chosen basemap and set the view to show a certain part of pittsburgh
output$water.leaflet <- renderLeaflet({
leaflet() %>%
addProviderTiles(provider = providers$Esri.NatGeoWorldMap) %>%
setView(-79.978, 40.439, 12)
})
# Add the user's selected water feature to view on the map. Remove old markers
observe({
leafletProxy("water.leaflet", data = waterSubset()) %>%
clearGroup(group = "featureTypes") %>%
addAwesomeMarkers(icon = ~icons[`Feature Type`],
popup = ~paste0("<b>", "Located At", "</b>: ", Name),
group = "featureTypes")
})
# Update the polygon layer, showing the selected council district. Remove old polygons
observe({
leafletProxy("water.leaflet", data = councilUpdate()) %>%
clearGroup(group = "councilDistricts") %>%
addPolygons(popup = ~paste0("<b>", "Council District ", COUNCIL, "</b>"), group = "councilDistricts", color = "red")
})
# Plot the number of water features in given neighborhoods
output$barplot.neighborhoods <- renderPlotly({
ws <- waterSubset()
# Ensure there is at least one row of data to plot ---------------------------
req(nrow(ws) > 0)
# Find the 10 neighborhoods with the most water features to plot on barplot
top.neighborhoods <- names(tail(sort(table(ws$Neighborhood)),10))
ggplot(ws, aes(x = Neighborhood, fill = Status)) + geom_bar() +
scale_x_discrete(limits = top.neighborhoods) + scale_fill_manual("Status:",
values = c("Active" = "steelblue", "Inactive" = "red")) +
labs(x = "Neighborhood of Water Feature", y = "Number of Water Features",
title = "Number of Water Features per Neighborhood")
})
# Plot the types of user controls throughout the wards in Pittsburgh -----------
output$control.types.per.ward <- renderPlotly({
# Read in the reactive subset ------------------------------------------------
ws <- waterSubset()
# Ensure there is at least one row of data to plot ---------------------------
req(nrow(ws) > 0)
ggplot(ws, aes(x = Ward, y = `Control Type`)) +
geom_point(col = "steelblue", size = 3, position = "jitter", alpha = 0.7) +
labs(x = "Ward of Water Features", y = "User Controls on Water Feature",
title = "Types of Water Feature Controls in Wards Throughout Pittsburgh")
})
# Downloadable csv of water features data filtered by make, council district, and feature type.
# Note -- filename and file type (csv) work in web browser, not RStudio. RStudio glitch from what I have read about it
output$downloadWaterFeatures <- downloadHandler(
filename = function() {
paste("Water Features Throughout Pittsburgh with Your Filters",
".csv", sep = "")
},
content = function(file) {
write.csv(waterSubset(), file, row.names = FALSE)
}
)
# Print a display message to the user if their selections result in zero data to plot neighborhoods
output$printMessage <- renderText({
req(nrow(waterSubset()) == 0)
"Unable to plot Neighborhoods: There is no data meeting your filter criteria. Please adjust filters."
})
# Print a display message to the user if their selections result in zero data to plot countrol type by ward
output$printMessage2 <- renderText({
req(nrow(waterSubset()) == 0)
"Unable to plot Control Types by Ward: There is no data meeting your filter criteria. Please adjust filters."
})
}
# Run the application
shinyApp(ui = ui, server = server)
|
c151cfa39631191cd134300fc2fe9cf57ad7e2b1
|
eb370df99e4af3976213eb2fc3c786b765120fd8
|
/Q1/HW03/PartitionTests.R
|
ae0a982a5819752944375a03151d745062fd3df2
|
[] |
no_license
|
LeoSalemann/UW_DataScience
|
55692d19b6bb25b52310477ac6af5a396ad1ee3c
|
1cd43bc342c56eea2b065e3edc3e45550c289295
|
refs/heads/master
| 2021-09-15T02:05:00.473068
| 2018-05-24T01:37:32
| 2018-05-24T01:37:32
| 106,077,578
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 9,400
|
r
|
PartitionTests.R
|
# PartitionTests.R
# Copyright 2016 by Ernst Henle
# To use this script:
# 1 Place CollegeStudentsDataset.R in your working directory
# 2 Source this script
# Clear objects from Memory
rm(list=ls())
# Clear Console:
cat("\014")
source("CollegeStudentsDataset.R")
getDataToPartition <- function(numberOfRows = NA)
{
if (is.na(numberOfRows))
{
numberOfRows <- 500 + round(runif(1),2)*1000
}
DataToPartition = data.frame(c1=runif(numberOfRows), c2=1:numberOfRows, c3=sample(c(0:9, letters, LETTERS), numberOfRows, replace=TRUE))
return(DataToPartition)
} # getDataToPartition
getFractionOfTest <- function()
{
fractionOfTest = 0.2 + round(runif(1)*0.6,1)
return(fractionOfTest)
} # getFractionOfTest
TestBasic <- function(Partition)
{
success <- "Test could not complete"
tryCatch({
testsCompleted <- 0
numberOfTries <- 100
for (testNo in 1:numberOfTries)
{
dataSet <- getDataToPartition()
fractionOfTest <- getFractionOfTest()
basicTest <- Partition(dataSet, fractionOfTest)
# Test correct names in list
expectedNames <- sort(c("trainingData", "testingData"))
if(!identical(sort(names(basicTest)), expectedNames))
{
success <- paste(c("Error, partition should return: ", expectedNames, "; not: ", sort(names(basicTest))), collapse=" ")
break;
} # if else
# Test total rows
if (nrow(basicTest$trainingData) + nrow(basicTest$testingData) != nrow(dataSet))
{
success <- paste("Error:", nrow(basicTest$trainingData), " + ", nrow(basicTest$testingData), "!=", nrow(dataSet))
break;
} # if else
# Test Exclusive
dataSetRecombined <- rbind(basicTest$trainingData, basicTest$testingData)
dataSetRecombined <- dataSetRecombined[order(dataSetRecombined$c2), ]
row.names(dataSetRecombined) <- NULL
dataSetOrdered <- dataSet[order(dataSet$c2), ]
if(!identical(dataSetOrdered, dataSetRecombined))
{
success <- paste("Error, recombined test and training observations are not same as original")
break
}
testsCompleted <- testsCompleted + 1
} # for
if ((testsCompleted == numberOfTries) && (success == "Test could not complete"))
{
success <- "Tests have correct results"
} else if ((testsCompleted < numberOfTries) && (success != "Test could not complete"))
{
success <- paste("Test found problems: ", success)
} else
{
success <- paste("Test failed: ", success)
} # 2X if else
},
warning = function(war){cat("Unexpected warning\n"); print(war); success <- war; return("Balderdash")},
error = function(err){cat("Unhandeled exception\n"); print(err); success <- err; return(success)}) # tryCatch
return(success)
} # TestBasic
TestApprox <- function(Partition, numberOfTries = 200)
{
success <- "Test could not complete"
tryCatch({
testsCompleted <- 0
differences <- rep(NA, numberOfTries)
for (testNo in 1:numberOfTries)
{
dataSet <- getDataToPartition()
fractionOfTest <- getFractionOfTest()
basicTest <- Partition(dataSet, fractionOfTest)
actualNumberOfTest <- nrow(basicTest$testingData)
expectedNumberOfTest <- nrow(dataSet)*fractionOfTest
difference <- round(expectedNumberOfTest) - actualNumberOfTest
differences[testNo] <- difference
testsCompleted <- testsCompleted + 1
} # for
expectedMean <- -0.2
expectedSD <- 14.8
actualMean <- mean(differences)
actualSd <- sd(differences)
if ((abs(actualMean - expectedMean) > 3) || (abs(actualSd - expectedSD) > 3))
{
success <- paste("Error, too much difference between actualMean(", actualMean,") and expectedMean(", expectedMean, ") or actualSd(", actualSd,") and expectedSD(", expectedSD, ")")
} # if
if ((testsCompleted == numberOfTries) && (success == "Test could not complete"))
{
success <- "Tests have correct results"
} else if ((testsCompleted == numberOfTries) && (success != "Test could not complete"))
{
success <- paste("Test found problems: ", success)
} else
{
success <- paste("Test failed: ", success)
} # 2X if else
},
warning = function(war){cat("Unexpected warning\n"); print(war); success <- war; return("Balderdash")},
error = function(err){cat("Unhandeled exception\n"); print(err); success <- err; return(success)}) # tryCatch
return(success)
} # TestApprox
TestRandom <- function(Partition = PartitionFast, numberOfTries = 500)
{
success <- "Test could not complete"
tryCatch({
testsCompleted <- 0
means <- rep(NA, numberOfTries)
sds <- rep(NA, numberOfTries)
for (testNo in 1:numberOfTries)
{
numberOfRows <- 100
dataSet <- getDataToPartition(numberOfRows)
fractionOfTest <- 0.15
basicTest <- Partition(dataSet, fractionOfTest)
means[testNo] <- mean(basicTest$testingData$c2)
sds[testNo] <- sd(basicTest$testingData$c2)
testsCompleted <- testsCompleted + 1
} # for
expectedMeanOfMeans <- 50.5 # 49 52 (numberOfRows + 1)/2
expectedSdOfMeans <- 7.3 # 6 to 8.5 sd(1:numberOfRows)/sqrt(numberOfRows*fractionOfTest)
expectedMeanOfSds <- 28.7 # 28 to 29.5 sd(1:numberOfRows)
expectedSdOfSds <- 3.5 # 3 to 4 sqrt((1-fractionOfTest)*fractionOfTest/(numberOfRows*fractionOfTest))*expectedMeanOfSds*1.27
MeanOfMeans <- mean(means)
SdOfMeans <- sd(means)
MeanOfSds <- mean(sds)
SdOfSds <- sd(sds)
if ((abs(MeanOfMeans - expectedMeanOfMeans) > 1.5) || (abs(SdOfMeans - expectedSdOfMeans) > 1.3) || (abs(MeanOfSds - expectedMeanOfSds) > 0.7) || (abs(SdOfSds - expectedSdOfSds) > 0.5))
{
success <- "Error"
if (abs(MeanOfMeans - expectedMeanOfMeans) > 1.5)
{
success <- paste(success, "difference between MeanOfMeans(", MeanOfMeans,") and expectedMeanOfMeans (",expectedMeanOfMeans,")")
} # if
if (abs(SdOfMeans - expectedSdOfMeans) > 1.3)
{
success <- paste(success, "difference between SdOfMeans(", SdOfMeans,") and expectedSdOfMeans (",expectedSdOfMeans,")")
} # if
if (abs(MeanOfSds - expectedMeanOfSds) > 0.7)
{
success <- paste(success, "difference between MeanOfSds(", MeanOfSds,") and expectedMeanOfSds (",expectedMeanOfSds,")")
} # if
if (abs(SdOfSds - expectedSdOfSds) > 0.5)
{
success <- paste(success, "difference between SdOfSds(", SdOfSds,") and expectedSdOfSds (",expectedSdOfSds,")")
} # if
} # if
if ((testsCompleted == numberOfTries) && (success == "Test could not complete"))
{
success <- "Tests have correct results"
} else if ((testsCompleted == numberOfTries) && (success != "Test could not complete"))
{
success <- paste("Test found problems: ", success)
} else
{
success <- paste("Test failed: ", success)
} # 2X if else
},
warning = function(war){cat("Unexpected warning\n"); print(war); success <- war; return("Balderdash")},
error = function(err){cat("Unhandeled exception\n"); print(err); success <- err; return(success)}) # tryCatch
return(success)
} # TestRandom
TestWrong <- function(Partition, numberOfTries = 200)
{
success <- "Test could not complete"
tryCatch({
testsCompleted <- 0
for (testNo in 1:numberOfTries)
{
dataSet <- getDataToPartition()
fractionOfTest <- getFractionOfTest()
basicTest <- Partition(dataSet, fractionOfTest)
if (min(basicTest$trainingData$c2) > max(basicTest$testingData$c2))
{
recombinedData <- rbind(basicTest$testingData, basicTest$trainingData)
} else
{
recombinedData <- rbind(basicTest$trainingData, basicTest$testingData)
} # if else
if(!identical(recombinedData, dataSet))
{
success <- paste("Test 2: !identical(recombinedData, dataSet)")
break;
} # if else
testsCompleted <- testsCompleted + 1
} # for
if ((testsCompleted == numberOfTries) && (success == "Test could not complete"))
{
success <- "Tests have correct results"
} else if ((testsCompleted < numberOfTries) && (success != "Test could not complete"))
{
success <- paste("Test found problems: ", success)
} else
{
success <- paste("Test failed: ", success)
} # 2X if else
},
warning = function(war){cat("Unexpected warning\n"); print(war); success <- war; return("Balderdash")},
error = function(err){cat("Unhandeled exception\n"); print(err); success <- err; return(success)}) # tryCatch
return(success)
}
# The following test results are necessary but not sufficient for the assignment
print(TestBasic(PartitionWrong)) # "Tests have correct results"
print(TestBasic(PartitionFast)) # "Tests have correct results"
print(TestBasic(PartitionExact)) # "Tests have correct results"
print(TestWrong(PartitionWrong)) # "Tests have correct results"
print(TestApprox(PartitionFast)) # "Tests have correct results"
print(TestRandom(PartitionFast)) # "Tests have correct results"
|
2f3fcb5ad0fa584132c09f3f801a1976fb1444a2
|
184ddf699ff3a35b91691a2a8c2589c279b80494
|
/a35.r
|
81b81cbfa61698092c2e4e940b8bf9d2b321c227
|
[] |
no_license
|
majidaldo/tsa
|
870ac07ec914ce2185469928164491601e842b6e
|
b5ec7c254c33ac942aacf146fb46e4a887325c89
|
refs/heads/master
| 2021-01-01T18:29:38.602269
| 2014-11-21T14:37:42
| 2014-11-21T14:37:42
| 16,283,197
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 191
|
r
|
a35.r
|
ibmdata=read.csv("ibmd.csv")
closings=xts(ibmdata$Adj.Close ,as.Date(ibmdata$Date))
sr=(abs(diff(closings))[-1] #simple return
/as.vector(closings[-length(closings)]))
#acf(sr,100)
|
bd2457ad8c60b3361c1759ce95391ec190edae86
|
0934f0bef7587d90c47dfb405fcececb91eae01b
|
/P2_studies/comp_case_study/graclus_cocit_fig.R
|
817d9f370e2efcd20758b53945f4a7fb01e51b23
|
[
"MIT"
] |
permissive
|
Djamil17/ERNIE
|
2def22ad0848cb1e9985ee32f3e4e917f9da5199
|
454518f28b39a6f37ad8dde4f3be15d4dccc6f61
|
refs/heads/master
| 2022-07-27T11:09:03.620862
| 2021-04-20T15:41:38
| 2021-04-20T15:41:38
| 186,028,656
| 0
| 1
|
MIT
| 2019-05-10T17:32:49
| 2019-05-10T17:32:49
| null |
UTF-8
|
R
| false
| false
| 1,025
|
r
|
graclus_cocit_fig.R
|
# script to reconcile dc with co-c
# for S&B paper
rm(list=ls())
setwd('~/Desktop/dblp')
library(data.table)
x <- fread('graclus_cocitation_clusters.csv')
x1 <- x[order(graclus_cluster,-co_citation_nodes)]
nl <- fread('nl_scp_top20.csv')
nl1 <- nl[,.(cocit_cluster_total=length(source_id)),by='cluster_no']
merged <- merge(x1,nl1,by.x='co_citation_cluster',by.y='cluster_no')
merged[,perc:=round(100*co_citation_nodes/cocit_cluster_total)]
library(ggplot2)
pmerge <- merged[perc>=15][order(graclus_cluster)]
pdf('graclus_cocit_fig.pdf')
qplot(as.factor(graclus_cluster),as.factor(co_citation_cluster),data=pmerge,size=perc,xlab="Cluster ID (Direct Citation)",ylab="Cluster ID (Co-citation)")
dev.off()
system("cp graclus_cocit_fig.pdf ~/ernie_comp/Scientometrics")
tiff("graclus_cocit_fig.tif", res=600, compression = "lzw", height=8, width=8, units="in")
qplot(as.factor(graclus_cluster),as.factor(co_citation_cluster),data=pmerge,size=perc,xlab="Cluster ID (Direct Citation)",ylab="Cluster ID (Co-citation)")
dev.off()
|
ac71f6f8e91518545b496b8c185b99d8993c036b
|
e54e7a8f0140a33da41e420f4149c5c737175a89
|
/R/create_master_raw_data.R
|
f2179afa829f86119ae4506909891d137f761e1a
|
[] |
no_license
|
one-acre-fund/arc2weather
|
0d80547adca12a0bcdd1a299d9886ce192dc16cd
|
25b6ab102f8cf74e6e60b48dac760ba07d198e86
|
refs/heads/master
| 2020-03-28T04:33:39.962589
| 2019-02-04T16:49:59
| 2019-02-04T16:49:59
| 147,722,497
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 806
|
r
|
create_master_raw_data.R
|
#' consolidate raw data into single file to simplify storage and loading
#'
#' @param dir directory where the raw data lives; this is the raw_data folder
#'   inherited from get_raw_data()
#' @param pattern the pattern that identifies the raw data files
#' @inheritParams get_raw_data
#' @return nothing. It saves a single data file and deletes the other files.
create_master_raw_data <- function(dir, pattern = "weatherRasterList"){
list_to_combine <- list.files(dir, pattern)
master_data <- do.call(rbind, lapply(list_to_combine, function(file_name){
print(paste0("loading ", file_name, "..."))
df <- readRDS(paste(dir, file_name, sep = "/"))
return(df)
}))
print("saving master data file...")
saveRDS(master_data, file = paste(dir, "master_weather_data.rds", sep = "/"))
}
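# Hypothetical usage sketch (the directory name is an assumption; the pattern
# below is simply the function's default):
# create_master_raw_data(dir = "raw_data", pattern = "weatherRasterList")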
|
53a72e162f1857ce5387040c5da6f7448e9a02af
|
9721b7e97328faf3e4dafaa24d70310129d52b01
|
/man/plsR.dof.Rd
|
4d918c02fdb78e695f28d32a9306ed05cf15a0fe
|
[] |
no_license
|
kongdd/plsRglm
|
77dd10e804ec3606d914aae22a863a497497cd18
|
dfa4e54ea02bca8bf04d29bb65dc7dba611927c9
|
refs/heads/master
| 2022-02-19T20:26:29.672362
| 2019-10-01T10:41:55
| 2019-10-01T10:41:55
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,097
|
rd
|
plsR.dof.Rd
|
\name{plsR.dof}
\alias{plsR.dof}
\title{
Computation of the Degrees of Freedom
}
\description{
This function computes the Degrees of Freedom using the Krylov representation of PLS and other quantities that are used to get information criteria values. For the time present, it only works with complete datasets.
}
\usage{
plsR.dof(modplsR, naive = FALSE)
}
\arguments{
\item{modplsR}{A plsR model i.e. an object returned by one of the functions \code{plsR}, \code{plsRmodel.default}, \code{plsRmodel.formula}, \code{PLS_lm} or \code{PLS_lm_formula}.}
\item{naive}{A boolean.}
}
\details{
If \code{naive=FALSE}, the function returns values for the estimated degrees of freedom and error dispersion. If \code{naive=TRUE}, it returns values for the naive degrees of freedom and error dispersion.
The original code from Nicole Kraemer and Mikio L. Braun was unable to handle models with only one component.
}
\value{
\item{DoF}{Degrees of Freedom}
\item{sigmahat}{Estimates of dispersion}
\item{Yhat}{Predicted values}
\item{yhat}{Square Euclidean norms of the predicted values}
\item{RSS}{Residual Sums of Squares}
}
\references{
N. Kraemer, M. Sugiyama. (2011). The Degrees of Freedom of Partial Least Squares Regression. \emph{Journal of the American Statistical Association}, 106(494), 697-705.\cr
N. Kraemer, M. Sugiyama, M.L. Braun. (2009). Lanczos Approximations for the Speedup of Kernel Partial Least Squares Regression, \emph{Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS)}, 272-279.
}
\author{Nicole Kraemer, Mikio L. Braun with improvements from
\enc{Frederic}{Fr\'ed\'eric} Bertrand\cr
\email{frederic.bertrand@math.unistra.fr}\cr
\url{http://www-irma.u-strasbg.fr/~fbertran/}
}
\seealso{\code{\link{aic.dof}} and \code{\link{infcrit.dof}} for computing information criteria directly from a previously fitted plsR model.}
\examples{
data(Cornell)
XCornell<-Cornell[,1:7]
yCornell<-Cornell[,8]
modpls <- plsR(yCornell,XCornell,4)
plsR.dof(modpls)
plsR.dof(modpls,naive=TRUE)
}
\keyword{models}
\keyword{regression}
\keyword{utilities}
|
03fe495f89c31f83fd4f584afb45dcfb334717dd
|
52a6cea02ee8ac8c53e1049a1df8c31494aaadd0
|
/model.R
|
70fba2f0a87ca3e8fdf6d5fc6d017eddd3e878f6
|
[
"MIT"
] |
permissive
|
ControlNet/ml-algorithms
|
ccd8ed592e8dfa90ca0a15b9f4aa7b1843e07cf0
|
16e37eae032250ecda7a12d84839d5ad72753635
|
refs/heads/main
| 2023-04-14T14:59:06.459439
| 2021-04-16T17:58:02
| 2021-04-16T17:58:02
| 358,675,339
| 2
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 3,348
|
r
|
model.R
|
Model <- setRefClass("Model", fields = list(), methods = list(
initialize = function(...) { callSuper(...) },
fit = function(x_train = NULL, y_train = NULL) { },
predict = function(x_test) { },
evaluate = function(y_pred, y_test) { }
))
SupervisedModel <- setRefClass("SupervisedModel", contains = "Model", fields = list(), methods = list(
to_batches = function(x, y, batch_size) {
# split the data into mini-batches; the rows may not divide evenly into
# batch_size, so there are complete batches plus a possible residual batch
x <- as.matrix(x)
y <- as.matrix(y)
x_nrow <- nrow(x)
# the number of complete batches
complete_batch_num <- floor(x_nrow / batch_size)
# The last row of complete batches
complete_nrow <- complete_batch_num * batch_size
# locate the residual part
if (complete_nrow == x_nrow) {
x_residual <- NULL
y_residual <- NULL
batch_nums <- complete_batch_num
} else {
x_residual <- x[(complete_nrow + 1):x_nrow,]
y_residual <- y[(complete_nrow + 1):x_nrow,]
batch_nums <- complete_batch_num + 1
}
# get residual batch data
residual_nrow <- x_nrow - complete_nrow
if (residual_nrow == 1) {
x_residual <- t(x_residual)
y_residual <- t(y_residual)
}
# locate and get other batches
lapply(1:batch_nums, function(i) {
start_index <- (i - 1) * batch_size + 1
if (start_index + batch_size - 1 <= x_nrow) {
list(x = x[start_index:(start_index + batch_size - 1),],
y = y[start_index:(start_index + batch_size - 1),])
} else {
list(x = x_residual, y = y_residual)
}
})
}
))
UnsupervisedModel <- setRefClass("UnsupervisedModel", contains = "Model", methods = list(
to_batches = function(x, batch_size) {
# split the data into mini-batches; the rows may not divide evenly into
# batch_size, so there are complete batches plus a possible residual batch
x <- as.matrix(x)
x_nrow <- nrow(x)
# the number of complete batches
complete_batch_num <- floor(x_nrow / batch_size)
# The last row of complete batches
complete_nrow <- complete_batch_num * batch_size
# locate the residual part
if (complete_nrow == x_nrow) {
x_residual <- NULL
batch_nums <- complete_batch_num
} else {
x_residual <- x[(complete_nrow + 1):x_nrow,]
batch_nums <- complete_batch_num + 1
}
# get residual batch data
residual_nrow <- x_nrow - complete_nrow
if (residual_nrow == 1) {
x_residual <- t(x_residual)
}
# locate and get other batches
lapply(1:batch_nums, function(i) {
start_index <- (i - 1) * batch_size + 1
if (start_index + batch_size - 1 <= x_nrow) {
x[start_index:(start_index + batch_size - 1),]
} else {
x_residual
}
})
}
))
Classifier <- setRefClass("Classifier", contains = "SupervisedModel", methods = list(
evaluate = function(y_pred, y_test) .self$evaluate_accuracy(y_pred, y_test),
confusionMatrix = function(y_pred, y_test) table(y_pred, y_test),
evaluate_accuracy = function(y_pred, y_test) sum(y_pred == y_test) / length(y_pred)
))
Regressor <- setRefClass("Regressor", contains = "SupervisedModel", methods = list())
Clusterer <- setRefClass("Clusterer", contains = "UnsupervisedModel", methods = list())
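# Minimal usage sketch (values are illustrative only, not part of the module):
# clf <- Classifier$new()
# batches <- clf$to_batches(x = matrix(1:20, ncol = 2),
#                           y = matrix(rep(0:1, 5)), batch_size = 3)
# length(batches)   # 4: three complete batches plus one residual batch
# clf$evaluate(y_pred = c(1, 0, 1), y_test = c(1, 1, 1))   # accuracy = 2/3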
|
0d7c0b7a0c12055532837ea72f52039aa9b10aa8
|
b75c76365c852202db6616287f9163ce0e704202
|
/man/Armitage.Rd
|
efe08dc4ae6f02b2f12be0845deae3f6d6a15be6
|
[] |
no_license
|
cran/MCPerm
|
848a919f3d28d176c63cb979e51138c492e4d0f4
|
7501eb560d0b2c8809fa6f22407141a59a3dcb4f
|
refs/heads/master
| 2020-05-27T04:16:30.278126
| 2013-06-17T00:00:00
| 2013-06-17T00:00:00
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,559
|
rd
|
Armitage.Rd
|
\name{Armitage}
\alias{Armitage}
\title{
Armitage's trend test for the 2x3 genotype table
}
\description{
Armitage's trend test for the 2x3 genotype table.
}
\usage{
Armitage(case_11, case_12, case_22, control_11, control_12, control_22)
}
\arguments{
\item{case_11}{
a non-negative integer, the frequency of genotype "allele1/allele1" in case samples.
}
\item{case_12}{
a non-negative integer, the frequency of genotype "allele1/allele2" in case samples.
}
\item{case_22}{
a non-negative integer, the frequency of genotype "allele2/allele2" in case samples.
}
\item{control_11}{
a non-negative integer, the frequency of genotype "allele1/allele1" in control samples.
}
\item{control_12}{
a non-negative integer, the frequency of genotype "allele1/allele2" in control samples.
}
\item{control_22}{
a non-negative integer, the frequency of genotype "allele2/allele2" in control samples.
}
}
\details{
The Cochran-Armitage test for trend is used in categorical data analysis when the aim is to assess
the presence of an association between a variable with two categories and a variable with k
categories. It modifies the Pearson chi-squared test to incorporate a suspected ordering in the effects
of the k categories of the second variable. The trend test is often used as a genotype-based test for
case-control genetic association studies.
}
\value{
\item{statistic }{numeric, the statistic of armitage test for trend.}
\item{pValue }{numeric, the p value of armitage test for trend.}
}
\references{
Armitage, P(1955): Tests for Linear Trends in Proportions and Frequencies.
statgen.org(2007): A derivation for Armitage's trend test for the 2x3 genotype table.
}
\author{
Lanying Zhang and Yongshuai Jiang <jiangyongshuai@gmail.com>
}
\seealso{
\code{\link{OR}},
\code{\link{OR.TradPerm}},
\code{\link{OR.MCPerm}},
\code{\link{Armitage.TradPerm}},
\code{\link{Armitage.MCPerm}},
\code{\link{chisq.test}},
\code{\link{chisq.TradPerm}},
\code{\link{chisq.MCPerm}},
\code{\link{fisher.test}},
\code{\link{fisher.TradPerm}},
\code{\link{fisher.MCPerm}},
\code{\link{meta}},
\code{\link{meta.TradPerm}},
\code{\link{meta.MCPerm}},
\code{\link{permuteGenotype}},
\code{\link{rhyper}},
\code{\link{permuteGenotypeCount}},
\code{\link{genotypeStat}}
}
\examples{
# case_11=4
# case_12=1
# case_22=1
# control_11=3
# control_12=0
# control_22=0
# Armitage(case_11,case_12,case_22,control_11,control_12,control_22)
}
\keyword{ Armitage }
|
e1177290a70574af8f6af37d9df187b9f27d10f5
|
7d51f78a1a58e67aa5302e0cac35e2142246c1ae
|
/permutation.R
|
08f70d9bf708aef7196d61e7ef732d5b384939bc
|
[] |
no_license
|
richasdy/HelloR
|
8a35b4ad32403f61df8b8f17b03a21cfea819e97
|
384bab9cf879a4176d7c131a66cee178d5e9e8ed
|
refs/heads/master
| 2021-06-10T14:31:49.126870
| 2016-12-08T04:45:54
| 2016-12-08T04:45:54
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 490
|
r
|
permutation.R
|
# permutation, combination
# load library
library(gtools)
#urn with 3 balls
x <- c('red', 'blue', 'black')
#pick 2 balls from the urn with replacement
#get all permutations
permutations(n=3,r=2,v=x,repeats.allowed=T)
#pick 2 balls from the urn without replacement
#get all permutations
permutations(n=3,r=2,v=x)
#number of permutations (with repetition allowed)
nrow(permutations(n=3,r=2,v=x,repeats.allowed=T))
#calculate the number of combinations without replacement/repetition
choose(24,4)
choose(n=24,k=4)
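#for comparison (a small illustrative addition): list the combinations of
#2 balls drawn without replacement and count them; the count equals choose(3,2) = 3
combinations(n=3,r=2,v=x)
nrow(combinations(n=3,r=2,v=x))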
|
332044d8ec942887e03be0f58cba0bf19032b8f6
|
9034dfa4936f52dff9b31f2dee5515c848a3d9bb
|
/man/GenfrailtyPenal.Rd
|
3923c5ca27e4e9b39b213e3ad407278d54011309
|
[] |
no_license
|
cran/frailtypack
|
41c6fa2c6a860c9b1e2feff84814e29d32b56091
|
dfbdc53920a754f8529829f1f0fccc8718948cca
|
refs/heads/master
| 2022-01-01T02:07:40.270666
| 2021-12-20T09:30:02
| 2021-12-20T09:30:02
| 17,696,141
| 9
| 4
| null | null | null | null |
UTF-8
|
R
| false
| true
| 44,502
|
rd
|
GenfrailtyPenal.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/GenfrailtyPenal.R
\name{GenfrailtyPenal}
\alias{GenfrailtyPenal}
\title{Fit a Shared or a Joint Frailty Generalized Survival Model}
\usage{
GenfrailtyPenal(formula, formula.terminalEvent, data, recurrentAG = FALSE,
family, hazard = "Splines", n.knots, kappa, betaknots = 1, betaorder = 3,
RandDist = "Gamma", init.B, init.Theta, init.Alpha, Alpha, maxit = 300,
nb.gh, nb.gl, LIMparam = 1e-3, LIMlogl = 1e-3, LIMderiv = 1e-3, print.times = TRUE,
cross.validation, jointGeneral, nb.int, initialize, init.Ksi, Ksi, init.Eta)
}
\arguments{
\item{formula}{A formula object, with the response on the left of a
\eqn{\sim} operator, and the terms on the right. The response must be a
survival object as returned by the '\code{Surv}' function
like in survival package. Interactions are possible using ' * ' or ' : '.}
\item{formula.terminalEvent}{Only for joint frailty models: a formula object,
only requires terms on the right to indicate which variables are used for
the terminal event. Interactions are possible using ' * ' or ' : '.}
\item{data}{A 'data.frame' with the variables used in '\code{formula}'.}
\item{recurrentAG}{Logical value. Is the Andersen-Gill model fitted? If TRUE,
recurrent event times are handled with the counting process approach of
Andersen and Gill. This formulation can be used for dealing with
time-dependent covariates. The default is FALSE.}
\item{family}{Type of Generalized Survival Model to fit.
\code{"PH"} for a proportional hazards model,
\code{"AH"} for an additive hazards model,
\code{"PO"} for a proportional odds model and
\code{"probit"} for a probit model.
A vector of length 2 is expected for joint models
(e.g., \code{family=c("PH","PH")}).}
\item{hazard}{Type of hazard functions:
\code{"Splines"} for semi-parametric hazard functions using equidistant
intervals, or \code{"parametric"} for parametric distribution functions.
In case of \code{family="PH"} or \code{family="AH"},
the \code{"parametric"} option corresponds to a Weibull distribution.
In case of \code{family="PO"} and \code{family="probit"},
the \code{"parametric"} option corresponds to a log-logistic
and a log-normal distribution, respectively.
So far, the \code{"Splines"} option is only available for PH and AH submodels.
Default is \code{"Splines"}.}
\item{n.knots}{Integer giving the number of knots to use. Value required in
the penalized likelihood estimation. It corresponds to the \code{n.knots+2}
splines functions for the approximation of the hazard or the survival
functions. We estimate I- or M-splines of order 4. When the user sets the
number of knots equal to \code{k} (i.e. \code{n.knots=k}),
then the number of interior knots is \code{k-2} and the number of splines is
\code{(k-2)+order}. Number of knots must be between 4 and 20. (See Note)}
\item{kappa}{Positive smoothing parameter in the penalized likelihood
estimation. The coefficient kappa tunes the intensity of the penalization
(the integral of the squared second derivative of hazard function).
In a stratified shared model, this argument must be a vector
with kappas for both strata. In a stratified joint model, this argument
must be a vector with kappas for both strata for recurrent events plus one
kappa for terminal event.
We advise the user to identify several possible tuning
parameters, note their defaults and look at the sensitivity of the results
to varying them. Value required. (See Note).}
\item{betaknots}{Number of inner knots used for the
B-splines time-varying coefficient estimation. Default is 1.
See '\code{timedep}' function for more details.}
\item{betaorder}{Order of the B-splines used for the
time-varying coefficient estimation.
Default is cubic B-splines (\code{order=3}).
See '\code{timedep}' function for more details.
Not implemented for Proportional Odds and Probit submodels.}
\item{RandDist}{Type of random effect distribution:
\code{"Gamma"} for a gamma distribution,
and \code{"LogN"} for a log-normal distribution (not implemented yet).
Default is \code{"Gamma"}.}
\item{init.B}{A vector of initial values for regression coefficients. This
vector should be of the same size as the whole vector of covariates with the
first elements for the covariates related to the recurrent events and then
to the terminal event (interactions in the end of each component). Default
is 0.1 for each (for Generalized Survival and Shared Frailty Models)
or 0.5 (for Generalized Joint Frailty Models).}
\item{init.Theta}{Initial value for frailty variance.}
\item{init.Alpha}{Only for Generalized Joint Frailty Models:
initial value for parameter alpha.}
\item{Alpha}{Only for Generalized Joint Frailty Models:
input "None" so as to fit a joint model without the parameter alpha.}
\item{maxit}{Maximum number of iterations for the Marquardt algorithm.
Default is 300}
\item{nb.gh}{Number of nodes for the Gaussian-Hermite quadrature.
It can be chosen among 5, 7, 9, 12, 15, 20 and 32.
The default is 20 if \code{hazard="Splines"}, 32 otherwise.}
\item{nb.gl}{Number of nodes for the Gaussian-Laguerre quadrature.
It can be chosen between 20 and 32.
The default is 20 if \code{hazard="Splines"}, 32 otherwise.}
\item{LIMparam}{Convergence threshold of the Marquardt algorithm for the
parameters (see Details), \eqn{10}\out{<sup>-3</sup>} by default.}
\item{LIMlogl}{Convergence threshold of the Marquardt algorithm for the
log-likelihood (see Details), \eqn{10}\out{<sup>-3</sup>} by default.}
\item{LIMderiv}{Convergence threshold of the Marquardt algorithm for the
gradient (see Details), \eqn{10}\out{<sup>-3</sup>} by default.}
\item{print.times}{A logical parameter to print iteration process. Default
is TRUE.}
\item{cross.validation}{Not implemented yet for the generalized settings.}
\item{jointGeneral}{Not implemented yet for the generalized settings.}
\item{nb.int}{Not implemented yet for the generalized settings.}
\item{initialize}{Not implemented yet for the generalized settings.}
\item{init.Ksi}{Not implemented yet for the generalized settings.}
\item{Ksi}{Not implemented yet for the generalized settings.}
\item{init.Eta}{Not implemented yet for the generalized settings.}
}
\value{
The following components are included in a 'frailtyPenal' object for each
model.
\item{b}{Sequence of the corresponding estimation of the coefficients for
the hazard functions (parametric or semiparametric), the random effects
variances and the regression coefficients.}
\item{call}{The code used for the model.}
\item{formula}{The formula part of the code used for the model.}
\item{n}{The number of observations used in the fit.}
\item{groups}{The maximum number of groups used in the fit.}
\item{n.events}{The number of events observed in the fit.}
\item{n.eventsbygrp}{A vector of length the number of groups
giving the number of observed events in each group.}
\item{loglik}{The marginal log-likelihood in the parametric case.}
\item{loglikPenal}{The marginal penalized log-likelihood
in the semiparametric case.}
\item{coef}{The regression coefficients.}
\item{varH}{The variance matrix of
the regression coefficients before positivity constraint transformation.
Then, the delta method is needed to obtain the estimated variance parameters.
That is why some variances don't match with
the printed values at the end of the model.}
\item{varHtotal}{The variance matrix of
all the parameters before positivity constraint transformation.
Then, the delta method is needed to obtain the estimated variance parameters.
That is why some variances don't match with
the printed values at the end of the model.}
\item{varHIH}{The robust estimation of
the variance matrix of the regression coefficients}
\item{varHIHtotal}{The robust estimation of
the variance matrix of all parameters.}
\item{x}{Matrix of times where the hazard functions are estimated.}
\item{xSu}{Matrix of times where the survival functions are estimated.}
\item{lam}{Array (dim=3) of baseline hazard estimates
and confidence bands.}
\item{surv}{Array (dim=3) of baseline survival estimates
and confidence bands.}
\item{type}{Character string specifying the type of censoring,
see the \code{Surv} function for more details.}
\item{n.strat}{Number of strata.}
\item{n.iter}{Number of iterations needed to converge.}
\item{median}{The value of the median survival and its confidence bands.
If there are two strata or more, the first value corresponds to the value
for the first strata, etc.}
\item{LCV}{The approximated likelihood cross-validation criterion in the
semiparametric case.
With H (resp. H\out{<sub>pen</sub>}) the hessian matrix
of log-likelihood (resp. penalized log-likelihood),
EDF = H\out{<sub>pen</sub>}\out{<sup>-1</sup>} H
the effective degrees of freedom,
L(\eqn{\xi},\eqn{\theta}) the log-likelihood and
n the number of observations,
\deqn{LCV = 1/n x (trace(EDF) - L(\xi,\theta)).}}
\item{AIC}{The Akaike information Criterion for the parametric case.
With p the number of parameters,
n the number of observations and L(\eqn{\xi},\eqn{\theta}) the log-likelihood,
\deqn{AIC = 1/n x (p - L(\xi,\theta)).}}
\item{npar}{Number of parameters.}
\item{nvar}{Number of explanatory variables.}
\item{typeof}{Indicator of the type of hazard functions computed :
0 for "Splines", 2 for "parametric".}
\item{istop}{Convergence indicator:
1 if convergence is reached,
2 if convergence is not reached,
3 if the hessian matrix is not positive definite,
4 if a numerical problem has occurred in the likelihood calculation}
\item{shape.param}{Shape parameter for the parametric hazard function
(a Weibull distribution is used for proportional and additive hazards models,
a log-logistic distribution is used for proportional odds models,
a log-normal distribution is used for probit models).}
\item{scale.param}{Scale parameter for the parametric hazard function.}
\item{Names.data}{Name of the dataset.}
\item{Frailty}{Logical value. Was model with frailties fitted ?}
\item{linear.pred}{Linear predictor:
\bold{\eqn{\beta}}'\bold{\eqn{X}} in the generalized survival models or
\bold{\eqn{\beta}}'\bold{\eqn{X}} + log(\eqn{u}\out{<sub>i</sub>})
in the shared frailty generalized survival models.}
\item{BetaTpsMat}{Matrix of time varying-effects and confidence bands
(the first column used for abscissa of times).}
\item{nvartimedep}{Number of covariates with time-varying effects.}
\item{Names.vardep}{Name of the covariates with time-varying effects.}
\item{EPS}{Convergence criteria concerning
the parameters, the likelihood and the gradient.}
\item{family}{Type of Generalized Survival Model fitted
(0 for PH, 1 for PO, 2 for probit, 3 for AH).}
\item{global_chisq.test}{A binary variable equals to 0 when no multivariate
Wald is given, 1 otherwise.}
\item{beta_p.value}{p-values of the Wald test for the estimated
regression coefficients.}
\item{cross.Val}{Logical value. Is cross validation procedure used for
estimating the smoothing parameters in the penalized likelihood estimation?}
\item{DoF}{Degrees of freedom associated with the smoothing parameter
\code{kappa}.}
\item{kappa}{A vector with the smoothing parameters in the penalized
likelihood estimation corresponding to each baseline function as components.}
\item{n.knots}{Number of knots for estimating the baseline functions in the
penalized likelihood estimation.}
\item{n.knots.temp}{Initial value for the number of knots.}
\item{global_chisq}{A vector with the values of each multivariate Wald test.}
\item{dof_chisq}{A vector with the degree of freedom for each multivariate
Wald test.}
\item{p.global_chisq}{A vector with the p-values for each global multivariate
Wald test.}
\item{names.factor}{Names of the "as.factor" variables.}
\item{Xlevels}{Vector of the values that factor might have taken.}
The following components are specific to \bold{shared} models.
\item{equidistant}{Indicator for the intervals used in the spline estimation
of baseline hazard functions :
1 for equidistant intervals ; 0 for intervals using percentile
(note: \code{equidistant = 2} in case of parametric estimation).}
\item{Names.cluster}{Cluster names.}
\item{theta}{Variance of the gamma frailty parameter, i.e.
Var(\eqn{u}\out{<sub>i</sub>}).}
\item{varTheta}{Variance of parameter \code{theta}.}
\item{theta_p.value}{p-value of the Wald test for
the estimated variance of the gamma frailty.}
The following components are specific to \bold{joint} models.
\item{formula}{The formula part of the code
used for the recurrent events.}
\item{formula.terminalEvent}{The formula part of the code
used for the terminal model.}
\item{n.deaths}{Number of observed deaths.}
\item{n.censored}{Number of censored individuals.}
\item{theta}{Variance of the gamma frailty parameter, i.e.
Var(\eqn{u}\out{<sub>i</sub>}).}
\item{indic_alpha}{Indicator if a joint frailty model with
\eqn{\alpha} parameter was fitted.}
\item{alpha}{The coefficient \eqn{\alpha} associated
with the frailty parameter in the terminal hazard function.}
\item{nvar}{A vector with the number of covariates
of each type of hazard function as components.}
\item{nvarnotdep}{A vector with the number of constant effect covariates
of each type of hazard function as components.}
\item{nvarRec}{Number of recurrent explanatory variables.}
\item{nvarEnd}{Number of death explanatory variables.}
\item{noVar1}{Indicator of recurrent explanatory variables.}
\item{noVar2}{Indicator of death explanatory variables.}
\item{Names.vardep}{Name of the covariates with time-varying effects
for the recurrent events.}
\item{Names.vardepdc}{Name of the covariates with time-varying effects
for the terminal event.}
\item{xR}{Matrix of times where both survival and hazard function
are estimated for the recurrent event.}
\item{xD}{Matrix of times for the terminal event.}
\item{lamR}{Array (dim=3) of hazard estimates and confidence bands
for recurrent event.}
\item{lamD}{The same value as \code{lamR} for the terminal event.}
\item{survR}{Array (dim=3) of baseline survival estimates and
confidence bands for recurrent event.}
\item{survD}{The same value as \code{survR} for the terminal event.}
\item{nb.gh}{Number of nodes for the Gaussian-Hermite quadrature.}
\item{nb.gl}{Number of nodes for the Gaussian-Laguerre quadrature.}
\item{medianR}{The value of the median survival for the recurrent events
and its confidence bands.}
\item{medianD}{The value of the median survival for the terminal event
and its confidence bands.}
\item{names.factor}{Names of the "as.factor" variables
for the recurrent events.}
\item{names.factordc}{Names of the "as.factor" variables
for the terminal event.}
\item{Xlevels}{Vector of the values that factor might have taken
for the recurrent events.}
\item{Xlevels2}{Vector of the values that factor might have taken
for the terminal event.}
\item{linear.pred}{Linear predictor for the recurrent part:
\bold{\eqn{\beta}}'\bold{\eqn{X}} + log(\eqn{u}\out{<sub>i</sub>}).}
\item{lineardeath.pred}{Linear predictor for the terminal part:
\bold{\eqn{\beta}}'\bold{\eqn{X}} + \eqn{\alpha} x log(\eqn{u}\out{<sub>i</sub>}).}
\item{Xlevels}{Vector of the values that factor might have taken
for the recurrent part.}
\item{Xlevels2}{vector of the values that factor might have taken
for the death part.}
\item{BetaTpsMat}{Matrix of time varying-effects and confidence bands for
recurrent event (the first column used for abscissa of times of recurrence).}
\item{BetaTpsMatDc}{Matrix of time varying-effects and confidence bands for
terminal event (the first column used for abscissa of times of death).}
\item{alpha_p.value}{p-value of the Wald test for the estimated \eqn{\alpha}.}
}
\description{
{
\if{html}{\bold{I. SHARED FRAILTY GENERALIZED SURVIVAL MODELS}
Fit a gamma Shared Frailty Generalized Survival Model using
a parametric estimation, or a semi-parametric penalized likelihood estimation.
Right-censored data and strata (up to 6 levels) are allowed.
It allows to obtain a parametric or flexible semi-parametric smooth
hazard and survival functions.
Each frailty term \eqn{u}\out{<sub>i</sub>} is assumed
to act multiplicatively on the hazard function, and to be drawn from a
Gamma distribution with unit mean and variance \eqn{\theta}.
Conditional on the frailty term, the hazard function for the
\eqn{j}\out{<sup>th</sup>} subject in the \eqn{i}\out{<sup>th</sup>} group
is then expressed by
{\figure{gsm1.png}{options: width="70\%"}}
where \bold{\eqn{x}}\out{<sub>ij</sub>}
is a collection of baseline covariates,
\bold{\eqn{\xi}} is a vector of parameters, and
\eqn{\lambda}\out{<sub>ij</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>ij</sub>} ; \bold{\eqn{\xi}})
is the hazard function for an average value of the frailty.
The associated conditional survival function writes
{\figure{gsm2.png}{options: width="70\%"}}
where
\eqn{S}\out{<sub>ij</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>ij</sub>} ; \bold{\eqn{\xi}})
designates the survival function for an average value of the frailty.
Following Liu et al. (2017, 2018), the latter function is expressed in terms of
a link function \eqn{g}(.) and a linear predictor
\eqn{\eta}\out{<sub>ij</sub>}
(\eqn{t}, \bold{\eqn{x}}\out{<sub>ij</sub>}; \bold{\eqn{\xi}})
such that
\eqn{g}[\eqn{S}\out{<sub>ij</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>ij</sub>} ; \bold{\eqn{\xi}})]
=
\eqn{\eta}\out{<sub>ij</sub>}
(\eqn{t}, \bold{\eqn{x}}\out{<sub>ij</sub>}; \bold{\eqn{\xi}}),
i.e.
\eqn{S}\out{<sub>ij</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>ij</sub>} ; \bold{\eqn{\xi}})
=
\eqn{h}[\eqn{\eta}\out{<sub>ij</sub>}
(\eqn{t}, \bold{\eqn{x}}\out{<sub>ij</sub>}; \bold{\eqn{\xi}})]
with \eqn{h}() = \eqn{g}\out{<sup>-1</sup>}().
The conditional survival function is finally modeled by
{\figure{gsm3.png}{options: width="70\%"}}
The table below summarizes the most commonly used (inverse) link functions and
their associated conditional survival, hazard and cumulative hazard functions.
PHM stands for "Proportional Hazards Model",
POM for "Proportional Odds Model,
PROM for "Probit Model" and AHM for "Additive Hazards Model".
{\figure{gsm4.png}{options: width="100\%"}}
\bold{I.(a) Fully parametric case}
In the fully parametric case, linear predictors considered are of the form
{\figure{gsm5.png}{options: width="70\%"}}
where \eqn{\rho > 0} is a shape parameter,
\eqn{\gamma > 0} a scale parameter,
\bold{\eqn{\beta}} a vector of regression coefficients,
and \bold{\eqn{\xi}} = (\eqn{\rho} ,\eqn{\gamma}, \bold{\eqn{\beta}}).
With the appropriate link function, such linear parametric predictors
make it possible to recover
a Weibull baseline survival function for PHMs and AHMs,
a log-logistic baseline survival function for POMs,
and a log-normal one for PROMs.
\bold{I. (b) Flexible semi-parametric case}
For PHM and AHM, a more flexible splines-based approach is proposed for
modeling the baseline hazard function and time-varying regression coefficients.
In this case, conditional on the frailty term \eqn{u}\out{<sub>i</sub>},
the hazard function for the \eqn{j}\out{<sup>th</sup>} subject
in the \eqn{i}\out{<sup>th</sup>} group is still expressed by
\eqn{\lambda}\out{<sub>ij</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>ij</sub>}, \eqn{u}\out{<sub>i</sub>} ;
\bold{\eqn{\xi}})
= \eqn{u}\out{<sub>i</sub>}
\eqn{\lambda}\out{<sub>ij</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>ij</sub>} ; \bold{\eqn{\xi}}),
but we have this time
{\figure{gsm6.png}{options: width="70\%"}}
The smoothness of baseline hazard function \eqn{\lambda}\out{<sub>0</sub>}()
is ensured by penalizing the log-likelihood by a term which has
large values for rough functions.
Moreover, for parametric and flexible semi-parametric AHMs, the
log-likelihood is constrained to ensure the strict positivity of the hazards,
since the latter is not naturally guaranteed by the model.
\bold{II. JOINT FRAILTY GENERALIZED SURVIVAL MODELS}
Fit a gamma Joint Frailty Generalized Survival Model for recurrent and
terminal events using a parametric estimation,
or a semi-parametric penalized likelihood estimation.
Right-censored data and strata (up to 6 levels) for the recurrent event part
are allowed.
Joint frailty models allow studying, jointly, survival processes
of recurrent and terminal events, by considering the terminal event as an
informative censoring.
This model includes a common patient-specific frailty term
\eqn{u}\out{<sub>i</sub>} for the two survival functions which will
take into account the unmeasured heterogeneity in the data,
associated with unobserved covariates.
The frailty term acts differently for the two survival functions
(\eqn{u}\out{<sub>i</sub>} for the recurrent survival function and
\eqn{u}\out{<sub>i</sub>}\out{<sup>α</sup>} for the death one).
The covariates could be different for the recurrent and terminal event parts.
\bold{II.(a) Fully parametric case}
For the \eqn{j}\out{<sup>th</sup>} recurrence (j=1,...,n\out{<sub>i</sub>})
and the \eqn{i}\out{<sup>th</sup>} patient (i=1,...,N),
the gamma Joint Frailty Generalized Survival Model
for recurrent event survival function
\eqn{S}\out{<sub>Rij</sub>}(.) and death survival function
\eqn{S}\out{<sub>Di</sub>}(.) is
{\figure{gsm7.png}{options: width="70\%"}}
- \eqn{\eta}\out{<sub>Rij</sub>} (resp. \eqn{\eta}\out{<sub>Di</sub>})
is the linear predictor for the recurrent (resp. terminal) event process.
The form of these linear predictors is the same as the one presented in I.(a).
- \eqn{h}\out{<sub>R</sub>}(.) (resp. \eqn{h}\out{<sub>D</sub>}(.))
is the inverse link function associated with
recurrent events (resp. terminal event).
- \bold{\eqn{x}}\out{<sub>Rij</sub>} and \bold{\eqn{x}}\out{<sub>Di</sub>}
are two vectors of baseline covariates associated with
recurrent and terminal events.
- \bold{\eqn{\xi}}\out{<sub>R</sub>} and \bold{\eqn{\xi}}\out{<sub>D</sub>}
are the parameter vectors for recurrent and terminal events.
- \eqn{\alpha} is a parameter allowing more flexibility in the association
between recurrent and terminal events processes.
- The random frailties \eqn{u}\out{<sub>i</sub>} are still assumed iid and
drawn from a \eqn{\Gamma}(1/\eqn{\theta},1/\eqn{\theta}).
\bold{II.(b) Flexible semi-parametric case}
If one chooses to fit a PHM or an AHM for recurrent and/or terminal events,
a splines-based approach for modeling baseline hazard functions
and time-varying regression coefficients is still available.
In this approach, the submodel for recurrent events is expressed as
\eqn{\lambda}\out{<sub>Rij</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>Rij</sub>}, \eqn{u}\out{<sub>i</sub>} ;
\bold{\eqn{\xi}}\out{<sub>R</sub>})
= \eqn{u}\out{<sub>i</sub>}
\eqn{\lambda}\out{<sub>Rij</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>Rij</sub>} ;
\bold{\eqn{\xi}}\out{<sub>R</sub>}), where
{\figure{gsm8.png}{options: width="70\%"}}
The submodel for terminal event is expressed as
\eqn{\lambda}\out{<sub>Di</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>Di</sub>}, \eqn{u}\out{<sub>i</sub>} ;
\bold{\eqn{\xi}}\out{<sub>D</sub>})
= \eqn{u}\out{<sub>i</sub>}\out{<sup>α</sup>}
\eqn{\lambda}\out{<sub>Di</sub>}
(\eqn{t} | \bold{\eqn{x}}\out{<sub>Di</sub>} ;
\bold{\eqn{\xi}}\out{<sub>D</sub>}), where
{\figure{gsm9.png}{options: width="70\%"}}
Baseline hazard functions
\eqn{\lambda}\out{<sub>R0</sub>}(.) and \eqn{\lambda}\out{<sub>D0</sub>}(.)
are estimated using cubic M-splines (of order 4)
with positive coefficients, and the time-varying coefficients
\eqn{\beta}\out{<sub>R</sub>}(.) and \eqn{\beta}\out{<sub>D</sub>}(.)
are estimated using B-splines of order q.
The smoothness of baseline hazard functions
is ensured by penalizing the log-likelihood by two terms
which have large values for rough functions.
Moreover,
if one chooses an AHM for recurrent and/or terminal event submodel,
the log-likelihood is constrained to ensure
the strict positivity of the hazards,
since the latter is not naturally guaranteed by the model.
}
\if{latex}{\bold{Shared Frailty model}
Fit a shared gamma or log-normal frailty model using a semiparametric
Penalized Likelihood estimation or parametric estimation on the hazard
function. Left-truncated, right-censored data, interval-censored data and
strata (up to 6 levels) are allowed. It allows to obtain a non-parametric
smooth hazard of survival function. This approach is different from the
partial penalized likelihood approach of Therneau et al.
The hazard function, conditional on the frailty term \eqn{\omega_i}, of a
shared gamma frailty model for the \eqn{j^{th}} subject in the \eqn{i^{th}}
group:
\deqn{\lambda_{ij}(t|\omega_i)=\lambda_0(t)\omega_i\exp(\bold{\beta^{'}Z_{ij}})}
\deqn{\omega_i\sim\Gamma\left(\frac{1}{\theta},\frac{1}{\theta}\right)
\hspace{0.5cm} \bold{E}(\omega_i)=1
\hspace{0.5cm}\bold{Var}(\omega_i)=\theta}
where \eqn{\lambda_0(t)} is the baseline hazard function, \eqn{\bold{\beta}}
the vector of the regression coefficient associated to the covariate vector
\eqn{\bold{Z_{ij}}} for the \eqn{j^{th}} individual in the \eqn{i^{th}}
group.
Otherwise, in case of a shared log-normal frailty model, we have for the
\eqn{j^{th}} subject in the \eqn{i^{th}} group:
\deqn{\lambda_{ij}(t|\eta_i)=\lambda_0(t)\exp(\eta_i+\bold{\beta^{'}Z_{ij}})}
\deqn{\eta_i\sim N(0,\sigma^2)}
From now on, you can also consider time-varying effects covariates in your
model, see \code{timedep} function for more details.
\bold{Joint Frailty model}
Fit a joint either with gamma or log-normal frailty model for recurrent and
terminal events using a penalized likelihood estimation on the hazard
function or a parametric estimation. Right-censored data and strata (up to 6
levels) for the recurrent event part are allowed. Left-truncated data is not
possible. Joint frailty models allow studying, jointly, survival processes
of recurrent and terminal events, by considering the terminal event as an
informative censoring.
There are two kinds of joint frailty models that can be fitted with
\code{GenfrailtyPenal} :
- The first one (Rondeau et al. 2007) includes a common frailty term to the
individuals \eqn{(\omega_i)} for the two rates which will take into account
the heterogeneity in the data, associated with unobserved covariates. The
frailty term acts differently for the two rates ( \eqn{\omega_i} for the
recurrent rate and \eqn{\omega_i^{\alpha}} for the death rate). The
covariates could be different for the recurrent rate and death rate.
For the \eqn{j^{th}}{j^th} recurrence \eqn{(j=1,...,n_i)} and the
\eqn{i^{th}}{i^th} subject \eqn{(i=1,...,G)}, the joint gamma frailty model
for recurrent event hazard function \eqn{r_{ij}(.)} and death rate
\eqn{\lambda_i(.)} is :
\deqn{\left\{ \begin{array}{ll}
r_{ij}(t|\omega_i)=\omega_ir_0(t)\exp(\bold{\beta_1^{'}Z_i(t)}) &
\mbox{(Recurrent)} \\
\lambda_i(t|\omega_i)=\omega_i^{\alpha}\lambda_0(t)\exp(\bold{\beta_2^{'}Z_i(t)})
& \mbox{(Death)} \\ \end{array} \right. }
where \eqn{r_0(t)} (resp. \eqn{\lambda_0(t)}) is the recurrent (resp.
terminal) event baseline hazard function, \eqn{\bold{\beta_1}} (resp.
\eqn{\bold{\beta_2}}) the regression coefficient vector, \eqn{\bold{Z_i(t)}}
the covariate vector. The random effects of frailties
\eqn{\omega_i\sim\bold{\Gamma}(\frac{1}{\theta},\frac{1}{\theta})} and are
iid.
The joint log-normal frailty model will be :
\deqn{\left\{ \begin{array}{ll}
r_{ij}(t|\eta_i)=r_0(t)\exp(\eta_i+\bold{\beta_1^{'}Z_i(t)}) &
\mbox{(Recurrent)} \\ \lambda_i(t|\eta_i)=\lambda_0(t)\exp(\alpha
\eta_i+\bold{\beta_2^{'}Z_i(t)}) & \mbox{(Death)} \\ \end{array} \right. }
where \deqn{\eta_i\sim N(0,\sigma^2)}
- The second one (Rondeau et al. 2011) is quite similar but the frailty term
is common to the individuals from the same group. This model is useful for the
joint modelling of two clustered survival outcomes. These joint models have been
developed for clustered semi-competing events. The follow-up of each of the
two competing outcomes stops when the event occurs. In this case, j is for
the subject and i for the cluster.
\deqn{\left\{ \begin{array}{ll}
r_{ij}(t|u_i)=u_ir_0(t)\exp(\bold{\beta_1^{'}Z_{ij}(t)}) & \mbox{(Time to
event)} \\
\lambda_{ij}(t|u_i)=u_i^{\alpha}\lambda_0(t)\exp(\bold{\beta_2^{'}Z_{ij}(t)})
& \mbox{(Death)} \\ \end{array} \right. }
It should be noted that in these models it is not recommended to include
the \eqn{\alpha} parameter, as there is not enough information to estimate it and
thus there might be convergence problems.
In case of a log-normal distribution of the frailties, we will have :
\deqn{\left\{ \begin{array}{ll}
r_{ij}(t|v_i)=r_0(t)\exp(v_i+\bold{\beta_1^{'}Z_{ij}(t)}) & \mbox{(Time to
event)} \\ \lambda_{ij}(t|v_i)=\lambda_0(t)\exp(\alpha
v_i+\bold{\beta_2^{'}Z_{ij}(t)}) & \mbox{(Death)} \\ \end{array} \right. }
where \deqn{v_i\sim N(0,\sigma^2)}
This joint frailty model can also be applied to clustered recurrent events
and a terminal event (example on "readmission" data below).
From now on, you can also consider time-varying effects covariates in your
model, see \code{timedep} function for more details.
There is a possibility to use a weighted penalized maximum likelihood
approach for nested case-control design, in which risk set sampling is
performed based on a single outcome (Jazic et al., \emph{Submitted}).
General Joint Frailty model Fit a general joint frailty model for recurrent
and terminal events considering two independent frailty terms. The frailty
term \eqn{u_i} represents the unobserved association between recurrences and
death. The frailty term \eqn{v_i} is specific to the recurrent event rate.
Thus, the general joint frailty model is:
\eqn{\left\{ \begin{array}{ll}
r_{ij}(t|u_i,v_i)=u_iv_ir_0(t)\exp(\bold{\beta_1^{'}Z_{ij}(t)})
=u_iv_ir_{ij}(t) & \mbox{(Recurrent)} \\
\lambda_{i}(t|u_i)=u_i\lambda_0(t)\exp(\bold{\beta_1^{'}Z_{i}(t)}) = u_i
\lambda_{i}(t) & \mbox{(Death)} \\ \end{array} \right. }
where the \eqn{iid} random effects
\eqn{\bold{u_i}\sim\Gamma(\frac{1}{\theta},\frac{1}{\theta})} and the
\eqn{iid} random effects
\eqn{\bold{v_i}\sim\Gamma(\frac{1}{\eta},\frac{1}{\eta})} are independent
from each other. The joint model is fitted using a penalized likelihood
estimation on the hazard. Right-censored data and time-varying covariates
\eqn{\bold{Z}_i(t)} are allowed.
\bold{Nested Frailty model}
\bold{\emph{Data should be ordered according to cluster and subcluster}}
Fit a nested frailty model using a Penalized Likelihood on the hazard
function or using a parametric estimation. Nested frailty models allow
survival studies for hierarchically clustered data by including two iid
gamma random effects. Left-truncated and right-censored data are allowed.
Stratification analysis is allowed (maximum of strata = 2).
The hazard function conditional on the two frailties \eqn{v_i} and
\eqn{w_{ij}} for the \eqn{k^{th}} individual of the \eqn{j^{th}} subgroup of
the \eqn{i^{th}} group is :
\deqn{\left\{ \begin{array}{ll}
\lambda_{ijk}(t|v_i,w_{ij})=v_iw_{ij}\lambda_0(t)exp(\bold{\beta^{'}X_{ijk}})
\\ v_i\sim\Gamma\left(\frac{1}{\alpha},\frac{1}{\alpha}\right)
\hspace{0.05cm}i.i.d. \hspace{0.2cm} \bold{E}(v_i)=1
\hspace{0.2cm}\bold{Var}(v_i)=\alpha \\
w_{ij}\sim\Gamma\left(\frac{1}{\eta},\frac{1}{\eta}\right)\hspace{0.05cm}i.i.d.
\hspace{0.2cm} \bold{E}(w_{ij})=1 \hspace{0.2cm} \bold{Var}(w_{ij})=\eta
\end{array} \right. }
where \eqn{\lambda_0(t)} is the baseline hazard function, \eqn{X_{ijk}}
denotes the covariate vector and \eqn{\beta} the corresponding vector of
regression parameters.
\bold{Joint Nested Frailty Model}
Fit a joint model for recurrent and terminal events using a penalized
likelihood on the hazard functions or a parametric estimation.
Right-censored data are allowed but left-truncated data and stratified
analysis are not allowed.
Joint nested frailty models allow studying, jointly, survival processes of
recurrent and terminal events for hierarchically clustered data, by
considering the terminal event as an informative censoring and by including
two iid gamma random effects.
The joint nested frailty model includes two shared frailty terms, one for
the subgroup (\eqn{u_{fi}}) and one for the group (\eqn{w_f}) into the
hazard functions. These random effects account for the heterogeneity in the data,
associated with unobserved covariates. The frailty terms act differently for
the two rates (\eqn{u_{fi}}, \eqn{w_f^\xi} for the recurrent rate and
\eqn{u_{fi}^\alpha, {w_i}} for the terminal event rate). The covariates
could be different for the recurrent rate and death rate.
For the \eqn{j^{th}} recurrence (j = 1, ..., \eqn{n_i}) of the \eqn{i^{th}}
individual (i = 1, ..., \eqn{m_f}) of the \eqn{f^{th}} group (f = 1, ...,
n), the joint nested gamma frailty model for recurrent event hazard function
\eqn{r_{fij}}(.) and for terminal event hazard function \eqn{\lambda_{fi}}
is :
\deqn{\left\{ \begin{array}{ll} r_{fij}(t|\omega_f, u_{fi}, \bold{X_{fij}})=
r_0(t) u_{fi} \omega_f^\xi \exp(\bold{\beta'} \bold{X_{fij}}) &
\mbox{(Recurrent)} \\ \lambda_{fi}(t|\omega_f, u_{fi},
\bold{X_{fij}})=\lambda_0(t)u_{fi}^\alpha \omega_f \exp(\bold{\gamma'}
\bold{X_{fi}}) & \mbox{(Death)} \\ \end{array} \right. }
where \eqn{r_0(t)}(resp. \eqn{\lambda_0(t)}) is the recurrent (resp.
terminal) event baseline hazard function, \eqn{\beta} (resp. \eqn{\gamma})
the regression coefficient vector, \eqn{\bold{X_{fij}}(t)} the covariates
vector. The random effects are \deqn{\omega_f \sim \Gamma \left(
\frac{1}{\eta}, \frac{1}{\eta}\right)} and \deqn{ u_{fi} \sim \Gamma \left(
\frac{1}{\theta}, \frac{1}{\theta} \right)}
}
}
}
\details{
{
\bold{TYPICAL USES}
For a Generalized Survival Model:
\preformatted{GenfrailtyPenal(
formula=Surv(time,event)~var1+var2,
data, family, \dots)}
For a Shared Frailty Generalized Survival Model:
\preformatted{GenfrailtyPenal(
formula=Surv(time,event)~cluster(group)+var1+var2,
data, family, \dots)}
For a Joint Frailty Generalized Survival Model:
\preformatted{GenfrailtyPenal(
formula=Surv(time,event)~cluster(group)+var1+var2+var3+terminal(death),
formula.terminalEvent= ~var1+var4,
data, family, \dots)}
\bold{OPTIMIZATION ALGORITHM}
The estimated parameters are obtained using the robust Marquardt algorithm
(Marquardt, 1963) which is a combination between a Newton-Raphson algorithm
and a steepest descent algorithm. The iterations are stopped when
the difference between two consecutive log-likelihoods is small
(\eqn{<10}\out{<sup>-3</sup>}),
the estimated coefficients are stable
(consecutive values \eqn{<10}\out{<sup>-3</sup>}),
and the gradient small enough (\eqn{<10}\out{<sup>-3</sup>}).
When the frailty variance is small, numerical problems may arise.
To solve this problem, an alternative formula of the penalized log-likelihood
is used (see Rondeau, 2003 for further details).
For Proportional Hazards and Additive Hazards submodels,
cubic M-splines of order 4 can be used to estimate the hazard function.
In this case, I-splines (integrated M-splines) are used to compute the
cumulative hazard function.
The inverse of the Hessian matrix is the variance estimator.
To deal with the positivity constraint of the variance component and the
spline coefficients, a squared transformation is used and the standard errors
are computed by the \eqn{\Delta}-method (Knight & Xekalaki, 2000).
The integrations in the full log likelihood are evaluated using
Gaussian quadrature. Laguerre polynomials with 20 points are used to treat
the integrations on \eqn{[0,\infty[}.
\bold{INITIAL VALUES}
In case of a shared frailty model,
the splines and the regression coefficients are initialized to 0.1.
The program fits, firstly, an adjusted Cox model to give new initial values
for the splines and the regression coefficients.
The variance of the frailty term \eqn{\theta} is initialized to 0.1.
Then, a shared frailty model is fitted.
In case of a joint frailty model,
the splines and the regression coefficients are initialized to 0.5.
The program fits firstly, an adjusted Cox model to have new initial values
for the splines and the regression coefficients.
The variance of the frailty term \eqn{\theta} and the association parameter
\eqn{\alpha} are initialized to 1.
Then, a joint frailty model is fitted.
}
}
\note{
In the flexible semiparametric case, smoothing parameters \code{kappa} and
number of knots \code{n.knots} are the arguments that the user has to change
if the fitted model does not converge.
\code{n.knots} takes integer values between 4 and 20.
But with \code{n.knots=20}, the model would take a long time to converge.
So, usually, begin first with \code{n.knots=7}, and increase it step by step
until it converges.
\code{kappa} only takes positive values. So, choose a value for kappa (for
instance 10000), and if it does not converge, multiply or divide this value
by 10 or 5 until it converges.
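For instance (an illustrative sketch only; the dataset and variable names below
are placeholders), one may refit the model over a grid of candidate values and
keep a fit for which the convergence indicator \code{istop} equals 1:
\preformatted{
for (kap in c(1e2, 1e3, 1e4, 1e5)) {
  fit <- GenfrailtyPenal(Surv(time, event) ~ var1, data = mydata,
    family = "PH", hazard = "Splines", n.knots = 7, kappa = kap)
  print(c(kappa = kap, istop = fit$istop))
}
}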
}
\examples{
\dontrun{
#############################################################################
# ----- GENERALIZED SURVIVAL MODELS (without frailties) ----- #
#############################################################################
library(timereg)
adult.retino = retinopathy[retinopathy$type == "adult", ]
adult.retino[adult.retino$futime >= 50, "status"] = 0
adult.retino[adult.retino$futime >= 50, "futime"] = 50
### --- Parametric PH, AH, PO and probit models --- ###
GenfrailtyPenal(formula=Surv(futime,status)~trt, data=adult.retino,
hazard="parametric", family="PH")
GenfrailtyPenal(formula=Surv(futime,status)~trt, data=adult.retino,
hazard="parametric", family="AH")
GenfrailtyPenal(formula=Surv(futime,status)~trt, data=adult.retino,
hazard="parametric", family="PO")
GenfrailtyPenal(formula=Surv(futime,status)~trt, data=adult.retino,
hazard="parametric", family="probit")
### --- Semi-parametric PH and AH models --- ###
GenfrailtyPenal(formula=Surv(futime,status)~timedep(trt), data=adult.retino,
family="PH", hazard="Splines", n.knots=8, kappa=10^6, betaknots=1, betaorder=2)
GenfrailtyPenal(formula=Surv(futime,status)~timedep(trt), data=adult.retino,
family="AH", hazard="Splines", n.knots=8, kappa=10^10, betaknots=1, betaorder=2)
#############################################################################
# ----- SHARED FRAILTY GENERALIZED SURVIVAL MODELS ----- #
#############################################################################
library(timereg)
adult.retino = retinopathy[retinopathy$type == "adult", ]
adult.retino[adult.retino$futime >= 50, "status"] = 0
adult.retino[adult.retino$futime >= 50, "futime"] = 50
### --- Parametric PH, AH, PO and probit models --- ###
GenfrailtyPenal(formula=Surv(futime,status)~trt+cluster(id), data=adult.retino,
hazard="parametric", family="PH")
GenfrailtyPenal(formula=Surv(futime,status)~trt+cluster(id), data=adult.retino,
hazard="parametric", family="AH")
GenfrailtyPenal(formula=Surv(futime,status)~trt+cluster(id), data=adult.retino,
hazard="parametric", family="PO")
GenfrailtyPenal(formula=Surv(futime,status)~trt+cluster(id), data=adult.retino,
hazard="parametric", family="probit")
### --- Semi-parametric PH and AH models --- ###
GenfrailtyPenal(formula=Surv(futime,status)~cluster(id)+timedep(trt),
data=adult.retino, family="PH", hazard="Splines",
n.knots=8, kappa=10^6, betaknots=1, betaorder=2)
GenfrailtyPenal(formula=Surv(futime,status)~cluster(id)+timedep(trt),
data=adult.retino, family="AH", hazard="Splines",
n.knots=8, kappa=10^10, betaknots=1, betaorder=2)
#############################################################################
# ----- JOINT FRAILTY GENERALIZED SURVIVAL MODELS ----- #
#############################################################################
data("readmission")
readmission[, 3:5] = readmission[, 3:5]/365.25
### --- Parametric dual-PH, AH, PO and probit models --- ###
GenfrailtyPenal(
formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+chemo,
formula.terminalEvent=~sex+dukes+chemo, data=readmission, recurrentAG=TRUE,
hazard="parametric", family=c("PH","PH"))
GenfrailtyPenal(
formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+chemo,
formula.terminalEvent=~sex+dukes+chemo, data=readmission, recurrentAG=TRUE,
hazard="parametric", family=c("AH","AH"))
GenfrailtyPenal(
formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+chemo,
formula.terminalEvent=~sex+dukes+chemo, data=readmission, recurrentAG=TRUE,
hazard="parametric", family=c("PO","PO"))
GenfrailtyPenal(
formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+chemo,
formula.terminalEvent=~sex+dukes+chemo, data=readmission, recurrentAG=TRUE,
hazard="parametric", family=c("probit","probit"))
### --- Semi-parametric dual-PH and AH models --- ###
GenfrailtyPenal(
formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+timedep(chemo),
formula.terminalEvent=~sex+dukes+timedep(chemo), data=readmission, recurrentAG=TRUE,
hazard="Splines", family=c("PH","PH"),
n.knots=5, kappa=c(100,100), betaknots=1, betaorder=3)
GenfrailtyPenal(
formula=Surv(t.start,t.stop,event)~cluster(id)+terminal(death)+sex+dukes+timedep(chemo),
formula.terminalEvent=~sex+dukes+timedep(chemo), data=readmission, recurrentAG=TRUE,
hazard="Splines", family=c("AH","AH"),
n.knots=5, kappa=c(600,600), betaknots=1, betaorder=3)
}
}
\references{
J. Chauvet and V. Rondeau (2021). A flexible class of generalized
joint frailty models for the analysis of survival endpoints. In revision.
Liu XR, Pawitan Y, Clements M. (2018)
Parametric and penalized generalized survival models.
\emph{Statistical Methods in Medical Research} \bold{27}(5), 1531-1546.
Liu XR, Pawitan Y, Clements MS. (2017)
Generalized survival models for correlated time-to-event data.
\emph{Statistics in Medicine} \bold{36}(29), 4743-4762.
A. Krol, A. Mauguen, Y. Mazroui, A. Laurent, S. Michiels and V. Rondeau
(2017). Tutorial in Joint Modeling and Prediction: A Statistical Software
for Correlated Longitudinal Outcomes, Recurrent Events and a Terminal Event.
\emph{Journal of Statistical Software} \bold{81}(3), 1-52.
V. Rondeau, Y. Mazroui and J. R. Gonzalez (2012). Frailtypack: An R package
for the analysis of correlated survival data with frailty models using
penalized likelihood estimation or parametric estimation. \emph{Journal of
Statistical Software} \bold{47}, 1-28.
V. Rondeau, J.P. Pignon, S. Michiels (2011). A joint model for the
dependence between clustered times to tumour progression and deaths: A
meta-analysis of chemotherapy in head and neck cancer. \emph{Statistical
methods in medical research} \bold{897}, 1-19.
V. Rondeau, S. Mathoulin-Pellissier, H. Jacqmin-Gadda, V. Brouste, P.
Soubeyran (2007). Joint frailty models for recurring events and death using
maximum penalized likelihood estimation: application on cancer events.
\emph{Biostatistics} \bold{8},4, 708-721.
V. Rondeau, D. Commenges, and P. Joly (2003). Maximum penalized likelihood
estimation in a gamma-frailty model. \emph{Lifetime Data Analysis} \bold{9},
139-153.
C.A. McGilchrist, and C.W. Aisbett (1991). Regression with frailty in
survival analysis. \emph{Biometrics} \bold{47}, 461-466.
D. Marquardt (1963). An algorithm for least-squares estimation of nonlinear
parameters. \emph{SIAM Journal of Applied Mathematics}, 431-441.
}
\seealso{
\code{\link{Surv}},
\code{\link{terminal}},
\code{\link{timedep}}
}
\keyword{models}
|
ebefd1fb7d2a3b8e5c8f189358416ab63b6c486c
|
9c475c4e74df2f07218f947ab7e2f64adbcdd0a8
|
/man/pe_avg_cv.Rd
|
30e0ba2f37d0ea82a6ce9f9e1be294688c68f4b2
|
[] |
no_license
|
lilyzzhao/ecofolio
|
9133794f6b5b54a3c2040b158d318a853880ab47
|
de3f887018c0cf9b53889ea21cdd1582fa193a9b
|
refs/heads/master
| 2020-04-27T03:06:40.701337
| 2014-12-10T21:37:59
| 2014-12-10T21:37:59
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,297
|
rd
|
pe_avg_cv.Rd
|
% Generated by roxygen2 (4.0.1): do not edit by hand
\name{pe_avg_cv}
\alias{pe_avg_cv}
\title{Estimate the average-CV portfolio effect}
\usage{
pe_avg_cv(x, detrending = c("not_detrended", "linear_detrended",
"loess_detrended"), ci = FALSE, boot_reps = 500, na.rm = FALSE)
}
\arguments{
\item{x}{A matrix or dataframe of abundance or biomass data. The
columns should represent different subpopulations or species. The
rows should represent the values through time.}
\item{detrending}{Character value describing if (and how) the time
series should be detrended before estimating the portfolio effect.
Defaults to not detrending.}
\item{ci}{Logical value (defaults to \code{FALSE}). Should a 95\%
confidence interval be calculated using a bootstrap procedure?
Returns the bias-corrected (bca) version of the bootstrap
confidence interval.}
\item{boot_reps}{Number of bootstrap replicates.}
\item{na.rm}{A logical value indicating whether \code{NA} values
should be row-wise deleted.}
}
\value{
A numeric value representing the average-CV portfolio
effect. If confidence intervals were requested then a list is
returned with the portfolio effect \code{pe} and 95\% bootstrapped
confidence interval
\code{ci}.
}
\description{
Takes a matrix of abundance or biomass data and returns various
estimates of the average-CV portfolio effect. Options exist to
detrend the time series data.
}
\details{
This version of the portfolio effect consists of dividing
the mean of the coefficient of variations (CV) of all individual
subpopulations (assets) by the CV of the combined total population.
}
\examples{
data(pinkbr)
pe_avg_cv(pinkbr[,-1], ci = TRUE)
pe_avg_cv(pinkbr[,-1], detrending = "loess_detrended", ci = TRUE)
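# A minimal manual sketch of the Details formula (not part of the original man page);
# it assumes complete data in pinkbr and no detrending:
cvs <- apply(pinkbr[, -1], 2, function(x) sd(x) / mean(x))
total <- rowSums(pinkbr[, -1])
mean(cvs) / (sd(total) / mean(total))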
}
\references{
Doak, D., D. Bigger, E. Harding, M. Marvier, R. O'Malley, and D.
Thomson. 1998. The Statistical Inevitability of Stability-Diversity
Relationships in Community Ecology. Amer. Nat. 151:264-276.
Tilman, D., C. Lehman, and C. Bristow. 1998. Diversity-Stability
Relationships: Statistical Inevitability or Ecological Consequence?
Amer. Nat. 151:277-282.
Schindler, D., R. Hilborn, B. Chasco, C. Boatright, T. Quinn, L.
Rogers, and M. Webster. 2010. Population diversity and the
portfolio effect in an exploited species. Nature 465:609-612. doi:
10.1038/nature09060.
}
|
90be12993bd9ab47410c89b9cf4f1091553cfe6d
|
a189183555031f077a0b52ac55176c808b272e9d
|
/GIS_2.R
|
e1d5999fb5bba574e2f9a05737a76e6d01cef7c1
|
[] |
no_license
|
TomoyaOzawa-DA/GIS
|
030d427c13e1009178d3c92f21273fa78811469a
|
07321e584d43859d9d40a647bcfad5d728bbaa8c
|
refs/heads/main
| 2023-01-06T21:49:55.093138
| 2020-10-30T12:22:52
| 2020-10-30T12:22:52
| 308,618,531
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 812
|
r
|
GIS_2.R
|
#################################
### Introduction to geocoding in R ###
#################################
### Load the data
# Windows users may need to set fileEncoding = "utf8".
gas_naha <- read.csv("gas_naha_new.csv", fileEncoding = "shift-jis")
## Tidy the data (this step can also be done in Excel)
library(tidyverse)
gas_naha <- gas_naha %>%
  select(store, address,company, fX, fY) %>% # keep only the variables we need
  rename(longitude = fX, latitude = fY) # rename the variables
### GIS plotting
## We use the leaflet library.
library(leaflet)
## Prepare the map inside RStudio.
map <- leaflet(gas_naha) %>%
addTiles()
map
map %>%
addCircles(lng = ~ longitude, lat = ~latitude)
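# Illustrative extension (not in the original tutorial): show the store name in a
# popup when a point is clicked; using `store` as the popup text is an assumption.
map %>%
  addCircles(lng = ~ longitude, lat = ~latitude, popup = ~store)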
|
0af98707aa07e3d24e2820f239e018d280956c53
|
d94a9c6d27f64d020241dd7c6bf5d48e256b090d
|
/Scripts/test 1. determine what is removing all the soil observations.r
|
45f88b24c73421d6a01061f3ab6ad90db35dbd78
|
[] |
no_license
|
colinaverill/FIA-ecto.growth
|
eb737e4487dd303efe43077d3546cda9a2f2abb4
|
7fa6b8bb94d366a6316bd58e7e5856646d970ed3
|
refs/heads/master
| 2016-08-11T15:36:33.993055
| 2016-02-04T19:56:44
| 2016-02-04T19:56:44
| 47,923,316
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 3,083
|
r
|
test 1. determine what is removing all the soil observations.r
|
##test what is narrowing ~3.5k soil observations to ~300
rm(list=ls())
library(data.table) #note, version 1.9.4 or higher must be installed, otherwise you will have trouble running particular commands.
library(RPostgreSQL)
library(bit64)
source('required_products_utilities/PSQL_utils.R') #this will give you the tools needed to work with the PSQL database.
tic = function() assign("timer", Sys.time(), envir=.GlobalEnv)
toc = function() print(Sys.time()-timer)
bigtime = Sys.time()
dbsettings = list(
user = "bety", # PSQL username ###NOTE colin changed the info here to get into the DB @ BU. this works.
password = "", # PSQL password
dbname = "fia5", # PSQL database name
host = "psql-pecan.bu.edu",# PSQL server address (don't change unless server is remote)
driver = 'PostgreSQL', # DB driver (shouldn't need to change)
write = FALSE # Whether to open connection with write access.
)
#load soils data- 3451 unique profiles.
soils<- read.csv('FIA_soils/FIAsoil_output_CA.csv')
#Ryan Kelly sets these- this kills everything west of the center of the country.
lon.bounds = c(-95,999)
lat.bounds = c(-999,999)
# -----------------------------
# Open connection to database
fia.con = db.open(dbsettings)
# ---------- PLOT & COND DATA
# --- Query PLOT
cat("Query PLOT...\n")
query = paste('SELECT
cn, statecd, prev_plt_cn, remper
FROM plot WHERE remper>3 AND remper<9.5 AND designcd=1 AND statecd<=56 AND ',
'lon>', min(lon.bounds),' AND lon<', max(lon.bounds), ' AND ',
'lat>', min(lat.bounds),' AND lat<', max(lat.bounds))
tic() # ~10 sec
PLOT = as.data.table(db.query(query, con=fia.con))
setnames(PLOT, toupper(names(PLOT)))
setnames(PLOT,"CN","PLT_CN")
toc()
nrow(merge(soils,PLOT, by="PLT_CN")) #639 sites make it through with these constraints.
#demonstrate that removing lat/long constraints, remper, and designcd=1 generates matches for all soil profiles.
query = paste('SELECT
cn, statecd, prev_plt_cn, remper
FROM plot ')
tic() # ~10 sec
PLOT = as.data.table(db.query(query, con=fia.con))
setnames(PLOT, toupper(names(PLOT)))
setnames(PLOT,"CN","PLT_CN")
toc()
nrow(merge(soils,PLOT, by="PLT_CN")) #3451 sites match now.
#demonstrate effect of REMPER constraint.
query = paste('SELECT
cn, statecd, prev_plt_cn, remper
FROM plot WHERE remper>0 AND remper<100')
tic() # ~10 sec
PLOT = as.data.table(db.query(query, con=fia.con))
setnames(PLOT, toupper(names(PLOT)))
setnames(PLOT,"CN","PLT_CN")
toc()
nrow(merge(soils,PLOT, by="PLT_CN")) #770 sites match now.
#demonstrate effect of designcd constraint
query = paste('SELECT
cn, statecd, prev_plt_cn, remper,designcd
FROM plot ') #WHERE designcd>0 AND designcd < 4
tic() # ~10 sec
PLOT = as.data.table(db.query(query, con=fia.con))
setnames(PLOT, toupper(names(PLOT)))
setnames(PLOT,"CN","PLT_CN")
toc()
nrow(merge(soils,PLOT, by="PLT_CN")) #2575 sites match now.
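#recap of the merges above: 639 profiles survive the full set of constraints, 3451 match
#with no constraints, 770 under the REMPER-only query, and 2575 under the designcd test -
#so the REMPER window and the lon/lat/designcd restrictions remove most soil observations.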
|
759ca513f3c3c840ddad0e8091b43b8dccc6e020
|
a6965cf239160fd1586d1fdb6ed16df8604cebeb
|
/tests/testthat/test-Contours.R
|
230edb1ecb9732c1c884aa2c84b75e7e8ecff33f
|
[] |
no_license
|
ms609/Ternary
|
3d4be17ca91cca8f48323a2ce7819ca233f7a6b2
|
db7451ca733878f499f08a22997bf0f26aa42e21
|
refs/heads/master
| 2023-07-06T12:41:57.171117
| 2023-06-29T15:34:08
| 2023-06-29T15:34:08
| 111,806,977
| 34
| 3
| null | 2023-04-20T19:53:55
| 2017-11-23T12:30:42
|
R
|
UTF-8
|
R
| false
| false
| 8,821
|
r
|
test-Contours.R
|
test_that("Densities are correctly calculated", {
coordinates <- list(
middle = c(1, 1, 1),
top = c(1, 0, 0),
belowTop = c(2, 1, 1),
leftSideSolid = c(9, 2, 9),
leftSideSolid2 = c(9, 2, 9) / 2,
right3way = c(1, 2, 0),
rightEdge = c(2.5, 0.5, 0),
leftBorder = c(1, 1, 4),
topBorder = c(2, 1, 3),
rightBorder = c(1, 2, 3)
)
values <- TernaryDensity(coordinates, resolution = 3L, direction = 1L)
expect_equal(
c(3, 10, 4, 3, 2, 16, 7, 3, 12),
values["z", ]
)
})
test_that("Contours are plotted", {
Contours <- function() {
par(mar = rep(0, 4), mfrow = c(2, 2))
FunctionToContour <- function(a, b, c) {
a - c + (4 * a * b) + (27 * a * b * c)
}
TernaryPlot(alab = "a", blab = "b", clab = "c", point = 1L)
ColourTernary(TernaryPointValues(FunctionToContour, resolution = 6L))
TernaryContour(FunctionToContour, resolution = 12L, legend = 3, bty = "n")
TernaryPlot(alab = "a", blab = "b", clab = "c", point = 2L)
ColourTernary(TernaryPointValues(FunctionToContour, resolution = 6L))
TernaryContour(FunctionToContour, resolution = 12L, legend = TRUE)
TernaryPlot(alab = "a", blab = "b", clab = "c", point = 3L)
ColourTernary(TernaryPointValues(FunctionToContour, resolution = 6L),
legend = TRUE, x = "bottomleft", bty = "n")
TernaryContour(FunctionToContour, resolution = 12L)
TernaryPlot(alab = "a", blab = "b", clab = "c", point = 4L)
ColourTernary(TernaryPointValues(FunctionToContour, resolution = 6L))
val <- TernaryContour(FunctionToContour, resolution = 12L,
legend = letters[1:5],
legend... = list(bty = "n", x = "bottomleft"))
expect_equal(val$x, seq(-sqrt(0.75), 0, length.out = 12L))
expect_equal(val$y, seq(-0.5, 0.5, length.out = 12L))
abc <- XYToTernary(val$x[4], val$y[7])
expect_equal(val$z[4, 7], FunctionToContour(abc[1], abc[2], abc[3]))
}
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("Contours", Contours)
FilledContours <- function() {
par(mar = rep(0, 4), mfrow = c(2, 2))
FunctionToContour <- function(a, b, c) {
a - c + (4 * a * b) + (27 * a * b * c)
}
TernaryPlot(alab = "a", blab = "b", clab = "c", point = 1L)
TernaryContour(FunctionToContour, filled = TRUE)
TernaryPlot(alab = "a", blab = "b", clab = "c", point = 2L)
TernaryContour(FunctionToContour, filled = TRUE,
color.palette = function(n)
hcl.colors(n, alpha = 0.6, rev = TRUE))
TernaryPlot(alab = "a", blab = "b", clab = "c", point = 3L)
TernaryContour(FunctionToContour, filled = TRUE, nlevels = 9,
fill.col = 0:8)
TernaryPlot(alab = "a", blab = "b", clab = "c", point = 4L)
TernaryContour(FunctionToContour, filled = TRUE, nlevels = 4)
}
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("FilledContours", FilledContours)
ContoursSkiwiff <- function() {
FunctionToContour <- function(a, b, c) {
a - c + (4 * a * b) + (27 * a * b * c)
}
SubTest <- function(direction) {
ColourTernary(TernaryPointValues(FunctionToContour,
resolution = 6L,
direction = direction
))
TernaryContour(FunctionToContour,
resolution = 12L,
direction = direction,
within = -t(TernaryToXY(diag(3))))
}
par(mar = rep(0, 4), mfrow = c(2, 2))
TernaryPlot(point = 3L, ylim = c(0, 1))
SubTest(1)
TernaryPlot(point = 4L, xlim = c(0, 1))
SubTest(2)
TernaryPlot(point = 1L, ylim = c(-1, 0))
SubTest(3)
TernaryPlot(point = 2L, xlim = c(-1, 0))
SubTest(4)
}
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("Contours-skiwiff", ContoursSkiwiff)
DensityContours <- function() {
par(mar = rep(0.2, 4), mfrow = c(1, 2))
TernaryPlot()
nPoints <- 400L
set.seed(0)
coordinates <- cbind(
abs(rnorm(nPoints, 2, 3)),
abs(rnorm(nPoints, 1, 1.5)),
abs(rnorm(nPoints, 1, 0.5))
)
ColourTernary(TernaryDensity(coordinates, resolution = 10L),
legend = 4:1, x = "topleft", bty = "n")
TernaryPoints(coordinates, col = "red", pch = ".")
val <- TernaryDensityContour(coordinates, resolution = 10L)
expect_equal(names(val), letters[24:26])
expect_equal(val$x, seq.int(-0.5, 0.5, length.out = 10))
expect_equal(val$y, seq.int(0, sqrt(0.75), length.out = 10))
expect_equal(val$z[10, 10], NA_real_)
TernaryPlot()
TernaryDensityContour(coordinates, resolution = 10L, filled = TRUE)
TernaryPoints(coordinates, col = "red", pch = ".")
}
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("density-contours", DensityContours)
DensityContours2 <- function() {
par(mar = rep(0.2, 4))
TernaryPlot(point = 2)
nPoints <- 400L
set.seed(0)
coordinates <- cbind(
abs(rnorm(nPoints, 2, 3)),
abs(rnorm(nPoints, 1, 1.5)),
abs(rnorm(nPoints, 1, 0.5))
)
TernaryPoints(coordinates, col = "red", pch = ".")
TernaryDensityContour(coordinates, resolution = 10L, edgeCorrection = FALSE)
}
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("density-contours-2", DensityContours2)
DensityContours3 <- function() {
par(mar = rep(0.2, 4))
TernaryPlot(point = 3)
nPoints <- 400L
set.seed(0)
coordinates <- cbind(
abs(rnorm(nPoints, 2, 3)),
abs(rnorm(nPoints, 1, 1.5)),
abs(rnorm(nPoints, 1, 0.5))
)
TernaryPoints(coordinates, col = "red", pch = ".")
TernaryDensityContour(coordinates, resolution = 10L)
}
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("density-contours-3", DensityContours3)
LoResDensCont <- function() {
coordinates <- list(
middle = c(1, 1, 1),
top = c(3, 0, 0),
belowTop = c(2, 1, 1),
leftSideSolid = c(9, 2, 9),
leftSideSolid2 = c(9.5, 2, 8.5),
right3way = c(1, 2, 0),
rightEdge = c(2.5, 0.5, 0),
leftBorder = c(1, 1, 4),
topBorder = c(2, 1, 3),
rightBorder = c(1, 2, 3)
)
par(mfrow = c(2, 2), mar = rep(0.2, 4))
TernaryPlot(grid.lines = 3, axis.labels = 1:3, point = "up")
values <- TernaryDensity(coordinates, resolution = 3L)
ColourTernary(values)
TernaryPoints(coordinates, col = "red")
text(values[1, ], values[2, ], paste(values[3, ], "/ 6"), cex = 0.8)
TernaryPlot(grid.lines = 3, axis.labels = 1:3, point = "right")
values <- TernaryDensity(coordinates, resolution = 3L)
ColourTernary(values)
TernaryPoints(coordinates, col = "red")
text(values[1, ], values[2, ], paste(values[3, ], "/ 6"), cex = 0.8)
TernaryPlot(grid.lines = 3, axis.labels = 1:3, point = "down")
values <- TernaryDensity(coordinates, resolution = 3L)
ColourTernary(values)
TernaryPoints(coordinates, col = "red")
text(values[1, ], values[2, ], paste(values[3, ], "/ 6"), cex = 0.8)
TernaryPlot(grid.lines = 3, axis.labels = 1:3, point = "left")
values <- TernaryDensity(coordinates, resolution = 3L)
ColourTernary(values)
TernaryPoints(coordinates, col = "red")
text(values[1, ], values[2, ], paste(values[3, ], "/ 6"), cex = 0.8)
TernaryDensityContour(t(vapply(coordinates, I, double(3L))),
resolution = 12L, tolerance = -0.02, col = "orange"
)
}
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("lo-res-density-contours", LoResDensCont)
})
test_that("Colours are drawn", {
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("RGBColours", function() {
TernaryPlot()
values <- TernaryPointValues(rgb, resolution = 20, alpha = 0.5)
ColourTernary(values, spectrum = NULL)
})
})
test_that("Errors are handled", {
skip_if_not_installed("vdiffr")
vdiffr::expect_doppelganger("contour-error-handling", function() {
TernaryPlot()
# Non-vectorized Func
expect_warning(expect_warning(TernaryContour(max)))
expect_warning(TernaryPointValues(max))
# Positive bandwidths
expect_error(TernaryDensityContour(rbind(c(1, 1, 1)), -1))
expect_error(ColourTernary(TernaryPointValues(as.character, 5)))
})
})
test_that("TriangleInHull()", {
expect_error(
TriangleInHull(coord = 1:5),
"`coordinates` must be a matrix with two \\(xy\\) or three \\(abc\\) rows"
)
# From example
set.seed(0)
nPts <- 50
a <- runif(nPts, 0.3, 0.7)
b <- 0.15 + runif(nPts, 0, 0.7 - a)
c <- 1 - a - b
coordinates <- rbind(a, b, c)
triangles <- TriangleCentres(resolution = 5)
# Coordinate transform resilience
fromABC <- TriangleInHull(triangles, coordinates)
fromXY <- TriangleInHull(triangles, TernaryToXY(coordinates))
expect_equal(fromABC, fromXY)
})
|
f54dc6fb80b1e3ef1ba4e21a0b7cc1fdb8d6ad09
|
9baaf555daf5ef32086a80b0d5094bf0cba183dd
|
/man/H.diff.small.nopopvariance.Rd
|
0ddbd432bede3edf9afc8673d2603a85607d4994
|
[] |
no_license
|
edeaguiar/springstatsexam2
|
0a780ab38d2ad21bd6567e380dfe78aedd842cc1
|
37a8c01a1deac56b4b1ffa654d2f6f144178c99b
|
refs/heads/master
| 2022-04-21T16:47:25.846371
| 2020-04-17T14:04:59
| 2020-04-17T14:04:59
| 255,984,231
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 518
|
rd
|
H.diff.small.nopopvariance.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/HypothesisTesting.R
\name{H.diff.small.nopopvariance}
\alias{H.diff.small.nopopvariance}
\title{Hypothesis test for a VECTOR that is population mean difference. Outputs values needed to put into pt}
\usage{
H.diff.small.nopopvariance(x, mu)
}
\arguments{
\item{x}{Vector}
\item{mu}{Hypothesized sample mean}
}
\value{
The values needed to pass to \code{pt} for the hypothesis test.
}
\description{
Hypothesis test for a VECTOR that is population mean difference. Outputs values needed to put into pt
}
|
7a92d9a69c048cedb3d02260ddc9e82e62e3b6a1
|
3726e1e995dd037da7e97a244e8cef8a03ad4421
|
/plot3.R
|
7628bd26d5b4ac688ac1dc9dbbccd6ec27898c54
|
[] |
no_license
|
yiyanghu/ExData_Assignment2
|
153faec4ec9a9ce99a6b23ee5db9fadf4436ac8e
|
d543c15f720967b4b38e5059d57e1ba2d9ed3940
|
refs/heads/master
| 2021-01-18T18:45:15.223112
| 2016-08-13T19:11:56
| 2016-08-13T19:11:56
| 65,632,872
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 639
|
r
|
plot3.R
|
if (!exists("NEI")) {
NEI <-readRDS("summarySCC_PM25.rds")
}
if (!exists("SCC")) {
SCC <-readRDS("Source_Classification_Code.rds")
}
BaltimoreSet <-subset(NEI, fips ==24510)
BaltimoreAggYearType <- aggregate(Emissions ~ year + type, BaltimoreSet,sum)
png("plot3.png",width=640, height=480,units='px')
library(ggplot2)
g<-ggplot(BaltimoreAggYearType, aes(year, Emissions, color = type))
g<- g + geom_line() +
xlab("Year") +
ylab(expression("Total PM" [2.5]*" Emissions in Tons")) +
labs(title=expression("PM"[2.5]*" Emissions in Baltimore 1999 - 2008 by type"))
print(g)
dev.off()
|
809e9edce84e3670b8e31bcaa385f2453c8d8b36
|
e04c83847155110809e1eaaec794952fcceb523d
|
/answer.r
|
cb26261a0c7a95dfda49559567ac18f59ab58781
|
[] |
no_license
|
jahnvitandon/dataanalytics_assignment_8.2
|
78445c8348ccc406f45c44d6d6d6d34ccce74f6e
|
44482e5a581fc2c48503586a26992d3e3173089c
|
refs/heads/master
| 2020-03-22T14:27:09.229910
| 2018-07-08T15:45:16
| 2018-07-08T15:45:16
| 140,180,955
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,200
|
r
|
answer.r
|
#Problem 1
#library(RcmdrPlugin.IPSUR)
#data(RcmdrTestDrive)
#Perform the below operations:
# 1. Compute the measures of central tendency for salary and reduction which variable has highest center?
# 2. Which measure of center is more appropriate for before and after?
#Answer 1
#first find the measures of central tendency for salary and reduction; for salary:
library(RcmdrPlugin.IPSUR)
x<- c(mean(RcmdrTestDrive$salary),median(RcmdrTestDrive$salary))
x
#for reduction
y<- c(median(RcmdrTestDrive$reduction),mean(RcmdrTestDrive$reduction))
y
#we can check which variable has the highest center by plotting a histogram or by checking kurtosis, which describes the amount of peakedness of a distribution.
library(psych)
kurtosi(RcmdrTestDrive$salary)
kurtosi(RcmdrTestDrive$reduction)
#the reduction variable has higher kurtosis, i.e. it is more peaked, and hence has the higher center
#we can also check this by plotting histograms
x<-RcmdrTestDrive$salary
h<- hist(x,breaks = 10,col = "red",xlab = "salary",main= "histogram of salary with normal curve")
y<-RcmdrTestDrive$reduction
h<- hist(y,breaks = 10,col = "blue",xlab = "reduction",main= "histogram of reduction with normal curve")
#however, since reduction is not purely continuous, its peak cannot be judged well from the center alone
#in that sense salary, being purely continuous, is more peaked around its center
#overall, though, reduction looks more peaked when the whole distribution is considered,
#as its histogram shows when compared with the salary variable
#Answer 2
#If the distribution is fairly symmetric then the mean and median
#should be approximately the same
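#(illustrative check, not in the original answer) compare the mean and median directly
c(mean(RcmdrTestDrive$before), median(RcmdrTestDrive$before))
c(mean(RcmdrTestDrive$after), median(RcmdrTestDrive$after))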
#a boxplot shows where the median lies
boxplot(RcmdrTestDrive$before,horizontal = T,col = "red",xlab="before",ylab="Boxplot")
#normally distributed
boxplot(RcmdrTestDrive$after,horizontal = T,col = "red",xlab="after",ylab="Boxplot")
#left skewed, as the data are asymmetrically distributed
#if we check the skewness of variables
skew (RcmdrTestDrive$before)
skew (RcmdrTestDrive$after)
#'after' has the more negative skew, so its data sit further to the right than 'before'
#thus, the median would likely be a good choice and it is more appropriate
|
a14dd4c8540bd68a5a2aa75f8fdde45725779f4a
|
ced1fefafdf15797bd34f6f5768b8bd066dfa236
|
/final.R
|
46a065528454eb6c15fd99265fab9106aab5bf20
|
[] |
no_license
|
HongfanChen/Stats506_final_project
|
5ccd5bc1f4ace5cff620c285279b58609fd874c4
|
a4534cb5b9f0d0d84065adabb8c3454e80bc1dfc
|
refs/heads/main
| 2023-01-29T07:03:00.819322
| 2020-12-12T06:26:11
| 2020-12-12T06:26:11
| 316,160,169
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 7,589
|
r
|
final.R
|
## Stats 506, F20
## Final Project
##
## This R script is for Final Project
## Author: Hongfan Chen, chenhf@umich.edu
## Updated: November, 25, 2020
# 79: -------------------------------------------------------------------------
# libraries:
library(tidyverse)
# directories: ----------------------------------------------------------------
path = './data'
# data cleaning: --------------------------------------------------------------
## 2012 CBECS data
### OPEN24 NWKER PUBID SQFTC PBA
cbecs_file = sprintf('%s/2012_public_use_data_aug2016.csv', path)
cbecs_min = sprintf('%s/cbecs_min.csv', path)
if ( !file.exists(cbecs_min) ) {
cbecs = read_delim(cbecs_file, delim = ',' ) %>%
select(id = PUBID, w = FINALWT, `square footage`= SQFTC,
`24-Hour` = OPEN24, number = NWKER, activity = PBA)
write_delim(cbecs, path = cbecs_min, delim = ',')
} else {
cbecs = read_delim(cbecs_min, delim = ',')
}
## codebook
cb_file = sprintf('%s/2012microdata_codebook.xlsx', path)
codebook = readxl::read_xlsx(cb_file) %>%
as.data.frame()
## variables of interest
variables = c(`square footage` = 'SQFTC',
`24-Hour` = 'OPEN24', activity = 'PBA')
codes = codebook %>%
filter(`Variable\r\nname` %in% variables)
## cleaning the codebook and select the needed factor
decode = str_replace_all(codes$`Values/Format codes`,
pattern = "\\r\\n", replacement = "") %>%
str_replace_all(pattern = "' = '", replacement = "placeholder") %>%
str_replace_all(pattern = "''", replacement = "placeholder") %>%
str_replace_all(pattern = "'", replacement = "") %>%
str_split(pattern = "placeholder")
## create a vector consisting of levels and labels
var_factor = c()
for (i in 1:length(decode)) {
level_idx = seq(1, length(decode[[i]]), by = 2)
label_idx = seq(2, length(decode[[i]]), by = 2)
var_factor[[i]] = list(decode[[i]][level_idx], decode[[i]][label_idx])
}
## apply the levels and labels to the factor
cbecs = cbecs %>%
mutate(activity = factor(activity,
levels = var_factor[[1]][[1]],
labels = var_factor[[1]][[2]]),
`square footage` = factor(`square footage`,
levels = var_factor[[2]][[1]],
labels = var_factor[[2]][[2]]),
`24-Hour` = factor(`24-Hour`,
levels = var_factor[[3]][[1]],
labels = var_factor[[3]][[2]]),
id = as.double(id)
) %>%
drop_na()
# point estimates of TV means by division and region: -------------------------
avg_wf = cbecs %>%
group_by(`square footage`, activity, `24-Hour`) %>%
summarize(avg_wf = sum(w * number)/sum(w))
# for CI's, make rep_weights long format: -------------------------------------
cbecs_full = read_delim(cbecs_file, delim = ',')
long_weights =
cbecs_full %>%
select( id = PUBID, FINALWT1:FINALWT197) %>%
pivot_longer(
cols = starts_with('FINALWT'),
names_to = 'rep',
names_prefix = 'FINALWT',
values_to = 'rw'
) %>%
mutate(rep = as.integer(rep),
id = as.double(id)
)
# compute confidence intervals, using replicate weights: ----------------------
## replicat means
## be careful here because sum(rw) can equal 0 under the jackknife
## method of estimating the standard error
avg_wf_rep = cbecs %>%
select(-w) %>%
left_join(long_weights, by = 'id') %>%
group_by(`square footage`, activity, `24-Hour`, rep) %>%
summarize(avg_wf_rep = ifelse(sum(rw) != 0,
sum(rw * number)/sum(rw),
0),
.groups = 'drop')
## variance of replicate means around the point estimate
var_wf = avg_wf_rep %>%
left_join(avg_wf, by = c('square footage', 'activity', '24-Hour')) %>%
group_by(`square footage`, activity, `24-Hour`) %>%
summarize(v = sum({avg_wf_rep - avg_wf}^2),
.groups = 'drop')
avg_wf = avg_wf %>%
left_join(var_wf, by = c('square footage', 'activity', '24-Hour'))
## construct mean's CI
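## each interval is the point estimate +/- 1.96 * se, where se = sqrt(v) and v is the
## sum of squared deviations of the replicate estimates computed above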
m = qnorm(.975)
avg_wf = avg_wf %>%
mutate(
se = sqrt(v),
lwr = avg_wf - m * se,
upr = avg_wf + m * se,
CI = sprintf('%.3f (%.3f-%.3f)', avg_wf, lwr, upr)
)
# construct the first part of the table and help the next step to
# filter rows with complete status
avg_wf_table = avg_wf %>%
select(`square footage`, activity, `24-Hour`, CI) %>%
pivot_wider(id_cols = c('square footage', 'activity'),
names_from = `24-Hour`,
values_from = CI) %>%
drop_na()
# keep rows where the variable `24-Hour` has both Yes and No values.
complete_buildings = avg_wf_table %>%
pivot_longer(
cols = c('Yes', 'No'),
names_to = "fake"
) %>%
select(-fake, -value)
# merge with the avg_wf, find the complete observations.
# really wierd here, if I use left_join the observations will be duplicated???
# complete_buildings %>%
# left_join(avg_wf, by = c('square footage', 'activity'))
# but semi_join can be a replacement.
# construct CI for difference in average work force
complete_data = avg_wf %>%
semi_join(complete_buildings, by = c('square footage', 'activity')) %>%
select(`square footage`, activity, `24-Hour`, avg_wf, v) %>%
pivot_wider(id_cols = c('square footage', 'activity'),
names_from = `24-Hour`,
values_from = c('avg_wf', 'v')
) %>%
mutate(diff_avg = avg_wf_Yes - avg_wf_No,
v = v_Yes + v_No,
se = sqrt(v),
lwr = pmax(diff_avg - m * se),
upr = pmin(diff_avg + m * se),
diffCI = sprintf('%.3f (%.3f-%.3f)', diff_avg, lwr, upr)
)
# merge with the previous table to form a complete table
tab = complete_data %>%
select(`square footage`, activity, diffCI) %>%
left_join(avg_wf_table, by = c('square footage', 'activity')) %>%
rename(Difference = diffCI,
`OPEN 24` = Yes,
`Not OPEN 24` = No,
`Square Footage Category` = `square footage`,
`Principal Building Activity` = activity)
# create a function for plotting
visual = function(data){
data %>%
ggplot( aes(x = diff_avg, y = activity, color = activity)
) +
geom_point(
position = position_dodge2(width = 0.5)
) +
geom_errorbar(
aes(xmin = lwr, xmax = upr),
position = position_dodge(width = 0.5),
alpha = 0.75
) +
geom_vline(xintercept = 0, lty = 'dashed') +
facet_wrap(~`square footage`) +
theme_bw() +
xlab('difference in average work forces')
}
data_1 = complete_data %>%
filter(`square footage` %in% c("1,001 to 5,000 square feet",
"5,001 to 10,000 square feet",
"10,001 to 25,000 square feet",
"25,001 to 50,000 square feet"))
data_2 = complete_data %>%
filter(`square footage` %in% c("50,001 to 100,000 square feet",
"100,001 to 200,000 square feet",
"200,001 to 500,000 square feet",
"500,001 to 1 million square feet",
"Over 1 million square feet"))
datalist = list(data_1, data_2)
graphname = c("./graph/1.png", "./graph/2.png")
for (i in 1:2) {
visual(datalist[[i]])
ggsave(graphname[i],
width = 12,
height = 6)
}
|
91da14f626b3cff5bc8eab0fff3c417ed750eead
|
e2a55a3b310e9f8fa7a25d255422aa5cc82baf10
|
/plot1.R
|
d036d75b90e7cc93fd144e51812cde5becb862e7
|
[] |
no_license
|
drbooshkit/ExData_Plotting1
|
3b4de5b1f5a21f0c894ff4ac23f6b5df4f763dc9
|
e8cf75b665db57792aaea7844a3bea8073237c99
|
refs/heads/master
| 2021-01-15T11:23:45.067921
| 2014-05-10T19:58:14
| 2014-05-10T19:58:14
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 741
|
r
|
plot1.R
|
# create vector of column classes:
cols <- c("Date", "POSIXct", "numeric", "numeric", "numeric", "numeric", "numeric", "numeric", "numeric")
# read in data from TXT
data <- read.table(file="data/household_power_consumption.txt",sep=";", header=TRUE, stringsAsFactors=FALSE)
data$Date <- as.Date(data$Date, "%d/%m/%Y")
# change character to numeric
data$Global_active_power <- as.numeric(data$Global_active_power)
# subset between dates 2007-02-01 and 2007-02-02
subData <- data[data$Date >= "2007-02-01" & data$Date <= "2007-02-02",]
# plot 1. histogram
# initialize PNG device
png(file="plot1.png")
# create plot
hist(subData$Global_active_power, main="Global Active Power", xlab="Global Active Power (kilowatts)", col="red")
dev.off()
|
71c62cc8ec258c495630e2f19382566ed75f26c2
|
8c3a12894f1b01e10899817634afc1403b34c3f0
|
/man/padmm.Rd
|
9d4e9d18cf35aa33e174a0101fd308b464e52635
|
[
"MIT"
] |
permissive
|
fboehm/openFHDQR
|
db44c9833d861f22d40e28171d1784efe4dc102c
|
d4dc737a4fa1391306a202051f31e714df380789
|
refs/heads/main
| 2023-06-29T02:24:40.786808
| 2021-08-07T22:36:14
| 2021-08-07T22:36:14
| 387,862,159
| 5
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 1,130
|
rd
|
padmm.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/RcppExports.R
\name{padmm}
\alias{padmm}
\title{Perform proximal ADMM for weighted L1-penalized quantile regression}
\usage{
padmm(
beta0,
z0,
theta0,
sigma,
X,
eta,
y,
l1,
l2,
w,
nu,
tau,
gamma,
max_iter,
eps1,
eps2
)
}
\arguments{
\item{beta0}{initial value of beta}
\item{z0}{initial value of z}
\item{theta0}{initial value of theta}
\item{sigma}{sigma constant, a positive number}
\item{X}{design matrix}
\item{eta}{eta constant}
\item{y}{y vector}
\item{l1}{L1 penalty parameter}
\item{l2}{L2 penalty parameter}
\item{w}{weights vector for lambda1 penalty}
\item{nu}{weights vector for lambda2 penalty}
\item{tau}{quantile, a number between 0 and 1}
\item{gamma}{gamma constant, affects the step length in the theta update step}
\item{eps1}{epsilon1 constant for stopping}
\item{eps2}{epsilon2 constant for stopping}
\item{max_iter}{maximum number of iterations}
}
\value{
beta, the vector of coefficient estimates
}
\description{
Perform proximal ADMM for weighted L1-penalized quantile regression
}
|
0f94eb27cb2c222c7923415e10fb7bf1540d7fea
|
5b2a03e70b27b35f4ab1bb7159e3fd9ae1fec133
|
/PrepareForClassification.R
|
583b40ba7433fdb7c9796d0d31d2607b3f5b5047
|
[
"MIT"
] |
permissive
|
Jakob-Bach/CS-Select-ML-Server
|
d84393f69319e1c83a56185d4ed52f57ce554dc0
|
620dafa77db5da632c6374afaecc27837662139d
|
refs/heads/master
| 2022-11-12T18:28:14.474070
| 2020-07-13T13:46:00
| 2020-07-13T13:46:00
| 279,306,906
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 4,075
|
r
|
PrepareForClassification.R
|
library(data.table)
source("UtilityFunctions.R")
# To specify by user
DATASET_NAME <- "populationGender"
#### Load data ####
# Read data
cat("Reading data ...\n")
dataset <- readRDS(file = paste0("datasets/", DATASET_NAME, ".rds"))
featureNames <- colnames(dataset)[-ncol(dataset)]
targetColumn <- colnames(dataset)[ncol(dataset)]
# Check target
stopifnot(dataset[, is.logical(get(targetColumn)) ||
(is.factor(get(targetColumn)) && length(levels(get(targetColumn))) == 2)])
# Check that no column name prefix of another (important for xgboost factor encoding)
stopifnot(areColNamesDistinct(colnames(dataset)))
#### Prepare feature summary ####
# Read column description CSV
columnDescription <- data.table(read.csv(file = paste0("datasets/", DATASET_NAME, "_columns.csv"),
header = TRUE, sep = "\t", as.is = TRUE, encoding = "UTF-8"))
stopifnot(length(intersect(featureNames, columnDescription$dataset_feature)) ==
length(union(featureNames, columnDescription$dataset_feature)))
# Create and save summary JSON (feature descriptions, statistics, exemplary values)
# useBytes makes sure we keep UTF-8 encoding
cat("Creating feature summary JSON ...\n")
dir.create(paste0("datasets/", DATASET_NAME), showWarnings = FALSE)
writeLines(text = jsonlite::toJSON(createSummaryList(dataset = dataset,
featureNames = featureNames, columnDescription = columnDescription), auto_unbox = TRUE),
con = paste0("datasets/", DATASET_NAME, "/summary.json"), useBytes = TRUE)
# Create and save summary plots (distribution, distribution against classes)
cat("Creating feature summary plots ...\n")
createSummaryPlots(dataset = dataset, featureNames = featureNames,
targetColumn = targetColumn, path = paste0("datasets/", DATASET_NAME, "/"))
# Zip feature summary data
cat("Zipping feature summary data ...\n")
oldWd <- getwd()
setwd(paste0(oldWd, "/datasets/", DATASET_NAME))
zip(zipfile = paste0("../", DATASET_NAME, ".zip"), files = list.files())
setwd(oldWd)
#### Prepare classification ####
cat("Preparing data classification ...\n")
# Convert boolean attributes to integer
dataset[, (colnames(dataset)) := lapply(.SD, makeBooleanInteger)]
# Handle NAs in categorical data
dataset[, (colnames(dataset)) := lapply(.SD, makeNAFactor)]
# Harmonize target column (name, encoding as 0/1)
if (is.factor(dataset[, get(targetColumn)])) {
dataset[, target := as.integer(get(targetColumn)) - 1]
} else {
dataset[, target := as.integer(get(targetColumn))]
}
dataset[, (targetColumn) := NULL]
# Train-test split (stratified)
set.seed(25)
target0Idx <- dataset[, which(target == 0)]
target1Idx <- dataset[, which(target == 1)]
trainTarget0 <- sample(target0Idx, size = round(0.8 * length(target0Idx)), replace = FALSE)
trainTarget1 <- sample(target1Idx, size = round(0.8 * length(target1Idx)), replace = FALSE)
trainData <- dataset[sort(c(trainTarget0, trainTarget1))]
testData <- dataset[-c(trainTarget0, trainTarget1)]
# Handle NAs in numerical data
naReplacements <- getColMedians(trainData)
trainData <- imputeColValues(trainData, replacements = naReplacements)
testData <- imputeColValues(testData, replacements = naReplacements)
# Convert for "xgboost"
xgbTrainData <- trainData[, -"target"]
xgbTrainPredictors <- Matrix::sparse.model.matrix(~ ., data = xgbTrainData)[, -1]
xgbTrainLabels <- trainData$target
xgbTrainData <- xgboost::xgb.DMatrix(xgbTrainPredictors, label = xgbTrainLabels)
xgbTestData <- testData[, -"target"]
xgbTestPredictors <- Matrix::sparse.model.matrix(~ ., data = xgbTestData)[, -1]
xgbTestLabels <- testData$target
# Save
saveRDS(xgbTrainPredictors, file = paste0("datasets/", DATASET_NAME, "_train_predictors.rds"))
saveRDS(xgbTrainLabels, file = paste0("datasets/", DATASET_NAME, "_train_labels.rds"))
saveRDS(xgbTestPredictors, file = paste0("datasets/", DATASET_NAME, "_test_predictors.rds"))
saveRDS(xgbTestLabels, file = paste0("datasets/", DATASET_NAME, "_test_labels.rds"))
saveRDS(createXgbColMapping(old = featureNames, new = colnames(xgbTrainData)),
file = paste0("datasets/", DATASET_NAME, "_featureMap.rds"))
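# Illustrative next step, not part of this preparation script: fit a boosted model on the
# prepared DMatrix. The objective, metric and nrounds below are placeholder assumptions.
xgbModel <- xgboost::xgb.train(
  params = list(objective = "binary:logistic", eval_metric = "auc"),
  data = xgbTrainData,
  nrounds = 50
)
# predict on the held-out sparse test predictors
xgbTestPred <- predict(xgbModel, xgbTestPredictors)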
|
41e59bed7cbc52642ed4bc32c6ab7f3ea63e1b80
|
fe9076041f3e3c4d176f7ead51c3a9d5263081a7
|
/server.R
|
cd88990646fd00901ac12c9dcaf54be998dc6580
|
[] |
no_license
|
al610/AH7_INFO_201_Project
|
d635872a3942a6de44af39e3cc3eee38c2de41e2
|
3e3ddaa7457e031c4f7eab8d328b4ea046930d33
|
refs/heads/master
| 2020-08-30T13:23:27.865798
| 2019-12-05T05:52:38
| 2019-12-05T05:52:38
| 218,393,444
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 5,280
|
r
|
server.R
|
# set ups
library(shiny)
library("dplyr")
library("lintr")
source("datasets.R")
library(tidyverse)
# The backend which takes the front end data and compute the results.
server <- function(input, output) {
# Define a histogram to render in the UI
output$histgram <- renderPlot({
    # make the input variable into numbers and store it.
col <- as.numeric(input$var)
    # filter out the unwanted data that would stop the function from working.
without_na <- Sex_Occupatiosn_and_wages %>%
filter(Women_As_A_Percentage_of_Mens
!= "(X)") %>%
filter(Median_Earnings_In_Dollars_Men_Estimate != "-") %>%
filter(Median_Earnings_In_Dollars_Women_Estimate != "-")
# make the whole column into numbers instead of strings.
without_na$Women_As_A_Percentage_of_Mens <-
as.numeric(
without_na$Women_As_A_Percentage_of_Mens)
# make the whole column into numbers instead of strings.
without_na$Median_Earnings_In_Dollars_Men_Estimate <-
as.numeric(without_na$Median_Earnings_In_Dollars_Men_Estimate)
# make the whole column into numbers instead of strings.
without_na$Median_Earnings_In_Dollars_Women_Estimate <-
as.numeric(without_na$Median_Earnings_In_Dollars_Women_Estimate)
# start a condition for different histograms according to different inputs.
if (col == 16) {
# store the column into an object.
x <- without_na$Women_As_A_Percentage_of_Mens
# set up the bin.
bins <- seq(min(x), max(x), length.out = input$bins + 1)
# draw out the actual histogram which allows user to change the colors
      # and bins. Also, the x-axis and title are set up.
hist(without_na[, col], breaks = bins, col = input$bincolor, border = "grey",
xlab = "female wages in percentage of males'(in percentage)",
main = "Histogram of female wages in percentage of males' in 2018")
}
# another condition for another histogram.
else if (col == 12) {
# store the column into an object.
x <- without_na$Median_Earnings_In_Dollars_Men_Estimate
# set up the bin.
bins <- seq(min(x), max(x), length.out = input$bins + 1)
# draw out the actual histogram which allows user to change the
      # colors and bins. Also, the x-axis and title are set up.
hist(without_na[, col], breaks = bins, col = input$bincolor,
border = "grey",
xlab = "Male wages(in dollars)",
main = "Distribution of male wages in 2018")
}
# the other condition.
else {
# store the column into an object.
x <- without_na$Median_Earnings_In_Dollars_Women_Estimate
# set up the bin.
bins <- seq(min(x), max(x), length.out = input$bins + 1)
# draw out the actual histogram which allows user to change the
      # colors and bins. Also, the x-axis and title are set up.
hist(without_na[, col], breaks = bins, col = input$bincolor,
border = "grey",
xlab = "Female wages(in dollars)",
main = "Distribution of female wages in 2018")
}
})
# draw a box plot about female wages
output$BoxPlotFemale <- renderPlot({
Seattle_Wages %>%
mutate(Female_Avg_Hrly_Rate = gsub("[A-Za-z&-]", "", Female_Avg_Hrly_Rate)) %>%
filter(!is.na(Female_Avg_Hrly_Rate), str_length(Female_Avg_Hrly_Rate) != 0) %>%
mutate(Female_Avg_Hrly_Rate = as.numeric(Female_Avg_Hrly_Rate)) %>%
filter(Female_Avg_Hrly_Rate > input$min_dph) %>%
ggplot(aes(y = Female_Avg_Hrly_Rate)) +
scale_y_log10() +
geom_boxplot(outlier.colour="red", outlier.shape=8,
outlier.size=4) +
labs(title = "Distribution of Females' Hourly Wages for Seattle in 2018", y = "Dollars")
})
# draw a box plot about male wages
output$BoxPlotMale <- renderPlot({
Seattle_Wages %>%
mutate(Male_Avg_Hrly_Rate = gsub("[A-Za-z&-]", "", Male_Avg_Hrly_Rate)) %>%
filter(!is.na(Male_Avg_Hrly_Rate), str_length(Male_Avg_Hrly_Rate) != 0) %>%
mutate(Male_Avg_Hrly_Rate = as.numeric(Male_Avg_Hrly_Rate)) %>%
filter(Male_Avg_Hrly_Rate > input$min_dph2) %>%
ggplot(aes(y = Male_Avg_Hrly_Rate)) +
scale_y_log10() +
geom_boxplot(outlier.colour="red", outlier.shape=8,
outlier.size=4) +
labs(title = "Distribution of Males' Hourly Wages for Seattle in 2018", y = "Dollars")
})
# draw a table about each job
output$table <- renderTable({
df <- Sex_Occupatiosn_and_wages %>% filter(Occupational_Category == input$job)
male_employees <- as.data.frame(df$Number_of_Full_Time_Year_Round_Workers_Men_Estimate)
female_employees <- as.data.frame(df$Number_of_Full_Time_Year_Round_Workers_Women_Estimate)
percent <- as.data.frame(df$Percentage_of_Women_in_Occupational_Category_Estimate)
men_wage <- as.data.frame(df$Median_Earnings_In_Dollars_Men_Estimate)
women_wage <- as.data.frame(df$Median_Earnings_In_Dollars_Women_Estimate)
comb <- cbind(male_employees, female_employees, percent, men_wage, women_wage)
colnames(comb) <- c("Total Male Employees", "Total Female Employees",
"Percentage of Women Who Have This Job", "Median Earnings For Men",
"Median Earnings For Women")
comb
})
}
|
ab49cb592f05ba023235ddcf4a1f8ef03a9eee84
|
b0207ec605dcc5e7e745382b82409b1082bf3ee8
|
/workflow/scripts/run_reg_gini_rca.r
|
f59a54da8751545a8f15c3073bb11bcb19fad52f
|
[] |
no_license
|
possible1402/national-science-exports
|
afbe5b4196bf3d53c4982d65b7d1f7e16c2f99d6
|
98f463fc3abc8db669f4a254e1e83fc3402c6521
|
refs/heads/master
| 2023-04-23T17:16:25.958063
| 2021-05-13T18:50:54
| 2021-05-13T18:50:54
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,987
|
r
|
run_reg_gini_rca.r
|
rm(list = ls())
library("plm")
library("stargazer")
library("dplyr")
library("data.table")
args = commandArgs(T)
PANEL_DATA_PATH = args[1]
RESULTS_PATH = args[2]
RESULTS_PATH_BALANCED = args[3]
RESULTS_PATH_BOTH = args[4]
panel_data <- read.csv(PANEL_DATA_PATH)
run_write_models <- function(panel_data, outpath, effect="individual") {
m1 <- plm(growth_rate ~ Income_t0_log + ECI,
data=panel_data,
index=c("Code", "date"),
model="within", effect=effect)
m2 <- plm(growth_rate ~ Income_t0_log + diversity,
data=panel_data,
index=c("Code", "date"),
model="within", effect=effect)
m3 <- plm(growth_rate ~ Income_t0_log + ECI + diversity,
data=panel_data,
index=c("Code", "date"),
model="within", effect=effect)
m4 <- plm(growth_rate ~ Income_t0_log + nm_change + ne_change + shm_change,
data=panel_data, index=c("Code", "date"),
model="within", effect=effect)
m5 <- plm(growth_rate ~ Income_t0_log + ECI + diversity + nm_change + ne_change + shm_change,
data=panel_data, index=c("Code", "date"),
model="within", effect=effect)
result.all <- list(m1=m1,m2=m2,m3=m3,m4=m4,m5=m5)
latex.table <- stargazer(result.all, type='html',dep.var.labels=c("GDP growth (log-ratio)"), ci=TRUE, single.row=FALSE, omit.stat=c("f"),
column.sep.width = "0.5pt", no.space = TRUE,font.size="small",header=FALSE,title="",
notes.align="l",covariate.labels = c('GDP','ECI','Diversity',"Natural","Physical","Societal"))
out.file <- file(outpath)
writeLines(latex.table, out.file)
close(out.file)
}
run_write_models(panel_data, RESULTS_PATH)
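# balance.type = "shared.individuals" keeps only the countries observed in every period,
# so the balanced regressions below are estimated on a common sample across years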
balance_data=make.pbalanced(panel_data,balance.type="shared.individuals")
run_write_models(balance_data, RESULTS_PATH_BALANCED)
run_write_models(balance_data, RESULTS_PATH_BOTH, effect="twoways")
|
42028babbf72acb2a47c9b2438aecc09347e3da0
|
8c2253bd47fd3d76f28950d1ef24450b24c4a0d7
|
/tests/testthat/test-annealing-HRscale.R
|
e2e3562c035c3741ce8c2357dad073f971bf93e4
|
[] |
no_license
|
cran/StrathE2E2
|
bc63d4f0dffdde94da1c7ea41133c09033c0cd4e
|
629dc5e7f2e323752349352bb2d651a56c6f4447
|
refs/heads/master
| 2023-02-25T13:18:59.217896
| 2021-01-22T21:40:05
| 2021-01-22T21:40:05
| 278,343,976
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 3,237
|
r
|
test-annealing-HRscale.R
|
test_that("test simulated annealing code (HR scaling parameters) produces correct output structure", {
skip_on_cran()
#-----------------------------------------------------------------------------------------------------------------
#
# This test runs the 1970-1999 version of the simulated annealing (HR scaling parameters) function to determine
# whether the outputs conform to the expected structure.
#
# The test uses the North Sea model which is provided in the package, with the csv.output argument set to FALSE
#
# The returned object should be a list of 3 elements, each of one object.
#
# We test various attributes of the list and its contents to check that they are as expected.
#
#------------------------------------------------------------------------------------------------------------------
# Read the internal 1970-1999 North Sea model
model<-e2e_read(model.name="North_Sea",
model.variant="1970-1999",
model.ident="TEST")
nyears<-3
n_iter<-10
test_run<-e2e_optimize_hr(model, nyears=nyears, n_iter=n_iter, start_temperature=0.5, csv.output=FALSE,runtime.plot=FALSE)
#--------------
#Extract some attributes of the returned list object
n_1stlev <- nrow(summary(test_run)) # Number of 1st level objects in the list - should =3
#--------------
#Extract attributes of the first 2 elements of the list
nr_prop<-nrow(test_run$parameter_proposal_history) # number of rows of data in the proposal history - should = n_iter
nr_accp<-nrow(test_run$parameter_accepted_history) # number of rows of data in the proposal history - should = n_iter
proposal_lik <- test_run$parameter_proposal_history$lik # Likelihoods for proposed parameters on each iteration
accepted_lik <- test_run$parameter_accepted_history$lik # Likelihoods for accepted parameters on each iteration
dif_lik <- accepted_lik-proposal_lik # accepted values should never be less than proposed values
nneg<-length(which(dif_lik<0)) # nneg should=0
#--------------
# Find out if the final parameter objects generated by the annealing process match the structure expected for the model inputs
model.path <- model$setup$model.path
#setupdata <- read.model.setup(model.path) # Models/Model/Variant/MODEL_SETUP.csv
# setupdata[19] = "harvest_ratio_multiplier.csv"
#pf_HRmult<-readcsv(model.path, PARAMETERS_DIR, setupdata[19])
pf_HRmult<- get.model.file(model.path, PARAMETERS_DIR, file.pattern=HARVEST_RATIO_SCALING_VALUES)
#--------------
#new and expected row and column numbers for the preference matrix
nc_new_HRmult <- ncol(test_run$new_parameter_data) # Columns in the new parameter data
nr_new_HRmult <- nrow(test_run$new_parameter_data) # Rows in the new parameter data
nc_exp_HRmult <- ncol(pf_HRmult) # Columns in the existing parameter file
nr_exp_HRmult <- nrow(pf_HRmult) # Rows in the existing parameter file
nc_new_HRmult
nc_exp_HRmult
nr_new_HRmult
nr_exp_HRmult
#--------------
#Implement the testthat checks
expect_equal(n_1stlev, 3)
expect_equal(nr_prop, n_iter)
expect_equal(nr_accp, n_iter)
expect_equal(nneg,0)
expect_equal(nc_new_HRmult,nc_exp_HRmult)
expect_equal(nr_new_HRmult,nr_exp_HRmult)
})
|
201b7becd9a23441a9631b8869f032cd9ed7815a
|
d7a929cc1fa9dd9f56d739a9036d08ac86e29f88
|
/gradrates_2021_10_6.R
|
95a5382f949a9d2df887459d75bb0a175343bc3e
|
[] |
no_license
|
ozshy/gradrates
|
02169f16bda39c37cfa1a493e7f7daaff25c7ebd
|
72888e44119ea46dde3d3f98e2fbe7b6833cc8ac
|
refs/heads/main
| 2023-08-23T23:52:50.892075
| 2021-10-31T10:42:49
| 2021-10-31T10:42:49
| 315,382,897
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 15,983
|
r
|
gradrates_2021_10_6.R
|
# gradrates_202_5_27.R start revision for JOLR
# gradrates_2020_11_27.R first submission to JOLR
# gradrates_2020_11_26.R deleting previous simulations and adding Section 4 (back to the technology change explanation), line 289
# gradrates_2020_10_25.R simulating the model with differentiated brands, Section 4, Line 286
# gradrates_2020_10_23.R simulating the model with iso-elastic demand, Section 4, Line 286
# gradrates_2020_9_13.R revising the model, introducing mu as varying degree of competition
# gradrates_2020_9_14.R Start revising regressions
# gradrates_200830.R posted 1st draft gradrates_19.tex
########
# Libraries:
library(ggplot2); theme_set(theme_bw())
library(dplyr)# for lag and lead functions
library(plyr)# for join
library(stargazer)# diplay regression results
library(zoo)# to approx NAs in df
#
# Function definitions: Cummulative Annual Growth Rate
CAGR_formula <- function(FV, PV, yrs) {
values <- ((FV/PV)^(1/yrs)-1)
return(values)
}
# testing the formula
CAGR_formula(110, 100, 1)
CAGR_formula(110, 100, 2)
CAGR_formula(121, 100, 2)
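# e.g. (121/100)^(1/2) - 1 = 0.10, i.e. a 10% compounded annual growth rate over 2 years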
setwd("~/Papers/gradrates/gradrates_coding")
dir()
# Load data on HS and college graduation rates
#grad_1 = read.csv("edu_ages_25_29.csv")
#saveRDS(grad_1, "edu_ages_25_29.rds")
grad_2 = readRDS("edu_ages_25_29.rds")
dim(grad_2)
names(grad_2)
head(grad_2)
range(grad_2$Year)
# Load data on income inequality
#inc_1 = read.csv("Measures_income_dispersion.csv")
#saveRDS(inc_1, "income_inequality.rds")
inc_2 = readRDS("income_inequality.rds")
dim(inc_2)
names(inc_2)
head(inc_2)
range(inc_2$Year)
# rename variables (shorten)
inc_3 = inc_2
names(inc_3)
colnames(inc_3)[colnames(inc_3)=="Gini.index.of.income.inequality"] = "Gini"
colnames(inc_3)[colnames(inc_3)=="Mean.logarithmic.deviation.of.income"] = "MLD"
colnames(inc_3)[5] = "Atkinson_0.25"
colnames(inc_3)[6] = "Atkinson_0.50"
colnames(inc_3)[7] = "Atkinson_0.75"
colnames(inc_3)[8] = "Lowest_quintile"
colnames(inc_3)[9] = "Second_quintile"
colnames(inc_3)[10] = "Third_quintile"
colnames(inc_3)[11] = "Fourth_quintile"
colnames(inc_3)[12] = "Highest_quintile"
colnames(inc_3)[13] = "Top_5_percent"
# Load data on earning by education degree
#earn_1 = read.csv("Earning_by_edu.csv")
#str(earn_1)
#saveRDS(earn_1, "Earning_by_edu.rds")
earn_2 = readRDS("Earning_by_edu.rds")
dim(earn_2)
names(earn_2)
head(earn_2)
range(earn_2$Year)
length(earn_2$Year) # num years
str(earn_2)
# Load per-capital GDP
#dir()
#gdp_pp_1 = read.csv("gdp-per-cap.csv")
#str(gdp_pp_1)
#saveRDS(gdp_pp_1, "GDP_PP.rds")
gdp_pp_2 = readRDS("GDP_PP.rds")
dim(gdp_pp_2)
names(gdp_pp_2)
head(gdp_pp_2)
range(gdp_pp_2$Year)
#
gdp_pp_3 = subset(gdp_pp_2, select = c(Year, GDP.PP))
names(gdp_pp_3)
colnames(gdp_pp_3)[2] = "GDP_PP"
names(gdp_pp_3)
str(gdp_pp_3)
# Loading markup data
dir()
# markup_1.df = read.table("markup_raw.txt", col.names = T)
# str(markup_1.df)
# markup_1.df
# markup_2.df = markup_1.df
# colnames(markup_2.df) = markup_1.df[1,]
# #markup_2.df
# markup_3.df = markup_2.df[-1,]
# #markup_3.df
# str(markup_3.df)
# markup_4.df = as.data.frame(sapply(markup_3.df, as.numeric))
# str(markup_4.df)
# markup_4.df$year = as.integer(markup_4.df$year)
# colnames(markup_4.df)[1] = "Year"
# saveRDS(markup_4.df, "markup.rds")
markup_5.df = readRDS("markup.rds")
names(markup_5.df)
str(markup_5.df)
### Plotting HS and college graduation rates: Fig 1
names(grad_2)
#
ggplot(grad_2) + geom_line(aes(x=Year, y=High_school_and_higher)) + geom_point(aes(x=Year, y=High_school_and_higher)) + geom_line(aes(x=Year, y=College_and_higher))+ geom_point(aes(x=Year, y=College_and_higher)) + geom_line(aes(x=Year, y=Associate_and_higher))+ geom_point(aes(x=Year, y=Associate_and_higher))+ geom_line(aes(x=Year, y=MA_and_higher))+ geom_point(aes(x=Year, y=MA_and_higher)) + annotate("text", x=1980, y=90, label="High school or higher", size = 5) + annotate("text", x=1960, y=20, label="College or higher", size = 5) + annotate("text", x=2000, y=45, label="Associate or higher", size = 5) + annotate("text", x=2000, y=10, label="MA or higher", size = 5) + ylab("Percent of graduates (%)") + scale_x_continuous(breaks = seq(1940, 2020, 10)) + scale_y_continuous(breaks = seq(0, 100, 10)) + theme(axis.text.x = element_text(size=rel(1.5)))+ theme(axis.text.y = element_text(size=rel(1.5))) + theme(axis.title.x = element_text(size = rel(1.5))) + theme(axis.title.y = element_text(size = rel(1.5)))
## Discussion of Figure 1 (graduation rates)
names(grad_2) # high school
range(grad_2$Year)
grad_2[grad_2$Year==1940, ]
grad_2[grad_2$Year==2020, ]
grad_2[grad_2$High_school_and_higher>35 & grad_2$High_school_and_higher < 44, ] # crossing %40 HS rates
grad_2[grad_2$High_school_and_higher>45 & grad_2$High_school_and_higher < 54, ] # crossing %50 HS rates
grad_2[grad_2$High_school_and_higher>55 & grad_2$High_school_and_higher < 64, ] # crossing %60 HS rates
grad_2[grad_2$High_school_and_higher>65 & grad_2$High_school_and_higher < 74, ] # crossing %70 HS rates
grad_2[grad_2$High_school_and_higher>75 & grad_2$High_school_and_higher < 84, ] # crossing %80 HS rates
grad_2[grad_2$High_school_and_higher>85 & grad_2$High_school_and_higher < 94, ] # crossing %90 HS rates
#
# compounded annual growth rate of high school grad rates
100*CAGR_formula(grad_2[grad_2$Year==2020, "High_school_and_higher"], grad_2[grad_2$Year==1940, "High_school_and_higher"], 80)
# College
names(grad_2)
grad_2[grad_2$Year==1940, ]
grad_2[grad_2$Year==2020, ]
grad_2[grad_2$College_and_higher>=10, c("Year", "College_and_higher") ] # crossing %10 HS rates
#
grad_2[grad_2$College_and_higher>=20, c("Year", "College_and_higher") ] # crossing %20 HS rates
#
grad_2[grad_2$College_and_higher>=30, c("Year", "College_and_higher") ] # crossing %30 HS rates#
#
grad_2[grad_2$College_and_higher>=35, c("Year", "College_and_higher") ] # crossing %35 HS rates#
#
100*CAGR_formula(grad_2[grad_2$Year==2020, "College_and_higher"], grad_2[grad_2$Year==1940, "College_and_higher"], 80)
### Plotting income inequality measures [Removed from v.61 and on]
names(inc_3)
#
ggplot(inc_3) + geom_line(aes(x=Year, y=Gini, color="Gini")) + geom_point(aes(x=Year, y=Gini, color="Gini", shape="Gini")) + geom_line(aes(x=Year, y=MLD, color = "MLD")) + geom_point(aes(x=Year, y=MLD, color="MLD", shape="MLD")) + geom_line(aes(x=Year, y=Theil, color="Theil")) + geom_point(aes(x=Year, y=Theil, color="Theil", shape="Theil")) + geom_line(aes(x=Year, y=Atkinson_0.50, color="Atkinson")) + geom_point(aes(x=Year, y=Atkinson_0.50, color="Atkinson", shape="Atkinson")) + guides(color=guide_legend(title="Index")) + theme(axis.text.x = element_text(size=rel(1.5)))+ theme(axis.text.y = element_text(size=rel(1.5))) + theme(axis.title.x = element_text(size = rel(1.5))) + theme(axis.title.y = element_text(size = rel(1.5))) + ylab("Inequality index") + annotate("text", x=1975, y=0.16, label="Atkinson", size = 5) + annotate("text", x=1975, y=0.29, label="Theil", size = 5) + annotate("text", x=2010, y=0.5, label="Gini", size = 5) + annotate("text", x=2010, y=0.61, label="MLD", size = 5) + guides(shape=FALSE)+ guides(color=FALSE) # remove legend
## Discussion of Fig inequality: Measures of income inequality [Removed since v.61]
names(inc_3)
range(inc_3$Year)
length(inc_3$Year)
inc_3[inc_3$Year==1967, c(1,2,3,4,6) ]
inc_3[inc_3$Year==2018, c(1,2,3,4,6) ]
### Fig 3: Plotting income inequality quintiles [Not used in paper]
names(inc_3)
#
ggplot(inc_3) + geom_line(aes(x=Year, y= Lowest_quintile, color="1st (lowest)")) + geom_point(aes(x=Year, y=Lowest_quintile, color="1st (lowest)", shape="1st (lowest)")) + geom_line(aes(x=Year, y= Second_quintile, color="2nd")) + geom_point(aes(x=Year, y=Second_quintile, color="2nd", shape="2nd")) + geom_line(aes(x=Year, y= Third_quintile, color="3rd")) + geom_point(aes(x=Year, y=Third_quintile, color="3rd", shape="3rd")) + geom_line(aes(x=Year, y= Fourth_quintile, color="4th")) + geom_point(aes(x=Year, y=Fourth_quintile, color="4th", shape="4th")) + geom_line(aes(x=Year, y= Highest_quintile, color="5th (highest)")) + geom_point(aes(x=Year, y=Highest_quintile, color="5th (highest)", shape="5th (highest)")) + geom_line(aes(x=Year, y= Top_5_percent, color="Top 5%")) + geom_point(aes(x=Year, y=Top_5_percent, color="Top 5%", shape="Top 5%")) + guides(color=guide_legend(title="Quintiles", )) + theme(axis.text.x = element_text(size=rel(1.5)))+ theme(axis.text.y = element_text(size=rel(1.5))) + theme(axis.title.x = element_text(size = rel(1.5))) + theme(axis.title.y = element_text(size = rel(1.5))) + ylab("Shares of household income of quintiles (%)") + scale_y_continuous(breaks = seq(0, 55, 5)) + scale_x_continuous(breaks = seq(1965, 2020, 5)) + annotate("text", x=1975, y=46, label="Highest quintile", size = 5) + annotate("text", x=1975, y=26, label="4th quintile", size = 5) + annotate("text", x=1985, y=19, label="Top 5%", size = 5) + annotate("text", x=2000, y=16, label="3rd quintile", size = 5) + annotate("text", x=1975, y=12, label="2nd quintile", size = 5)+ annotate("text", x=1975, y=6, label="Lowest quintile", size = 5)+ guides(shape=FALSE)+ guides(color=FALSE) # remove legend
## start discussion of Figure (income quitiles) [Not used in paper]
names(inc_3)
dim(inc_3)
range(inc_3$Year)
length(inc_3$Year)
inc_3[inc_3$Year==1967, c(8:13) ]
inc_3[inc_3$Year==2018, c(8:13) ]
#
100*CAGR_formula(inc_3[inc_3$Year==2018, "Lowest_quintile"], inc_3[inc_3$Year==1967, "Lowest_quintile"], 52)
100*CAGR_formula(inc_3[inc_3$Year==2018, "Second_quintile"], inc_3[inc_3$Year==1967, "Second_quintile"], 52)
100*CAGR_formula(inc_3[inc_3$Year==2018, "Third_quintile"], inc_3[inc_3$Year==1967, "Third_quintile"], 52)
100*CAGR_formula(inc_3[inc_3$Year==2018, "Fourth_quintile"], inc_3[inc_3$Year==1967, "Fourth_quintile"], 52)
100*CAGR_formula(inc_3[inc_3$Year==2018, "Highest_quintile"], inc_3[inc_3$Year==1967, "Highest_quintile"], 52)
100*CAGR_formula(inc_3[inc_3$Year==2018, "Top_5_percent"], inc_3[inc_3$Year==1967, "Top_5_percent"], 52)
### Plotting earning (ratios) by HS, college, grad Fig 2 [old Figure 4]
names(earn_2)
range(earn_2$Year)
length(earn_2$Year)
head(earn_2)
dim(earn_2)
earn_2_fig4.df = earn_2
str(earn_2_fig4.df)
# define 5 variables: college_earn_div_hs_earn and college_earn_div_below_hs
earn_2_fig4.df$college_earn_div_hs_earn = earn_2_fig4.df$College/earn_2_fig4.df$High_school
earn_2_fig4.df$graduate_earn_div_hs_earn = earn_2_fig4.df$Graduate/earn_2_fig4.df$High_school
earn_2_fig4.df$college_earn_div_below_hs_earn = earn_2_fig4.df$College/earn_2_fig4.df$Below_high_school
earn_2_fig4.df$graduate_earn_div_below_hs_earn = earn_2_fig4.df$Graduate/earn_2_fig4.df$Below_high_school
earn_2_fig4.df$graduate_earn_div_college_earn = earn_2_fig4.df$Graduate/earn_2_fig4.df$College
names(earn_2_fig4.df)
head(earn_2_fig4.df)
ggplot(earn_2_fig4.df) + geom_line(aes(x=Year, y=college_earn_div_below_hs_earn)) + geom_point(aes(x=Year, y=college_earn_div_below_hs_earn)) + geom_point(aes(x=Year, y=college_earn_div_hs_earn)) + geom_line(aes(x=Year, y=college_earn_div_hs_earn)) + geom_point(aes(x=Year, y=graduate_earn_div_college_earn)) + geom_line(aes(x=Year, y=graduate_earn_div_college_earn)) + annotate("text", x=1982, y=2.45, label=expression(paste(frac(College, Below_HS))), size = 5) + annotate("text", x=1982, y=1.76, label=expression(paste(frac(College, HS))), size = 5) + annotate("text", x=1994, y=1.38, label=expression(paste(frac(Graduate, College))), size = 5) + ylab("Earning ratios by education attainment") + scale_x_continuous(breaks = seq(1975, 2020, 5)) + scale_y_continuous(breaks = seq(1, 3, 0.25)) + theme(axis.text.x = element_text(size=rel(1.5)))+ theme(axis.text.y = element_text(size=rel(1.5))) + theme(axis.title.x = element_text(size = rel(1.5))) + theme(axis.title.y = element_text(size = rel(1.5))) + theme(plot.margin = unit(c(0.3,0.6,0,0), "cm"))
## discussion of Figure 2 [old fig 4] (earning by HS, college, graduate)
names(earn_2_fig4.df)
range(earn_2_fig4.df$Year)
(temp1=round(earn_2_fig4.df[earn_2_fig4.df$Year==1975, c("college_earn_div_below_hs_earn", "college_earn_div_hs_earn", "graduate_earn_div_college_earn")], digits = 2))
(temp2=round(earn_2_fig4.df[earn_2_fig4.df$Year==2019, c("college_earn_div_below_hs_earn", "college_earn_div_hs_earn", "graduate_earn_div_college_earn")],digits = 2))
#
(round(100*(temp2-temp1)/temp1, digits=1))# % change in earning ratios
### Start regressions
names(grad_2)# education attainment as in Fig 1
range(grad_2$Year)
grad_2$Year # gaps in earlier years, needs to be interpolated
dim(grad_2)
head(grad_2)
# adding missing years with NA
(year_1.vec = 1940:2020)
length(year_1.vec)
year_1.df = data.frame(Year=year_1.vec)
str(year_1.df)
grad_3.df = join(year_1.df, grad_2, by="Year", type="left")
dim(grad_3.df)
names(grad_3.df)
# deleting assoc and higher degrees and MA and higher
grad_4.df = subset(grad_3.df, select = c(Year, High_school_and_higher, College_and_higher))
names(grad_4.df)
str(grad_4.df)
# Approximating NAs
grad_5.df = grad_4.df
names(grad_5.df)
# approx NAs (library zoo)
grad_5.df = as.data.frame(na.approx(grad_4.df, rule=2))
str(grad_5.df)
head(grad_4.df,12)
head(grad_5.df,12)
tail(grad_4.df)
tail(grad_5.df)
sum(is.na(grad_5.df))# verify no NAs
# merging grad_5.df with markup_5.df
str(markup_5.df)
range(markup_5.df$Year)
str(grad_5.df)
range(grad_5.df$Year)
reg_1.df = join(markup_5.df, grad_5.df, by="Year", type="left")# merging grad5 into markup5 with smaller range
str(reg_1.df)
reg_1.df
names(reg_1.df)
# merging with gdp per capita
str(gdp_pp_3)
reg_2.df = join(reg_1.df, gdp_pp_3, by="Year", type="left")
str(reg_2.df)
range(reg_2.df$Year)
length(reg_2.df$Year)# num years in the regression panel
# merging with earning
earn_3.df = earn_2
str(earn_3.df)
range(earn_3.df$Year)
reg_3.df = join(subset(reg_2.df, Year %in% c(1975:2016)), subset(earn_3.df, Year %in% c(1975:2016)), by="Year")
str(reg_3.df)
length(reg_3.df$Year)# num years in final regression panel
range(reg_3.df$Year)
names(reg_3.df)
# define ratio college earn to HS earn (top graph in Fig 2)
reg_4.df = reg_3.df
(reg_4.df$earn_ratio = reg_4.df$College/reg_4.df$High_school)
# define regression model
reg_1.model = formula(earn_ratio ~ College_and_higher + High_school_and_higher + Aggregate_Markup + GDP_PP)
(reg_1.lm = lm(reg_1.model, data = reg_4.df))
summary(reg_1.lm)
# regression model without GDP PP
reg_2.model = formula(earn_ratio ~ College_and_higher + High_school_and_higher + Aggregate_Markup)
(reg_2.lm = lm(reg_2.model, data = reg_4.df))
summary(reg_2.lm)
# regression model w/o markup (only edu)
reg_3.model = formula(earn_ratio ~ College_and_higher + High_school_and_higher)
(reg_3.lm = lm(reg_3.model, data = reg_4.df))
summary(reg_3.lm)
# LaTeX table summarizing 3 regression results, Table 1
#stargazer(reg_3.lm, reg_2.lm, reg_1.lm)
stargazer(reg_3.lm, reg_2.lm)
# correlation between markup and PP GDP
round(with(reg_4.df, cor(Aggregate_Markup, GDP_PP)),2)
# which explains why significance shifted from the aggregate markup to GDP. Hence, I remove real GDP from the regression.
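# A hypothetical follow-up check (assumes the car package is available): variance
# inflation factors would make the markup/GDP-per-capita collinearity explicit.
# car::vif(reg_1.lm)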
### Constructing a graph (not in paper) of rising markup based on markup data by the authors of QJE (2020)
markup_5.df
range(markup_5.df$Year)
ggplot(markup_5.df, aes(x=Year, y = Aggregate_Markup)) + geom_point() + geom_line() + ylab("Aggregate price markup") + scale_x_continuous(breaks = seq(1955, 2016, 5)) + scale_y_continuous(breaks = seq(1.2, 1.7, 0.05)) + theme(axis.text.x = element_text(size=rel(1.5)))+ theme(axis.text.y = element_text(size=rel(1.5))) + theme(axis.title.x = element_text(size = rel(1.5))) + theme(axis.title.y = element_text(size = rel(1.5))) + theme(plot.margin = unit(c(0.3,0.6,0,0), "cm"))
### end of code gradrates ###
|
cf359e8ba0b754032cdc6c14f775ae118f284b16
|
88931c8cf916f9e8bacd99c65c1442e21e34e903
|
/scripts/presentation_six_tss_seqlogos.R
|
5ab9e6176dff47b73db5451568a37de929e16158
|
[] |
no_license
|
james-chuang/dissertation
|
cdb91652f9842da5ae75d72f2600c11dceb78721
|
b44d9a88cd934c1862b415b5a3961afc8ce78ec6
|
refs/heads/master
| 2020-06-14T07:03:10.846181
| 2019-07-29T21:18:32
| 2019-07-29T21:18:32
| 194,939,682
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 3,999
|
r
|
presentation_six_tss_seqlogos.R
|
main = function(theme_spec,
data_paths,
fig_width, fig_height,
pdf_out){
source(theme_spec)
library(ggseqlogo)
    #we don't use ggseqlogo's plotting because it doesn't allow a prior to be taken into account;
    #we just use its dataframes with font information for geom_polygon
slop=20
tss_classes = c("wild-type genic",
"intragenic",
"antisense")
df = tibble()
for (i in 1:length(data_paths)){
df = read_tsv(data_paths[i],
comment="#",
col_names=c('position','A','C','G','T','entropy','low','high','weight')) %>%
mutate(position = position - as.integer(slop+1)) %>%
gather(key=base, value=count, c('A','C','G','T')) %>%
group_by(position) %>%
mutate(height=entropy*count/sum(count)) %>%
arrange(height, .by_group=TRUE) %>%
mutate(base_low = lag(cumsum(height), default=0),
base_high = base_low+height) %>%
left_join(ggseqlogo:::get_font("helvetica_bold"), by=c("base"="letter")) %>%
group_by(position, base) %>%
mutate(x = scales::rescale(x, to=c((1-(first(weight)))/2, 1-(1-first(weight))/2)),
x = x+(position-0.5),
y = scales::rescale(y, to=c(first(base_low), first(base_high))),
tss_class = tss_classes[i]) %>%
bind_rows(df, .)
}
df %<>%
mutate(tss_class = fct_inorder(tss_class, ordered=TRUE))
fig_five_c = ggplot() +
geom_polygon(data = df, aes(x=x, y=y, group=interaction(position, base), fill=base),
alpha=0.95) +
geom_label(data=df %>%
distinct(tss_class),
aes(label=tss_class),
x=-12,
y=max(df[["y"]])*0.8,
size=12/72*25.4,
label.size=NA,
label.padding=unit(2, "pt"),
label.r = unit(0, "pt"),
hjust=0,
family="FreeSans") +
scale_fill_manual(values = c('#109648', '#255C99', '#F7B32B', '#D62839', '#D62839'),
breaks = c('A','C','G','T','U')) +
scale_y_continuous(limits = c(NA, max(df[["y"]]) *1.05 ),
breaks = c(0, 0.4),
expand=c(0,0),
name = "bits") +
scale_x_continuous(limits = c(-12.5, 12.5),
expand = c(0,0),
labels = function(x)case_when(x==0 ~ "TSS",
x==10 ~ "+10 nt",
x>0 ~ paste0("+", x),
TRUE ~ as.character(x))) +
facet_grid(tss_class~.,
switch="y") +
ggtitle("TSS sequence preference") +
theme_default_presentation +
theme(legend.position = "none",
axis.title.y = element_text(angle=0, hjust=1, vjust=0.5),
axis.title.x = element_blank(),
axis.text.x = element_text(size=12, color="black", face="plain",
margin=margin(1,0,0,0,"pt")),
axis.line = element_line(size=0.25, color="grey65"),
panel.spacing.y = unit(10, "pt"),
panel.grid = element_blank(),
panel.border = element_blank(),
plot.margin = margin(1,0,0,0,"pt"))
ggsave(pdf_out,
plot=fig_five_c,
width=fig_width,
height=fig_height,
units="cm",
device=cairo_pdf)
}
main(theme_spec = snakemake@input[["theme"]],
data_paths = snakemake@input[["data_paths"]],
fig_width = snakemake@params[["width"]],
fig_height = snakemake@params[["height"]],
pdf_out = snakemake@output[["pdf"]])
|
6f095369163fca27222d5d70e2df9bfdfe539ed6
|
4c50f18ba0a5eb8b13f7efcab5804d59bc361c15
|
/EBIO 1010/Oh_Mar15HW.R
|
a9f2608f272578ad7d35f2069d53563624c0bb23
|
[] |
no_license
|
dmwo/EBIO-R-Code
|
a1075f3ae8e0aeb00dec36f4e3134052a3eaaf88
|
d8f9bf56e03355e00eac56bb2716c9ea39b7eb48
|
refs/heads/master
| 2023-06-19T23:02:26.203989
| 2021-07-14T06:53:46
| 2021-07-14T06:53:46
| 292,739,745
| 0
| 1
| null | null | null | null |
UTF-8
|
R
| false
| false
| 5,342
|
r
|
Oh_Mar15HW.R
|
#------------------------------------------------------------------------------#
# Assignment: March 15 R Homework #
# Author: Dylan Oh #
# Date Created: 14 Mar 2018 #
#------------------------------------------------------------------------------#
# Importing data from .csv
data <- read.csv('student_data_cleaned.csv')
#------------------------------------------------------------------------------#
# Employing the Linear Model #
#------------------------------------------------------------------------------#
# Creating variable hike, coding weekly hikers ('Yes') as 1 and non-hikers ('No') as 0
hike <- ifelse(
data$Nature_hike_week == 'Yes',
yes = 1,
no = 0
)
# Invoking a linear model comparing marijuana consumption by weekly hiking
marijuana_model <- lm(data$Marijuana_week ~ hike)
# Looking at the results of the test
summary(marijuana_model)
#------------------------------------------------------------------------------#
# Plotting the Data #
#------------------------------------------------------------------------------#
# Creating jitter vector
x_jitter <- jitter(hike,0.25)
# Plotting the data with jitter and no x-axis
plot(
x_jitter,
data$Marijuana_week,
  main = 'Marijuana Consumption Per Week By Hiking Habit',
xlab = 'Hike Every Week',
ylab = 'Marijuana Use per Week',
xlim = c(-0.4,1.4),
ylim = c(0,14),
xaxt = 'n',
las = 1
)
# Creating background colour for the plot
rect(
par("usr")[1],
par("usr")[3],
par("usr")[2],
par("usr")[4],
col = 'floralwhite'
)
# Reapplying data points
points(
x_jitter,
data$Marijuana_week,
col = 'forestgreen'
)
# Creating x-axis, replacing 0 and 1 with no and yes
axis(
side = 1,
at = c(0,1),
c('No','Yes')
)
# Creating a line that connects the two means
abline(
marijuana_model,
lwd = 2,
col = 'navajowhite3'
)
# Creating a line that indicates the location of x = 0
abline(
v = 0,
lty = 3
)
# Plotting the mean of the no data
points(
x = 0,
y = marijuana_model$coeff[1],
pch = 19,
cex = 1.5,
col = 'royalblue'
)
# Plotting the mean of the yes data
points(
x = 1,
y = marijuana_model$coeff[1] + marijuana_model$coeff[2],
pch = 19,
cex = 1.5,
col = 'royalblue'
)
# The Intercept Estimate is the mean marijuana consumption among people who
# did not hike every week. The hike Estimate is the difference between that
# value and the mean for people who did hike every week. The standard error
# represents the uncertainty in each estimate, and the t value and p value are
# further measures of that uncertainty. Because the p value is well above 0.05,
# there is likely no significant difference in marijuana consumption between
# people who do and do not hike every week.
#------------------------------------------------------------------------------#
# Rubric #
#------------------------------------------------------------------------------#
#-------------------------------------------------------------------------------
# 1. Load the Data | YES |
#-----------------------------------------------------------------------|------|
# 2. Hike/week Data Transformed into 0 and 1 | YES |
#-----------------------------------------------------------------------|------|
# 3. Construct the Model | YES |
#-----------------------------------------------------------------------|------|
# 4. Ensure Each of the Values is Recognised | YES |
#-----------------------------------------------------------------------|------|
# 5. Jitter X-Values and Save as New Vector | YES |
#-----------------------------------------------------------------------|------|
# 6. Make a Plot of the Data and Replace X-Axis | YES |
#-----------------------------------------------------------------------|------|
# 7. Plot Axes Correct and Graph Looks Good | YES |
#-----------------------------------------------------------------------|------|
# 8. Draw Model Predictions on Graph | YES |
#-----------------------------------------------------------------------|------|
# 9. Put Means on Graph | YES |
#-----------------------------------------------------------------------|------|
# 10. Each Line of Code has Annotation | YES |
#-----------------------------------------------------------------------|------|
# 11. Submit Code and Picture of Graph | YES |
#-----------------------------------------------------------------------|------|
# 12. Add Something to Graph Extra Credit | NO |
#-------------------------------------------------------------------------------
|
576b369adacc1d108955bcf1bf8e7fe4f5d976c6
|
ee8733c46c91949478b44143e4977ca0ca857968
|
/R/getComposingPrimes.R
|
7b1c12ff0d483f31e7cfaacab3f9fb875ddfe57b
|
[] |
no_license
|
holgerschw/logicFS
|
0a7919ef1012814b83a114dbc485e8be3d21e7ae
|
ed8b0be37da919754b39e1a46e793e253b06ddaf
|
refs/heads/master
| 2021-06-06T01:10:52.594711
| 2020-04-12T21:34:42
| 2020-04-12T21:34:42
| 148,649,928
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 443
|
r
|
getComposingPrimes.R
|
getComposingPrimes <- function (primes, vec.primes)
{
if(is.null(primes))
return(NULL)
a <- strsplit(primes, split = " & ")
b <- strsplit(vec.primes, split = " & ")
d <- numeric(length(vec.primes))
for (i in 1:length(primes)){
d <- d + sapply(b, function (x, y) ifelse(
all(x %in% y) && (length(x) != length(y)),
1, 0), y = a[[i]])
}
if(sum(d) > 0)
out <- vec.primes[d > 0]
else
out <- NULL
out
}
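# Hypothetical usage sketch (variable names are illustrative): returns the elements
# of vec.primes that are proper sub-interactions of any prime in primes.
# getComposingPrimes("X1 & X2 & X3", c("X1 & X2", "X4", "X1 & X2 & X3"))
# # returns "X1 & X2"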
|
29bbc5239f4350aec9d71a32ea4301959b6c1ec2
|
4928475849053b4f8374fa3883e162a4687c97f8
|
/binomal/R/main function.R
|
e16965e3a996d16d2786af9b601c2358afeac8e6
|
[] |
no_license
|
stat133-sp19/hw-stat133-snowman36
|
7d527a21725081bd73a368a9e63e803bf0b2499d
|
99fc30abfe65efc3f141fed49154c0ddfd30bafe
|
refs/heads/master
| 2020-04-28T13:42:33.558688
| 2019-05-01T02:21:26
| 2019-05-01T02:21:26
| 175,313,702
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 7,458
|
r
|
main function.R
|
#' @importFrom graphics barplot lines plot
#' @title bin_choose
#' @description Calculate the combinations "n choose k"
#' @param n the total number of trials
#' @param k the total number of successes
#' @return the combinations "n choose k"
#' @export
#' @examples
#' bin_choose(n = 5, k = 2)
bin_choose <- function(n, k) {
if(max(k) > n) {
stop("\nk cannot be greater than n")
}
number <- factorial(n)/factorial(k)/factorial(n-k)
return(number)
}
#' @title bin_probability
#' @description Calculate the binomial probability of a given number of successes
#' @param success the number of successes
#' @param trials the total number of trials
#' @param prob the probability of success every time
#' @return the binomial probability of the specified number(s) of successes.
#' @export
#' @examples
#' bin_probability(success = 2, trials = 5, prob = 0.5)
#'
#' bin_probability(success = 0:2, trials = 5, prob = 0.5)
#'
bin_probability <- function(success, trials, prob) {
if(!check_trials(trials)) {
stop("\ninvalid trials value")
}
if(!check_prob(prob)) {
    stop("\ninvalid prob value")
}
if(!check_success(success, trials)) {
stop("\ninvalid success value")
}
probability <- bin_choose(trials, success) * prob^success * (1-prob)^(trials-success)
return(probability)
}
#' @title bin_distribution
#' @description Create an object of class \code{"bindis"}
#' @param trials the total number of the trials
#' @param prob the probability of success every time
#' @return an object of class \code{"bindis"}
#' @export
#' @examples
#' bin_distribution(trials = 5, prob = 0.5)
bin_distribution <- function(trials, prob) {
if(!check_trials(trials)) {
stop("\ninvalid trials value")
}
if(!check_prob(prob)) {
    stop("\ninvalid prob value")
}
success <- 0:trials
probability <- bin_probability(success = success, trials = trials, prob = prob)
result <- list(success = success, probability = probability)
class(result) <- "bindis"
return(result)
}
#' @export
print.bindis <- function(x) {
cat("\n")
result <- data.frame(success = x$success, probability = x$probability)
print(result)
invisible(x)
}
#' @export
plot.bindis <- function(x) {
barplot(x$probability,
xlab = 'success',
ylab = "probability",
          names.arg = x$success)
}
#' @title bin_cumulative
#' @description Create an object of class \code{"bincum"}
#' @param trials the total number of the trials
#' @param prob the probability of success every time
#' @return an object of class \code{"bincum"}
#' @export
#' @examples
#' bin_cumulative(trials = 5, prob = 0.5)
bin_cumulative <- function(trials, prob) {
if(!check_trials(trials)) {
stop("\ninvalid trials value")
}
if(!check_prob(prob)) {
    stop("\ninvalid prob value")
}
success <- 0:trials
probability <- bin_probability(success = success, trials = trials, prob = prob)
result <- list(success = success, probability = probability, cumulative = cumsum(probability))
class(result) <- "bincum"
return(result)
}
#' @export
print.bincum <- function(x) {
result <- data.frame(success = x$success, probability = x$probability, cumulative = x$cumulative)
print(result)
invisible(x)
}
#' @export
plot.bincum <- function(x) {
plot(x$success,x$cumulative,
xlab = "success",
ylab = "probability")
lines(x$success, x$cumulative)
}
#' @title bin_variable
#' @description Create an object of class \code{"binvar"}
#' @param trials the total number of the trials
#' @param prob the probability of success every time
#' @return an object of class \code{"binvar"}
#' @export
#' @examples
#' bin_variable(trials = 5, prob = 0.5)
bin_variable <- function(trials, prob){
if(!check_trials(trials)) {
stop("\ninvalid trials value")
}
if(!check_prob(prob)) {
    stop("\ninvalid prob value")
}
obj <- list(trials = trials, prob = prob)
class(obj) <- "binvar"
obj
}
#' @export
print.binvar <- function(x) {
cat(sprintf("\n- number of trials: %s", x$trials))
cat(sprintf("\n- prob of success: %s", x$prob))
invisible(x)
}
#' @export
summary.binvar <- function(x) {
trials <- x$trials
prob <- x$prob
obj <- list(trials = trials,
prob = prob,
mean = aux_mean(trials = trials, prob = prob),
variance = aux_variance(trials = trials,prob = prob),
mode = aux_mode(trials = trials,prob = prob),
skewness = aux_skewness(trials = trials,prob = prob),
kurtosis = aux_kurtosis(trials = trials,prob = prob))
class(obj) <- "summary.binvar"
obj
}
#' @export
print.summary.binvar <- function(x) {
trials <- x$trials
prob <- x$prob
cat("\nSummary Binomial\n\n")
  cat("Parameters")
cat(sprintf("\n- number of trials: %s", trials))
cat(sprintf("\n- prob of success: %s", prob))
cat("\n\nMeasures")
cat(sprintf("\n- mean\t: %s",aux_mean(trials = trials,prob = prob)))
cat(sprintf("\n- variance\t: %s",aux_variance(trials = trials,prob = prob)))
cat(sprintf("\n- mode\t: %s",aux_mode(trials = trials,prob = prob)))
cat(sprintf("\n- skewness\t: %s",aux_skewness(trials = trials,prob = prob)))
cat(sprintf("\n- kurtosis\t: %s",aux_kurtosis(trials = trials,prob = prob)))
}
#' @title bin_mean
#' @description calculate the mean of a binomial distribution
#' @param trials the total number of trials
#' @param prob the probability of success every time
#' @return a numeric object
#' @export
#' @examples
#' bin_mean(trials = 5, prob = 0.5)
bin_mean <- function(trials, prob) {
check_trials(trials)
check_prob(prob)
aux_mean(trials,prob)
}
#' @title bin_variance
#' @description calculate the variance of a binomial distribution
#' @param trials the total number of trials
#' @param prob the probability of success every time
#' @return a numeric object
#' @export
#' @examples
#' bin_variance(trials = 5, prob = 0.5)
bin_variance <- function(trials, prob) {
check_trials(trials)
check_prob(prob)
aux_variance(trials,prob)
}
#' @title bin_mode
#' @description calculate the mode of a binomial distribution
#' @param trials the total number of trials
#' @param prob the probability of success every time
#' @return a numeric object
#' @export
#' @examples
#' bin_mode(trials = 5, prob = 0.5)
bin_mode <- function(trials, prob) {
check_trials(trials)
check_prob(prob)
aux_mode(trials,prob)
}
#' @title bin_skewness
#' @description calculate the skewness of a binomial distribution
#' @param trials the total number of trials
#' @param prob the probability of success every time
#' @return a numeric object
#' @export
#' @examples
#' bin_skewness(trials = 5, prob = 0.5)
bin_skewness <- function(trials, prob) {
check_trials(trials)
check_prob(prob)
aux_skewness(trials,prob)
}
#' @title bin_kurtosis
#' @description calculate the kurtosis of a binomial distribution
#' @param trials the total number of trials
#' @param prob the probability of success every time
#' @return a numeric object
#' @export
#' @examples
#' bin_kurtosis(trials = 5, prob = 0.5)
bin_kurtosis <- function(trials, prob) {
check_trials(trials)
check_prob(prob)
aux_kurtosis(trials,prob)
}
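# The check_* and aux_* helpers are defined elsewhere in this package; a minimal
# sketch of the assumed auxiliary measures for a binomial(trials, prob) variable:
# aux_mean     <- function(trials, prob) trials * prob
# aux_variance <- function(trials, prob) trials * prob * (1 - prob)
# aux_mode     <- function(trials, prob) floor(prob * (trials + 1))
# aux_skewness <- function(trials, prob) (1 - 2*prob) / sqrt(trials * prob * (1 - prob))
# aux_kurtosis <- function(trials, prob) (1 - 6*prob*(1 - prob)) / (trials * prob * (1 - prob))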
|
1441582ed89129ff0784e5a5cd7c9c11b006881b
|
c409ff3ea8b7c62efd962d37c83793d4fc0dc1bc
|
/man/cqc_defaults.Rd
|
0d574d291f4502619e0e1d99a8c7111e46352fa0
|
[
"MIT"
] |
permissive
|
markdly/conquestr
|
e225ecb1347957dc025c5c719d46624f56e01207
|
7994b3768e26acf1be4ac20821da66ba7f564deb
|
refs/heads/master
| 2021-04-30T10:41:43.513747
| 2018-09-12T05:46:02
| 2018-09-12T05:46:02
| 121,339,581
| 1
| 0
| null | 2018-09-12T05:46:03
| 2018-02-13T04:43:27
|
R
|
UTF-8
|
R
| false
| true
| 431
|
rd
|
cqc_defaults.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/cq_syntax.R
\name{cqc_defaults}
\alias{cqc_defaults}
\title{List of default arguments for use with ConQuest commands}
\usage{
cqc_defaults()
}
\value{
A named list. Each element contains a default placeholder value. Designed for use with `cqc_cmds`
}
\description{
List of default arguments for use with ConQuest commands
}
\examples{
cqc_defaults()
}
|
106d0510fd414301fb2415587e4bbde90c53da55
|
71ab7a051c715f44e841fbc77cee4db54547347c
|
/plot2.R
|
92c4b715dfbc4fe57563be4544cb5de377dbf803
|
[] |
no_license
|
SpyderChica/ExData_Plotting1
|
db07859042ff98ebafb6e966faf615f956e391c9
|
f5863e2d58710c1731d73fa459d8d12af406c8a4
|
refs/heads/master
| 2020-12-25T02:49:41.786179
| 2015-08-08T22:45:46
| 2015-08-08T22:45:46
| 40,416,034
| 0
| 0
| null | 2015-08-08T20:47:19
| 2015-08-08T20:47:19
| null |
UTF-8
|
R
| false
| false
| 614
|
r
|
plot2.R
|
#Our overall goal here is simply to examine how household energy usage varies over a 2-day period in February, 2007. Your task is to reconstruct the following plots below, all of which were constructed using the base plotting system.
df = read.csv("household_power_consumption.txt", sep = ";", na.strings="?", stringsAsFactors = FALSE)
df <- subset(df, df$Date == "1/2/2007" | df$Date == "2/2/2007")
df$DT <- paste(df$Date,df$Time,sep="_")
df$DT <- strptime(df$DT, "%d/%m/%Y_%H:%M:%S")
png("plot2.png")
plot(y = df$Global_active_power, x= df$DT, ylab="Global Active Power (kilowatts)",type = "l",xlab="")
dev.off()
|
3013d7b03c56c9b5b36463f263551a9dfc411848
|
7a7375245bc738fae50df9e8a950ee28e0e6ec00
|
/R/SA4__Year_Sexunemployment.R
|
52acc6b0c5fed422cd86401c767f5c2eff5ff1b9
|
[] |
no_license
|
HughParsonage/Census2016.DataPack.TimeSeries
|
63e6d35c15c20b881d5b337da2f756a86a0153b5
|
171d9911e405b914987a1ebe4ed5bd5e5422481f
|
refs/heads/master
| 2021-09-02T11:42:27.015587
| 2018-01-02T09:01:39
| 2018-01-02T09:02:17
| 112,477,214
| 3
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 150
|
r
|
SA4__Year_Sexunemployment.R
|
#' @title Sex by SA4, Year
#' @description UnemploymentSex by SA4, Year
#' @format 642 observations and 4 variables.
"SA4__Year_Sexunemployment"
|
ba441c6e42b9db0c6696ce5b2ea70ec936edbe34
|
ce2deb9f1b4e02f16592ed78acad49bec5a674c8
|
/projects/acn/data/ONC_Art_Cooc_2011/ArtONCnetworksMay2011.R
|
3e6605c5f727fcac388105c385c39361c0f5962c
|
[] |
no_license
|
RavenDaddy/Brent-Thomas-Tripp-
|
e748b9e48ebd96bfc9ca11124283fb2acde7b86a
|
fe81f7952e67ab093d9e455b9bf520178c724383
|
refs/heads/master
| 2017-12-15T11:41:30.341714
| 2017-01-02T01:02:36
| 2017-01-02T02:31:22
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 6,381
|
r
|
ArtONCnetworksMay2011.R
|
#Reading Art's data files
#source('/Users/aeolus/Documents/R_Docs/EcoNet/EcoNet_03.R')
#source('/Users/Aeolus/Documents/Active_Projects/GeneticNetworks/Genetic_Networks/ind_net.R')
source('/Users/Aeolus/Documents/Active_Projects/CorNets/CorNets.R')
library(bipartite)
library(vegan)
library(sna)
library(gdata)
#Standard Error Function
se=function(x){sd(x)/sqrt(length(x))}
#Read Art's data files
read.art=function(file=""){
x<-read.xls(file,header=TRUE)
xspp<-x[,1]
x<-x[,-1]
x<-t(x)
colnames(x)<-xspp
return(x)
}
my.split=function(x,lim=5){
y=unlist(strsplit(x,split=''))
x=character()
for (i in (1:min(c(lim,length(y))))){
if (y[i]==' '|y[i]=='.'|y[i]=='X'){}else{
x=paste(x,y[i],sep='')
}
}
as.character(x)
}
###NOTE!!! Fly (dance) in 2004 and 2005 was changed to Fly (Dolichopodidae) given the possibility of calling a Dolichopodidae, long-legged fly, a "dance fly" and given that it was nearly undetected in 2005 and not detected in 2004.
setwd("/Users/aeolus/Documents/R_Docs/Art_Keith/ArtONCArthropods")
ArtONC=list()
for (i in 1:3){
ArtONC[[i]]=read.art(dir()[i])
names(ArtONC)[i]=strsplit(dir()[i],split=" ")[[1]][1]
}
names(ArtONC)
all(colnames(ArtONC[[1]])==colnames(ArtONC[[2]]))
all(colnames(ArtONC[[2]])==colnames(ArtONC[[3]]))
#bipartite graph perspective. These graphs show how each tree differs in its connections to the community of arthropods.
par(mfrow=c(1,3))
for (i in 1:length(ArtONC)){plotweb(ArtONC[[i]][order(apply(ArtONC[[i]],1,sum),decreasing=TRUE),order(apply(ArtONC[[i]],2,sum),decreasing=TRUE)],method='normal',text.rot=90)}
A=rbind(ArtONC[[1]],ArtONC[[2]],ArtONC[[3]])
#Cut out the species whose total abundances represent less than 0.5% of the total abundance across all species
barplot((apply(A,2,sum)/sum(apply(A,2,sum)))[order((apply(A,2,sum)/sum(apply(A,2,sum))),decreasing=TRUE)],las=2);abline(h=0.005)
A=rbind(ArtONC[[1]],ArtONC[[2]],ArtONC[[3]])
A=A[,(apply(A,2,sum)/sum(apply(A,2,sum)))>=0.005]
G=as.character(sapply(rownames(A),my.split))
A.list=list()
for (i in (1:length(unique(G)))){
A.list[[i]]=A[G==unique(G)[i],]
}
names(A.list)=unique(G)
A.net=list()
for (i in (1:length(A.list))){
A.net[[i]]=kendall.pairs(A.list[[i]],p.adj=FALSE,alpha=0.05,adj.method='fdr')
}
names(A.net)=names(A.list)
length(A.net)
quartz('ONC Arth. Networks',7,6.5)
par(mfrow=c(3,3))
for (i in (1:length(A.net))){
gplot(abs(A.net[[i]]),gmode='graph',mode='circle',vertex.col='black',edge.col='gray',vertex.sides=50,main=names(A.net)[i])
}
##Network distance by hybrid index
#NOTE! There is no variation in hybrid index among these trees
hindex <- read.csv('/Users/Aeolus/Documents/garden_info/ONC_hybrid_index.csv')
hindex <- na.omit(hindex)
hi.geno <- as.character(gsub('-','',hindex[,1]))
hi <- as.numeric(hindex[,3])
geno.net <- as.character(names(A.net))
hi.net <- geno.net
geno.net[geno.net == 'Coal'] <- 'COAL1'
unique(geno.net)
for (i in (1:length(unique(geno.net)))){
hi.net[geno.net == unique(geno.net)[i]] <- hi[hi.geno == unique(geno.net)[i]]
}
##Figure for presentation
quartz('',22,22)
par(mfrow=c(3,3),mar=c(2.4,1.3,1.3,1.3),oma=c(0.1,0.1,0.1,0.1),bg='transparent',col.main='white',cex=2,mar=c(2,1,1,1))
net.list <- A.net
com.list <- A.list
#names(net.list)=c('Excluded','Resistant','Susceptible') #rename the network graphs
net.list.reorder <- net.list
com.list.reorder <- com.list
for (i in (1:length(net.list))){
v.col=apply(com.list.reorder[[i]],2,sum); v.col[v.col != 0] = 'lightblue' #color the present species
v.col[v.col == 0] <- 'black' #color empty species
gplot(abs(net.list.reorder[[i]]),gmode='graph',vertex.cex=3,vertex.sides=100,vertex.col=v.col,edge.lwd=0.35,edge.col=gray(0.9)[1],vertex.border='grey',mode='circle',displaylabels=FALSE,cex=2,main=names(net.list.reorder)[i]) #without titles
}
#indicator species analyses
library(vegan)
library(labdsv)
G04=as.character(sapply(rownames(ArtONC[[1]]),my.split))
G05=as.character(sapply(rownames(ArtONC[[2]]),my.split))
G06=as.character(sapply(rownames(ArtONC[[3]]),my.split))
summary(indval(ArtONC[[1]],G04,numitr=5000))
G04[order(G04)]
summary(indval(ArtONC[[2]],G05,numitr=5000))
G04[order(G05)]
summary(indval(ArtONC[[3]],G06,numitr=5000))
G04[order(G06)]
par(mfrow=c(1,3))
plot(ArtONC[[1]][,colnames(ArtONC[[1]])=='Leaf miner']~factor(G04),las=2,ylab='Leaf miner',xlab='')
plot(ArtONC[[1]][,colnames(ArtONC[[1]])=='Stem Borer']~factor(G04),las=2,ylab='Stem Borer',xlab='')
plot(ArtONC[[1]][,colnames(ArtONC[[1]])=='T. Dip (tiny)']~factor(G04),las=2,ylab='T. Dip (tiny)',xlab='')
par(mfrow=c(2,4))
plot(ArtONC[[2]][,colnames(ArtONC[[2]])=='Leafhopper (brwn mot)']~factor(G05),las=2,ylab='Leafhopper (brwn mot)',xlab='')
plot(ArtONC[[2]][,colnames(ArtONC[[2]])=='Beetle (unknown)']~factor(G05),las=2,ylab='Beetle (unknown)',xlab='')
plot(ArtONC[[2]][,colnames(ArtONC[[2]])=='Coenagrionidae']~factor(G05),las=2,ylab='Coenagrionidae',xlab='')
plot(ArtONC[[2]][,colnames(ArtONC[[2]])=='Leafhopper (fsh eye)']~factor(G05),las=2,ylab='Leafhopper (fsh eye)',xlab='')
plot(ArtONC[[2]][,colnames(ArtONC[[2]])=='Parasitoid']~factor(G05),las=2,ylab='Parasitoid',xlab='')
plot(ArtONC[[2]][,colnames(ArtONC[[2]])=='Calophorid']~factor(G05),las=2,ylab='Calophorid',xlab='')
plot(ArtONC[[2]][,colnames(ArtONC[[2]])=='Leafhopper (gry str)']~factor(G05),las=2,ylab='Leafhopper (gry str)',xlab='')
plot(ArtONC[[2]][,colnames(ArtONC[[2]])=='Thrip (black)']~factor(G05),las=2,ylab='Thrip (black)',xlab='')
par(mfrow=c(1,2))
plot(ArtONC[[3]][,colnames(ArtONC[[3]])=='Beetle (unknown)']~factor(G06),las=2,xlab='',ylab='Beetle (unknown)')
plot(ArtONC[[3]][,colnames(ArtONC[[3]])=='Salticidae (gry/blk)']~factor(G06),las=2,xlab='',ylab='Salticidae (gry/blk)')
#Using across years as replicates
#NOTE: this didn't work, most likely because of an issue of low power.
A=rbind(ArtONC[[1]],ArtONC[[2]],ArtONC[[3]])
A=A[,(apply(A,2,sum)/sum(apply(A,2,sum)))>=0.005]
A.tree=list()
for (i in (1:length(unique(rownames(A))))){
A.tree[[i]]=A[rownames(A)==unique(rownames(A))[i],]
}
names(A.tree)=unique(rownames(A))
G.tree=as.character(sapply(names(A.tree),my.split))
Atree.nets=list()
for (i in (1:length(A.tree))){
Atree.nets[[i]]=kendall.pairs(A.tree[[i]],p.adj=FALSE,alpha=0.1)
Atree.nets[[i]][is.na(Atree.nets[[i]])]=0
}
for (i in (1:length(Atree.nets))){
gplot(abs(Atree.nets[[i]]))
locator(1)
}
|
ba7ac25034fbd42262b95711f5a6c28868579eed
|
d276646df2e7165caa4f77c4b1ebf3ab54d08a68
|
/man/training_data.Rd
|
0544a71d0b793e1e1c40fc0fcf28cb34ef0e13d8
|
[] |
no_license
|
zhaoxia413/PreMSIm
|
c0298a86b76d7c8bc2b8665483cb8e8109ef1323
|
0f67934545033eaabe1908806274afc1a3056960
|
refs/heads/master
| 2023-01-01T09:56:28.150818
| 2020-10-17T07:26:56
| 2020-10-17T07:26:56
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| true
| 1,423
|
rd
|
training_data.Rd
|
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/training_data.R
\docType{data}
\name{training_data}
\alias{training_data}
\title{Training set}
\format{A dataframe with 1383 rows (tumor samples) and 16 columns (MSI status of tumor samples, and 15 gene features). The column names
are as follows:
\describe{
\item{MSI_status}{MSI_status}
\item{\emph{DDX27}}{DEAD-box helicase 27}
\item{\emph{EPM2AIP1}}{EPM2A interacting protein 1}
\item{\emph{HENMT1}}{HEN methyltransferase 1}
\item{\emph{LYG1}}{lysozyme g1}
\item{\emph{MLH1}}{mutL homolog 1}
\item{\emph{MSH4}}{mutS homolog 4}
\item{\emph{NHLRC1}}{NHL repeat containing E3 ubiquitin protein ligase 1}
\item{\emph{NOL4L}}{nucleolar protein 4 like}
\item{\emph{RNLS}}{renalase, FAD dependent amine oxidase}
\item{\emph{RPL22L1}}{ribosomal protein L22 like 1}
\item{\emph{RTF2}}{replication termination factor 2}
\item{\emph{SHROOM4}}{shroom family member 4}
\item{\emph{SMAP1}}{small ArfGAP 1}
\item{\emph{TTC30A}}{tetratricopeptide repeat domain 30A}
\item{\emph{ZSWIM3}}{zinc finger SWIM-type containing 3}
}}
\source{
\url{https://xenabrowser.net/datapages/}
}
\usage{
training_data
}
\description{
The training set is the expression profiles of a 15-gene panel from TCGA RNA-Seq pan-cancer (involving colon, gastric, and endometrial cancers) dataset.
}
\keyword{datasets}
|
577f1c55edc5fe2bd6e77e3473970a02d880d212
|
000a615bc4e146c9c47e8b1fe158df2fe3ae0996
|
/Week7/Day1/Lecture/Wine-PCA.R
|
ed697041857c5d3480db3a423db339b2fa7b6e64
|
[] |
no_license
|
SheikMBasha/MLLearning
|
4f10da4a354dd897b61f82b1b556d29b00942737
|
525a28875745dd1090cd85da20df8f5bfe3951cc
|
refs/heads/master
| 2021-09-28T17:42:19.309258
| 2018-02-25T18:43:16
| 2018-02-25T18:43:16
| 115,577,096
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,504
|
r
|
Wine-PCA.R
|
setwd("D:/Batch37/CSE7302c/Day04")
# Chemical analysis of Wine
# These data are the results of a chemical analysis of wines grown
# in the same region in Italy but derived from three different cultivars.
# The analysis determined the quantities of 13 constituents found in each
# of the three types of wines.
WineData <- read.csv("wine.csv")
str(WineData)
# PCA analysis is done only on the predictors
wine.predictors <- WineData[,-1]
# Since the predictors are of completely different magnitudes,
# we need to scale them before the analysis.
scaled.Predictors <- scale(wine.predictors)
scaled.Predictors
# compute PCs
pca.out = princomp(scaled.Predictors)
# princomp(wine.predictors, cor=TRUE) would
# automatically scale
names(pca.out)
summary(pca.out)
pca.out$loadings
plot(pca.out)
#If we choose 80% explained variance as the cutoff, we need only the first 5 principal components.
compressed_features = pca.out$scores[,1:5]
compressed_features
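# A quick sketch to verify the ~80% cutoff: cumulative proportion of variance explained by the PCs
cumsum(pca.out$sdev^2) / sum(pca.out$sdev^2)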
library(nnet)
multout.pca <- multinom(WineData$WineClass ~ compressed_features)
summary(multout.pca)
#Gives us AIC value of 24
multout.full <- multinom(WineClass ~ scaled.Predictors, data=WineData)
summary(multout.full) #Gives us AIC of 56
#Visualizing the spread in the dataset using only the first 2 components.
#
library(devtools)
install_github("vqv/ggbiplot")
library(ggbiplot)
g <- ggbiplot(pca.out, obs.scale = 1, var.scale = 1,
groups = WineData$WineClass, ellipse = TRUE, circle = TRUE)
g + scale_color_discrete(name = '')
|
68fbab354acdaf59b559dd8d5773569de9c5d3f1
|
dfa05ab954b371ba16feefc32ce82b6bd0476545
|
/3_r_scripts/create_variables.R
|
d3c3f9afe551a4f1d85aa7690c594f55cf886c89
|
[] |
no_license
|
fotisz/travel-speeds
|
bcd7cfcaf4734602b8f8c8cb8c5686e78b93188d
|
5f8ea23547ad6fcd19fd8109ccf5f5b73dc37ead
|
refs/heads/master
| 2020-04-28T12:41:05.216275
| 2017-12-01T13:26:00
| 2017-12-01T13:47:29
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,263
|
r
|
create_variables.R
|
### Setup ----
library(tidyverse)
library(magrittr)
library(data.table)
dublin_csv <- fread("0_raw_data/attributes.csv")
# Melt data columns labelled as u00_00 ... u23_45 into one variable "Time" with value "Speed"
melted_dt <- data.table::melt(dublin_csv, measure.vars = grep("u\\d", names(dublin_csv), value = TRUE),
variable.name = "Time", value.name = "Speed")
# Convert Time variable into datetime representation
melted_dt <- melted_dt[, Time := as.POSIXct(Time, format = "u%H_%M")]
setkey(melted_dt, link_id, trav_dir, Time)
### Create traffic_intensity as speed / max(speed) ----
# Derive max(speed) by road link
max_speeds <- melted_dt[, .SD[which.max(Speed)], by=list(link_id, trav_dir)]
max_speeds <- max_speeds[, .(link_id, trav_dir, Time, Speed)]
names(max_speeds) <- c("link_id", "trav_dir", "Time of Max Speed", "Max Speed")
# Add traffic_intensity variable
melted_dt <- melted_dt[max_speeds]
melted_dt$traffic_intensity <- melted_dt[, .(traffic_intensity = Speed / `Max Speed`)]
# Save resulting CSVs
write_csv(melted_dt, "4_r_data_output/melted_dt.csv", na = "")
melted_dt_sample <- melted_dt[func_class <= 3 & trav_dir]
write_csv(melted_dt_sample, "4_r_data_output/melted_dt_sample.csv", na = "")
### Create time interval clusters ----
library(lubridate)
library(Ckmeans.1d.dp)
# Group all links together by 15min interval where func_class == 1
cluster_data <- melted_dt[func_class == 1, .(Minutes = hour(Time) * 60 + minute(Time), AvgSpeed = mean(Speed)), by = Time]
x <- cluster_data[["Minutes"]]
y <-
  cluster_data[["AvgSpeed"]] %>% # use the full AvgSpeed series so the weights match x in length
{1 - (. - min(.)) / (max(.) - min(.))} %>% # Scale values to min = 1 and max = 0 such that low speeds represent high traffic
{0.5 + 1.5 * .} # Spread by factor 1.5 and scale from 0.5 to 2
clusters <- Ckmeans.1d.dp(x = x, k = 6, y = y)
plot(clusters)
# Create dataframe from Ckmeans.1d.dp output clusters
cluster_output <- data.frame(Minutes = today() + minutes(x),
RelAvgSpeed = y,
Cluster = clusters$cluster)
# Manually add points after 11pm to Cluster 1
cluster_output$Cluster <- cluster_output %$%
ifelse(x > 23 * 60, 1, Cluster)
write_csv(cluster_output, "4_r_data_output/cluster_output.csv", na = "")
|
9400712a60d5725b513353c589c48ea13fe78689
|
5833b6528abf04acc1c595942a37d2cd336d59dd
|
/R/waheco.R
|
1d866a5d16d693b3bc0cdae5b319785d09516372
|
[] |
no_license
|
sakho3600/WAHEco
|
f108a705e9372b07c098dcba9599128abde2e5e6
|
8dc13ed6df08bbee56860725d13f59df80845414
|
refs/heads/master
| 2020-05-20T12:49:52.297351
| 2017-08-21T22:06:14
| 2017-08-21T22:06:14
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 725
|
r
|
waheco.R
|
#' WAHEco: Web Application for Health Economics Decision Making
#'
#' `WAHEco` is an R toolset for creating a general custom
#' web application. It has a demo web modules for medico-economics
#' decision making, and also, anyone can create new customized modules
#' or components for later they are added to the web application
#' container AppRoot.
#'
#' @docType package
#' @name WAHEco
#'
#' @importFrom R6 R6Class
#' @importFrom shiny.router router_ui
#' @importFrom shiny.router make_router
#' @importFrom shiny.router route
#'
#' @importFrom shinydashboard dashboardPage
#' @importFrom shinydashboard dashboardHeader
#' @importFrom shinydashboard sidebarMenu
#' @importFrom shinydashboard dashboardBody
#'
NULL
|
18cbc8545075d781e485d6fc6a40b268d200bfe2
|
29c16e6ef5368f71d6b7a9990b6e5cf04cfe7525
|
/cachematrix.R
|
cb79bbad36d12aa0a29def52cff3c5111e7f6d8b
|
[] |
no_license
|
dario-fabietti/ProgrammingAssignment2
|
87f17fca437443b8f1816c892280f13db07e5b1a
|
cda5fa732f30b87258b8c1d4437f60373b2ef66a
|
refs/heads/master
| 2020-03-29T00:14:54.424233
| 2018-09-19T18:56:25
| 2018-09-19T18:56:25
| 149,331,110
| 0
| 0
| null | 2018-09-18T18:01:11
| 2018-09-18T18:01:10
| null |
UTF-8
|
R
| false
| false
| 1,361
|
r
|
cachematrix.R
|
## Creates a function with getters/setters for base matrix (get/set) and inverse (getinverse/setinverse)
makeCacheMatrix <- function(x = matrix()) {
i <- NULL
set <- function(y) {
x <<- y
i <<- NULL
}
get <- function() x
setinverse <- function(solve) i <<- solve
getinverse <- function() i
list(set = set, get = get,
setinverse = setinverse,
getinverse = getinverse)
}
## declare a function that takes a matrix as argument and:
## - if the inverse matrix has NOT been previously calculated, calculates the inverse of a matrix (supposed to be invertible by requirement)
## - if the inverse matrix has already been calculated, takes its value from the cache via $getinverse()
cacheSolve <- function(x, ...) {
i <- x$getinverse()
if(!is.null(i)) {
message("getting cached data")
return(i)
}
data <- x$get()
i <- solve(data, ...)
x$setinverse(i)
i
## Return a matrix that is the inverse of 'x'
}
# EXAMPLE:
# (code to run to test the function; please run it again in the very unlucky case of a not invertible matrix)
# [the probability of a random matrix to be non invertible is quite small (zero if calculated via integration on pdf)]
# CODE:
# B <- matrix(rnorm(4,0,1),2,2)
# show(B)
# B1 <- makeCacheMatrix(B)
# show(B1$get())
# cacheSolve(B1)
# B1$get()
# cacheSolve(B1)
|
23883dd3c69b66894f8b822f227668ea804a5357
|
ffdea92d4315e4363dd4ae673a1a6adf82a761b5
|
/data/genthat_extracted_code/MCMCpack/examples/HMMpanelRE.Rd.R
|
391bc3fc2ad3457f3fa6570051ab155ff83fb640
|
[] |
no_license
|
surayaaramli/typeRrh
|
d257ac8905c49123f4ccd4e377ee3dfc84d1636c
|
66e6996f31961bc8b9aafe1a6a6098327b66bf71
|
refs/heads/master
| 2023-05-05T04:05:31.617869
| 2019-04-25T22:10:06
| 2019-04-25T22:10:06
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 2,231
|
r
|
HMMpanelRE.Rd.R
|
library(MCMCpack)
### Name: HMMpanelRE
### Title: Markov Chain Monte Carlo for the Hidden Markov Random-effects
### Model
### Aliases: HMMpanelRE
### Keywords: models
### ** Examples
## Not run:
##D ## data generating
##D set.seed(1977)
##D Q <- 3
##D true.beta1 <- c(1, 1, 1) ; true.beta2 <- c(-1, -1, -1)
##D true.sigma2 <- c(2, 5); true.D1 <- diag(.5, Q); true.D2 <- diag(2.5, Q)
##D N=30; T=100;
##D NT <- N*T
##D x1 <- runif(NT, 1, 2)
##D x2 <- runif(NT, 1, 2)
##D X <- cbind(1, x1, x2); W <- X; y <- rep(NA, NT)
##D
##D ## true break numbers are one and at the center
##D break.point = rep(T/2, N); break.sigma=c(rep(1, N));
##D break.list <- rep(1, N)
##D id <- rep(1:N, each=NT/N)
##D K <- ncol(X);
##D ruler <- c(1:T)
##D
##D ## compute the weight for the break
##D W.mat <- matrix(NA, T, N)
##D for (i in 1:N){
##D W.mat[, i] <- pnorm((ruler-break.point[i])/break.sigma[i])
##D }
##D Weight <- as.vector(W.mat)
##D
##D ## data generating by weighting two means and variances
##D j = 1
##D for (i in 1:N){
##D Xi <- X[j:(j+T-1), ]
##D Wi <- W[j:(j+T-1), ]
##D true.V1 <- true.sigma2[1]*diag(T) + Wi%*%true.D1%*%t(Wi)
##D true.V2 <- true.sigma2[2]*diag(T) + Wi%*%true.D2%*%t(Wi)
##D true.mean1 <- Xi%*%true.beta1
##D true.mean2 <- Xi%*%true.beta2
##D weight <- Weight[j:(j+T-1)]
##D y[j:(j+T-1)] <- (1-weight)*true.mean1 + (1-weight)*chol(true.V1)%*%rnorm(T) +
##D weight*true.mean2 + weight*chol(true.V2)%*%rnorm(T)
##D j <- j + T
##D }
##D ## model fitting
##D subject.id <- c(rep(1:N, each=T))
##D time.id <- c(rep(1:T, N))
##D
##D ## model fitting
##D G <- 100
##D b0 <- rep(0, K) ; B0 <- solve(diag(100, K))
##D c0 <- 2; d0 <- 2
##D r0 <- 5; R0 <- diag(c(1, 0.1, 0.1))
##D subject.id <- c(rep(1:N, each=T))
##D time.id <- c(rep(1:T, N))
##D out1 <- HMMpanelRE(subject.id, time.id, y, X, W, m=1,
##D mcmc=G, burnin=G, thin=1, verbose=G,
##D b0=b0, B0=B0, c0=c0, d0=d0, r0=r0, R0=R0)
##D
##D ## latent state changes
##D plotState(out1)
##D
##D ## print mcmc output
##D summary(out1)
##D
##D
##D
## End(Not run)
|
4aacde7e292370172915bdcd6ec1fa201b896615
|
773204c6936ef4c5fe916494ab61e8acb320758c
|
/WGCNA_StepwiseScripts_CAGEseq/WGCNA_S2_FronSamps_AllCasesNConts_CAGEseq.R
|
ffc892adfbe1fe28c38ffabce325628ac1fb87b9
|
[] |
no_license
|
Tenzin-Nyima-1/RiMod-FTD
|
2f0c96df599776a83a74d0ade9550acfe971551c
|
f8bbb879397834011910b877a77090e89317ba4b
|
refs/heads/master
| 2020-03-06T16:05:26.283354
| 2018-05-04T07:42:46
| 2018-05-04T07:42:46
| 126,967,131
| 1
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 10,289
|
r
|
WGCNA_S2_FronSamps_AllCasesNConts_CAGEseq.R
|
# Date: 26-27 March 2018;
# Project: RIMOD-FTD;
# Aim: Weighted Gene Coexpression Network Analysis(WGCNA) of frontal samples: all cases and control samples;
# NOTE: the annotation of the count-table was carried using RIKEN's CAT gtf file as the gene model;
# NOTE 2: the same script could be adapted for the count table from: all 7 brain regions; and, temporal brain;
# Workflow (main steps);
# Step 2: step-by-step network construction and module detection;
options(stringsAsFactors=FALSE)
library(AnnotationDbi)
library(edgeR)
library(stringi)
library(stringr)
library(muStat)
library(WGCNA)
library(edgeR)
library(impute)
library(GO.db)
library(preprocessCore)
enableWGCNAThreads()
# #################################################################;
# I. Upload the workspace that prepared the data for WGCNA analysis;
# _________________________________________________________________;
#load("/home/student-ag-javier/Documents/Project-FTD/Correct_RefGenome_Res_Count_Data/Analy_4m_20Jan17_RikenGeneModel/Scripts/Fron_WGCNA_RWrkSpc/WGCNA_S1_FronSamps_AllCasesNConts_CAGEseq.RData")
# 1. recall the final expression data;
dim(reqFro_alCasesVsCont_normd_logcpm_Trs)
# [1] 48 21051;
Fro_alCasesVsCont_nGenes
# [1] 21051;
Fro_alCasesVsCont_nSamples
# [1] 48;
# -@sample-id prepared from the above table;
# reqFro_AlCaseVSCont_SampIDs <- stri_split_fixed(stri_split_fixed(rownames(reqFro_alCasesVsCont_normd_logcpm_Trs), "sample_", simplify = T)[,2], "_no", simplify = T)[,1]
length(reqFro_AlCaseVSCont_SampIDs)
# [1] 48;
# 2. trait info ordered as per the order of samples in the expres set;
dim(reqFroAlCaseVsCont_TraitInfo)
# [1] 48 14; NOTE: contains cols with char class;
# -@all col classes converted into numeric classes;
dim(finFroAlCaseVsCont_TraitInfo)
# [1] 48 13;
# --@Crosschecks if the transformation was done correctly;
TemChkCols <- c("Number_of_input_reads", "Number_of_Uniquely_mapped_reads", "Ratio_of_uniquely_mapped_reads", "RIN", "LINKERS", "AGE", "GENDER", "PMD.MIN.", "PH")
unique(unlist(
lapply(seq(TemChkCols), function(i)
if(is.numeric(reqFroAlCaseVsCont_TraitInfo[, TemChkCols[i]])) {
identical(finFroAlCaseVsCont_TraitInfo[, TemChkCols[i]], as.numeric(reqFroAlCaseVsCont_TraitInfo[, TemChkCols[i]]))
} else {
identical(finFroAlCaseVsCont_TraitInfo[, TemChkCols[i]], as.numeric(factor(reqFroAlCaseVsCont_TraitInfo[, TemChkCols[i]])))
})
))
# [1] TRUE;
# ---@keep only the required objects;
length(ls())
# [1] 32;
rm(list = ls()[-(match(c("reqFro_alCasesVsCont_normd_logcpm_Trs", "Fro_alCasesVsCont_nGenes", "Fro_alCasesVsCont_nSamples", "reqFro_AlCaseVSCont_SampIDs", "reqFroAlCaseVsCont_TraitInfo", "finFroAlCaseVsCont_TraitInfo"), ls()))])
length(ls())
# [1] 6;
collectGarbage();
# ######################################;
# II. Unsigned network topology analysis;
# ______________________________________;
setwd("/home/student-ag-javier/Documents/Project-FTD/Correct_RefGenome_Res_Count_Data/Analy_4m_20Jan17_RikenGeneModel/Fron_WGCNA/Unsigned/All_CasesVSCont/Step2")
# 1. Construct weighted gene network: choose soft thresholding power beta to be raised to coexpression similarity to calculate adjacency;
powers <- 1:20
cex1 <- 0.9;
# -@Recommends: Choose the smallest power for which R^2>0.8 or, if a saturation curve results, choose the power at the knee of the saturation curve;
sft_UnSign <- pickSoftThreshold(reqFro_alCasesVsCont_normd_logcpm_Trs, powerVector = powers, networkType = "unsigned", verbose = 5)
# a.1) plot the result;
h1 <- 0.8781268
pdf("Fro_alCasesVsCont_SFT_UnSign", width = 14, height = 8)
par(mfrow = c(1,2));
# Scale-free topology fit index as a function of the soft-thresholding power
plot(sft_UnSign$fitIndices[,1], -sign(sft_UnSign$fitIndices[,3])*sft_UnSign$fitIndices[,2], xlab="Soft Threshold (power)",ylab="Scale Free Topology Model Fit,signed R^2",type="n", main = paste("Scale independence"));
text(sft_UnSign$fitIndices[,1], -sign(sft_UnSign$fitIndices[,3])*sft_UnSign$fitIndices[,2], labels=powers,cex=cex1,col="red");
# this line corresponds to using an R^2 cut-off of h
abline(h=h1,col="red")
# Mean connectivity as a function of the soft-thresholding power
plot(sft_UnSign$fitIndices[,1], sft_UnSign$fitIndices[,5], xlab="Soft Threshold (power)",ylab="Mean Connectivity", type="n", main = paste("Mean connectivity"))
text(sft_UnSign$fitIndices[,1], sft_UnSign$fitIndices[,5], labels=powers, cex=cex1,col="red")
dev.off()
# -@get the Rsq value for the chosen beta value;
sft_UnSignplotDF <- data.frame(pwr = sft_UnSign$fitIndices[,1], RSq = -sign(sft_UnSign$fitIndices[,3])*sft_UnSign$fitIndices[,2])
dim(sft_UnSignplotDF)
# [1] 20 2;
sft_UnSignplotDF[7:8,]
# pwr RSq
# 7 7 0.8781268
# 8 8 0.8869769
# a.1.1) inspect scalefree plot for both: beta = 7; and, beta = 8;
Fro_alCasesVsCont_k7 <- softConnectivity(reqFro_alCasesVsCont_normd_logcpm_Trs, power = 7, type = "unsigned")
Fro_alCasesVsCont_k8 <- softConnectivity(reqFro_alCasesVsCont_normd_logcpm_Trs, power = 8, type = "unsigned")
pdf("Fro_alCasesVsCont_UnSign_ScaleFreePlot", width = 14, height = 8)
par(mfrow = c(1,2))
scaleFreePlot(Fro_alCasesVsCont_k7, main = "Scale free plot(beta=7) - unsigned Fro(all cases and controls)\n", truncated = T)
scaleFreePlot(Fro_alCasesVsCont_k8, main = "Scale free plot(beta=8) - unsigned Fro(all cases and controls)\n", truncated = T)
dev.off()
# a.2) calculate coexpression similarity and adjacency using the chosen soft thresholding power;
sft_UnSign_Pwr <- 7;
# Note: chose this power because it is the smallest power at which mean connectivity saturates; at power 8 the R^2 is already almost 0.9;
Fro_alCasesVsCont_Adj <- adjacency(reqFro_alCasesVsCont_normd_logcpm_Trs, type = "unsigned", power = sft_UnSign_Pwr)
dim(Fro_alCasesVsCont_Adj)
# [1] 21051 21051;
# a.3) convert adjacency into TOM in order to avoid noisy association and calculate the dissimilarities;
Fro_alCasesVsCont_Adj2TOM <- TOMsimilarity(Fro_alCasesVsCont_Adj, TOMType = "unsigned")
dim(Fro_alCasesVsCont_Adj2TOM)
# [1] 21051 21051;
Fro_alCasesVsCont_Adj2TOM_disim <- 1 - Fro_alCasesVsCont_Adj2TOM
dim(Fro_alCasesVsCont_Adj2TOM_disim)
# [1] 21051 21051;
# a.4) clustering using TOM;
Fro_alCasesVsCont_Adj2TOM_disim_geneTre <- hclust(as.dist(Fro_alCasesVsCont_Adj2TOM_disim), method = "average")
# a.4.1 plot the results;
pdf("Fro_alCasesVsCont_TOMdisim_geneTre_UnSign", width = 10, height = 6)
plot(Fro_alCasesVsCont_Adj2TOM_disim_geneTre, xlab="", sub="", main = "Frontal: gene clustering on TOM-based dissimilarity", labels = FALSE, hang = 0.04)
dev.off()
# a.5) module identification using dynamic tree cut;
Fro_alCasesVsCont_dynamicMods <- cutreeDynamic(dendro = Fro_alCasesVsCont_Adj2TOM_disim_geneTre, distM = Fro_alCasesVsCont_Adj2TOM_disim, deepSplit = 3, pamRespectsDendro = FALSE, minClusterSize = 50)
sort(table(Fro_alCasesVsCont_dynamicMods))
# a.5.1 convert numeric labels into colors;
Fro_alCasesVsCont_dynamicMods_Cols <- labels2colors(Fro_alCasesVsCont_dynamicMods)
table(Fro_alCasesVsCont_dynamicMods_Cols)
# a.5.2 plot the dendogram and colors underneath;
pdf("Fro_alCasesVsCont_TOMdisim_geneTre_min50ModMap", width = 15, height = 10)
plotDendroAndColors(Fro_alCasesVsCont_Adj2TOM_disim_geneTre, Fro_alCasesVsCont_dynamicMods_Cols, "Dynamic Tree Cut", dendroLabels = FALSE, hang = 0.03, addGuide = TRUE, guideHang = 0.05, main = "Frontal-brain: gene dendrogram and module colors")
dev.off()
# a.6) Merge modules with similar expression profiles;
# a.6.1 Calculate eigengenes;
reqFro_alCasesVsCont_normd_logcpm_Trs_MELs <- moduleEigengenes(reqFro_alCasesVsCont_normd_logcpm_Trs, colors = Fro_alCasesVsCont_dynamicMods_Cols, softPower = sft_UnSign_Pwr)
reqFro_alCasesVsCont_normd_logcpm_Trs_MEs <- reqFro_alCasesVsCont_normd_logcpm_Trs_MELs$eigengenes
dim(reqFro_alCasesVsCont_normd_logcpm_Trs_MEs)
# [1] 48 43;
# a.6.2 Calculate dissimilarity of module eigengenes
reqFro_alCasesVsCont_normd_logcpm_Trs_MEDiss <- 1-cor(reqFro_alCasesVsCont_normd_logcpm_Trs_MEs)
dim(reqFro_alCasesVsCont_normd_logcpm_Trs_MEDiss)
# [1] 43 43;
# a.6.3 Cluster module eigengenes;
reqFro_alCasesVsCont_normd_logcpm_Trs_disMETree <- hclust(as.dist(reqFro_alCasesVsCont_normd_logcpm_Trs_MEDiss), method = "average");
# @Plot the module eigengenes cluster;
pdf("unSgnFro_alCasesVsCont_MEsclusters_min50", width = 15, height = 10)
plot(reqFro_alCasesVsCont_normd_logcpm_Trs_disMETree, main = "Frontal: clustering of module eigengenes", xlab = "", sub = "")
MEDissThres = 0.1
# Plot the cut line into the dendrogram
abline(h=MEDissThres, col = "red")
dev.off()
# a.6.4 Call an automatic merging function;
Fro_alCasesVsCont_normlogcpmTrs_meMrg <- mergeCloseModules(reqFro_alCasesVsCont_normd_logcpm_Trs, Fro_alCasesVsCont_dynamicMods_Cols, cutHeight = MEDissThres, verbose = 3)
dim(Fro_alCasesVsCont_normlogcpmTrs_meMrg$newMEs)
# [1] 48 39;
# @the merged module colors
Fro_alCasesVsCont_normlogcpmTrs_meMrgCols <- Fro_alCasesVsCont_normlogcpmTrs_meMrg$colors;
# @Eigengenes of the new merged modules:
Fro_alCasesVsCont_normlogcpmTrs_meMrg_MEs <- Fro_alCasesVsCont_normlogcpmTrs_meMrg$newMEs;
dim(Fro_alCasesVsCont_normlogcpmTrs_meMrg_MEs)
# [1] 48 39;
# 5. visualize the clusters along with the profile before and after merging the modules;
pdf("Fro_alCasesVsCont_TOMdisimgeneTre_b4afModMrg_min50Cpt1", width = 15, height = 10)
plotDendroAndColors(Fro_alCasesVsCont_Adj2TOM_disim_geneTre, cbind(Fro_alCasesVsCont_dynamicMods_Cols, Fro_alCasesVsCont_normlogcpmTrs_meMrgCols), c("Dynamic Tree Cut", "Merged dynamic"), dendroLabels = FALSE, hang = 0.03, addGuide = TRUE, guideHang = 0.05, main = "Frontal: cluster dendrogram")
dev.off()
# #######################;
# III. Save the workspace;
# _______________________;
setwd("/home/student-ag-javier/Documents/Project-FTD/Correct_RefGenome_Res_Count_Data/Analy_4m_20Jan17_RikenGeneModel/Scripts/Fron_WGCNA_RWrkSpc")
save.image("WGCNA_S2_FronSamps_AllCasesNConts_CAGEseq.RData")
# load("/home/student-ag-javier/Documents/Project-FTD/Correct_RefGenome_Res_Count_Data/Analy_4m_20Jan17_RikenGeneModel/Scripts/Fron_WGCNA_RWrkSpc/WGCNA_S2_FronSamps_AllCasesNConts_CAGEseq.RData")
# END ____________________________________________________________________________________________________________________________________________________________________________________________;
|
9a9f5234d58dbc8683f9d7971f4e1017e0ae2bb4
|
0bc5b269a91a2c771728c9a81994ca442f2a9e8f
|
/Week 03/pre-class-03.R
|
e6c1a31b03a2082099a2906140cbbdc2d4c3902c
|
[] |
no_license
|
PHP-2560/pre-class-work-2018-lcamillo
|
d9841a57c9581d48ac4d48bbbc0190b9bf572cdd
|
84b18e5f946dcaf868e9aa92dada5026a081ea70
|
refs/heads/master
| 2020-03-28T22:11:00.131805
| 2018-12-02T14:23:17
| 2018-12-02T14:23:17
| 149,211,088
| 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,930
|
r
|
pre-class-03.R
|
#PHP2560 preclass03
#Lucas Paulo de Lima Camillo
#B01478147
install.packages("gapminder")
library(dplyr)
library(gapminder)
gapminder = gapminder # dataset is called gapminder
#1)
# 52 in Africa, 25 in Americas, 33 in Asia, 30 in Europe, 2 in Oceania
gapminder %>%
group_by(continent) %>%
distinct(country) %>%
count()
#2)
# It is Albania with 3193 for gdpPercap
gapminder %>%
filter(continent == "Europe", year == "1997") %>%
arrange(gdpPercap)
#3)
# 52.5 in Africa, 67.2 in Americas, 63.7 in Asia, 73.2 in Europe, 74.8 in Oceania
gapminder %>%
filter(year >= 1980, year < 1990) %>%
group_by(continent) %>%
summarise(avg_life = mean(lifeExp))
#4)
# Kuwait, Switzerland, Norway, United State, and Canada respectively
gapminder %>%
group_by(country) %>%
summarise(avg_gdp = sum(gdpPercap)) %>%
arrange(desc(avg_gdp))
#5)
#there were 22 data points with lifeExp of at least 80 years. Check output
gapminder %>%
group_by(country, year) %>%
filter(lifeExp >= 80) %>%
select(country, year, lifeExp)
#6)
# Brazil, Mauritania, France, Switzerland, Pakistan, Indonesia, Equatorial Guinea, Comoros, Nicaragua, and Guatemala respectively
gapminder %>%
group_by(country) %>%
summarise(correlation = cor(year, lifeExp)) %>%
arrange(desc(abs(correlation)))
#7)
# America in the more recent years has the highest average population per country
gapminder %>%
filter(continent != "Asia") %>%
group_by(continent, year) %>%
summarize(avg_pop = mean(pop)) %>%
arrange(desc(avg_pop))
#8)
# Sao Tome and Principe (45906), Iceland (48542), and Montenegro (99738)
gapminder %>%
group_by(country) %>%
summarise(sd_pop = sd(pop)) %>%
arrange(sd_pop)
#9)
# gm1992 is a tibble
gm1992 = gapminder %>%
filter(year == "1992")
class(gm1992)
#10)
#Check output
gapminder %>%
arrange(country, year) %>%
group_by(country) %>%
filter(lifeExp - lag(lifeExp) > 0, pop - lag(pop) < 0)
|
84fc49b3afd60671fb11da4003299e0c2a1eb49e
|
eeb0928fceb45fbb03cda7b01bd8aa2120b8b7c2
|
/my_functions/trading_calendar.R
|
375fbece2ba8fbc91835a054ebabfa6f02e9e073
|
[] |
no_license
|
wadewuma/R
|
37ee486f49f94702e1ad1446c23b96e05d1b8a6b
|
f5648f08f15b5c7c7d516484dc557dcd901dce66
|
refs/heads/master
| 2020-05-29T10:30:45.239956
| 2018-04-27T21:14:05
| 2018-04-27T21:14:05
| null | 0
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 1,208
|
r
|
trading_calendar.R
|
#
# Get a vector of trading days. These are week days, with holidays removed. The trading days
# are relative to the New York Stock Exchange (NYSE).
#
# startDate and endDate are Date objects. The function returns a date ordered vector of Date
# objects which are the trading days.
#
# This function requires the timeDate package
#
# Credit to Ian Kaplan, I modified syntax styling and renamed the function (tradingCalendar)
# Source: http://www.bearcave.com/finance/random_r_hacks/
trading_calendar <- function( startDate, endDate)
{
require(timeDate)
timeSeq <- timeSequence(from = as.character(startDate),
to = as.character(endDate),
by="day", format = "%Y-%m-%d",
zone = "NewYork", FinCenter = "America/New_York")
ix <- as.logical(isWeekday(timeSeq, wday = 1:5))
tradingDays <- timeSeq[ix]
startYear <- as.POSIXlt(startDate)$year + 1900
endYear <- as.POSIXlt(endDate)$year + 1900
tradingDays.dt <- as.Date(tradingDays)
hol <- as.Date(holidayNYSE(startYear:endYear))
ix <- which(tradingDays.dt %in% hol)
tradingDays.dt <- tradingDays.dt[-ix]
return(tradingDays.dt)
} # trading_calendar
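# Hypothetical usage sketch (dates are illustrative; requires the timeDate package):
# jan_days <- trading_calendar(as.Date("2020-01-02"), as.Date("2020-01-31"))
# length(jan_days) # number of NYSE trading days in that window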
|
7f228edc25d100e679a3d90906e294250e0c96db
|
ff23cd7fb4222edef3e18400c8b73ff3d468fe44
|
/tests/testthat.R
|
43bf8cc023acf7a7eeb25bb05d1473fb0edec429
|
[
"MIT"
] |
permissive
|
DavisVaughan/vacation
|
428a1c6d08ea3630c827bb359ac2b418307baea7
|
5fcf4fecf83cbb43896b74ceda93f3413584937d
|
refs/heads/master
| 2022-05-23T07:54:28.778391
| 2020-04-28T21:22:03
| 2020-04-28T21:22:03
| 258,014,145
| 6
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 60
|
r
|
testthat.R
|
library(testthat)
library(vacation)
test_check("vacation")
|
64adc7ca94a0df201274666935194cdcd0c1ff8f
|
cd2b951244f109c24554502f0c17a3c8bc59f6bc
|
/R/data_preparation_target_species.R
|
7360687ef8778861f9b0af1d84ffbb3c27febf3f
|
[] |
no_license
|
SunnyTseng/RADI_Taiwan_2020
|
976089bab0283b99dab07357685268271599d867
|
002f00772354efcfc18847ed74d077c5e7d62aa4
|
refs/heads/main
| 2023-05-06T23:30:42.199016
| 2021-06-01T07:37:22
| 2021-06-01T07:37:22
| 319,213,515
| 2
| 0
| null | null | null | null |
UTF-8
|
R
| false
| false
| 6,748
|
r
|
data_preparation_target_species.R
|
#############################################################################
### Author: Sunny ###
### Project: RADI ###
### Purpose: Select a specific target species and output a analysis-ready ###
### dataset for model fitting ###
#############################################################################
data_preparation_target_species <- function(dir_eBird = here("data", "main_processed", "data_eBird_qualified.csv"),
dir_predictors = here("data", "main_processed", "data_eBird_qualified_predictors.csv"),
target_species,
path){
data_eBird <- read_csv(dir_eBird)
data_predictors <- read_csv(dir_predictors)
data <- left_join(data_eBird, data_predictors, by = "global_unique_identifier")
rm(data_eBird)
rm(data_predictors)
data_detection <- data %>%
filter(scientific_name == target_species) %>%
distinct(sampling_event_identifier, .keep_all = TRUE) %>%
mutate(detection = 1)
data_no_detection <- data %>%
filter(!sampling_event_identifier %in% data_detection$sampling_event_identifier) %>%
filter(!duplicated(sampling_event_identifier)) %>%
mutate(detection = 0)
data <- bind_rows(data_detection, data_no_detection) %>%
mutate(observation_count = if_else(detection == 1, observation_count, 0)) %>%
mutate(climate_2010s_prec = case_when(month == 1 ~ climate_2010s_prec01,
month == 2 ~ climate_2010s_prec02,
month == 3 ~ climate_2010s_prec03,
month == 4 ~ climate_2010s_prec04,
month == 5 ~ climate_2010s_prec05,
month == 6 ~ climate_2010s_prec06,
month == 7 ~ climate_2010s_prec07,
month == 8 ~ climate_2010s_prec08,
month == 9 ~ climate_2010s_prec09,
month == 10 ~ climate_2010s_prec10,
month == 11 ~ climate_2010s_prec11,
month == 12 ~ climate_2010s_prec12),
climate_2010s_temp = case_when(month == 1 ~ climate_2010s_temp01,
month == 2 ~ climate_2010s_temp02,
month == 3 ~ climate_2010s_temp03,
month == 4 ~ climate_2010s_temp04,
month == 5 ~ climate_2010s_temp05,
month == 6 ~ climate_2010s_temp06,
month == 7 ~ climate_2010s_temp07,
month == 8 ~ climate_2010s_temp08,
month == 9 ~ climate_2010s_temp09,
month == 10 ~ climate_2010s_temp10,
month == 11 ~ climate_2010s_temp11,
month == 12 ~ climate_2010s_temp12),
climate_2010s_tmax = case_when(month == 1 ~ climate_2010s_tmax01,
month == 2 ~ climate_2010s_tmax02,
month == 3 ~ climate_2010s_tmax03,
month == 4 ~ climate_2010s_tmax04,
month == 5 ~ climate_2010s_tmax05,
month == 6 ~ climate_2010s_tmax06,
month == 7 ~ climate_2010s_tmax07,
month == 8 ~ climate_2010s_tmax08,
month == 9 ~ climate_2010s_tmax09,
month == 10 ~ climate_2010s_tmax10,
month == 11 ~ climate_2010s_tmax11,
month == 12 ~ climate_2010s_tmax12),
climate_2010s_tmin = case_when(month == 1 ~ climate_2010s_tmin01,
month == 2 ~ climate_2010s_tmin02,
month == 3 ~ climate_2010s_tmin03,
month == 4 ~ climate_2010s_tmin04,
month == 5 ~ climate_2010s_tmin05,
month == 6 ~ climate_2010s_tmin06,
month == 7 ~ climate_2010s_tmin07,
month == 8 ~ climate_2010s_tmin08,
month == 9 ~ climate_2010s_tmin09,
month == 10 ~ climate_2010s_tmin10,
month == 11 ~ climate_2010s_tmin11,
month == 12 ~ climate_2010s_tmin12),
climate_2010s_tra = case_when(month == 1 ~ climate_2010s_tra01,
month == 2 ~ climate_2010s_tra02,
month == 3 ~ climate_2010s_tra03,
month == 4 ~ climate_2010s_tra04,
month == 5 ~ climate_2010s_tra05,
month == 6 ~ climate_2010s_tra06,
month == 7 ~ climate_2010s_tra07,
month == 8 ~ climate_2010s_tra08,
month == 9 ~ climate_2010s_tra09,
month == 10 ~ climate_2010s_tra10,
month == 11 ~ climate_2010s_tra11,
month == 12 ~ climate_2010s_tra12))
data <- data %>%
select(detection, observation_count,
duration_minutes, protocol_type, effort_distance_km, number_observers,
hour, week, day, year,
latitude, longitude,
starts_with("dtm"), starts_with("climate_2010s_bio"), starts_with("landuse_2010s_"), starts_with("other_"),
climate_2010s_prec,
climate_2010s_temp,
climate_2010s_tmax,
climate_2010s_tmin,
climate_2010s_tra)
rm(data_detection)
rm(data_no_detection)
write_csv(data, path)
return(NULL)
}
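# A hypothetical call (the species name and output path are placeholders; the two dir_*
# arguments default to the project's processed eBird files, and here()/read_csv are
# assumed to be available from the calling script):
# data_preparation_target_species(target_species = "Passer montanus",
#                                 path = here("data", "main_processed", "data_target_species.csv"))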
# ---- cran/usa :: /man/facts.Rd (no license) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/data.R
\docType{data}
\name{facts}
\alias{facts}
\title{US State Facts}
\format{A tibble with 52 rows and 9 variables:
\describe{
\item{name}{Full state name}
\item{population}{Population estimate (September 26, 2019)}
\item{admission}{The data which the state was admitted to the union}
\item{income}{Per capita income (2018)}
\item{life_exp}{Life expectancy in years (2017-18)}
\item{murder}{Murder rate per 100,000 population (2018)}
\item{high}{Percent adult population with at least a high school degree (2019)}
\item{bach}{Percent adult population with at least a bachelor's degree or greater (2019)}
\item{heat}{Mean number of degree days (temperature requires heating) per year from 1981-2010}
}}
\source{
\itemize{
\item Population: \url{https://www2.census.gov/programs-surveys/popest/datasets/2010-2018/state/detail/SCPRC-EST2018-18+POP-RES.csv}
\item Income: \url{https://data.census.gov/cedsci/table?tid=ACSST1Y2018.S1903}
\item GDP: \url{https://www.bea.gov/system/files/2019-11/qgdpstate1119.xlsx}
\item Literacy: \url{https://nces.ed.gov/naal/estimates/StateEstimates.aspx}
\item Life Expectancy: \url{https://www.cia.gov/library/publications/the-world-factbook/geos/aq.html}
\item Murder: \url{https://ucr.fbi.gov/crime-in-the-u.s/2018/crime-in-the-u.s.-2018/tables/table-4/table-4.xls/output.xls}
\item Education: \url{https://data.census.gov/cedsci/table?q=S1501}
\item Temperature: \url{ftp://ftp.ncdc.noaa.gov/pub/data/normals/1981-2010/products/temperature/ann-cldd-normal.txt}
}
}
\usage{
facts
}
\description{
Updated version of the \link[datasets:state.x77]{datasets::state.x77} matrix, which provides eight
statistics from the 1970's. This version is a modern data frame format
with updated (and alternative) statistics.
}
\keyword{datasets}
# ---- surayaaramli/typeRrh :: /data/genthat_extracted_code/ev.trawl/examples/ComputeB3Exp.Rd.R (no license) ----
library(ev.trawl)
### Name: ComputeB3Exp
### Title: Wrapper to compute difference area between 't1' and 't2' under
### exponential trawl function.
### Aliases: ComputeB3Exp
### ** Examples
ComputeB3Exp(1, t1 = 3, t2 = 5)
ComputeB3Exp(0.2, t1 = 7, t2 = 3)
# ---- gibonet/distrr :: /man/extract_unique.Rd (no license) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/gibutils.R
\name{extract_unique}
\alias{extract_unique}
\alias{extract_unique2}
\alias{extract_unique3}
\alias{extract_unique4}
\alias{extract_unique5}
\title{Functions to be used in conjunction with 'dcc' family}
\usage{
extract_unique(df)
extract_unique2(df)
extract_unique3(df)
extract_unique4(df)
extract_unique5(df)
}
\arguments{
\item{df}{a data frame}
}
\value{
a list whose elements are character vectors of the unique values of each column
}
\description{
Functions to be used in conjunction with 'dcc' family
}
\examples{
data("invented_wages")
tmp <- extract_unique(df = invented_wages[ , c("gender", "sector")])
tmp
str(tmp)
}
# ---- akhikolla/updatedatatype-list2 :: /palm/inst/testfiles/pbc_distances/libFuzzer_pbc_distances/pbc_distances_valgrind_files/1612988054-test.R (no license) ----
testlist <- list(lims = structure(c(-1.76555087703612e-19, 5.22851419827981e+54, 9.8165023169435e-310, 5.22851419824833e+54, 5.22851419824833e+54, 5.22851419824833e+54, 5.22851419824833e+54, 5.22851419824833e+54, 5.36359494858027e+54, 5.22829688768159e+54, 1.09586952513592e-193, 7.81911558904982e-148, 0, 5.43164610119136e-312, 2.39844110096982e-191, 1.2136247081529e+132, 1.75066789819473e+180, 5.02025180386348e-251, 5.78739763822273e-275, 4.69579933409069e-304, 8.76165456480013e-308, 1.95850479489951e+179, 2.08227334401466e+262, 1.57613751107026e-152, 4.14080040471846e+204, 2.09675031221576e-231, 1.22034214522788e-321, 9.94699670869689e-203, 0, 0), .Dim = c(3L, 10L)), points = structure(c(1.38536620187834e-309, 5.59747981919088e-275, 2.52961610670718e-320, 6.76183164942935e-80, 8.8963109262115e-259, NaN, 35740566642812256256, 8.62726246366681e-308, 7.60176995525638e-270, NA, -Inf, 7.2911220195564e-304, 3.40896759841774e-82, 2.69156984300327e-231, 1.41597971816977e+48, 1.26530211899943e-320, 27597764530107584512, 4.76772865070877e-157, 0, 0, 7.29070490054344e-304, 30836163553328627712, 3.9928958199855e-305, 1.32879384686912e-309, -Inf, 6.76527620989911e-251, 1.66826217421774e-308, NaN), .Dim = c(7L, 4L)))
result <- do.call(palm:::pbc_distances,testlist)
str(result)
# ---- cran/variosig :: /tests/testthat/test.R (no license) ----
library(variosig)
library(gstat)
library(sp)
library(testthat)
data(meuse)
coordinates(meuse) = ~x+y
vario0 <- variogram(zinc~1, data = meuse)
perm <- envelope(vario0, data = meuse, formula = zinc~1, nsim = 9, cutoff = 1500)
expect_that(perm, is_a("list"))
expect_that(envsig(perm, method = "eb"), is_a("list"))
expect_that(envplot(perm), is_a("character"))
vario1 <- variogram(log(zinc)~1, data = meuse)
perm <- envelope(vario1, data = meuse, formula = log(zinc)~1, nsim = 9)
expect_that(perm, is_a("list"))
expect_that(envsig(perm, index = 2, method = "eb"), is_a("list"))
expect_that(envplot(perm), is_a("character"))
if (requireNamespace("geoR", quietly = TRUE)) {
library(geoR)
meuse <- as.geodata(obj = meuse, data.col = 4, covar.col = 1:3,
covar.names = c("cadmium", "copper","lead"))
meuse$data <- log(meuse$data)
vario2 <- variog(meuse,max.dist=1500)
perm2 <- envelope(vario2, meuse, nsim=9, cluster = TRUE, n.cluster = 10, max.dist=1500)
expect_that(perm2, is_a("list"))
expect_that(envsig(perm2, method = "eb"), is_a("list"))
expect_that(envplot(perm2), is_a("character"))
}
# ---- meyer-lab-cshl/transcriptomic-diversity-human-mTECs :: /ERE-analysis/analysis/R_scripts/annotate_overlaps_local.R (no license) ----
library(tidyverse)
library(GenomicRanges)
library(glue)
working_directory = '~/Desktop/thymus-epitope-mapping/ERE-analysis/analysis'
results_df_local_ERE = readRDS(file = glue('{working_directory}/R_variables/results_df_local_ERE'))
input = readRDS(file = glue('{working_directory}/R_variables/GRanges_ERE_start')) %>%
  subset(locus %in% subset(results_df_local_ERE, significant == T)$locus)
GRanges_gene_extended = readRDS(file = glue('{working_directory}/R_variables/GRanges_gene_extended'))
up_genes = readRDS(file = glue('{working_directory}/R_variables/up_genes'))
down_genes = readRDS(file = glue('{working_directory}/R_variables/down_genes'))
diff_genes = append(up_genes, down_genes)
overlaps = as.data.frame(findOverlaps(query = input,
subject = GRanges_gene_extended))
report = vector(length = length(unique(overlaps$queryHits)))
for (entry in 1:length(input)){
if(!(entry %in% overlaps$queryHits)){
report[entry] = 'none'
}
else{
gene_hits = subset(overlaps, queryHits == entry)$subjectHits
up = GRanges_gene_extended[gene_hits, ]$Geneid %in% up_genes
down = GRanges_gene_extended[gene_hits, ]$Geneid %in% down_genes
unchanged = !(GRanges_gene_extended[gene_hits, ]$Geneid %in% diff_genes)
expression = list('up' = length(up[up == T]),
'down' = length(down[down == T]),
'unchanged' = length(unchanged[unchanged == T]))
if(which.max(expression) == 1){
report[entry] = 'up'
}
if(which.max(expression) == 2){
report[entry] = 'down'
}
if(which.max(expression) == 3){
report[entry] = 'unchanged'
}
if(which.max(expression) != 1 &which.max(expression) != 2 & which.max(expression) != 3){
report[entry] = 'other'
}
}
print(entry/length(input) * 100)
}
input$overlap_expression = report
saveRDS(input, file = glue('{working_directory}/R_variables/overlap_annotated_GRanges_gene_extended'))
# ---- smbache/zeroclipr :: /inst/examples/example2.R (MIT) ----
library(zeroclipr)
library(shiny)
# Simple example of a clipboard button for copying data as csv to clipboard
#
runApp(list(
# The UI
ui = bootstrapPage(
# Control the styling of the button. This is not necessary but will make the
# button feel more natural, with the flash object on top:
singleton(tags$head(tags$style(
'
.zeroclipboard-is-hover { background-color: steelblue; }
.zeroclipboard-is-active { background-color: firebrick; }
'
))),
# A select input for selecting subset
selectInput("species", "Select Species: ", choices = c("setosa", "versicolor", "virginica")),
# The UI placeholder for the copy button
uiOutput("clip"),
# A text input for testing the clipboard content.
textInput("paste", "Paste here:")
),
# The server
server = function(input, output) {
# A reactive data source, based on the input$species
species_data <- reactive({
subset(iris, Species == input$species)
})
# Create the text representation of the subset for the clipboard and
# setup the copy button.
output$clip <- renderUI({
str <- textConnection("irisdata", open = "w")
write.csv(species_data(), str, row.names = FALSE)
close(str)
zeroclipButton("clipbtn", "Copy", paste(irisdata, collapse= "\n"), icon("clipboard"))
})
}
))
# ---- rzliu2001/TF-finder :: /ascca_CV_gamma_command_line.R (no license) ----
#options(echo = FALSE)
library(MASS)
library(stats)
source("func.R")
# Import x.data
# Import y.data
args <- commandArgs(trailingOnly = TRUE)
x.data = read.table(args[1])
x.map = x.data[,1]
y.data = read.table(args[2])
y.map = y.data[,1]
#rows are observations, columns are genes(variables)
x.data = t(x.data[,-1])
y.data = t(y.data[,-1])
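# Example invocation from the shell (a sketch; the input file names are placeholders,
# and func.R must be present in the working directory since it is sourced above):
#   Rscript ascca_CV_gamma_command_line.R x_expression.txt y_expression.txt
# Each input file is expected to have identifiers in column 1 and one column per sample,
# so that after transposing, rows are observations and columns are genes (variables).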
n.cv <- 5 # Number of cross-validation steps to select sparseness parameters
p <- length(x.map) # Number of varibles in X
q <- length(y.map) # Number of varibles in Y
n.sample <- nrow(x.data) # Sample size (# of observations or chips)
### _______________________________________________________________________________________
### Setting sparseness parameters
### _______________________________________________________________________________________
max.v = 0.4
max.u = 0.4
step.lambda = 0.01
min.gamma = 0 # start of gamma
max.gamma = 2 # end of gamma
step.gamma = 0.1
lambda.v.seq <- seq(0, max.v, by=step.lambda) # Possible values of sparseness parameters for data Y. Lower bound should be 0; the upper bound can be increased to 0.2.
lambda.u.seq <- seq(0, max.u, by=step.lambda) # Possible values of sparseness parameters for data X. Lower bound should be 0; the upper bound can be increased to 0.2.
gamma.seq <- seq(min.gamma, max.gamma, by=step.gamma) # Possible values of gammas.
n.lambdas.u <- length(lambda.u.seq)
n.lambdas.v <- length(lambda.v.seq)
n.gamma <- length(gamma.seq)
#lambda.v.matrix <- matrix(rep(lambda.v.seq, n.lambdas.u), nrow=n.lambdas.u, byrow=T)
lambda.v.matrix <- matrix(rep(lambda.v.seq, n.lambdas.u), nrow=n.lambdas.u, byrow=T)
lambda.u.matrix <- matrix(rep(lambda.u.seq, n.lambdas.v), nrow=n.lambdas.u, byrow=F)
lambda.v.array = array(rep(lambda.v.matrix,n.gamma),c(n.lambdas.u,n.lambdas.v,n.gamma))
lambda.u.array = array(rep(lambda.u.matrix,n.gamma),c(n.lambdas.u,n.lambdas.v,n.gamma))
gamma.array <- array(as.numeric(gl(n.gamma,n.lambdas.u*n.lambdas.v)),c(n.lambdas.u,n.lambdas.v,n.gamma))
ones.p <- rep(1, p)/p
ones.q <- rep(1, q)/q
### _______________________________________________________________________________________
### Analysis
### _______________________________________________________________________________________
out.file = "out_ascca_CV_gamma.txt"
#out.file = paste("ascca_CV_gamma_",n.cv,"_",max.u,"_",max.v,"_",step.lambda,"_",min.gamma,"_",max.gamma,"_",step.gamma,".txt",sep = "")
cat(date(),"\n",file = out.file)
cat("n.cv, max.u, max.v, step.lambda, min.gamma, max.gamma and step.gamma are:\n",n.cv,max.u,max.v,step.lambda,min.gamma,max.gamma,step.gamma,"\n",file = out.file,append = TRUE)
cat("begin select sparseness parameters:\n",file = out.file,append = TRUE)
#cat("begin select sparseness parameters:\n",file = "ascca_CV_gamma_iteration.txt",append = TRUE)
n.cv.sample <- trunc(n.sample/n.cv)
whole.sample <- seq(1, n.sample)
predict.corr.scca <- array(0, c(n.lambdas.u, n.lambdas.v, n.gamma)) # This array will contain average test sample correlation for each combination of sparseness parameters and gamma
#_______Cross-validation to select optimal combination of sparseness parameters____________
for (i.cv in 1:n.cv)
{
cat("cross validation",i.cv,":\n",file = out.file,append = TRUE)
testing.sample <- whole.sample[((i.cv-1)*n.cv.sample+1):(i.cv*n.cv.sample)]
training.sample <- whole.sample[!whole.sample%in%testing.sample]
k <- sample.sigma12.function(x.data[training.sample, ], y.data[training.sample, ])
# Get starting values for singular vectors
# as column and row means from matrix K
u.initial <- k %*% ones.q
u.initial <- u.initial /sqrt(as.numeric(t(u.initial)%*%u.initial))
v.initial <- t(k) %*% ones.p
v.initial <- v.initial /sqrt(as.numeric(t(v.initial)%*%v.initial))
# _______________Data for Predicted correlation (testing sample)_________________
x.predict <- x.data[testing.sample, ]
y.predict <- y.data[testing.sample, ]
# Standardize data
x.predict <- x.predict - mean(x.predict)
y.predict <- y.predict - mean(y.predict)
sigma11.predict <- var(x.predict)
sigma22.predict <- var(y.predict)
x.predict <- x.predict %*% diag( 1/sqrt(diag(sigma11.predict)) )
y.predict <- y.predict %*% diag( 1/sqrt(diag(sigma22.predict)) )
uv.svd = svd(k,nu=1,nv=1)
u.svd = uv.svd$u
v.svd = uv.svd$v
for(j.gamma in 1:n.gamma)
{
gamma = gamma.seq[j.gamma]
cat("when gamma = ",gamma,"\n",file = out.file,append = TRUE)
# ____________Loops for sparseness parameter combinations__________
for(j.lambda.v in 1:n.lambdas.v)
{
flag.na <- 0
for(j.lambda.u in 1:n.lambdas.u)
{
lambda.v <- lambda.v.seq[j.lambda.v] # sparseness parameter for Y
lambda.u <- lambda.u.seq[j.lambda.u] # sparseness parameter for X
if(flag.na==0)
{
uv <- adaptive.scca.function(k, u.initial, v.initial, lambda.u, lambda.v, u.svd, v.svd, gamma)
vj <- uv$v.new
uj <- uv$u.new
# Calculate predicted correlation for SCCA
predict.corr.scca[j.lambda.u, j.lambda.v, j.gamma] <- predict.corr.scca[j.lambda.u, j.lambda.v, j.gamma] + abs(cor(x.predict%*%uj, y.predict%*%vj))
#when either uj or vj or both are zero vector
if(is.na(predict.corr.scca[j.lambda.u, j.lambda.v, j.gamma]))
{
flag.na <- 1
cat("NA at",lambda.u,"and",lambda.v,"\n",file = out.file,append = TRUE)
#cat(uj,"\n",file = out.file,append = TRUE)
#cat(vj,"\n",file = out.file,append = TRUE)
}
} # close if
if(flag.na==1)
{
predict.corr.scca[j.lambda.u:n.lambdas.u, j.lambda.v, j.gamma] <- predict.corr.scca[j.lambda.u:n.lambdas.u, j.lambda.v, j.gamma] + NA
break
}
} # close loop on lambda.u
} # close loop on lambda.v
} # close loop on gamma
} # close cross-validation loop
# ______________Identify optimal sparseness parameter combination___________
predict.corr.scca[is.na(predict.corr.scca)] <- 0
predict.corr.scca <- predict.corr.scca/n.cv
best.predict.corr.scca <- max(abs(predict.corr.scca), na.rm=T)
best.lambda.v <- lambda.v.array[predict.corr.scca==best.predict.corr.scca]
best.lambda.u <- lambda.u.array[predict.corr.scca==best.predict.corr.scca]
best.gamma <- gamma.seq[gamma.array[predict.corr.scca==best.predict.corr.scca]]
cat("best average test sample correlation:",best.predict.corr.scca,"is at",best.lambda.u,"and",best.lambda.v," gamma = ",best.gamma,"\n",file = out.file,append = TRUE)
# ______________________________________________________________________________________________________________
# _____Compute singular vectors using the optimal sparseness parameter combination for the whole data___________
# ______________________________________________________________________________________________________________
cat("begin analyze the whole data:","\n",file = out.file,append = TRUE)
#cat("begin analyze the whole data:","\n",file = "ascca_CV_gamma_iteration.txt",append = TRUE)
k <- sample.sigma12.function(x.data, y.data)
# Get starting values for singular vectors
# as column and row means from matrix K
u.initial <- k %*% ones.q
u.initial <- u.initial /sqrt(as.numeric(t(u.initial)%*%u.initial))
v.initial <- t(k) %*% ones.p
v.initial <- v.initial /sqrt(as.numeric(t(v.initial)%*%v.initial))
uv.svd = svd(k,nu=1,nv=1)
u.svd = uv.svd$u
v.svd = uv.svd$v
uv <- adaptive.scca.function(k, u.initial, v.initial, best.lambda.u, best.lambda.v, u.svd, v.svd, best.gamma)
vj <- uv$v.new # sparse singular vector (canonical vector for Y)
uj <- uv$u.new # sparse singular vector (canonical vector for X)
cat("converged after",uv$i,"iterations.","\n",file = out.file,append = TRUE)
corr.scca <- abs(cor(x.data%*%uj, y.data%*%vj)) # canonical correlation for X and Y data
cat("the final overall correlation is",corr.scca,"\n",file = out.file,append = TRUE)
cat("between",sum(uj != 0),"x variables and",sum(vj != 0),"y variables. \n",file = out.file,append = TRUE)
cat("Transcription Factors \n", file = out.file, append = TRUE)
index.u = uj != 0
t1 = uj[index.u]
t2 = x.map[index.u]
index.map = order(t1,decreasing=T)
index.abs.map = order(abs(t1),decreasing=T)
answer.u = data.frame(t1,t2,sort(t1,decreasing=T),t2[index.map],t1[index.abs.map],t2[index.abs.map])
write.table(answer.u,file = out.file,append = TRUE,row.names = TRUE, col.names = FALSE)
cat("\nTarget Genes \n",file = out.file,append = TRUE)
index.v = vj != 0
t1 = vj[index.v]
t2 = y.map[index.v]
index.map = order(t1,decreasing=T)
index.abs.map = order(abs(t1),decreasing=T)
answer.v = data.frame(t1,t2,sort(t1,decreasing=T),t2[index.map],t1[index.abs.map],t2[index.abs.map])
write.table(answer.v,file = out.file,append = TRUE,row.names = TRUE, col.names = FALSE)
# ---- Nicktz/fmxdat :: /man/source_all.Rd (no license) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/source_all.R
\name{source_all}
\alias{source_all}
\title{source_all}
\usage{
source_all(Loc)
}
\arguments{
\item{Loc}{Provided location to source all functions from
On a Windows PC, this could e.g. be: 'C:/Temp_Folder/code'
Or if called from within a project, simply source_all("code")}
}
\value{
Sources all the .R scripts in a given folder.
}
\description{
Sources all the .R scripts in a folder. It is important that the .R files in the folder only define functions and are not scripts that execute code when sourced.
}
\examples{
source_all("C:/Temp/FinMetrics/Practical1/code")
source_all("code")
}
# ---- SEL-Columbia/nmis_R_scripts :: /source_scripts/Normailize_Functions.R (Apache-2.0) ----
require(plyr)
require(doBy)
require(digest)
require(stringr)
# search for slug_names that contain part of the input string
slugsearch <- function(nm, df=edu_661){
names(df)[grep(nm, names(df), ignore.case=T)]
}
# Basically the table function, but shows NA value
see <- function(nm, df=edu_113)
{
table(df[,nm],exclude=NULL)
}
#returns number of NAs in specific column
na_num <- function(vec) length(which(is.na(vec)))
#returns proportion of NAs in specific column
na_prop <- function(vec) {
print(class(vec))
na_num(vec)/length(vec)
}
# Takes a list/vector of strings (names of data frames)
# and returns a function that takes a STRING (slug_name/variable_name)
# and checks whether the slug_name is contained in all of the input data frames.
common_slug <- function(df_names)
{
function(slug)
{
dfs <- lapply(df_names, function(x) get(x))
names(dfs) <- df_names
flgs <- sapply(dfs, function(x) slug %in% names(x))
if(all(flgs) == T){
sprintf("%s is contained in all data sets",slug)
}
else{
sprintf("%s does NOT have slug: %s", paste(names(dfs)[!flgs], collapse=", "), slug)
}
}
}
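# Hypothetical usage (the data frame names below are placeholders):
# check_slug <- common_slug(c("edu_113", "edu_661"))
# check_slug("school_name")  # reports whether 'school_name' is present in every listed data frame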
# Takes a list/vector of strings (names of data frames)
# and returns a function that takes a STRING (slug_name/variable_name)
# and checks whether the CLASS of the input slug is identical across the input data frames.
common_type <- function(df_names)
{
  # Define the comparison function: compare all other elements with the 1st element and check they are all equal
my_compare <- function(vec){
all(sapply(vec[-1], function(x) {x == vec[1]}))
}
function(slug)
{
dfs <- lapply(df_names, function(x) get(x))
names(dfs) <- df_names
my_class <- function(df_name) {
out <- tryCatch(
out <- class(get(df_name)[,slug]),
error=function(cond) {
message(paste(df_name, ": data frame doesn't have this slug"))
message(cond)
# Choose a return value in case of error
return(NA)
}
)
return(out)
}
flgs <- NULL
j <- 1
for (i in 1:length(df_names)){
tmp_class <- my_class(df_names[i])
if (is.na(tmp_class) == F){
flgs[j] <- tmp_class
names(flgs)[j] <- df_names[i]
j <- j+1
}
}
cat(paste("\n", length(flgs), "data.frames contained", slug))
if(my_compare(flgs) == T){
cat(paste('\n', slug, "has same type in all data.frame.\n The Type of", slug, "is: ", flgs[2]))
}else{
warning(paste('\n',slug, " is NOT same in all data.frame"))
warning(paste(names(flgs), collapse=", "))
warning(paste(flgs, collapse=", "))
}
}
}
numeric_batch <- function(df, list_of_column_names){
list_of_column_names <- column_exists(df, list_of_column_names)
l_ply(list_of_column_names, function(col) {
if (!class(df[,col]) %in% c("numeric", "integer")){
suppressWarnings(df[,col] <<- as.numeric(df[,col]))
}
})
return(df)
}
yes_no_converter <- function(df, col_name)
{
vec <- df[,col_name]
if (class(vec) != "logical"){
num_yn <- length(grep("(yes|no|true|false)", vec,ignore.case=T))
num_itm <- length(which(!is.na(vec)))
yn_prop <- num_yn/num_itm
vec <- tolower(vec)
if (yn_prop < 0.99){
warning(sprintf("%s: yes and no values has only a proportion of %.3f", col_name, yn_prop))
}
suppressMessages(vec <- as.logical(revalue(vec, c('yes' = TRUE,'no' = FALSE,
'true' = TRUE,'false' = FALSE)
)))
}
return(vec)
}
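# Illustrative example (hypothetical data): converting a yes/no column to logical
# yes_no_converter(data.frame(ans = c("Yes", "no", "TRUE"), stringsAsFactors = FALSE), "ans")
# should give TRUE FALSE TRUE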
yes_no_batch <- function(df, list_of_column_names)
{
list_of_column_names <- column_exists(df, list_of_column_names)
l_ply(list_of_column_names, function(col) {
df[,col] <<- yes_no_converter(df=df, col_name=col)
})
return(df)
}
column_exists <- function(df, list_of_column_names){
if (length(which(! list_of_column_names %in% names(df))) > 0){
warning(paste("following cloumns are not in the data.frame: ",
paste(list_of_column_names[which(! list_of_column_names %in% names(df))],
collapse=", ")))
return(list_of_column_names[which(list_of_column_names %in% names(df))])
}else{
return(list_of_column_names)
}
}
batch_type <- function(df, list_of_column_names)
{
list_of_column_names <- column_exists(df, list_of_column_names)
types <- unlist(lapply(list_of_column_names, function(x) class(df[,x])))
names(types) <- list_of_column_names
return(types)
}
smart_batch_type_convert <- function(df, column_list,
type_to_list=c("logical"), convert_func){
check_type <- batch_type(df, column_list)
column_list <- names(check_type)[! check_type %in% type_to_list]
df <- convert_func(df, column_list)
return(df)
}
# ---- Hershberger/partycalls :: /old/dev/dv_iv_plot_senate_confint.R (no license) ----
library(partycalls)
library(ggplot2)
theme_set(theme_bw())
load("test_data/senate_data_lm.RData")
senate_data <- senate_data[drop == 0, ]
senate_data[congress == 107 & caucus == "Republican", maj := 0]
senate_data[congress == 107 & caucus == "Democrat", maj := 1]
senate_data[, south := as.factor(south)]
senate_data[, majority := as.factor(maj)]
senate_data[, gingrich_senator := as.factor(gingrich_senator)]
senate_dem <- senate_data[caucus == "Democrat", ]
senate_rep <- senate_data[caucus == "Republican"]
ggplot(senate_dem, aes(ideological_extremism, pirate100)) +
geom_point(color = "blue2", shape = 16, alpha = .75) +
geom_smooth(method = loess, color = "blue2") +
labs(x = "Ideological Extremism", y = "Party Call Response Rate")
ggsave("plots/senate_dem_iv-dv_all_confint.pdf")
dev.off()
ggplot(senate_dem, aes(ideological_extremism, pirate100, color = south)) +
geom_point(shape = 16, alpha = .75) +
scale_color_manual(breaks = c("0", "1"),
values = c("blue2", "gray5")) +
geom_smooth(method=loess) +
labs(x = "Ideological Extremism", y = "Party Call Response Rate")
ggsave("plots/senate_dem_iv-dv_south_confint.pdf")
dev.off()
ggplot(senate_dem, aes(ideological_extremism, pirate100, color = majority)) +
geom_point(shape = 16, alpha = .75) +
scale_color_manual(breaks = c("0", "1"),
values = c("blue2", "gray5")) +
geom_smooth(method=loess) +
labs(x = "Ideological Extremism", y = "Party Call Response Rate")
ggsave("plots/senate_dem_iv-dv_majority_confint.pdf")
dev.off()
ggplot(senate_rep, aes(ideological_extremism, pirate100)) +
geom_point(color = "red2", shape = 16, alpha = .75) +
geom_smooth(method = loess, color = "red2") +
labs(x = "Ideological Extremism", y = "Party Call Response Rate")
ggsave("plots/senate_rep_iv-dv_all_confint.pdf")
dev.off()
ggplot(senate_rep, aes(ideological_extremism, pirate100, color = gingrich_senator)) +
geom_point(shape = 16, alpha = .75) +
scale_color_manual(breaks = c("0", "1"),
values = c("red2", "gray5")) +
geom_smooth(method=loess) +
labs(x = "Ideological Extremism", y = "Party Call Response Rate")
ggsave("plots/senate_rep_iv-dv_gingrich_confint.pdf")
dev.off()
ggplot(senate_rep, aes(ideological_extremism, pirate100, color = majority)) +
geom_point(shape = 16, alpha = .75) +
scale_color_manual(breaks = c("0", "1"),
values = c("red2", "gray5")) +
geom_smooth(method=loess)
ggsave("plots/senate_rep_iv-dv_majority_confint.pdf")
dev.off()
# ---- statTarget/RGtk2 :: /RGtkGen/man/coerceRValueCode.Rd (no license) ----
\name{coerceRValueCode}
\alias{coerceRValueCode}
\alias{getPrimitiveTypeAs}
\title{Generate R code to coerce R object to type.}
\description{
This generates R code which checks and
potentially coerces an argument
to an S function to the appropriate type
as required by C code to which the argument
will be passed.
}
\usage{
coerceRValueCode(type, name, defs)
getPrimitiveTypeAs(x)
}
\arguments{
\item{type}{the target type to which the variable \code{name} is to be
converted.}
\item{name}{the name of the S variable to be used in the generated converstion code}
\item{defs}{the collection of class, enumeration, etc. definitions
collected from the .defs files in which to find information about
the types.
}
\item{x}{the name of the primitive S type. This is then used to index
the table \code{PrimitveTypeCoercion} by name.}
}
\details{
This is not extensible, but instead uses a collection of
if-else statements to determine how to convert the
variable to the appropriate type.
It checks for primitives, enumerations, a string array
and then punts by assuming \code{\link[RGtk]{gtkCheckInherits}}
will suffice.
}
\value{
A list giving the
name of the variable and the code to coerce it.
\item{name}{the value of the \code{name} argument converted to
an S variable name using \code{\link{nameToS}}}
\item{coerce}{the string giving the code to convert to the variable
to the appropriate form.}
\code{getPrimitiveTypeAs} returns the name of the S function that
converts the primitive S value to the appropriate type.
}
\references{
\url{http://www.omegahat.net/RGtk/}
\url{http://www.omegahat.net/GtkAutoBindingGen}
\url{http://www.gtk.org}
}
\author{Duncan Temple Lang}
\seealso{
\code{\link{genCode}}
\code{\link{nameToS}}
}
\examples{
data(GtkDefs)
coerceRValueCode("GtkWidget", "w", GtkDefs)
}
\keyword{programming}
# ---- genner-lab/reproduction-temporal-trends :: /scripts/eggs.R (MIT) ----
#!/usr/bin/env Rscript
# load libs and funs
source(here::here("scripts/load-data.R"))
# collapse eggs
trad.collapsed <- trad.master %>% collapse_taxonomy(rmfw=TRUE,lifestage="Eggs",collapse=TRUE)# EGGS
edna.collapsed <- edna.filt %>% collapse_taxonomy(rmfw=TRUE,lifestage="Eggs",collapse=TRUE)# EGGS
# get sample n
glue("\nNumber eDNA samples total = {edna.collapsed %>% filter(partnerID=='MBA' & !grepl('WHIT',eventID)) %>% distinct(sampleHash) %>% count() %>% pull()}",.trim=FALSE)
# get time duration
dur.edna <- edna.collapsed %>% filter(partnerID=="MBA" & !grepl("WHIT",eventID)) %>% distinct(eventDate) %>% arrange(eventDate) %>% pull(eventDate)
# get n samples during duration
glue("\nNumber unique ichthyoplankton samples concurrent with eDNA survey = {
trad.collapsed %>%
filter(partnerID=='MBA' & !grepl('WHIT',eventID)) %>%
filter(eventDate %within% lubridate::interval(start=first(dur.edna),end=last(dur.edna))) %>%
distinct(eventID,eventDate,fieldNumber) %>%
count() %>%
pull()
}",.trim=FALSE)
# make a list of species common to edna and trad (by partnerID)
comb.spp <- full_join(distinct(edna.collapsed,partnerID,species),distinct(trad.collapsed,partnerID,species),by=c("species","partnerID")) %>% arrange(partnerID,species)
# expand
trad.expanded <- trad.collapsed %>% expand_and_summarise(sppdf=comb.spp,eventdf=events,primerdf=NA,method="traditional",spatialgroup="locality",temporalgroup="day",correct=NA)# EGGS
edna.expanded <- edna.collapsed %>% expand_and_summarise(sppdf=comb.spp,eventdf=events,primerdf=primer.bias,method="edna",spatialgroup="locality",temporalgroup="day",correct=FALSE)# EGGS
# join and clean
surveys.joined <- join_and_clean(trad=trad.expanded,edna=edna.expanded)
# filter the required data and change the month label
surveys.joined %<>% mutate(month=lubridate::month(temporalGroup,label=TRUE)) %>% filter(partnerID=="MBA" & localityID!="WHIT") # EGGS
# get stats
glue("\nData included in model ...",.trim=FALSE)
glue("Total number species = {surveys.joined %>% distinct(species) %>% count() %>% pull(n)}")
glue("Total number samples = {surveys.joined %>% distinct(temporalGroup,localitySite,sampleHash) %>% count() %>% pull(n)}")
glue("Total number sampling dates = {surveys.joined %>% distinct(temporalGroup) %>% count() %>% pull(n)}")
# reduce and plot EGGS
surveys.joined.coll <- surveys.joined %>%
select(species,sampleHash,temporalGroup,spatialGroup,readsBySampleProportion,individualsByGroupRate) %>%
filter(species=="Sardina pilchardus") %>%
group_by(temporalGroup,spatialGroup) %>%
summarise(readsByGroupProportionMean=mean(fourth_root(readsBySampleProportion),na.rm=TRUE), sem=se(fourth_root(readsBySampleProportion)), individualsByGroupRateMean=mean(individualsByGroupRate,na.rm=TRUE),.groups="drop")
# check lms
pdf(file=here("temp/results/figures/larvae-check-lm.pdf"))
lm(readsByGroupProportionMean ~ individualsByGroupRateMean, data=surveys.joined.coll) %>% performance::check_model() %>% plot()
dev.off()
glue("\nLinear model summary ...",.trim=FALSE)
lm(readsByGroupProportionMean ~ individualsByGroupRateMean, data=surveys.joined.coll) %>% summary()
# plot EGGS
p <- surveys.joined.coll %>% ggplot(aes(y=readsByGroupProportionMean,x=individualsByGroupRateMean,ymin=readsByGroupProportionMean-sem,ymax=readsByGroupProportionMean+sem)) +
geom_pointrange(color="gray30",size=0.5) +
annotate(geom="label",x=225,y=0.1,label=extract_p(surveys.joined.coll,y="readsByGroupProportionMean",x="individualsByGroupRateMean",type="lm",dp=3),size=3) +
geom_smooth(method="lm",formula=y~x,alpha=0.5,color="#2f8685",fill="gray90") +
theme_clean(base_size=12) +
  labs(x="Pilchard egg ichthyoplankton abundance\n(CPUE)",y="Proportion of fish community (eDNA)\n(4th root transformed CPUE)")
#plot(p)
ggsave(filename=here("temp/results/figures/eggs.svg"),plot=p,width=120,height=120,units="mm")
# report
glue("\nFigures saved to 'temp/results/figures'",.trim=FALSE)
# ---- DGendoo/PDACDiseaseModels :: /SV/MetastasisPairs/StructuralVariation_Metastasis_Pairs.R (no license) ----
# Deena M.A. Gendoo
# November 1, 2016, and final update on June 23, 2017
# Parse Structural Variation (SV) data (in TSV format) for Paired Tumour-PDX samples
# FINAL_StructuralVariation_LIVER_PairsNews.R
###################################################################################################################
# Read all TSVs containing structural variation
TSVs<-sort(list.files(pattern = ".annotatedSV.tsv"))
TSVList<-lapply(TSVs,function(x){read.csv(x)})
TSVs<-sapply(TSVs,function(x) gsub(x,pattern = ".annotatedSV.tsv", replacement = ""))
names(TSVList)<-TSVs
names(TSVList)<-gsub(x = names(TSVList),pattern = "_526",replacement = "" )
#Rename TSV list to show the unique pair elements
TSVs<-sapply(TSVs,function(x){gsub(x,pattern = "_Pa_[P,X].*", replacement = "")})
TSVs<-sapply(TSVs,function(x){gsub(x,pattern = "_Lv_[M,X].*", replacement = "")})
SampleNames<-sort(unique(TSVs))
###################################################################################################################
# Generate plot for total number of SV events per sample
#Generate a loop and assess each pair individually
###################################################################################################################
library(reshape2)
SV_Counts<-matrix(nrow=4,ncol=length(TSVs),data = NA)
rownames(SV_Counts)<-c("DEL","DUP","INV","TRA")
colnames(SV_Counts)<-gsub(names(TSVList),pattern = "_526.*",replacement = "")
colnames(SV_Counts)<-gsub(colnames(SV_Counts),pattern = "_Lv",replacement = "")
for (count in 1:length(TSVs))
{
message("Working on Sample #:",count)
SV_Counts["DEL",count]<-length(which(TSVList[[count]]$type == "DEL"))
SV_Counts["DUP",count]<-length(which(TSVList[[count]]$type == "DUP"))
SV_Counts["INV",count]<-length(which(TSVList[[count]]$type == "INV"))
SV_Counts["TRA",count]<-length(which(TSVList[[count]]$type == "TRA"))
}
#Quick formatting for figures!
ParamsTable<-as.matrix(do.call(cbind.data.frame, list(SV_Counts[,1:2],NA,SV_Counts[,3:4],NA,SV_Counts[,5:6],NA,
SV_Counts[,7:8],NA,SV_Counts[,9:10],NA,SV_Counts[,11:12])))
# Stacked barplot of total SV counts across all samples
pdf("SV_Counts_Total_Across_Samples.pdf",onefile = F,width = 10,height = 7)
par(mar=c(10,6,2,2)+0.2)
barplot(ParamsTable,col=c("#d73027","#2166ac","#66bd63","#542788"),las=2,ylab="Number of SV Events",names.arg = colnames(ParamsTable))
legend("topleft",c("Deletion","Duplication","Inversion","Translocation"),fill=c("#d73027","#2166ac","#66bd63","#542788"))
dev.off()
write.csv(SV_Counts,file="Total_SV_CountsPerSample.csv")
###################################################################################################################
# Generate plot for total number of SV events per chromosome, for each sample (To highlight chromothripsis)
###################################################################################################################
SV_Counts_Chrom<-matrix(nrow=length(TSVs),ncol=24,data = NA)
colnames(SV_Counts_Chrom)<-c(paste("chr",rep(1:22),sep = ""),"chrX","chrY")
rownames(SV_Counts_Chrom)<-gsub(names(TSVList),pattern = "_526.*",replacement = "")
rownames(SV_Counts_Chrom)<-gsub(rownames(SV_Counts_Chrom),pattern = "_Lv",replacement = "")
for (count in 1:length(TSVs))
{
message("Working on Sample #:",count)
Samplee<-TSVList[[count]]
Samplee$chr1<-factor(as.character(lapply(Samplee$chr1,function(x){strsplit(as.character(x),"_")[[1]][1]})))
Samplee$chr2<-factor(as.character(lapply(Samplee$chr2,function(x){strsplit(as.character(x),"_")[[1]][1]})))
for(chromcount in 1:24)
{
SV_Counts_Chrom[count,chromcount]<-(length(which(Samplee$chr1 == colnames(SV_Counts_Chrom)[chromcount]))
+length(which(Samplee$chr2 == colnames(SV_Counts_Chrom)[chromcount])))/2
}
}
write.csv(SV_Counts_Chrom,"SV_Counts_PerChrom_Across_Samples_LIVER.csv")
# ---- akhikolla/updatedatatype-list3 :: /multivariance/inst/testfiles/doubleCenterBiasCorrectedUpperLower/libFuzzer_doubleCenterBiasCorrectedUpperLower/doubleCenterBiasCorrectedUpperLower_valgrind_files/1612796315-test.R (no license) ----
testlist <- list(n = 0L, x = structure(c(1.06559867695611e-255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0), .Dim = c(6L, 8L)))
result <- do.call(multivariance:::doubleCenterBiasCorrectedUpperLower,testlist)
str(result)
# ---- fungyip/MTR :: /RScripts/WebScraping.R (no license) ----
library(rvest)
library(tidyverse)
url = "https://en.wikipedia.org/wiki/List_of_MTR_stations"
# All <- read_html(url, encoding="UTF-8") %>% html_nodes(xpath="//tr/td") %>%
# html_text() %>%
# as.data.frame()
# Name <- read_html(url, encoding="UTF-8") %>% html_nodes("td:nth-child(2) a") %>%
# html_text() %>%
# as.data.frame()
#
# District <- read_html(url, encoding="UTF-8") %>% html_nodes("tr :nth-child(5)") %>%
# html_text() %>%
# as.data.frame()
#
# Opened <- read_html(url, encoding="UTF-8") %>% html_nodes("tr :nth-child(6)") %>%
# html_text() %>%
# as.data.frame()
#
# Code <- read_html(url, encoding="UTF-8") %>% html_nodes("tr :nth-child(7)") %>%
# html_text() %>%
# as.data.frame()
#
# Photo <- read_html(url, encoding="UTF-8") %>% html_nodes("td:nth-child(3) a") %>%
# html_attr("src")
All_long <- read_html(url, encoding="UTF-8") %>% html_nodes(xpath="//tr") %>%
html_text() %>%
as.character()
All_long_v1 <- strsplit(All_long, "\\n")
All <- as.data.frame(matrix(unlist(All_long_v1), nrow=length(unlist(All_long_v1[1]))))
All_t <- t(All) %>%
as.data.frame()
write_csv(All_t,"./DataIn/WebScraping.csv")
# #Select DataFrame - East Rail Line
# East_Rail <- dd[1:8, c(2:15)]
# East_Rail_t <- t(East_Rail)
#
# #Select DataFrame - Kwun Tong LIne
# Kwun_Tong <- dd[1:8, c(17:33)]
# Kwun_Tong_t <- t(Kwun_Tong)
#
# #Select DataFrame - Tsuen Wan LIne
# Tsuen_Wan <- dd[1:8, c(35:47)]
# Tsuen_Wan_t <- t(Tsuen_Wan)
#
# Tsuen_Wan_2 <- dd[1:8, c(48:50)]
# Tsuen_Wan_2_t <- t(Tsuen_Wan_2)
#
# #Select DataFrame - Island LIne
# Island <- dd[1:8, c(52:66)]
# Island_t <- t(Island)
## Other Data Manipulation Techniques
home <- lapply(All_long_v1, rbind)
All_long_v1[1:123]
# `dd` is only defined in the commented-out blocks above, so this line is left commented out:
# aa <- t(dd)
my.matrix <- do.call("rbind", All_long_v1)
a.matrix <- do.call("cbind", All_long_v1)
# ---- surayaaramli/typeRrh :: /data/genthat_extracted_code/hIRT/examples/summary.hIRT.Rd.R (no license) ----
library(hIRT)
### Name: summary.hIRT
### Title: Summarizing Hierarchical Item Response Theory Models
### Aliases: summary.hIRT print.summary_hIRT
### ** Examples
y <- nes_econ2008[, -(1:3)]
x <- model.matrix( ~ party * educ, nes_econ2008)
z <- model.matrix( ~ party, nes_econ2008)
nes_m1 <- hgrm(y, x, z)
summary(nes_m1, by_item = TRUE)
# ---- yassourlab/GMAP :: /Scripts/MaAsLin2.R (MIT) ----
##########################################################
###################### MaAsLin2 Run ######################
##########################################################
# data
{
rm(list = ls())
try(setwd("~/GMAP/Docs/GMAP_Paper/"), silent = T)
try(setwd("/Volumes/ehudda/GMAP/Docs/GMAP_Paper/"), silent = T)
try(setwd("/vol/sci/bio/data/moran.yassour/lab/Projects/GMAP/Docs/GMAP_Paper/"), silent = T)
metadata <- read.table("Results/TSVs/metadata.tsv", header=T)
tab.n <- read.table("Results/TSVs/FeatureTable.tsv",header = T)
## I want to use these rows instead of the code that generates these files
samples_raw <- readLines("Results/MaAsLin2/input/SubsetLists.csv")
}
# Install maaslin if needed
if (!"Maaslin2" %in% rownames(installed.packages())){
if(!requireNamespace("BiocManager", quietly = TRUE))
install.packages("BiocManager")
BiocManager::install("Maaslin2")
}
# libraries
{
library(Maaslin2)
library(dplyr)
library(clue)
library(tibble)
}
# fixed effects
{
fixed_effects <- list( "case lastDiet pro" = c("case_id", "visit_age_mo", "mode_of_delivery", "lastDiet", "probiotics_firstyr"),
"symptoms lastDiet pro (Control as reference)" = c("symptoms","visit_age_mo", "mode_of_delivery", "lastDiet", "probiotics_firstyr"),
"symptoms lastDiet pro (Symptomatic as reference)" = c("symptoms","visit_age_mo", "mode_of_delivery", "lastDiet", "probiotics_firstyr")
)
}
# samples
{
  # skip the first row because it is a description line
samples <- list()
splited <- strsplit(samples_raw, ",")
for (i in 2:length(splited)) {
# first argument is the subset name
samples[[splited[[i]][1]]] <- splited[[i]][-1]
}
}
# random effects
{
subset_with_rand_effects <- c("All Samples", "All Samples Partially BF", "All Samples Formula",
"All Samples Exclusively BF", "samples 0-2 Model", "samples 0-4 Model",
"samples 4-6 Model", "samples 6-9 Model")
random_effects_list <- sapply(names(samples), function(x) ifelse(x %in% subset_with_rand_effects, "record_id",""))
}
{
  reference_list <- list("case_id" = "case_id;AP Case", # this makes no practical difference for MaAsLin2: with a two-category variable the reference is picked automatically by alphabetical order
"symptoms" = "symptoms;Control",
"symptoms2" = "symptoms;Symptomatic",
"mode_of_delivery" = "mode_of_delivery;C-section",
"lastDiet" = "lastDiet;Exclusively BF",
"probiotics_firstyr" = "probiotics_firstyr;Pro -")
}
transform <- "AST"
path <- "Results/MaAsLin2/output/"
h <- 1
j <- 1
for (h in 1:length(samples)) {
chosenIdx <- which(names(tab.n) %in% samples[[h]])
input_tab <- tab.n[,chosenIdx]
for (j in 1:length(fixed_effects)) {
subset_name <- names(fixed_effects)[j]
random_effects <- random_effects_list[names(samples)[h]]
vector_of_names <- c(names(samples[h]), subset_name)
output_directory <- paste0(path, paste(vector_of_names, collapse = "/"), "/")
dir.create(output_directory,recursive = T,showWarnings = F)
input_metadata <- metadata %>%
select(sampleID, fixed_effects[[j]], record_id) %>%
filter(sampleID %in% samples[[h]])
# define refenrence variables
l <- sapply(names(input_metadata), function(x) if(x %in% names(reference_list)) reference_list[[x]])
if (stringr::str_detect(subset_name,"Symptomatic")) l$symptoms <- reference_list$symptoms2
i <- sapply(l, Negate(is.null))
reference <- paste0(l[i], collapse = ",")
if (length(unique(metadata$sampleID)) != nrow(metadata)){
message("You have not enought unique sample IDs")
message(table(metadata$sampleID))
}
rownames(input_metadata) <- input_metadata$sampleID
input_metadata <- input_metadata %>% select(-sampleID)
if (nrow(input_metadata) != nrow(input_metadata %>% na.omit())){
      message("You have some invalid values (NA)")
}
meta_name <- paste0(output_directory, "metadata_input", ".tsv")
feature_table_name <- paste0(output_directory, "feature_table_input", ".tsv")
readme_file_loc <- paste0(output_directory, "README", ".txt")
write.table(x = input_metadata,file = meta_name,sep = "\t",quote = F)
write.table(x = input_tab,file = feature_table_name)
write("####### Model Variables #########", file = readme_file_loc, append = F)
write(c("fixed effects:", subset_name, fixed_effects[[j]]), file = readme_file_loc, append = T, ncolumns = 100, ",")
write(c("random effects:", random_effects), file = readme_file_loc, append = T, ncolumns = 100, ",")
write(c("reference:", reference), file = readme_file_loc, append = T, ncolumns = 100, ",")
fit_data <- Maaslin2::Maaslin2(input_data = input_tab,
input_metadata = input_metadata,
output = output_directory,
plot_heatmap = F,
plot_scatter = F,
reference = reference,
transform = transform,
fixed_effects = fixed_effects[[j]],
random_effects = random_effects
)
}
}
# ---- iugrina/glycanr :: /R/handlers.R (no license) ----
.onAttach <- function(...) {
msg <- "From version 0.3 functions tanorm and glyco.outliers expect data frames in long format."
packageStartupMessage(paste(strwrap(msg), collapse = "\n"))
return(TRUE)
}
# ---- MSamranB/ExData_Plotting1 :: /plot3.R (no license) ----
my_data<-read.csv("household_power_consumption.txt",header=T,sep=';',na.strings = "?",quote='\"')
head(my_data)
maindate1<-"1/2/2007"
maindate2<-"2/2/2007"
x <- as.Date(maindate1, format = "%d/%m/%Y")
y <- as.Date(maindate2, format = "%d/%m/%Y")
my_data1<-subset(my_data, Date %in% c(maindate1,maindate2))
mydata1Date<-as.Date(my_data1$Date,format="%d/%m/%Y")
datetime<-paste(mydata1Date,my_data1$Time)
datetime
z<-as.POSIXct(datetime)
my_data1$DateTime<-z
my_data1$Sub_metering_1
with(my_data1,{plot(Sub_metering_1~DateTime,type="l",xlab="",ylab="Energy sub metering")
lines(Sub_metering_2~DateTime,col="red")
lines(Sub_metering_3~DateTime,col="blue")})
legend("topright",col=c("black","red","blue"),lty = 1,lwd=2,legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
png("plot3.png",width = 480,height = 480)
with(my_data1,{plot(Sub_metering_1~DateTime,type="l",xlab="",ylab="Energy sub metering")
lines(Sub_metering_2~DateTime,col="red")
lines(Sub_metering_3~DateTime,col="blue")})
legend("topright",col=c("black","red","blue"),lty = 1,lwd=2,legend=c("Sub_metering_1","Sub_metering_2","Sub_metering_3"))
dev.off()
# ---- jhk0530/Rstat :: /man/bntest.plot.Rd (MIT) ----
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/bntest.plot.R
\name{bntest.plot}
\alias{bntest.plot}
\title{Exact Binomial Test}
\usage{
bntest.plot(x, n, p0, alp = 0.05, side = "two", dig = 4, dcol)
}
\arguments{
\item{x}{Vector of number of successes}
\item{n}{Sample size}
\item{p0}{Population ratio value under the null hypothesis}
\item{alp}{Level of significance, Default: 0.05}
\item{dig}{Number of digits below the decimal point, Default: 4}
\item{dcol}{Colors of the probability bars}
}
\value{
None.
}
\description{
Plot the Result of Exact Binomial Test
}
\examples{
bntest.plot(x = 2:4, n = 10, p0 = 0.1, side = "up")
bntest.plot(x = 6:4, n = 20, p0 = 0.5, side = "two")
}
# ===== File: /data/genthat_extracted_code/pairwise/examples/gif.Rd.R (repo: surayaaramli/typeRrh) =====
library(pairwise)
### Name: gif
### Title: Graphical Item Fit Plots
### Aliases: gif
### ** Examples
########
data(bfiN)
pers_obj <- pers(pair(bfiN))
#### plot empirical category probabilities
gif(pers_obj = pers_obj, itemnumber = 1 )
gif(pers_obj = pers_obj, itemnumber = 1 , integ=8) # integration over 8 points
gif(pers_obj = pers_obj, itemnumber = 1 , integ=8, kat=1) # only for category number 1
# ===== File: /test.R (repo: ashaia-rclco/hello) =====
a = 1
b = 12
print (a+b)
print(Sys.time())
# ===== File: /plot1.R (repo: israelcazares/ExData_Plotting1) =====
# Plot 1. household_power_consumption.txt set should be in the working directory with R files.
#---------------------------#
# code for reading the data #
#---------------------------#
# Reading the Data
hpc_data <- read.csv("household_power_consumption.txt", na.string="?", sep=";")
# Extracting the correct dataset
hpc_data <- hpc_data[(hpc_data$Date=="1/2/2007" | hpc_data$Date=="2/2/2007"),]
# Combining Date and Time
hpc_data$DateTime <- strptime(paste(hpc_data$Date, hpc_data$Time, sep=" "), format="%d/%m/%Y %H:%M:%S")
#--------------------------------#
# code that creates the PNG file #
#--------------------------------#
# Open png device
png("plot1.png", width=480, height=480)
# Plot the graph
hist(hpc_data$Global_active_power, main="Global Active Power", xlab="Global Active Power (kilowatts)", col="red")
# Turn off png device
dev.off()
# ===== File: /october2020_uploads/zip_supplemental/recover_tweets.R (repo: fboehm/jse-2019, MIT) =====
#' Recovers Twitter damaged stream data (JSON file) into parsed data frame.
#'
#' @param path Character, name of JSON file with data collected by
#' \code{\link{stream_tweets}}.
#' @param dir Character, name of a directory where intermediate files are
#' stored.
#' @param verbose Logical, should progress be displayed?
#'
#' @family stream tweets
# https://gist.githubusercontent.com/JBGruber/dee4c44e7d38d537426f57ba1e4f84ab/raw/ab87bebb8d020c2f96c71a40a483dc96a4c80e54/recover_stream.R
recover_stream <- function(path, dir = NULL, verbose = TRUE) {
# read file and split to tweets
lines <- readChar(path, file.info(path)$size, useBytes = TRUE)
tweets <- stringi::stri_split_fixed(lines, "\n{")[[1]]
tweets[-1] <- paste0("{", tweets[-1])
tweets <- tweets[!(tweets == "" | tweets == "{")]
# remove misbehaving characters
tweets <- gsub("\r", "", tweets, fixed = TRUE)
tweets <- gsub("\n", "", tweets, fixed = TRUE)
# write tweets to disk and try to read them in individually
if (is.null(dir)) {
dir <- paste0(tempdir(), "/tweets/")
dir.create(dir, showWarnings = FALSE)
}
if (verbose) {
pb <- progress::progress_bar$new(
format = "Processing tweets [:bar] :percent, :eta remaining",
total = length(tweets), clear = FALSE
)
pb$tick(0)
}
tweets_l <- lapply(tweets, function(t) {
    if (verbose) pb$tick()  # only advance the progress bar when it was created
id <- unlist(stringi::stri_extract_first_regex(t, "(?<=id\":)\\d+(?=,)"))[1]
f <- paste0(dir, id, ".json")
writeLines(t, f, useBytes = TRUE)
out <- tryCatch(rtweet::parse_stream(f),
error = function(e) {})
if ("tbl_df" %in% class(out)) {
return(out)
} else {
return(id)
}
})
# test which ones failed
test <- vapply(tweets_l, is.character, FUN.VALUE = logical(1L))
bad_files <- unlist(tweets_l[test])
# Let user decide what to do
if (length(bad_files) > 0) {
writeLines(bad_files, "broken_tweets.txt")
}
# clean up
unlink(dir, recursive = TRUE)
good_tweets_pre <- tweets_l[!test]
good_tweets <- lapply(X = good_tweets_pre, FUN = function(x){x$description <- as.character(x$description); return(x)})
out <- dplyr::bind_rows(good_tweets)
# return good tweets
return(out)
}
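# ---------------------------------------------------------------------------
# Hypothetical usage sketch (not part of the original script); the file name
# below is an assumption for illustration only.
# ---------------------------------------------------------------------------
# recovered <- recover_stream("damaged_stream.json", verbose = TRUE)
# dplyr::glimpse(recovered)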
# ===== File: /R/adjustment.R (repo: Henning-Schulz/forecastic, GPL-3.0) =====
# adjustment.R
#' @author Henning Schulz
library(tidyverse)
adjustment_logger <- Logger$new("adjustment.R")
#' Adjusts the intensities using the specified adjustments
#' If required, it loads the behavior models from the elasticsearch and includes them in the adjustments
#'
#' @param intensities The (forecasted) intensities to be adjusted as tibble. It is required that the
#' first column is \code{timestamp} and the remaining ones are intensities.
#' @param adjustments The aggregations to be used as adjustments
adjust_and_finalize_workload <- function(intensities, adjustments) {
  adjustment_logger$info("Adjusting using [", paste(adjustments$type, collapse = ", "), "]")
behavior <- NULL
for (adj in adjustments$type) {
source(str_c("adjustments/", adj, ".R"))
if (adjustment_requires_behavior) {
# TODO: read
behavior <- NULL
break
}
}
  if (nrow(adjustments) > 0) {  # apply each configured adjustment in order
for (i in 1:nrow(adjustments)) {
adj <- adjustments[i,]
props <- adj$properties %>% select_if(~sum(!is.na(.)) > 0)
source(str_c("adjustments/", adj$type, ".R"))
intensities <- do_adjustment(intensities, behavior, props)
}
}
formatted_intensities <- intensities %>%
rename_at(vars(starts_with("intensity")), list(~ str_sub(., start = 11)))
list(intensities = formatted_intensities)
}
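# ---------------------------------------------------------------------------
# Hypothetical illustration (not part of the original package): a minimal
# adjustment script (e.g. "adjustments/scale.R") showing the contract the loop
# above relies on. Each sourced script is expected to define the logical flag
# `adjustment_requires_behavior` and a `do_adjustment()` function; the
# property name `factor` is an assumption for illustration only.
# ---------------------------------------------------------------------------
# adjustment_requires_behavior <- FALSE
#
# do_adjustment <- function(intensities, behavior, props) {
#   # scale every intensity column by the configured factor
#   intensities %>%
#     mutate(across(starts_with("intensity"), ~ .x * props$factor))
# }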
# ===== File: /Otolith.images/Morphometrics.R (repo: Otoliths/Otolith-shape) =====
###FIGURE 3 Reconstruction of the mean otolith shape in Schizothorax nukiangensis, based on Wavelet transformation, differentiated by sampling site
library("shapeR")
#read data and image
shape1 = shapeR("Otolith.images/img_bzlz","BZLZ.39.csv")
shape1 = detect.outline(shape1, threshold = 0.2, write.outline.w.org = TRUE)
shape1 = smoothout(shape1, n = 100)
shape1 = generateShapeCoefficients(shape1)
shape1 = enrich.master.list(shape1)
data1 <- getMeasurements(shape1)
write.csv(data1,file = "Otolith.images/img_bzlz/measure.BZLZ.csv")
plotWaveletShape(shape1, "pop", lwd = 4,lty = 1)
#########################################################
shape2 = shapeR("Otolith.images/img_cwlz","CWLZ.79.csv")
shape2 = detect.outline(shape2, threshold = 0.2, write.outline.w.org = TRUE)
shape2 = smoothout(shape2, n = 100)
shape2 = generateShapeCoefficients(shape2)
shape2 = enrich.master.list(shape2)
data2 <- getMeasurements(shape2)
write.csv(data2,file = "Otolith.images/img_cwlz/measure.CWLZ.csv")
plotWaveletShape(shape2, "pop", lwd = 4,lty = 1,asp = 1)
###########################################
shape3 = shapeR("Otolith.images/img_lkz","LKZ.87.csv")
shape3 = detect.outline(shape3, threshold = 0.2, write.outline.w.org = TRUE)
shape3 = smoothout(shape3, n = 100)
shape3 = generateShapeCoefficients(shape3)
shape3 = enrich.master.list(shape3)
data3 <- getMeasurements(shape3)
write.csv(data3,file = "Otolith.images/img_lkz/measure.LKZ.csv")
plotWaveletShape(shape3, "pop", lwd = 4,lty = 1,asp = 1)
###########################################
shape4 = shapeR("Otolith.images/img_mkz","MKZ.138.csv")
shape4 = detect.outline(shape4, threshold = 0.2, write.outline.w.org = TRUE)
shape4 = smoothout(shape4, n = 100)
shape4 = generateShapeCoefficients(shape4)
shape4 = enrich.master.list(shape4)
data4 <- getMeasurements(shape4)
write.csv(data4,file = "Otolith.images/img_mkz/measure.MKZ.csv")
plotWaveletShape(shape4, "pop", lwd = 4,lty = 1,asp = 1)
###########################################
shape5 = shapeR("Otolith.images/img_spz","SPZ.12.csv")
shape5 = detect.outline(shape5, threshold = 0.2, write.outline.w.org = TRUE)
shape5 = smoothout(shape5, n = 100)
shape5 = generateShapeCoefficients(shape5)
shape5 = enrich.master.list(shape5)
data5 <- getMeasurements(shape5)
write.csv(data5,file = "Otolith.images/img_spz/measure.SPZ.csv")
plotWaveletShape(shape5, "pop", lwd = 4,lty = 1,asp = 1)
###########################################
shape6 = shapeR("Otolith.images/img_ttz","TTZ.72.csv")
shape6 = detect.outline(shape6, threshold = 0.2, write.outline.w.org = TRUE)
shape6 = smoothout(shape6, n = 100)
shape6 = generateShapeCoefficients(shape6)
shape6 = enrich.master.list(shape6)
data6 <- getMeasurements(shape6)
write.csv(data6,file = "Otolith.images/img_ttz/measure.TTZ.csv")
plotWaveletShape(shape6, "pop", lwd = 4,lty = 1,asp = 1)
###########################################
shape7 = shapeR("Otolith.images/img_zyz","ZYZ.66.csv")
shape7 = detect.outline(shape7, threshold = 0.2, write.outline.w.org = TRUE)
shape7 = smoothout(shape7, n = 100)
shape7 = generateShapeCoefficients(shape7)
shape7 = enrich.master.list(shape7)
data7 <- getMeasurements(shape7)
write.csv(data7,file = "Otolith.images/img_zyz/measure.ZYZ.csv")
plotWaveletShape(shape7, "pop", lwd = 4,lty = 1,asp = 1)
########################
#shape8 = shapeR("D:/Documents/R Workplace/Otolith.images/img_all","All.493.csv")
#shape8 = detect.outline(shape8, threshold = 0.2, write.outline.w.org = TRUE)
#shape8 = smoothout(shape8, n = 100)
#shape8 = generateShapeCoefficients(shape8)
#shape8 = enrich.master.list(shape8)
#data8 <-getMeasurements(shape8)
#write.csv(data8,file = "D:/Documents/R Workplace/Otolith.images/img_all/measure.All.csv")
#plotWaveletShape(shape8, "pop", lwd = 4,lty = 1,asp = 1)
########################
a <- read.csv("Otolith.images/img_bzlz/BZLZ.39.csv")
b <- read.csv("Otolith.images/img_cwlz/CWLZ.79.csv")
c <- read.csv("Otolith.images/img_lkz/LKZ.87.csv")
d <- read.csv("Otolith.images/img_mkz/MKZ.138.csv")
e <- read.csv("Otolith.images/img_spz/SPZ.12.csv")
f <- read.csv("Otolith.images/img_ttz/TTZ.72.csv")
g <- read.csv("Otolith.images/img_zyz/ZYZ.66.csv")
newdata <- data.frame(rbind(a,b,c,d,e,f,g))
write.csv(newdata,file = "Otolith.images/img_total/newdata.csv")
shape = shapeR("Otolith.images/img_total","newdata.csv")
shape = detect.outline(shape, threshold = 0.2, write.outline.w.org = TRUE)
shape = smoothout(shape, n = 100)
shape = generateShapeCoefficients(shape)
shape = enrich.master.list(shape)
data <- getMeasurements(shape)
write.csv(data,file = "Otolith.images/img_total/measure.Total.csv")
plotWaveletShape(shape, "pop", show.angle = TRUE,lwd = 2,lty = 1,col=1:7)
#Mean otolith shape based on wavelet reconstruction
#pdf("D:/Documents/R Workplace/length_100_200/otolith.pdf",family="GB1")
##plotWaveletShape(shape1, "pop", show.angle = TRUE, lwd = 2,lty = 1)
#dev.off()
#plotFourierShape(shape, "pop",show.angle = TRUE,lwd=2,lty =2,col=1:7)
#legend(-1, 0.9,c("MKZ", "LKZ", "SPZ","BZLZ", "CWLZ", "LKXZ","ZYZ","TTZ"),col=1:7,lty = 1)
tapply(getMeasurements(shape)$otolith.area, getMasterlist(shape)$pop, mean)
#BZLZ CWLZ LKZ MKZ SPZ TTZ ZYZ
#2.114314 2.501707 1.482099 1.174326 1.808778 1.074255 1.546961
tapply(getMeasurements(shape)$otolith.length, getMasterlist(shape)$pop, mean)
#BZLZ CWLZ LKZ MKZ SPZ TTZ ZYZ
#1.858856 2.063710 1.563033 1.335997 1.711345 1.289928 1.606169
tapply(getMeasurements(shape)$otolith.width, getMasterlist(shape)$pop, mean)
#BZLZ CWLZ LKZ MKZ SPZ TTZ ZYZ
#1.370629 1.563305 1.211755 1.086305 1.314009 1.021240 1.219364
tapply(getMeasurements(shape)$otolith.perimeter, getMasterlist(shape)$pop, mean)
#BZLZ CWLZ LKZ MKZ SPZ TTZ ZYZ
#5.283325 5.948831 4.534124 3.936989 4.927616 3.757546 4.610932
est.list = estimate.outline.reconstruction(shape)
outline.reconstruction.plot(est.list, max.num.harmonics = 15)
shape = stdCoefs(shape, classes = "pop", "Standard_length.mm.", bonferroni = FALSE)
#op <- par(mfrow = c(1, 1))
plotWavelet(shape, level = 5, class.name = "pop", useStdcoef = TRUE)
###############################################
library(gplots)
library(vegan)
pdf("length_100_200/kkk.pdf",family="GB1")
shape <- stdCoefs(shape, classes = "pop", "Standard_length.mm.", bonferroni = FALSE)
cap.res <- capscale(getStdWavelet(shape) ~ getMasterlist(shape)$pop)
anova(cap.res, by = "terms", step = 1000)
eig = eigenvals(cap.res,constrained = T)
eig.ratio = eig/sum(eig)
cluster.plot(scores(cap.res)$sites[,1:2],getMasterlist(shape)$pop,
xlim = range(scores(cap.res)$sites[,1]),
ylim = range(scores(cap.res)$sites[,2]),
xlab = paste("CAP1 (",round(eig.ratio[1]*100,1),"%)",sep = ""),
ylab = paste("CAP2 (",round(eig.ratio[2]*100,1),"%)",sep = ""), plotCI = TRUE,
conf.level = 0.95,las = 1)
abline(v = 0,lty = 3)
abline(h = 0,lty = 3)
dev.off()
# ===== File: /tests/testthat/test-perfectcorr.R (repo: kbroman/jeremyash, MIT) =====
context("perfect correlation")
test_that("vector_corr returns a perfect correlation (= 2)", {
perfect_corr <- 2
set.seed(1001)
x <- matrix(rnorm(200, 3, 1), ncol=2)
x_x <- vector_corr(x, x)
expect_equal(perfect_corr, unname(x_x[1]))
})
# ===== File: /exploration/SecondApproachWithCaret.R (repo: cokeSchlumpf/30200-miniprojects) =====
rm(list = ls())
#--------------------------------
# Load Libraries:
library(caret)
library(dplyr)
library(ggplot2)
library(tictoc)
library(MLmetrics)
#--------------------------------
#Load Datasets:
#-----------
#Load: Comp. 1 Data - Risse
setwd("~/Studium/DataScience/HS Alb. Sig/Machine Learning 30200/Präsenzwochenende/Data Mining Mini-Projekt/data_miniproject_stud/data_miniproject_stud/comp1")
df_raw_risse_train <- read.csv("data_risse.csv", header=F, sep = ";")
df_raw_risse_test <- read.csv("data_risse_test.csv", header=F, sep = ";")
df_raw_feature_vec_train <-df_raw_risse_train
df_raw_feature_vec_test <-df_raw_risse_test
selected_competition <- "comp1_risse"
#-----------
#Load: Comp. 2 Data - Schalungsanker
setwd("~/Studium/DataScience/HS Alb. Sig/Machine Learning 30200/Präsenzwochenende/Data Mining Mini-Projekt/data_miniproject_stud/data_miniproject_stud/comp2")
df_raw_anker_train <- read.csv("data_schalungsanker.csv", header=F, sep = ";")
df_raw_anker_test <- read.csv("data_schalungsanker_test.csv", header=F, sep = ";")
df_raw_feature_vec_train <-df_raw_anker_train
df_raw_feature_vec_test <-df_raw_anker_test
selected_competition <- "comp2_anker"
#-----------
# str(df_raw_feature_vec_train)
# df_raw_feature_vec_train[1:5,c(1:10)]
summary_df_raw_risse_train <- summary(df_raw_feature_vec_train)
#View(summary_df_raw_risse_train)
names(df_raw_feature_vec_train) <- c("actual_cat", names(df_raw_feature_vec_test))
names(df_raw_feature_vec_train)
names(df_raw_feature_vec_test)
df_raw_feature_vec_train$ID <- seq.int(nrow(df_raw_feature_vec_train))
df_feature_vec_train <- df_raw_feature_vec_train %>%
mutate(
actual_cat = as.factor(actual_cat)
)
df_feature_vec_test <- df_raw_feature_vec_test
#Remove Feature Vectors with only NA or only 0, these columns have no information value
# columns_with_no_value <- sapply(df_feature_vec_train, function(x)all(is.na(x) | x == 0))
# df_feature_vec_train <- Filter(function(x) !(all(x==""|x==0)), df_feature_vec_train)
# sum(columns_with_no_value)
# ncol(df_feature_vec_train)
#
# df_feature_vec_test <- Filter(function(x) !(all(x==""|x==0)), df_feature_vec_test)
# df_feature_vec_test
# ncol(df_feature_vec_test)
#--------------------------------
# Data preparation incl. data split
df_feature_vec_train$ID <- seq.int(nrow(df_feature_vec_train))
set.seed(42)
index <- createDataPartition(df_feature_vec_train$ID, p = 0.90, list = FALSE)
yx_fit <- df_feature_vec_train[index,]
yx_validate <- df_feature_vec_train[-index,]
yx_fit <- yx_fit[,-which(names(yx_fit)=="ID")]
yx_validate <- yx_validate[-which(names(yx_validate)=="ID")]
#Checks:
max(yx_fit[,-which(names(yx_fit)=="actual_cat")])
max(yx_validate[,-which(names(yx_fit)=="actual_cat")])
dim(yx_fit)
dim(yx_validate)
dim(df_feature_vec_test)
anyNA(yx_fit)
anyNA(yx_validate)
#--------------------------------
# Train Model
# Simple GBM Train
train_ctrl <- trainControl(## 10-fold CV
method = "repeatedcv",#"repeatedcv",
number = 5,
repeats = 2
)
set.seed(123)
tic("Runtime - Train Simple GBM")
fitted_s_gbm <- train(actual_cat ~ .,
data = yx_fit,
method = "gbm",
trControl = train_ctrl,
verbose = FALSE)
fitted_model <- fitted_s_gbm
fitted_s_gbm
toc()
#--------
# More advanced GBM Train
train_ctrl <- trainControl(## 10-fold CV
method = "repeatedcv",#"repeatedcv",
number = 5,
repeats = 2
)
gbmGrid <- expand.grid(interaction.depth = c(10),#,12,14),
n.trees = (2:4)*50,
shrinkage = 0.1,
n.minobsinnode = 20)
set.seed(234)
tic("Runtime - Train more advanced GBM")
fitted_a_gbm <- train(actual_cat ~ .,
data = yx_fit,
method = "gbm",
trControl = train_ctrl,
verbose = FALSE,
## Now specify the exact models
## to evaluate:
tuneGrid = gbmGrid)
fitted_model <- fitted_a_gbm
fitted_a_gbm
toc()
plot(fitted_a_gbm)
#--------
# Extreme Gradient Boosting
train_ctrl <- trainControl(## 10-fold CV
method = "repeatedcv",#"repeatedcv",
number = 5,
repeats = 2
)
xgbGrid <- expand.grid(
nrounds = c(10),#,20,30,50, 100), # (# Boosting Iterations)
max_depth = 4:8, # (Max Tree Depth)
eta = c(0.075, 0.1) , #(Shrinkage)
gamma = 0 , #(Minimum Loss Reduction)
colsample_bytree = c(0.3, 0.4, 0.5), #(Subsample Ratio of Columns)
min_child_weight = c(2.0, 2.25), #(Minimum Sum of Instance Weight)
subsample = 1 #(Subsample Percentage)
)
dim(xgbGrid)
set.seed(345)
tic("Runtime - Extrem Gradient Boosting")
fitted_xgbTree <- train(actual_cat ~ .,
data = yx_fit,
method = "xgbTree",
tuneGrid =xgbGrid,
trControl = train_ctrl,
verbose = FALSE)
fitted_model <- fitted_xgbTree
fitted_xgbTree
plot(fitted_xgbTree)
toc()
#--------
#Random Forest
train_ctrl <- trainControl(## 10-fold CV
method = "repeatedcv",#"repeatedcv",
number = 5,
repeats = 2
)
set.seed(456)
tic("Runtime - Train Random Forest")
fitted_rf <- train(actual_cat ~ .,
data = yx_fit,
method = "rf",
trControl = train_ctrl,
verbose = FALSE)
fitted_model <- fitted_rf
fitted_rf
toc()
#--------
#Random Forest (Ranger Implementation)
train_ctrl <- trainControl(## 10-fold CV
method = "repeatedcv",#"repeatedcv",
number = 5,
repeats = 2
)
set.seed(456)
tic("Runtime - Train Ranger Random Forest Version")
fitted_rf_ranger <- train(actual_cat ~ .,
data = yx_fit,
method = "ranger",
trControl = train_ctrl,
verbose = FALSE)
fitted_model <- fitted_rf_ranger
fitted_rf
toc()
#--------
# SVM with linear Kernel
train_ctrl <- trainControl(## 10-fold CV
method = "repeatedcv",#"repeatedcv",
number = 5,
repeats = 2
)
set.seed(456)
tic("Runtime - Train SVM with linear Kernel")
fitted_svmLinear <- train(actual_cat ~ .,
data = yx_fit,
method = "svmLinear3",
trControl = train_ctrl,
verbose = FALSE)
fitted_model <- fitted_svmLinear
fitted_rf
toc()
#--------
#SVM with Least Squares with Polynomial Kernel
train_ctrl <- trainControl(## 10-fold CV
method = "repeatedcv",#"repeatedcv",
number = 5,
repeats = 2
)
set.seed(456)
tic("Runtime - Train SVM with polynominal Kernel")
fitted_svm_poly <- train(actual_cat ~ .,
data = yx_fit,
method = "lssvmPoly",
trControl = train_ctrl,
verbose = FALSE)
fitted_model <- fitted_svm_poly
fitted_rf
toc()
#--------------------------------
# Generate predictions on validation dataset:
predict_on_vds <- yx_validate %>%
select(
actual_cat
) %>%
mutate(
predicted_cat = predict(fitted_model, newdata = yx_validate[,-which(names(yx_validate)=="actual_cat")]),
)
#--------------------------------
# Compare Actual vs. prediction:
predict_on_vds <- predict_on_vds %>%
mutate(
#predicted_cat = if_else(predicted_con > 0.5, 1, 0),
#residuen_con = abs(predicted_con - actual_cat),
residuen_cat = as.factor(if_else(predicted_cat != actual_cat, 1, 0)),
residuen_cat_label = as.factor(if_else(residuen_cat == 0, "Correct","Wrong")),
actual_cat_label = if_else(actual_cat == 1, "positiv","negativ"),
predicted_cat_label = as.factor(if_else(actual_cat == 1, "positiv","negativ"))
)
#View(predict_on_vds)
sum(predict_on_vds$residuen_cat)/nrow(predict_on_vds)
# ggplot(predict_on_vds, aes(residuen_con)) +
# geom_histogram(bins = 500)
#
#
# ggplot(predict_on_vds, aes(predicted_con, actual_cat)) +
# geom_point()
caret::confusionMatrix(as.factor(predict_on_vds$predicted_cat), as.factor(predict_on_vds$actual_cat))
t1_value_on_traindata <- F1_Score(y_pred = predict_on_vds$predicted_cat, y_true = predict_on_vds$actual_cat) #, positive = "1")
t1_value_on_traindata
#--------------------------------
# Predict on test dataset
predict_on_tds <- df_feature_vec_test %>%
mutate(
predicted_cat = predict(fitted_model, newdata = df_feature_vec_test)
) %>%
mutate(
predicted_cat = as.integer(as.character(predicted_cat))
) %>%
select(
predicted_cat
)
filename <- paste(selected_competition, "_estimated_t1-value_", t1_value_on_traindata, ".csv", sep = "")
# write predictions without a header, semicolon-separated
write.table(predict_on_tds, paste('C:/Users/Staab/Documents/Studium/DataScience/HS Alb. Sig/Machine Learning 30200/Präsenzwochenende/Data Mining Mini-Projekt/Development/',filename, sep =""), row.names = FALSE, col.names = FALSE, sep = ";")
# ===== File: /other_models/scripts/data_scripts/known_lat_all_chases_data_cleaning.R (repo: MCMaurer/Fish_Group_Models) =====
library(tidyverse)
chase_data <- read_rds("chases/data/cleaned/full_chase_group_size.rds")
known_lat_data <- read_rds("latency/data/cleaned/latency_typical_food.rds") %>%
select(-date_time, -camera, -video)
all_chase_spread <- chase_data %>%
filter(measurement == "numchases") %>%
mutate(percap_numchases = value/treatment) %>%
select(treatment, trial, group_ID, assay, percap_numchases) %>%
spread(key = assay, value = percap_numchases)
all_chase_spread
d <- left_join(known_lat_data, all_chase_spread) %>%
mutate(group_ID = factor(group_ID))
saveRDS(d, "other_models/data/cleaned/known_lat_all_chases_data.rds")
novel_lat_data <- read_rds("latency/data/cleaned/latency_novel_food.rds")
d <- left_join(novel_lat_data, all_chase_spread) %>%
mutate(group_ID = factor(group_ID))
d
saveRDS(d, "other_models/data/cleaned/novel_lat_all_chases_data.rds")
pred_lat_data <- read_rds("latency/data/cleaned/latency_pred_cue_final.rds")
d <- left_join(pred_lat_data, all_chase_spread) %>%
mutate(group_ID = factor(group_ID))
d
saveRDS(d, "other_models/data/cleaned/pred_lat_all_chases_data.rds")
# ===== File: /man/list_values.Rd (repo: cran/AntMAN) =====
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/AM_mcmc.R
\name{list_values}
\alias{list_values}
\title{Internal function that produces a string from a list of values}
\usage{
list_values(x)
}
\arguments{
\item{x}{a list of values}
}
\description{
Internal function that produces a string from a list of values
}
\keyword{internal}
# ===== File: /man/igstd.Rd (repo: PBG-Ec/igstrd) =====
% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/igstrd.R
\name{igstd}
\alias{igstd}
\title{Modified igrowup}
\usage{
igstd(mydf, sex, age, age.month = F, weight = rep(NA, dim(mydf)[1]),
lenhei = rep(NA, dim(mydf)[1]), measure = rep(NA, dim(mydf)[1]),
headc = rep(NA, dim(mydf)[1]), armc = rep(NA, dim(mydf)[1]),
triskin = rep(NA, dim(mydf)[1]), subskin = rep(NA, dim(mydf)[1]),
oedema = rep("n", dim(mydf)[1]), sw = rep(1, dim(mydf)[1]))
}
\description{
Calculate the z-scores for the indicators: length/height-for-age, weight-for-age, weight-for-length/height and body mass index-for-age
}
\details{
WHO Child Growth Standards Department of Nutrition for Health and Development Last modified on 07/10/2013-Developed using R version 3.0.1
}
\note{
This code concerns the standard approach for the prevalences, i.e. the calculation of the prevalences takes into account all the valid (non-missing) z-scores for each of the indicators.
}
\keyword{igrowup}
\keyword{who}
\keyword{zscore}
# ===== File: /R/rolling_apply_specialized.R (repo: andreas50/utsOperators) =====
#############################################################################
# Specialized implementations of rolling_apply() for certain choices of FUN #
#############################################################################
#' Apply Rolling Function (Specialized Implementation)
#'
#' This function provides a fast, specialized implementation of \code{\link{rolling_apply}} for certain choices of \code{FUN} and for \code{by=NULL} (i.e. when moving the rolling time window one observation at a time, rather than by a fixed temporal amount).
#'
#' It is usually not necessary to call this function, because it is called automatically by \code{\link{rolling_apply}} whenever a specialized implementation is available.
#'
#' @param x a numeric time series object with finite, non-NA observation values.
#' @param width a finite, positive \code{\link[lubridate]{duration}} object, specifying the temporal width of the rolling time window.
#' @param FUN a function to be applied to the vector of observation values inside the half-open (open on the left, closed on the right) rolling time window.
#' @param align either \code{"right"}, \code{"left"}, or \code{"center"}. Specifies the alignment of each output time relative to its corresponding time window. Using \code{"right"} gives a causal (i.e. backward-looking) time series operator, while using \code{"left"} gives a purely forward-looking time series operator.
#' @param interior logical. Should time windows lie entirely in the interior of the temporal support of \code{x}, i.e. inside the time interval \code{[start(x), end(x)]}?
#' @param \ldots further arguments passed to or from methods.
#'
#' @references Eckner, A. (2017) \emph{Algorithms for Unevenly Spaced Time Series: Moving Averages and Other Rolling Operators}.
#' @keywords internal
rolling_apply_specialized <- function(x, ...) UseMethod("rolling_apply_specialized")
#' @describeIn rolling_apply_specialized Implementation for \code{"uts"} objects with finite, non-NA observation values.
#'
#' @examples
#' rolling_apply_specialized(ex_uts(), dhours(12), FUN=length)
#' rolling_apply_specialized(ex_uts(), dhours(12), FUN=length, align="center")
#' rolling_apply_specialized(ex_uts(), dhours(12), FUN=length, align="left")
#'
#' rolling_apply_specialized(ex_uts(), dhours(12), FUN=length)
#' rolling_apply_specialized(ex_uts(), dhours(12), FUN=length, interior=TRUE)
#'
#' # Rolling sum
#' rolling_apply_specialized(ex_uts(), ddays(1), FUN=sum)
#' rolling_apply_specialized(ex_uts(), ddays(1), FUN=sum) - rolling_apply(ex_uts(), ddays(1), FUN=sum)
#'
#' # Rolling min/max
#' rolling_apply_specialized(ex_uts(), ddays(1), FUN=min)
#' rolling_apply_specialized(ex_uts(), ddays(1), FUN=max)
#'
#' # Rolling product
#' rolling_apply_specialized(ex_uts(), ddays(0.5), FUN=prod)
rolling_apply_specialized.uts <- function(x, width, FUN, align="right", interior=FALSE, ...)
{
# Extract the name of the function to be called
if (is.function(FUN)) {
if (identical(FUN, length))
FUN <- "length"
else if (identical(FUN, mean))
FUN <- "mean"
else if (identical(FUN, min))
FUN <- "min"
else if (identical(FUN, max))
FUN <- "max"
else if (identical(FUN, median))
FUN <- "median"
else if (identical(FUN, prod))
FUN <- "prod"
else if (identical(FUN, sd))
FUN <- "sd"
else if (identical(FUN, sum))
FUN <- "sum"
else if (identical(FUN, var))
FUN <- "var"
else {
FUN <- deparse(substitute(FUN))
if (length(FUN) > 1)
stop("Custom functions (FUN) are not supported")
}
}
# Select C function
if (FUN == "length")
C_fct <- "rolling_num_obs"
else if (FUN == "min")
C_fct <- "rolling_min"
else if (FUN == "max")
C_fct <- "rolling_max"
else if (FUN == "mean")
C_fct <- "rolling_mean"
else if (FUN == "median")
C_fct <- "rolling_median"
else if (FUN == "prod")
C_fct <- "rolling_product"
else if (FUN == "sd")
C_fct <- "rolling_sd"
else if (FUN == "sum")
C_fct <- "rolling_sum"
else if (FUN == "sum_stable")
C_fct <- "rolling_sum_stable"
else if (FUN == "var")
C_fct <- "rolling_var"
else
stop("This function does not have a specialized rolling_apply() implementation")
# Determine the window width before and after the current output time, depending on the window alignment
check_window_width(width)
if (align == "right") {
width_before <- width
width_after <- 0
} else if (align == "left") {
width_before <- 0
width_after <- width
} else if (align == "center") {
width_before <- width / 2
width_after <- width / 2
} else
    stop("'align' has to be either 'left', 'right', or 'center'")
# Call C function
out <- generic_C_interface(x, width_before=width_before, width_after=width_after, C_fct=C_fct)
# Replace NaN by NA in output to be consistent with generic rolling_apply()
out$values[is.nan(out$values)] <- NA
# Optionally, drop output times for which the corresponding time window is not completely inside the temporal support of x
if (interior)
out <- window(out, start=start(out) + width_before, end(out) - width_after)
out
}
#' Specialized Rolling Apply Available?
#'
#' Check whether \code{\link{rolling_apply_specialized.uts}} can be called for a given \code{\link{uts}} object with arguments \code{FUN} and \code{by}.
#'
#' @param x a \code{"uts"} object.
#' @param FUN see \code{\link{rolling_apply_specialized}}.
#' @param by see \code{\link{rolling_apply_specialized}}.
#'
#' @keywords internal
#' @examples
#' have_rolling_apply_specialized(ex_uts(), FUN=mean)
#' have_rolling_apply_specialized(ex_uts(), FUN="mean")
#' have_rolling_apply_specialized(ex_uts(), FUN=mean, by=ddays(1))
#' have_rolling_apply_specialized(uts(NA, Sys.time()), FUN=mean)
#'
#' FUN <- mean
#' have_rolling_apply_specialized(ex_uts(), FUN=FUN)
have_rolling_apply_specialized <- function(x, FUN, by=NULL)
{
# Extract the name of the function to be called
if (is.function(FUN)) {
if (identical(FUN, length))
FUN <- "length"
else if (identical(FUN, mean))
FUN <- "mean"
else if (identical(FUN, min))
FUN <- "min"
else if (identical(FUN, max))
FUN <- "max"
else if (identical(FUN, median))
FUN <- "median"
else if (identical(FUN, prod))
FUN <- "prod"
else if (identical(FUN, sd))
FUN <- "sd"
else if (identical(FUN, sum))
FUN <- "sum"
    else if (identical(FUN, var))
FUN <- "var"
else
FUN <- deparse(substitute(FUN))
}
# Determine if fast special purpose implementation is available
(length(FUN) == 1) &&
(FUN %in% c("length", "mean", "min", "max", "median", "prod", "sd", "sum", "sum_stable", "var")) &&
is.null(by) && (is.numeric(x$values)) && (!anyNA(x$values)) && (all(is.finite(x$values)))
}
# ===== File: /2015_Bombus_Survey/2015Survey.R (repo: samanthaannalger/AlgerProjects) =====
###########################################################################################
# Data Analysis for 2015 Bombus Virus Study
# Samantha Alger and P. Alexander Burnham
# July 10, 2017
# Edited by Alex on June 30, 2018
###########################################################################################
#Preliminaries:
# Clear memory of characters
ls()
rm(list=ls())
# Call Packages
library("RColorBrewer")
library("ggplot2")
library("dplyr")
library("plyr")
library("spdep")
library("lme4")
library("car")
library("ape")
library("MuMIn")
# Set Working Directory
# FOR SAM:
#setwd("~/AlgerProjects/2015_Bombus_Survey/CSV_Files")
# FOR ALEX
setwd("~/Documents/GitHub/AlgerProjects/2015_Bombus_Survey/CSV_Files")
# load in data
BombSurv <- read.csv("BombSurvNHBS.csv", header=TRUE, stringsAsFactors=FALSE)
BQCVrun <- read.csv("NegStrandSamplesRan.csv", header=TRUE, stringsAsFactors=FALSE)
flowers <- read.csv("plants2015DF.csv", header=TRUE, stringsAsFactors=FALSE)
ddply(flowers, c("target_name", "apiary_near_far"), summarise,
n = length(BINYprefilter),
mean = mean(BINYprefilter))
# plant virus prevalence data:
Plants <- read.csv("plants2015DF.csv",header=TRUE,sep=",",stringsAsFactors=FALSE)
# load site level data and merge pathogen data with GIS HB colony/apiary output:
SpatDat <- read.table("SpatDatBuffs.csv", header=TRUE,sep=",",stringsAsFactors=FALSE)
SpatDat <- dplyr::select(SpatDat, -elevation, -town, -apiary, -siteNotes, -apiaryNotes)
SurvData <- read.csv("MixedModelDF.csv", header=TRUE, sep = ",", stringsAsFactors=FALSE)
SpatialDat <- merge(SurvData, SpatDat, by = "site")
# merge data to create final APC data frame:
SpatDat <- dplyr::select(SpatDat, -lat, -long)
BombSurv <- merge(BombSurv, SpatDat, by = "site")
# remove unneeded columns from the DF
BombSurv <- dplyr::select(BombSurv, -X, -Ct_mean, -Ct_sd, -quantity_mean, -quantity_sd, -run, -date_processed, -dil.factor, -genome_copbee, -Ct_mean_hb, -ID, -ACT_genome_copbee, -City, -Name, -virusBINY_PreFilter, -siteNotes, -X)
# remove unwanted sites and bombus species
BombSurv<-BombSurv[!BombSurv$site==("PITH"),]
BombSurv<-BombSurv[!BombSurv$site==("STOW"),]
BombSurv<-BombSurv[!BombSurv$species==("Griseocollis"),]
BombSurv<-BombSurv[!BombSurv$species==("Sandersonii"),]
# create variable that bins apiaries by how many colonies are there
BombSurv$ColoniesPooled <- ifelse(BombSurv$sumColonies1 <= 0, "0", ifelse(BombSurv$sumColonies1 <= 20, "1-19","20+"))
###############################################################################################
################################ SPATIAL AUTOCORRELATION ######################################
###############################################################################################
# Format BombSurv data to test for spatial autocorrelation
BeeAbund <- read.table("BeeAbund.csv", header=TRUE, sep=",", stringsAsFactors=FALSE)
# create log virus data:
BombSurv$logVirus <- log(1+BombSurv$norm_genome_copbee)
BombSurv$logHB <- log(1+BombSurv$norm_genome_copbeeHB)
BombSurv <- merge(BombSurv, BeeAbund, by = "site")
BombSurv$HBdensRatio <- BombSurv$Density/((BombSurv$apis+0.0000000000000001)/10)
# two data frames for DWV and BQCV for Morans I
BQCV <- subset(BombSurv, target_name=="BQCV")
DWV <- subset(BombSurv, target_name=="DWV")
# create Plants dataframe:
Plants <- merge(Plants, BeeAbund, all.x=TRUE, all.y=FALSE)
Plants <- merge(Plants, SpatialDat, by=c("site","target_name"), all.x=TRUE, all.y=FALSE)
BombSurv$isHB <- ifelse(BombSurv$site=="TIRE" |
BombSurv$site=="CLERK" |
BombSurv$site=="NEK" |
BombSurv$site=="FLAN",
"noHB", "HB")
pos <- BombSurv[BombSurv$norm_genome_copbeeHB>=1,]
ddply(pos, c("target_name"), summarise,
n = length(norm_genome_copbeeHB),
max = max(norm_genome_copbeeHB),
min = min(norm_genome_copbeeHB),
maxLOG = max(log10(norm_genome_copbeeHB)),
minLOG = min(log10(norm_genome_copbeeHB)))
###################################################################################################
# CREATING MODELS TO TEST FOR SPATIAL AUTOCORRELATION
###################################################################################################
# create data frames to test spatial AC
SpatialDatBQCV <- subset(SpatialDat, target_name=="BQCV")
SpatialDatDWV <- subset(SpatialDat, target_name=="DWV")
#----------------------------------------------------------------------------------------------------
# BQCV PREV:
BQCVprev <- lm(data=SpatialDatBQCV, BombPrev ~ sumColonies1)
BQCVprevResid <- summary(BQCVprev)
BQCVprevResid$residual
#----------------------------------------------------------------------------------------------------
# DWV PREV
DWVprev <- lm(data=SpatialDatDWV, BombPrev ~ sumColonies1)
DWVprevResid <- summary(DWVprev)
DWVprevResid$residual
#----------------------------------------------------------------------------------------------------
# DWV LOAD
DWVload <- lm(data=SpatialDatDWV, BombusViralLoad ~ sumColonies1)
DWVloadResid <- summary(DWVload)
DWVloadResid$residual
#----------------------------------------------------------------------------------------------------
# BQCV LOAD
BQCVload <- lm(data=SpatialDatBQCV, BombusViralLoad ~ sumColonies1)
BQCVloadResid <- summary(BQCVload)
BQCVloadResid$residual
#----------------------------------------------------------------------------------------------------
# DWV HB LOAD
DWVhb <- lm(data=SpatialDatDWV, HBviralLoad ~ sumColonies1)
HBbqcvResid <- summary(DWVhb)
HBbqcvResid$residual
#----------------------------------------------------------------------------------------------------
# BQCV HB LOAD
BQCVhb <- lm(data=SpatialDatBQCV, HBviralLoad ~ sumColonies1)
HBdwvResid <- summary(BQCVhb)
HBdwvResid$residual
#----------------------------------------------------------------------------------------------------
# CREATING DISTANCE MATRICES FOR MORANS.I TEST:
#For DWV:
DWV.dists <- as.matrix(dist(cbind(SpatialDatDWV$long, SpatialDatDWV$lat)))
DWV.dists.inv <- 1/DWV.dists
diag(DWV.dists.inv) <- 0
#For BQCV:
BQ.dists <- as.matrix(dist(cbind(SpatialDatBQCV$long, SpatialDatBQCV$lat)))
BQ.dists.inv <- 1/BQ.dists
diag(BQ.dists.inv) <- 0
###################################################################################################
# TESTING FOR SPATIAL AUTOCORRELATION
###################################################################################################
# BQCV PREV:
Moran.I(BQCVprevResid$residuals, BQ.dists.inv) # YES spatial autocorrelation (clustered)
# DWV PREV:
Moran.I(DWVprevResid$residuals, DWV.dists.inv) # NO spatial autocorrelation
# BQCV LOAD:
Moran.I(BQCVloadResid$residual, BQ.dists.inv) # NO spatial autocorrelation
# DWV LOAD:
Moran.I(DWVloadResid$residual, DWV.dists.inv) # YES spatial autocorrelation (clustered)
# BQCV HB LOAD
Moran.I(HBbqcvResid$residual, BQ.dists.inv) # NO spatial autocorrelation
# DWV HB LOAD:
Moran.I(HBdwvResid$residual, DWV.dists.inv) # NO spatial autocorrelation
# END MODELS
###############################################################################################
################## MODEL SELECTION FOR COLONY AND APIARY VARIABLES FROM GIS ###################
###############################################################################################
############################################################
# function name: AICfinderPrev
# description: finds p-value and AIC for a glmer model
# parameters:
# data = data frame, yvar and xvar
# returns a list (requires library(lme4))
############################################################
AICfinderPrev <- function(X=Xvar, Y="virusBINY", data=DWV){
data$y <- data[,Y]
data$x <- data[,X]
Fullmod <- glmer(data=data, formula = y~x + (1|site/species),
family = binomial(link = "logit"))
x <- summary(Fullmod)
return(list(x$AICtab[1], paste("P=", x$coefficients[2,4])))
}
###############################################################
# END OF FUNCTION
###############################################################
# create vector of explanatory variables to test:
Xvar <- c("sumApiaries800", "sumColonies800","sumApiaries1", "sumColonies1","sumApiaries2", "sumColonies2","sumApiaries3", "sumColonies3","sumApiaries4", "sumColonies4","sumApiaries5", "sumColonies5")
# apply function to run through every iteration of DWV prev:
sapply(X=Xvar, FUN=AICfinderPrev, data=DWV)
# apply function to run through every iteration of BQCV prev:
sapply(X=Xvar, FUN=AICfinderPrev, data=BQCV)
###########################################################################
# function name: AICfinderLoad
# description:finds p val and AIC for glmer model
# parameters:
# data = data frame, yvar and xvar
# returns a list (requires library(lme4))
###########################################################################
AICfinderLoad <- function(X=Xvar, Y="logVirus", data=DWV){
data$y <- data[,Y]
data$x <- data[,X]
Fullmod <- lmer(data=data, formula = y~x + (1|site/species))
z<-Anova(Fullmod)
return(list(AIC(Fullmod), paste("P=", z$`Pr(>Chisq)`)))
}
###########################################################################
# END OF FUNCTION
###########################################################################
# apply function to run through every iteration of DWV load:
sapply(X=Xvar, FUN=AICfinderLoad, data=DWV)
# apply function to run through every iteration of BQCV load:
sapply(X=Xvar, FUN=AICfinderLoad, data=BQCV)
# DECIDED TO USE "sumColonies1" as the predictor variable based on these data (AIC and P values)
##################################################################################################
##################################################################################################
######################################### GRAPHICS!!!!! ##########################################
##################################################################################################
##################################################################################################
###################################################################################################
##################### CREATING FINAL PUBLICATION GRAPHICS FOR BOMBUS VIRUSES ######################
###################################################################################################
# remove unwanted target:
BombSurvNoAIPV<-BombSurv[!BombSurv$target_name==("IAPV"),]
###################################################################################################
# Load:
my_y_title <- expression(paste("# ", italic("Apis"), " colonies within 1km radius"))
# remove 0s
BombSurvNoAIPVno0<-BombSurvNoAIPV[!BombSurvNoAIPV$logVirus==0,]
#Create plot in ggplot
plot <- ggplot(data = BombSurvNoAIPVno0,
aes(x = ColoniesPooled,
y = logVirus,
fill = target_name)
) + geom_boxplot(color="black") + coord_cartesian(ylim = c(5, 20)) + labs(x = my_y_title, y = "log(genome copies/bee)", fill="Virus:")
# add a theme
plot + theme_bw(base_size = 17) + scale_fill_manual(values=c("white", "gray40"))
###################################################################################################
# Prevalence
VirusSum <- ddply(BombSurvNoAIPV, c("target_name", "ColoniesPooled"), summarise,
n = length(virusBINY),
mean = mean(virusBINY),
sd = sqrt(((mean(virusBINY))*(1-mean(virusBINY)))/n))
#Create plot in ggplot
plot1 <- ggplot(data = VirusSum,
aes(x = ColoniesPooled,
y = mean,
shape = target_name)
) + geom_point(size=4) + coord_cartesian(ylim = c(0, 1)) + labs(x = my_y_title, y = "% prevalence", shape="Virus:") + scale_y_continuous(labels = scales::percent) + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd, width = 0.2))
# add a theme
VirusSum1 <- ddply(BombSurvNoAIPV, c("target_name", "apiary_near_far"), summarise,
n = length(virusBINY),
mean = mean(virusBINY),
sd = sqrt(((mean(virusBINY))*(1-mean(virusBINY)))/n))
VirusSum1$apiary_near_far <- as.character(VirusSum1$apiary_near_far)
colors <- c("white", "grey25")
#Create a bar graph for viruses by bombus species (aes= aesthetics):
plot1 <- ggplot(VirusSum1, aes(x=target_name, y=mean, fill=apiary_near_far)) +
geom_bar(stat="identity", color="black",
position=position_dodge()) + labs(x="Virus", y = "% Prevalence") + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd, width = 0.2),position=position_dodge(.9))
plot1 + theme_bw(base_size = 23) + scale_fill_manual(values=colors, name="Site Type:", labels=c("Apiary Absent", "Apiary Present")) + theme(legend.position=c(.8, .85)) + coord_cartesian(ylim = c(0, 1)) + scale_y_continuous(labels = scales::percent) + annotate(geom = "text", x = 1, y = .98, label = "*",cex = 12) + annotate(geom = "text", x = 2, y = .25, label = "*",cex = 12) + annotate(geom = "text", x = 0.75, y = .75, label = "N = 219",cex = 7) + annotate(geom = "text", x = 1.75, y = .13, label = "N = 219",cex = 7) + annotate(geom = "text", x = 1.2, y = .97, label = "N = 116",cex = 7) + annotate(geom = "text", x = 2.22, y = .23, label = "N = 116",cex = 7)
########################################
BombSurvDWV<- BombSurvNoAIPV[ which(BombSurvNoAIPV$target_name=='DWV'), ]
table(BombSurvDWV$virusBINY, BombSurvDWV$species)
BombSurvBQCV <- BombSurvNoAIPV[ which(BombSurvNoAIPV$target_name=='BQCV'), ]
table(BombSurvBQCV$virusBINY)
BombSurvBQCV2 <- BombSurvBQCV[ which(BombSurvBQCV$apiary == 'N'), ]
table(BombSurvBQCV2$virusBINY)
# (template snippet from R documentation, not used in this analysis; `mydata` is undefined here)
# newdata <- mydata[ which(mydata$gender=='F'
#                          & mydata$age > 65), ]
###################################################################################################
######################## CREATING PUBLICATION GRAPHICS FOR PLANT PREV #############################
###################################################################################################
# create a binary varaible for apiary or no apiary
Plants$apiary <- ifelse(Plants$apiary_near_far == 0, "no apiary","apiary")
Plants$HBlowHigh <- ifelse(Plants$apis <= 4, "Low HB","High HB")
#ddply summarize:
fieldPlantsSum <- ddply(Plants, c("target_name", "apiary"), summarise,
n = length(BINYprefilter),
mean = mean(BINYprefilter, na.rm=TRUE),
sd = sqrt(((mean(BINYprefilter))*(1-mean(BINYprefilter)))/n))
# remove 0 (make NA) for values so they dont plot error bars
fieldPlantsSum$sd[fieldPlantsSum$sd==0] <- NA
fieldPlantsSum$mean[fieldPlantsSum$mean==0] <- NA
#creating the figure
#choosing color pallet
colors <- c("white", "grey30")
plot1 <- ggplot(fieldPlantsSum, aes(x=apiary, y=mean, fill=target_name)) +
geom_bar(stat="identity", color="black",
position=position_dodge()) + labs(y="% plants with virus detected", x="Site Type") + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd, width = 0.2),position=position_dodge(.9))
plot1 + theme_bw(base_size = 23) + scale_fill_manual(values=colors, name="Virus", labels=c("BQCV", "DWV")) + theme(legend.position=c(.86, .8),legend.background = element_rect(color = "black", fill = "white", size = .4, linetype = "solid")) + scale_y_continuous(labels = function(x) paste0(x*100, "%"), limits= c(0,.5)) + annotate(geom = "text", x = 1, y = .43, label = "N = 15",cex = 8) + annotate(geom = "text", x = 2, y = .25, label = "N = 21",cex = 8)
###################################################################################################
############################## CREATING PUBLICATION GRAPHICS FOR HB ###############################
###################################################################################################
# histogram showing apis DWV load (bimodal)
# summary of viral load for by target and site
CopDist <- ddply(BombSurv, c("target_name", "site"), summarise,
n = length(norm_genome_copbeeHB),
mean = mean(norm_genome_copbeeHB, na.rm=TRUE),
sd = sd(norm_genome_copbeeHB, na.rm=TRUE),
se = sd / sqrt(n))
# remove BQCV and IAPV:
CopDist<-CopDist[!CopDist$target_name==("BQCV"),]
CopDist<-CopDist[!CopDist$target_name==("IAPV"),]
my_y_title <- expression(paste(italic("Apis"), " DWV log(viral load)"))
ggplot(data=CopDist, aes(log(1 + mean))) +
geom_histogram(breaks=seq(5, 25, by = 1),
col="black",
fill="grey30") +
labs(x=my_y_title, y="Frequency") + theme_bw(base_size=23)
################################################################################################
# bar plot showing DWV level in apis by DWV prev in bombus
my_y_title <- expression(paste("Level of DWV in " , italic("Apis")))
my_x_title <- expression(paste("% Prevalence in " , italic("Bombus")))
lab <- expression(paste("No " , italic("Apis"), " caught"))
lb <- c("High","Low",lab)
HBSiteSum <- ddply(DWV, c("HBSiteBin", "target_name"), summarise,
n = length(virusBINY),
mean = mean(virusBINY, na.rm=TRUE),
sd = sqrt(((mean(virusBINY))*(1-mean(virusBINY)))/n))
# remove 0 (make NA) for values so they dont plot error bars
HBSiteSum$sd[HBSiteSum$sd==0] <- NA
HBSiteSum$mean[HBSiteSum$mean==0] <- NA
colors <- c("grey30", "white", "white")
plot1 <- ggplot(HBSiteSum, aes(x=HBSiteBin, y=mean, fill=colors)) +
geom_bar(stat="identity", color = "black") + labs(x=my_y_title, y = my_x_title) + scale_x_discrete(labels=lb)
plot1 + theme_bw(base_size = 23) + scale_fill_manual(values=colors) + coord_cartesian(ylim = c(0, 0.25)) + scale_y_continuous(labels = function(x) paste0(x*100, "%"), limits= c(0,.5)) + theme(legend.position=c(3, 3)) + geom_errorbar(aes(ymin = mean - sd, ymax = mean + sd, width = 0.2),position=position_dodge(.9)) + annotate(geom = "text", x = 1, y = .22, label = "N = 121",cex = 8) + annotate(geom = "text", x = 2, y = .12, label = "N = 150",cex = 8) + annotate(geom = "text", x = 3, y = .05, label = "N = 64",cex = 8)
##################################################################################################
##################################################################################################
######################################### MODELS!!!!! ############################################
##################################################################################################
##################################################################################################
###################################################################################################
# CREATING MODELS FOR PLANT PREV:
###################################################################################################
#Check to make sure that plant density did not differ based on apiary near/far:
n <- aov(Plants$apiary_near_far~Plants$Density)
summary(n)
# p = 0.307
# Full, Null and Reduced Models
PlantsFull <- glmer(data=Plants, formula = BINYprefilter ~ apis + bombus + target_name + Density + (1|site), family = binomial(link = "logit"))
PlantsNull <- glmer(data=Plants, formula = BINYprefilter ~ 1 + (1|site), family = binomial(link = "logit"))
PlantsApis <- glmer(data=Plants, formula = BINYprefilter ~ target_name + bombus + Density + (1|site), family = binomial(link = "logit"))
PlantsTarg <- glmer(data=Plants, formula = BINYprefilter ~ apis + bombus + Density + (1|site), family = binomial(link = "logit"), control=glmerControl(optimizer="bobyqa"))
PlantsBombus <- glmer(data=Plants, formula = BINYprefilter ~ apis + target_name + Density + (1|site), family = binomial(link = "logit"))
PlantsDensity <- glmer(data=Plants, formula = BINYprefilter ~ bombus + apis + target_name + (1|site), family = binomial(link = "logit"))
# likelihood ratio tests between models for significance
anova(PlantsFull, PlantsNull, test="LRT") # full model versus the null model
anova(PlantsFull, PlantsApis, test="LRT")
anova(PlantsFull,PlantsTarg, test="LRT")
anova(PlantsFull, PlantsBombus, test="LRT")
anova(PlantsFull, PlantsDensity, test="LRT")
# To view effects and std. errors of each variable:
summary(PlantsFull)
###################################################################################################
# CREATING FULL MODELS FOR HB:
###################################################################################################
# rename NAs "no apis caught"
DWV$HBSiteBin[is.na(DWV$HBSiteBin)] <- "No Apis Caught"
# Full, Null and Reduced Models
ApisFull <- glmer(data=DWV, formula = virusBINY ~ HBSiteBin + Density + apis + (1|site), family = binomial(link = "logit"))
ApisNull <- glmer(data=DWV, formula = virusBINY ~ 1 + (1|site), family = binomial(link = "logit"))
ApisNoHB <- glmer(data=DWV, formula = virusBINY ~ Density + apis + (1|site), family = binomial(link = "logit"))
ApisNoApis <- glmer(data=DWV, formula = virusBINY ~ HBSiteBin + Density + (1|site), family = binomial(link = "logit"))
ApisNoDens <- glmer(data=DWV, formula = virusBINY ~ HBSiteBin + apis + (1|site), family = binomial(link = "logit"))
# likelihood ratio tests between models for significance
anova(ApisFull, ApisNull, test="LRT")
anova(ApisFull, ApisNoHB, test="LRT")
anova(ApisFull, ApisNoApis, test="LRT")
anova(ApisFull, ApisNoDens, test="LRT")
# To view effects and std. errors of each variable:
summary(ApisFull)
###################################################################################################
# CREATING FULL MODELS FOR BOMBUS VIRUSES:
###################################################################################################
###########################################################################
# function name: TheExtractor
# description: extracts log-likelihood test stats and p-values for null vs full
# and the reduced models
# parameters:
# Full = full model (glmer or lmer)
# Null = null model
# Density = density removed
# Colonies = colonies removed
# Species = species removed
###########################################################################
TheExtractor <- function(Full, Null, Colonies, Density, Species){
sumFull <- summary(Full)
modelFit <- anova(Full, Null, test="LRT")
Cols <- anova(Full, Colonies, test="LRT")
Dens <- anova(Full, Density, test="LRT")
Spec <- anova(Full, Species, test="LRT")
ModFit <- list("Model Fit P"=modelFit$`Pr(>Chisq)`[2], "Model Fit Df"=modelFit$`Chi Df`[2], "Model Fit Chi2"=modelFit$Chisq[2])
ColFit <- list("Colony Fit P"=Cols$`Pr(>Chisq)`[2],"Colony Fit Df"=Cols$`Chi Df`[2],"Colony Fit Chi2"=Cols$Chisq[2])
DensFit <- list("Density Fit P"=Dens$`Pr(>Chisq)`[2],"Density Fit Df"=Dens$`Chi Df`[2],"Density Fit Chi2"=Dens$Chisq[2])
SpecFit <- list("Species Fit P"=Spec$`Pr(>Chisq)`[2],"Species Fit Df"=Spec$`Chi Df`[2],"Species Fit Chi2"=Spec$Chisq[2])
return(list(sumFull$coefficients[1:4,1:2],ModFit, ColFit, DensFit, SpecFit))
}
###########################################################################
# END OF FUNCTION
###########################################################################
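# Note on the returned object: the list holds (1) the estimates and standard errors
# for the first four fixed-effect rows of the full model, followed by chi-square
# statistics, degrees of freedom, and p-values for the full-vs-null comparison and
# for each single-term deletion (colonies/apiary, density, species).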
#####################################################################################
# DWV PREV ##########################################################################
#####################################################################################
# Full, Null and Reduced Models
DWVprevModFull <- glmer(data=DWV, formula = virusBINY~apiary_near_far + Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVprevModNull <- glmer(data=DWV, formula = virusBINY~1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVprevModnoCols <- glmer(data=DWV, formula = virusBINY~ Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVprevModnoDens <- glmer(data=DWV, formula = virusBINY~apiary_near_far + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVprevModnoSpec <- glmer(data=DWV, formula = virusBINY~apiary_near_far + Density + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
# run the function to get results of models
TheExtractor(Full=DWVprevModFull,
Null=DWVprevModNull,
Colonies=DWVprevModnoCols,
Density=DWVprevModnoDens,
Species = DWVprevModnoSpec)
#####################################################################################
# DWV LOAD ##########################################################################
#####################################################################################
# remove 0s to look at viral load of infected
DWVno0 <- DWV[!DWV$virusBINY==0,]
# Full, Null and Reduced Models
DWVloadModFull <- lmer(data=DWVno0, formula = logVirus ~ apiary_near_far + Density + species + (1|site) + (1|species) + (1|lat) + (1|long))
DWVloadModNull <- lmer(data=DWVno0, formula = logVirus ~ 1 + (1|site) + (1|lat) + (1|long))
DWVloadModnoCols <- lmer(data=DWVno0, formula = logVirus ~ Density + species + (1|site) + (1|species) + (1|lat) + (1|long))
DWVloadModnoDens <- lmer(data=DWVno0, formula = logVirus ~ apiary_near_far + species + (1|site) + (1|species) + (1|lat) + (1|long))
DWVloadModnoSpec <- lmer(data=DWVno0, formula = logVirus ~ apiary_near_far + Density + (1|site) + (1|species) + (1|lat) + (1|long))
# run the function to get results of models
TheExtractor(Full=DWVloadModFull,
Null=DWVloadModNull,
Colonies=DWVloadModnoCols,
Density=DWVloadModnoDens,
Species = DWVloadModnoSpec )
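# Added diagnostic sketch (assumes the lmer fit above): quick checks of residual
# normality and homoscedasticity for the log-transformed DWV load model.
qqnorm(resid(DWVloadModFull)); qqline(resid(DWVloadModFull))
plot(fitted(DWVloadModFull), resid(DWVloadModFull),
     xlab = "Fitted values", ylab = "Residuals")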
#####################################################################################
# BQCV PREV #########################################################################
#####################################################################################
# Full, Null and Reduced Models
BQCVprevModFull <- glmer(data=BQCV, formula = virusBINY~apiary_near_far + Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModNull <- glmer(data=BQCV, formula = virusBINY~1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModnoCols <- glmer(data=BQCV, formula = virusBINY~Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModnoDens <- glmer(data=BQCV, formula = virusBINY~apiary_near_far + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModnoSpec <- glmer(data=BQCV, formula = virusBINY~apiary_near_far + Density + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
# run the function to get results of models
TheExtractor(Full=BQCVprevModFull,
Null=BQCVprevModNull,
Colonies=BQCVprevModnoCols,
Density=BQCVprevModnoDens,
Species = BQCVprevModnoSpec)
#####################################################################################
# BQCV LOAD #########################################################################
#####################################################################################
# remove 0s to look at viral load of infected
BQCVno0 <- BQCV[!BQCV$virusBINY==0,]
# Full, Null and Reduced Models
BQCVloadModFull <- lmer(data=BQCVno0, formula = logVirus ~ apiary_near_far + Density + species + (1|site) + (1|lat) + (1|long))
BQCVloadModNull <- lmer(data=BQCVno0, formula = logVirus ~ 1 + (1|site) + (1|lat) + (1|long))
BQCVloadModnoCols <- lmer(data=BQCVno0, formula = logVirus ~ Density + species + (1|lat) + (1|long) + (1|site))
BQCVloadModnoDens <- lmer(data=BQCVno0, formula = logVirus ~ apiary_near_far + species + (1|lat) + (1|long) + (1|site))
BQCVloadModnoSpec <- lmer(data=BQCVno0, formula = logVirus ~ apiary_near_far + Density + (1|site) + (1|lat) + (1|long))
# run the function to get results of models
TheExtractor(Full=BQCVloadModFull,
Null=BQCVloadModNull,
Colonies=BQCVloadModnoCols,
Density=BQCVloadModnoDens,
Species = BQCVloadModnoSpec)
###############################################################################################
# REGRESSION ANALYSIS #########################################################################
###############################################################################################
# regressions run on sum of colonies, excluding sites that do not have any colonies:
# DWV load by number of colonies
DWVno0just_HB <- DWVno0[!DWVno0$sumColonies1==0,]
DWVloadModFullHB <- lmer(data=DWVno0just_HB, formula = logVirus ~ sumColonies1 + Density + species + (1|site) + (1|lat) + (1|long))
DWVloadSumColonies <- lmer(data=DWVno0just_HB, formula = logVirus ~ Density + species + (1|site) + (1|lat) + (1|long))
DWVloadDensity <- lmer(data=DWVno0just_HB, formula = logVirus ~ sumColonies1 + species + (1|site) + (1|lat) + (1|long))
DWVloadSpecies <- lmer(data=DWVno0just_HB, formula = logVirus ~ sumColonies1 + Density + (1|site) + (1|lat) + (1|long))
DWVloadModNullHB <- lmer(data=DWVno0just_HB, formula = logVirus ~ 1 + (1|site) + (1|lat) + (1|long))
anova(DWVloadModFullHB, DWVloadModNullHB, test="LRT")
anova(DWVloadModFullHB, DWVloadSumColonies, test="LRT")
anova(DWVloadModFullHB, DWVloadDensity, test="LRT")
anova(DWVloadModFullHB, DWVloadSpecies, test="LRT")
summary(DWVloadModFullHB)
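# Added sketch: estimated change in log DWV load per additional colony within 1 km,
# with an approximate Wald confidence interval (assumes the lmer fit above).
fixef(DWVloadModFullHB)["sumColonies1"]
confint(DWVloadModFullHB, method = "Wald")["sumColonies1", ]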
# BQCV load by number of colonies
BQCVno0just_HB <- BQCVno0[!BQCVno0$sumColonies1==0,]
BQCVloadModFullHB <- lmer(data=BQCVno0just_HB, formula = logVirus ~ sumColonies1 + Density + species + (1|site) + (1|lat) + (1|long))
BQCVloadSumColonies <- lmer(data=BQCVno0just_HB, formula = logVirus ~ Density + species + (1|site) + (1|lat) + (1|long))
BQCVloadDensity <- lmer(data=BQCVno0just_HB, formula = logVirus ~ sumColonies1 + species + (1|site) + (1|lat) + (1|long))
BQCVloadSpecies <- lmer(data=BQCVno0just_HB, formula = logVirus ~ sumColonies1 + Density + (1|site) + (1|lat) + (1|long))
BQCVloadModNullHB <- lmer(data=BQCVno0just_HB, formula = logVirus ~ 1 + (1|site) + (1|lat) + (1|long))
anova(BQCVloadModFullHB, BQCVloadModNullHB, test="LRT")
anova(BQCVloadModFullHB, BQCVloadSumColonies, test="LRT")
anova(BQCVloadModFullHB, BQCVloadDensity, test="LRT")
anova(BQCVloadModFullHB, BQCVloadSpecies, test="LRT")
summary(BQCVloadModFullHB)
# DWV prev by number of colonies
DWVjust_HB <- DWV[!DWV$sumColonies1==0,]
DWVprevModFullHB <- glmer(data=DWVjust_HB, formula = virusBINY~ Density + species + sumColonies1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVprevSumColonies <- glmer(data=DWVjust_HB, formula = virusBINY~ Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVprevDensity <- glmer(data=DWVjust_HB, formula = virusBINY~ species + sumColonies1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVprevSpecies <- glmer(data=DWVjust_HB, formula = virusBINY~ Density + sumColonies1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVprevModNullHB <- glmer(data=DWVjust_HB, formula = virusBINY~ 1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
anova(DWVprevModFullHB, DWVprevModNullHB, test="LRT")
anova(DWVprevModFullHB, DWVprevSumColonies, test="LRT")
anova(DWVprevModFullHB, DWVprevDensity, test="LRT")
anova(DWVprevModFullHB, DWVprevSpecies, test="LRT")
summary(DWVprevModFullHB)
# BQCV prev by number of colonies
BQCVjust_HB <- BQCV[!BQCV$sumColonies1==0,]
BQCVprevModFullHB <- glmer(data=BQCVjust_HB, formula = virusBINY~ sumColonies1 + Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevSumColonies <- glmer(data=BQCVjust_HB, formula = virusBINY~ Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevDensity <- glmer(data=BQCVjust_HB, formula = virusBINY~ sumColonies1 + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevSpecies <- glmer(data=BQCVjust_HB, formula = virusBINY~ sumColonies1 + Density + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModNullHB <- glmer(data=BQCVjust_HB, formula = virusBINY~ 1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
anova(BQCVprevModFullHB, BQCVprevModNullHB, test="LRT")
anova(BQCVprevModFullHB, BQCVprevSumColonies, test="LRT")
anova(BQCVprevModFullHB, BQCVprevDensity, test="LRT")
anova(BQCVprevModFullHB, BQCVprevSpecies, test="LRT")
summary(BQCVprevModFullHB)
###############################################################################################
# SCATTER PLOTS FOR SUM COLS VS PATHOGENS #####################################################
###############################################################################################
# load in data
spatDat <- read.csv("spatialMerge.csv", header=TRUE, stringsAsFactors=FALSE)
spatDat$logViralLoad <- log(spatDat$BombusViralLoad + 1)
# split up my data frame:
splitSpat <- split(spatDat, spatDat$target_name)
spatDWV <- splitSpat$DWV
spatBQCV <- splitSpat$BQCV
# looking at floral density by presence or absence of apiaries (NOT SIG)
x <- aov(data=spatDat, Density~apiary_near_far)
summary(x)
# DWV load:
ggplot(spatDWV, aes(x=sumColonies1, y=logViralLoad)) +
geom_point(size=4) + theme_bw(base_size = 23) + labs(x="# colonies in 1km", y = "DWV Bombus Viral Load") + coord_cartesian(ylim = c(0, 17))
# BQCV load:
ggplot(spatBQCV, aes(x=sumColonies1, y=logViralLoad)) +
geom_point(size=4) + theme_bw(base_size = 23) + labs(x="# colonies in 1km", y = "BQCV Bombus Viral Load") + coord_cartesian(ylim = c(10, 20))
# DWV prev:
ggplot(spatDWV, aes(x=sumColonies1, y=BombPrev)) +
geom_point(size=4) + theme_bw(base_size = 23) + labs(x="# colonies in 1km", y = "DWV Bombus Viral Prevalence") + coord_cartesian(ylim = c(0, 1))
# BQCV prev:
ggplot(spatBQCV, aes(x=sumColonies1, y=BombPrev)) +
geom_point(size=4) + theme_bw(base_size = 23) + labs(x="# colonies in 1km", y = "BQCV Bombus Viral Prevalence") + coord_cartesian(ylim = c(0, 1))
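# Optional illustrative addition: overlay a simple linear trend on the load
# scatterplots as a visual aid only (it ignores the random-effect structure used
# in the models above).
ggplot(spatDWV, aes(x=sumColonies1, y=logViralLoad)) +
  geom_point(size=4) + geom_smooth(method = "lm", se = TRUE) +
  theme_bw(base_size = 23) + labs(x="# colonies in 1km", y = "DWV Bombus Viral Load")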
###############################################################################################
# Full, Null and Reduced Models
DWVFullAbund <- glmer(data=DWV, formula = virusBINY ~ apis + Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVNullAbund <- glmer(data=DWV, formula = virusBINY ~ 1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVFullAbundnoApis <- glmer(data=DWV, formula = virusBINY ~ Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVFullAbundnoDensity <- glmer(data=DWV, formula = virusBINY ~ apis +species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
DWVFullAbundnoSpecies <- glmer(data=DWV, formula = virusBINY ~ apis + Density + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
# run the function to get results of models
TheExtractor(Full=DWVFullAbund,
Null=DWVNullAbund,
Colonies=DWVFullAbundnoApis,
Density=DWVFullAbundnoDensity,
Species = DWVFullAbundnoSpecies)
###############################################################################################
###############################################################################################
# Full, Null and Reduced Models
DWVloadModFull <- lmer(data=DWVno0, formula = logVirus ~ apis + Density + species + (1|site) + (1|species) + (1|lat) + (1|long))
DWVloadModNull <- lmer(data=DWVno0, formula = logVirus ~ 1 + (1|site) + (1|species) + (1|lat) + (1|long))
DWVloadModFullnoApis <- lmer(data=DWVno0, formula = logVirus ~ Density + species + (1|site) + (1|species) + (1|lat) + (1|long))
DWVloadModFullnoDensity <- lmer(data=DWVno0, formula = logVirus ~ apis + species + (1|site) + (1|species) + (1|lat) + (1|long))
DWVloadModFullnoSpecies <- lmer(data=DWVno0, formula = logVirus ~ apis + Density + (1|site) + (1|species) + (1|lat) + (1|long))
# run the function to get results of models
TheExtractor(Full=DWVloadModFull,
Null=DWVloadModNull,
Colonies=DWVloadModFullnoApis,
Density=DWVloadModFullnoDensity,
Species = DWVloadModFullnoSpecies)
###############################################################################################
###############################################################################################
# Full, Null and Reduced Models
BQCVprevModFull <- glmer(data=BQCV, formula = virusBINY~ apis + Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModNull <- glmer(data=BQCV, formula = virusBINY~ 1 + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModnoApis <- glmer(data=BQCV, formula = virusBINY~ Density + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModnoDens <- glmer(data=BQCV, formula = virusBINY~ apis + species + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
BQCVprevModFullnoSpp <- glmer(data=BQCV, formula = virusBINY~ apis + Density + (1|site) + (1|lat) + (1|long), family = binomial(link = "logit"))
# run the function to get results of models
TheExtractor(Full=BQCVprevModFull,
Null=BQCVprevModNull,
Colonies=BQCVprevModnoApis,
Density=BQCVprevModnoDens,
Species =BQCVprevModFullnoSpp)
###############################################################################################
###############################################################################################
# Full, Null and Reduced Models
BQCVloadModFull <- lmer(data=BQCVno0, formula = logVirus ~ apis + Density + species + (1|site) + (1|lat) + (1|long))
BQCVloadNull <- lmer(data=BQCVno0, formula = logVirus ~ 1 + (1|site) + (1|lat) + (1|long))
BQCVloadModnoApis <- lmer(data=BQCVno0, formula = logVirus ~ Density + species + (1|site) + (1|lat) + (1|long))
BQCVloadModnoDens <- lmer(data=BQCVno0, formula = logVirus ~ apis + species + (1|site) + (1|lat) + (1|long))
BQCVloadModnoSpp <- lmer(data=BQCVno0, formula = logVirus ~ apis + Density + (1|site) + (1|lat) + (1|long))
# run the function to get results of models
TheExtractor(Full=BQCVloadModFull,
Null=BQCVloadNull,
Colonies=BQCVloadModnoApis,
Density=BQCVloadModnoDens,
Species = BQCVloadModnoSpp)
###############################################################################################
# Build a per-site table of Apis and Bombus abundance by site type
x <- dplyr::select(BombSurv, site, apis, bombus, apiary_near_far)
deduped.data <- unique(x)
library(reshape2)
deduped.data <- melt(deduped.data, id.vars = c("site", "apiary_near_far"))
ap <- expression(italic("Apis"))
bo <- expression(italic("Bombus"))
deduped.data$apiary_near_far <- ifelse(deduped.data$apiary_near_far==1, "Apiary Present", "Apiary Absent")
#Create plot in ggplot
plot <- ggplot(data = deduped.data,
               aes(x = apiary_near_far, y = value, fill = variable)) +
  geom_boxplot(color = "black") +
  geom_point(aes(fill = variable), size = 7, shape = 21,
             position = position_jitterdodge()) +
  coord_cartesian(ylim = c(1, 65)) +
  labs(x = "Site Type", y = "Bee Abundance") +
  theme_bw(base_size = 20) +
  scale_fill_manual(name = NULL, values = c("white", "gray40"), labels = c(ap, bo)) +
  theme(legend.position = c(.15, .85))
plot
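# Added sketch: save the figure to disk (the file name here is illustrative).
ggsave("bee_abundance_by_site_type.pdf", plot, width = 8, height = 6)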