ID | Comment | Code | Label | Source | File |
|---|---|---|---|---|---|
701 | Clean up dataframe and create a grouping variable 50m or 200m based on translocation distance | trajectory.df <- trajectory.df %>% ungroup %>% mutate(trans_group = ifelse(dist.y > 100, "200m", "50m")) %>% dplyr::select(id, sex, trans_group, trans_dist = dist.y, dt = dt.x, x_utm = x_utm.x, y_utm = y_utm.x, x_new, y_new, dist = dist.x, time_lag_min) | Data Variable | https://osf.io/3bpn6/ | os_homing_dataproc.R |
702 | Barplot of homing success with percents as title 50m | bar_50m <- homing_variables %>% filter(trans_group == "50m") %>% ggplot(aes(x = sex, fill = sex, alpha = as.factor(homing_bin))) + geom_bar(position = "fill", color = "black", size = 2) + scale_fill_manual(values = c("#F8766D", "#00BFC4")) + coord_flip() + theme(legend.position="none", aspect.ratio = 0.4, axis.ticks.le... | Visualization | https://osf.io/3bpn6/ | os_homing_dataproc.R |
703 | Barplot of homing success with percents as title 200m | bar_200m <- homing_variables %>% filter(trans_group == "200m") %>% ggplot(aes(x = sex, fill = sex, alpha = as.factor(homing_bin))) + geom_bar(position = "fill", color = "black", size = 2) + scale_fill_manual(values = c("#F8766D", "#00BFC4")) + coord_flip() + theme(legend.position="none", aspect.ratio = 0.4, axis.ticks.... | Visualization | https://osf.io/3bpn6/ | os_homing_dataproc.R |
704 | 2 Factor Model, correlated factors | model_2f <- 'f_a =~ parcel_a_1+parcel_a_2+parcel_a_3 # Agency f_c =~ parcel_c_1+parcel_c_2+parcel_c_3 # Communion ' fit.model_2f <- cfa(model_2f, data=data, std.lv=T) summary(fit.model_2f, fit.measures=TRUE, standardized = TRUE) # Robust model: CFI = .99, RMSEA = .02 | Statistical Modeling | https://osf.io/6579b/ | 02_Main_Analyses.R |
705 | Factor analysis with 3 factors | factors <- fa(items, nfactors = 3) | Statistical Modeling | https://osf.io/74qnu/ | Script_Predicting_Social_Skill_Expression.R |
706 | binomial logistic regression testing for effect of condition | model1 <- glm(cbind(PredictTotal, PredictTrials-PredictTotal) ~ Condition, data = study1, family=binomial) summary(model1) Anova(model1, type="III", test="Wald") | Statistical Modeling | https://osf.io/4kmdv/ | analysis code.R |
707 | binomial logistic regression testing for effects of behavior and context | model3 <- glm(cbind(PredictMatch, NumQs-PredictMatch) ~ Behavior + Context + Behavior*Context, data = study2, family=binomial) Anova(model3, type="III", test="Wald") | Statistical Modeling | https://osf.io/4kmdv/ | analysis code.R |
708 | Function that creates a TRUE/FALSE matrix of nonbiological father availability from two numbers recorded in the data, FROM and TILL. The function does this by evaluating the presence of the caretaker using < and > for each interval in question. | giveage<-function(data){ FROM<-data$Grew_up_from_nonbiol TILL<-data$Grew_up_till_nonbiol ages<-data.frame(x=rep(NA,nrow(data))) for(i in 1:15){ j<-i-1 text<-paste("grew",j,i,"<-TILL>",j,"&FROM<",i,sep="") eval(parse(text=text)) text<-paste("ages<-cbind(ages,grew",j,i,")",sep="") eval(parse(text=text)) } ages<-ages[,-1]... | Data Variable | https://osf.io/greqt/ | functions1.R |
709 | Function that calculates differences between nonbiological fathers and partners in both groups (Nonbiological father currently present, nonbiological father currently absent) for each year in the analysis. | deltas<-function(diff,ages){ deltaT<-NA deltaF<-NA for(i in 1:15){ deltaT[i]<-mean(diff[ages[i]==T],na.rm=T) deltaF[i]<-mean(diff[ages[i]==F],na.rm=T) } return(rbind(deltaT,deltaF)) } | Data Variable | https://osf.io/greqt/ | functions1.R |
710 | Function that repeats the last and first item of a vector; useful if we want to draw plots and CIs all the way to the border of the plotting region. | ad<-function(v){ return(c(v[1],v,v[length(v)])) } | Visualization | https://osf.io/greqt/ | functions1.R |
711 | Function that plots a text with an outline. it is equivalent to TeachingDemos shadowtext function described at: https://stackoverflow.com/questions/29303480/textlabelswithoutlineinr | shadowtext <- function(x, y=NULL, labels, col='white', bg='black', theta= seq(pi/4, 2*pi, length.out=40), r=0.1, ... ) { xy <- xy.coords(x,y) xo <- r*strwidth('A') yo <- r*strheight('A') for (i in theta) { text( xy$x + cos(i)*xo, xy$y + sin(i)*yo, labels, col=bg, ... ) } text(xy$x, xy$y, labels, col=col, ... ) } | Visualization | https://osf.io/greqt/ | functions1.R |
712 | Functions conducting the logit and inverse-logit transformations (p to log-odds and back) | logit<-function(x){log(x/(1-x))} inv_logit<-function(x){exp(x)/(1+exp(x))} | Statistical Modeling | https://osf.io/greqt/ | functions1.R |
713 | read in all text files at the working directory: all_data = read_dir(pattern = "\\.txt$", stringsAsFactors = FALSE, fill = TRUE, header = TRUE); look at the columns and what they contain: str(all_data); look at data range, potential outliers: peek_neat(all_data, 'rt'); the same per various groupings: ... | filenames = list.files(pattern = "^expsim_color_valence_.*\\.txt$") # get all result file names for (file_name in enum(filenames)) { | Data Variable | https://osf.io/49sq5/ | example_analysis.R |
714 | look at rt data range and distribution, potential outliers | peek_neat( data_final, values = c( 'rt_green_negative', 'rt_red_negative', 'rt_green_positive', 'rt_red_positive' ), group_by = 'condition', f_plot = plot_neat ) | Data Variable | https://osf.io/49sq5/ | example_analysis.R |
715 | now ANOVA on RTs for the main question: Color/Valence/Group interaction, with a basic factorial plot of RT means (95% CI error bars by default) | anova_neat( data_final, values = c( 'rt_green_negative', 'rt_green_positive', 'rt_red_negative', 'rt_red_positive' ), within_ids = list( color = c('green', 'red'), valence = c('positive', 'negative') ), between_vars = 'condition', plot_means = TRUE, norm_tests = 'all', norm_plots = TRUE, var_tests = TRUE ) | Statistical Test | https://osf.io/49sq5/ | example_analysis.R |
716 | k-means clustering: repeating the exhaustive search for the k-means clustering, with a plot of the best clustering for each k | KM2 <- kmeans.ex(PCA.Z$x, 2) KM2b <- KM2$best with(PCA.Z, { plot(x[, 1:2], pch = 20, asp = 1, col = cols[KM2b$cluster]) text(x[, 1:2], rownames(x), pos = 3) addhull(x[, 1], x[, 2], factor(KM2b$cluster), col.h = cols[1:2]) }) points(KM2b$centers[, 1], KM2b$centers[, 2], pch = 15, col = cols[1:2]) KM3 <- kmeans.ex(PCA.Z$... | Visualization | https://osf.io/6ukwg/ | codes_reanalysis.R |
717 | write the number of observed variables to the top | header[1] <- paste(ifelse(Flow==FALSE , length(unique(df$Point)), length(unique(df$Point)) + 1), " : number of observed variables") write(header[1:(grep("subbasin number",header) - 2)], file = outfile) header[1] <- paste(n + n1," : number of observed variables") write(header[1:(grep("subbasin number",header) - ... | Data Variable | https://osf.io/5ezfk/ | SWATCUPfunctions.R |
718 | "Remove" first word of each page, except on the first page where the title was the first word (but the title has already been removed) | PAST_M_21$gazedur[!duplicated(PAST_M_21$page) & PAST_M_21$page > 1] <- NaN PAST_M_21$fixdur[!duplicated(PAST_M_21$page) & PAST_M_21$page > 1] <- NaN PRES_O_21$gazedur[!duplicated(PRES_O_21$page) & PRES_O_21$page > 1] <- NaN PRES_O_21$fixdur[!duplicated(PRES_O_21$page) & PRES_O_21$page > 1] <- NaN PRES_M_21$gazedur[!dup... | Data Variable | https://osf.io/qynhu/ | subject21.R |
719 | create a vector with the rounded values (names(valM) = adj_dimension) | valM <- round(colMeans(d, na.rm = T), 2) | Data Variable | https://osf.io/egpr5/ | Analysisscript.R |
720 | Divide estimates, posterior SD, lower CI, and upper CI of the within-person effects by the within-person SDs of social interactions to obtain coefficients that are standardized with respect to the DV only | Values_Analysis1_Model1$est[rows] <- Values_Analysis1_Model1$est[rows] / sqrt(variances) Values_Analysis1_Model1$posterior_sd[rows] <- Values_Analysis1_Model1$posterior_sd[rows] / sqrt(variances) Values_Analysis1_Model1$lower_2.5ci[rows] <- Values_Analysis1_Model1$lower_2.5ci[rows] / sqrt(variances) Values_Analysis1_Mo... | Statistical Modeling | https://osf.io/jpxts/ | Main Tables.R |
721 | Heart rate: the median is used as the mean when the mean is not available | DATA$HR <- ifelse(is.na(DATA$"all_mean_HR")==T, DATA$all_median_HR, DATA$"all_mean_HR") | Data Variable | https://osf.io/cxv5k/ | data_preparation.R |
722 | Keep the removed people to plot separately | nograph_removed <- nograph[nograph$belief_in_medicine >= 7,] graph_removed <- graph[graph$belief_in_medicine >= 7,] | Visualization | https://osf.io/zh3f4/ | regression analysis.R |
723 | Means and Standard Deviations of Conditions Experiencer | exp <- data.all[ which(data.all$condition == 'experiencer'),] round(mean(exp$rating), 2) round(sd(exp$rating), 2) | Data Variable | https://osf.io/9tnmv/ | Exp2_OnlineExp_POST.R |
724 | Q3 create a new data set that contains the first 300 cases from the subset you have just created above | working_data_2 <- slice(working_data_1, 1:300) | Data Variable | https://osf.io/94jyp/ | Ex1_ Data Wrangling_answers.R |
725 | Q5 create one single subset (based on your original dataset) where you select the variables id sex age source1 discuss flushot vacc1 and refus select cases 150 450 only change the name of the variable source1 to 'Main_Source' select particpants older than 39 years of age, and who vaccinate their own children (... | working_data_final <- ex1_data %>% select(id, sex, age, source1, discuss, flushot, vacc1, refus) %>% rename(Main_Source=source1) %>% slice(150:450) %>% filter(age > 39 & vacc1 == 'Mandatory + all recommended') | Data Variable | https://osf.io/94jyp/ | Ex1_ Data Wrangling_answers.R |
726 | calculating r for each imputation; r is taken as the mean of those five r values (RAQ-R and QoL) | r_QoL <- (with(subsample_PG, cor(RAQ_Totalscore[.imp==1 & Sample==1], QoL[.imp==1 & Sample==1]))+ with(subsample_PG, cor(RAQ_Totalscore[.imp==2 & Sample==1], QoL[.imp==2 & Sample==1]))+ with(subsample_PG, cor(RAQ_Totalscore[.imp==3 & Sample==1], QoL[.imp==3 & Sample==1]))+ with(subsample_PG, cor(RAQ_Totalscore[.imp==4 ... | Data Variable | https://osf.io/73y8p/ | RAQ-R_Analyses.R |
727 | scaling the RAQ-R score | reg_with_PG_CG_scaled <- with(data=as.mids(reg_PG_CG), exp=lm(scale(RAQ_Totalscore)~Sample+Gender+Age_groups+Level_of_education)) str(summary(pool(reg_with_PG_CG_scaled))) pooled_reg_PG_CG_scaled <- summary(pool(reg_with_PG_CG_scaled)) pooled_reg_PG_CG_scaled$p.value pooled_reg_PG_CG_scaled$estimate | Data Variable | https://osf.io/73y8p/ | RAQ-R_Analyses.R |
728 | regression for each iteration in order to check diagnostic plots; look for differences between the adjusted R-squared of the five iterations | reg_PG_CG_1 <- reg_PG_CG[reg_PG_CG$.imp == 1, ] summary(lm(data=reg_PG_CG_1, RAQ_Totalscore~Sample+Gender+Age_groups+Level_of_education)) reg_with_PG_CG_1 <- with(data=reg_PG_CG_1, exp=lm(RAQ_Totalscore~Sample+Gender+Age_groups+Level_of_education)) plot(reg_with_PG_CG_1) reg_PG_CG_2 <- reg_PG_CG[reg_PG_CG$.imp == 2, ] ... | Statistical Modeling | https://osf.io/73y8p/ | RAQ-R_Analyses.R |
729 | calculating the mean of the five residual standard errors, R-squared, adjusted R-squared, and F statistics | Residual_standard_error_reg_PG <- (10.35+10.45+10.16+10.29+10.28)/5 R_squared_reg_PG <- (0.639+0.630+0.652+0.642+0.642)/5 R_squared_adjusted_reg_PG <-(0.6267+0.6175+0.6403+0.6301+0.6298)/5 F_statistic_reg_PG <- (53.04+51.05+56.18+53.81+53.73)/5 | Statistical Test | https://osf.io/73y8p/ | RAQ-R_Analyses.R |
730 | Graph the correlation between secure attachment and visual cortex response | ggplot(mydata, aes(SECURE.ATTACHMENT, LINGUAL.GYRUS..VISUAL.CORTEX.), scale="globalminmax") + geom_smooth(method = "lm", fill = "green", alpha = 0.6)+ geom_point(size =5 ) + theme_minimal()+ theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),panel.grid.minor = element_blank(), axis.li... | Visualization | https://osf.io/s6zeg/ | Criticism-Attachment-RCode_v2.R |
731 | Graph the correlation between avoidant attachment and visual cortex response | ggplot(mydata, aes(AVOIDANT.ATTACHMENT, LINGUAL.GYRUS..VISUAL.CORTEX.), scale="globalminmax") + geom_smooth(method = "lm", fill = "red")+ geom_point(size = 5) + theme_minimal()+ theme_bw() + theme(panel.border = element_blank(), panel.grid.major = element_blank(),panel.grid.minor = element_blank(), axis.line = element_... | Visualization | https://osf.io/s6zeg/ | Criticism-Attachment-RCode_v2.R |
732 | Graph the interaction plot of amygdala and visual cortex activation, with AVOIDANT attachment as the moderator. | p1 = interact_plot(fiti, pred = AMYGDALA, modx = AVOIDANT.ATTACHMENT,robust = FALSE, x.label = "Amygdala", y.label = "LG Visual Cortex", main.title = "Avoidant Attachment", legend.main = "Avoidant Levels", colors = "red",interval = TRUE, int.width = 0.8)+ theme_bw() + theme(panel.border = element_blank(), panel.grid.m... | Visualization | https://osf.io/s6zeg/ | Criticism-Attachment-RCode_v2.R |
733 | Graph the interaction plot of amygdala and visual cortex activation, with SECURE attachment as the moderator. | p2 = interact_plot(fiti2, pred = AMYGDALA, modx = SECURE.ATTACHMENT,robust = FALSE, x.label = "Amygdala", y.label = "LG Visual Cortex", main.title = "Secure Attachment", legend.main = "Secure Levels", colors = "green",interval = TRUE, int.width = 0.8)+ theme_bw() + theme(panel.border = element_blank(), panel.grid.majo... | Visualization | https://osf.io/s6zeg/ | Criticism-Attachment-RCode_v2.R |
734 | Returns the ratio between the geometric means of two independent samples and its 95% BCa bootstrap CI. | ratioGeomMeanCI.bootstrap <- function(group1, group2) { group1 <- log(group1) group2 <- log(group2) samplemean <- function(x, d) {return(mean(x[d]))} pointEstimate <- samplemean(group1) - samplemean(group2) set.seed(0) # make deterministic bootstrap_samples <- two.boot(sample1 = group1, sample2 = group2, FUN = sampleme... | Statistical Test | https://osf.io/zh3f4/ | CI.helpers.R |
735 | Returns the 95% confidence interval of a single proportion using the Wilson score interval. | propCI <- function(numberOfSuccesses, sampleSize) { CI <- scoreci(x = numberOfSuccesses, n = sampleSize, conf.level = conf.level) c(numberOfSuccesses/sampleSize, CI$conf.int[1], CI$conf.int[2]) } | Statistical Modeling | https://osf.io/zh3f4/ | CI.helpers.R |
736 | Returns the difference between two linear regression slopes and its 95% BCa bootstrap CI. | diff.slopes.bootstrap <- function(x1, y1, x2, y2) { groups <- c(rep(1, length(x1)), rep(2, length(y1)), rep(3, length(x2)), rep(4, length(y2))) data <- data.frame(obs = c(x1, y1, x2, y2), group = groups) diffslope <- function(d, i) { db <- d[i,] x1 <- db[db$group==1,]$obs y1 <- db[db$group==2,]$obs x2 <- db[db$group==3... | Statistical Modeling | https://osf.io/zh3f4/ | CI.helpers.R |
737 | replace current subjects "fixdata" by fixation_info | if (any(S==replace_eye)){ fixdata[[S]] <- data.frame(fixation_info) colnames(fixdata[[S]]) <- c("Event.Start.Raw.Time..ms.","Event.End.Raw.Time..ms.","Event.Duration.Trial.Time..ms.", "Fixation.Position.X..px.","Fixation.Position.Y..px.","AOI.Name","intrialonset","trialnr") } } | Data Variable | https://osf.io/qrv2e/ | DivNorm_R_EyeTrack.R |
738 | Mean differences: comparing signed and unsigned reviews on four aspects. For each, first a t-test, then a violin plot; paneled plot created at end of series. Testing word count | t.test(revdat$wc ~ revdat$signed) wc_sign <- ggplot(revdat, aes(signed, wc, fill=signed)) + geom_violin( trim = FALSE, draw_quantiles = c(0.25, 0.5, 0.75), alpha = 0.5) + geom_jitter( width = 0.20, height = 0, alpha = 0.5, size = 1) + xlab("Signed Reviews") + ylab("Word Count") | Statistical Test | https://osf.io/uf63k/ | MsReviewsAnalysisScript.R |
739 | Correlations: examining correlations between time and aspects of reviews. For each, first compute r, then a scatter plot; plots also mark whether or not the review was signed. Paneled plot created at end of series | cor.test(revdat$order, revdat$wc) wc_time <- ggplot(revdat, aes(order, wc, color=signed)) + geom_point(size=1)+ xlab("Time") + ylab("Word Count") cor.test(revdat$order, revdat$posemo) pos_time <- ggplot(revdat, aes(order, posemo, color=signed)) + geom_point(size=1) + xlab("Time") + ylab("Positive Emotion Words") cor.te... | Visualization | https://osf.io/uf63k/ | MsReviewsAnalysisScript.R |
740 | removes non-alphanumeric characters | full.data <- multigsub("[^[:alnum:]]", " ", full.data) train.data <- multigsub("[^[:alnum:]]", " ", train.data) valid.data <- multigsub("[^[:alnum:]]", " ", valid.data) | Data Variable | https://osf.io/tnbev/ | lewis-acid-base-researchers.R |
741 | removes leading & trailing whitespaces | full.data <- trimws(full.data) train.data <- trimws(train.data) valid.data <- trimws(valid.data) | Data Variable | https://osf.io/tnbev/ | lewis-acid-base-researchers.R |
742 | construct a maximal glmer() model. This model contains a fixed within-subjects effect of Ambiguity (effect-coded, with 0.5 = amb), codes for modality effects and interactions, plus random effects by participants and items. | Acc.max <- glmer(Correct ~ 1 + Ambiguity.code + Modality.code1 + Modality.code2 + Interaction.code1 + Interaction.code2 + (1 + Ambiguity.code + Modality.code1 + Modality.code2 + Interaction.code1 + Interaction.code2 | Participant.Private.ID) + (1 | Item), data = Data.CohOnly, family = "binomial", control = glmerContr... | Statistical Modeling | https://osf.io/m87vg/ | Exp1_BehaviouralAnalyses_Code.R |
743 | construct a maximal lmer() model This model contains codes for modality effects, plus random effects by participants and items. | RT.AmbOnly.max <- lmer(logRT ~ 1 + Modality.code1 + Modality.code2 + (1 + Modality.code1 + Modality.code2 | Participant.Private.ID) + (1 | Item), data = Data.AmbOnly, REML=FALSE) RT.ListandRead.max <- lmer(logRT ~ 1 + Modality.code2 + (1 + Modality.code2 | Participant.Private.ID) + (1 | Item), data = Data.ListandRe... | Statistical Modeling | https://osf.io/m87vg/ | Exp1_BehaviouralAnalyses_Code.R |
744 | construct a maximal lmer() model This model contains a fixed effect for Ambiguity, plus random effects by participants and items. | RT.ListeningOnly.max <- lmer(logRT ~ 1 + Ambiguity.code + (1 + Ambiguity.code | Participant.Private.ID) + (1 | Item), data = Data.ListeningOnly, REML=FALSE) RT.ReadingOnly.max <- lmer(logRT ~ 1 + Ambiguity.code + (1 + Ambiguity.code | Participant.Private.ID) + (1 | Item), data = Data.ReadingOnly, REML=FALSE) RT.RSV... | Statistical Modeling | https://osf.io/m87vg/ | Exp1_BehaviouralAnalyses_Code.R |
745 | As this may be problematic for later analyses, we transform urbanity into its log: | dslong %<>% mutate(urbanity_log = log(urbanity)) | Data Variable | https://osf.io/3hgpe/ | 01_data-preparation-variable-setup.R |
746 | calculate Bayes factors for difference using logspline fit | prior <- dnorm(0,1) fit.posterior <- logspline(samples$BUGSoutput$sims.list$mu_alpha) posterior <- dlogspline(0, fit.posterior) # this gives the pdf at point delta = 0 prior/posterior | Statistical Modeling | https://osf.io/meh5w/ | multiplicationFactor_ttest.R |
747 | use the normal distribution to approximate p-values | FullAim1aPNcoefs$p.z <- 2 * (1 - pnorm(abs(FullAim1aPNcoefs$t.value))) FullAim1aPNcoefs effectsize::standardize_parameters(FullAim1aPN) FullAim1bPNcoefs$p.z <- 2 * (1 - pnorm(abs(FullAim1bPNcoefs$t.value))) FullAim1bPNcoefs effectsize::standardize_parameters(FullAim1bPN) FullAim1cPNcoefs$p.z <- 2 * (1 - pnorm(abs(FullA... | Statistical Test | https://osf.io/mcy6r/ | BeerGogglesorLiquidCouragePPARatingAnalyses.R |
748 | this function gives the integral of the survival curve given by S.hat on the time grid Y.grid | expected_survival <- function(S.hat, Y.grid) { grid.diff <- diff(c(0, Y.grid, max(Y.grid))) c(base::cbind(1, S.hat) %*% grid.diff) } threshol_list <- function(l,threshold){ l_threshold = c() for (x in (l)){ if (is.na(x)){ return(NA) } else{ if (x < - threshold){ l_threshold <- c(l_threshold,- threshold) } else{ if(x > ... | Statistical Modeling | https://osf.io/dr8gy/ | utils_surv.R |
749 | change data type of PHDYEAR to numeric | author_phd_data$PHDYEAR <-as.numeric(author_phd_data$PHDYEAR) | Data Variable | https://osf.io/uhma8/ | MMCPSRAuthorAnalysis.R |
750 | calculate seniority of each author at the time of each article | author_phd_data$status_article1 <- ifelse(author_phd_data$PHDYEAR == 0 |author_phd_data$PHDYEAR > author_phd_data$Article.1.Year.published, "Grad Student", ifelse(author_phd_data$yrs_to_article1 == 0|author_phd_data$yrs_to_article1 < 7, "Junior Scholar", ifelse(author_phd_data$yrs_to_article1 > 6, "Senior Scholar", "NA... | Data Variable | https://osf.io/uhma8/ | MMCPSRAuthorAnalysis.R |
751 | run check to see if any authors have missing data | check <- subset(mmcpsr_authors, is.na(mmcpsr_authors$Title)) | Data Variable | https://osf.io/uhma8/ | MMCPSRAuthorAnalysis.R |
752 | stacked graph regarding how participants came to see each advocacy type | ggplot(detected_advocacy, aes(x = Advocacy, y = Percentage, fill = Response, label = Percentage)) + geom_bar(position ="stack", stat="identity") + coord_flip(ylim=c(0,100)) + scale_y_continuous(labels = scales::percent_format(scale = 1)) + geom_text(aes(label = Percentage), size = 3, position = position_stack(vjust = 0... | Visualization | https://osf.io/uhma8/ | 8RememberingAdvocacy_Spanish.R |
753 | Define the adjustment to calculate Hedges' g. To calculate Cohen's d, set J <- 1. Remember to change the true.ratio value of J as well. | J <- j <- 1 - 3/(4*(n + m - 2) - 1) G <- J*SMD sds <- sqrt((n + m)/(n*m) + SMD^2/2/(n + m)) V <- (J^2)*(sds^2) w <- 1/V if(method == "REML"){r.model <- rma(yi = G, vi = V,method = method, control=list(stepadj=0.5, maxiter=10000000000000000000000000))} else{ r.model <- rma(yi = G, vi = V,method = method) } fit.model <... | Statistical Test | https://osf.io/gwn4y/ | Reproducible_Simulations_line_plots.R |
754 | Generate parcellated datasets and do CFA; must specify the data and the number of allocations (nAlloc) | list1=parcelAllocation(mod.par, data=usm, par.names, mod.items, nAlloc=2, do.fit=F, std.lv=T) | Data Variable | https://osf.io/w7afh/ | CFA script.R |
755 | Multifit() conducts a CFA for each data.frame in a list (saved from parcelAllocation) and returns all of the results of interest on one row per data.frame The results consist of 21 columns with fit measures ("npar", "chisq", "df", "pvalue", "cfi", "tli", "rmsea", "rmsea.pvalue", "srmr") followed by the latent r betwe... | multifit=function(data) { rows=c(131:136) # rows indexing the parameter estimates of interest (correlations among general factors) names=c("npar", "chisq", "df", "pvalue", "cfi", "tli", "rmsea", "rmsea.pvalue", "srmr", "IA_NFC", "IA_IU", "IA_URS", "NFC_IU", "NFC_URS", "IU_URS", "IA_NFC_se", "IA_IU_se", "IA_URS_se", "NF... | Statistical Modeling | https://osf.io/w7afh/ | CFA script.R |
756 | Get residuals of tie strength (adjusted tie strength): model_tiestrength = glm(strength.mean~no.of.papers:Gender, family = "gaussian", data = new); the model with strength not transformed looked a bit dodgy, so we log-transformed strength | model_tiestrength = glm(log(strength.mean)~no.of.papers:Gender+no.of.papers, family="gaussian", data = new) model_tiestrength = glm(log(strength.mean)~no.of.papers:Gender, family="gaussian", data = new) hist(resid(model_tiestrength), main = "residuals of model with log(tie strength)") qqnorm(resid(model_tiestrength)) s... | Statistical Modeling | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
757 | get number of papers (and other stats) per gender | ddply(new, "Gender",summarise, mean = mean(no.of.papers, na.rm = TRUE), median = median(no.of.papers, na.rm = TRUE), sd = sd(no.of.papers, na.rm = TRUE), N = sum(!is.na(no.of.papers)), se = sd / sqrt(N), min = min(no.of.papers, na.rm = TRUE), max = max(no.of.papers, na.rm = TRUE)) | Data Variable | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
758 | Create censoring variable PI status: 1 if PI, 0 if the author did not make it to PI | new.PI$status.PI = ifelse(!is.na(new.PI$Time.to.PI),1,0) | Data Variable | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
759 | Gender: effect of Gender on time to PI | flexgender<-flexsurvreg(Surv(new.PI$time.PI,new.PI$status.PI)~Gender, dist="lnorm", data=new.PI) flexgender | Data Variable | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
760 | fit_lognormal: lowest AIC, best fit. Package flexsurv provides access to additional distributions and also allows plotting options; want red lines to match the KM curve as closely as possible | fit_exp<-flexsurvreg(Surv(new$n.years,new$status)~1, dist="exp") fit_weibull<-flexsurvreg(Surv(new$n.years,new$status)~1, dist="weibull") fit_gamma<-flexsurvreg(Surv(new$n.years,new$status)~1, dist="gamma") fit_gengamma<-flexsurvreg(Surv(new$n.years,new$status)~1, dist="gengamma") fit_genf<-flexsurvreg(Surv(new$n.years... | Visualization | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
761 | The generalized gamma distribution is not a sig. better fit than the lognormal distribution; this justifies using lognormal. AFT model: the AFT model uses the survreg function from the survival package with the lognormal distribution, according to the earlier exploration of the best distribution for the data; use survreg for stepwise, then flexsurvreg to g... | flexAFTgender<-flexsurvreg(Surv(n.years,status)~Gender,dist="lnorm",data=new) flexAFTgender plot(flexAFTgender,col=c("blue","red"),ci=T,xlab="Years",ylab="Survival probability") | Statistical Modeling | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
762 | Reverses the factor level ordering for labels after coord_flip() | df$labeltext<-factor(df$labeltext, levels=rev(df$labeltext)) df$colour <- c("dark grey","dark grey","black","black","dark grey","dark grey") df_TPICL$labeltext_TPICL <- factor(df_TPICL$labeltext_TPICL, levels=rev(df_TPICL$labeltext_TPICL)) df_TPICL$colour <- c("dark grey","dark grey","dark grey","dark grey","dark grey"... | Data Variable | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
763 | Add label to plot C | plotTPICL2<-plotTPICL1+ labs(tag="C")+ theme(plot.tag.position = c(0.19,0.5),plot.tag = element_text(size=14,face="bold")) | Visualization | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
764 | Now layout plots and plot them labels for plots A and B added here | ggarrange(plotPIstatus,plotTPICL2,heights=c(1,1.7),ncol=1,nrow=2,labels=c("A","B"),label.x=0.18,label.y=0.95) | Visualization | https://osf.io/7v4ep/ | Collaboration boosts career progression_part |
765 | Display raincloud plot | ggplot(dataset_anova, aes(session, score, fill = group)) + geom_rain(alpha = .5, cov = "group", rain.side = 'f2x2') + theme(text=element_text(size=20)) | Visualization | https://osf.io/dez9b/ | BaIn_ANOVAinR.R |
766 | Create a covariance matrix of the means in the placebo group | cov_group1 = matrix(c(vcov(cov_fit)[1,1],vcov(cov_fit)[1,3],vcov(cov_fit)[3,1], vcov(cov_fit)[3,3]),2,2) | Statistical Modeling | https://osf.io/dez9b/ | BaIn_ANOVAinR.R |
767 | Create a covariance matrix of the means in the treatment group | cov_group2 = matrix(c(vcov(cov_fit)[2,2],vcov(cov_fit)[2,4],vcov(cov_fit)[4,2], vcov(cov_fit)[4,4]),2,2) | Statistical Modeling | https://osf.io/dez9b/ | BaIn_ANOVAinR.R |
768 | Contrasting Two- vs Four-stakes paradigms (Decision): generates a table of beta coefficients and associated statistics | coef.stakes_decision <- data.frame(cbind(summary(m.stakes_decision)$coefficients, m.stakes_decision.CI)) %>% tibble::rownames_to_column('Variable') %>% rename(SE = Std..Error) %>% rename(Z = z.value) %>% rename(P = Pr...z..) %>% rename(CI_L = X2.5..) %>% rename(CI_U = X97.5..) %>% mutate(Variable = factor(Variable, lev... | Statistical Modeling | https://osf.io/uygpq/ | Model figures.R |
769 | Calculating the conditional entropy of denomination given designs by authorities separately for each df HIGHER/LOWER Higher denominations | authority <- sort(unique(higher$AUTHORITY)) CEDenomination.Designs <- c() HDenominations <- c() NormCEDenomination.Designs <- c() Ncoins <- c() AUTHORITY <- c() for(i in authority){ high_sub <- subset(higher, higher$AUTHORITY == i) denom_high <- as.data.frame(high_sub[,256:309]) motifs_high <- as.data.frame(high_sub[,3... | Statistical Modeling | https://osf.io/uckzx/ | P3_analysis.R |
770 | destructure string into an array using "_" as the delimiter | stringArr <- unlist(strsplit(as.character(string), split = "_", fixed = T)) | Data Variable | https://osf.io/4a9b6/ | PrefLook_functions.r |
771 | log transform RTs for statistical analysis | RTdata$logRT <- log10(RTdata$rt) hist(RTdata$logRT) | Data Variable | https://osf.io/4sjxz/ | ScenePrimingCFS_Analysis.R |
772 | fit a linear mixed-effects model with all theoretically relevant fixed effects and random intercepts for participant and target context | modelRT <- mixed(logRT ~ congruency*soa*mask_contrast + (1| participant) + (1|target_context), data = RTdata, method = "S", type=3) modelRT | Statistical Modeling | https://osf.io/4sjxz/ | ScenePrimingCFS_Analysis.R |
773 | Pearson correlations within each of these four conditions | ca_subset <- CA %>% filter(soa == "200 ms" & mc == "100% contrast") cor.test(ca_subset$PC, ca_subset$CE) ca_subset <- CA %>% filter(soa == "400 ms" & mc == "100% contrast") cor.test(ca_subset$PC, ca_subset$CE) ca_subset <- CA %>% filter(soa == "200 ms" & mc == "20% contrast") cor.test(ca_subset$PC, ca_subset$CE) ca_sub... | Statistical Test | https://osf.io/4sjxz/ | ScenePrimingCFS_Analysis.R |
774 | descriptive statistics of accuracy / errors | er_desc <- data_er %>% group_by(participant) %>% summarize(n_correct = sum(acc), n_trials = length(participant)) | Data Variable | https://osf.io/4sjxz/ | ScenePrimingCFS_Analysis.R |
775 | fit a GLMM model with all theoretically relevant fixed effects | modelER <- mixed(acc ~ congruency*soa*mask_contrast + (1| participant) + (1|target_context), family = binomial("logit"), data = data_er, method = "LRT", type =3) summary(modelER) anova(modelER) | Statistical Modeling | https://osf.io/4sjxz/ | ScenePrimingCFS_Analysis.R |
776 | fit a GLMM model with all theoretically relevant effects | modelER <- mixed(acc ~ soa*mask_contrast + (1| participant) + (1|prime_context), family = binomial("logit"), data = acc_data, method = "LRT", type=3) modelER | Statistical Modeling | https://osf.io/4sjxz/ | ScenePrimingCFS_Analysis.R |
777 | calculate overall proportion | prop_overall := N / sum(N)] | Data Variable | https://osf.io/dqc3y/ | analysis_fullset.R |
778 | creating variables for lagged values and moving averages | delay_rf <- function(v,lag) c(rep(NA,lag),v[1:(length(v)-lag)]) moving_avg <- function(x,n) c(stats::filter(x,rep(1/n,n),sides=1)) res <- lapply(res,transform,cd4Rcd8=cd4/cd8) res <- lapply(res,transform,cd4_ma12=moving_avg(cd4,12),cd8_ma12=moving_avg(cd8,12),cd4Rcd8_ma12=moving_avg(cd4Rcd8,12),cd4_ma24=moving_avg(cd4,... | Data Variable | https://osf.io/gy5vm/ | risk_factors_monthly.R |
779 | create dummy variables gender | Sample2$Dfemale <- recode(Sample2$gender, "'weiblich'=1;; 'maennlich'=0;; 'divers'=0;; 'keine Angabe'=0") Sample2$Dmale <- recode(Sample2$gender, "'maennlich'=1;; 'weiblich'=0;; 'divers'=0;; 'keine Angabe'=0") Sample2$Ddiverse <- recode(Sample2$gender, "'divers'=1;; 'weiblich'=0;; 'maennlich'=0;; 'keine Angabe'=0") Sam... | Data Variable | https://osf.io/ezdgt/ | 3_Regression analyses.R |
780 | Plots Anxiety main effects LISD | ggplot(Sample2_Reg,aes(y=Anxiety,x=z_LISD_State_F1))+ geom_point(color = "indianred4")+geom_smooth(method="lm", color = "black", size = 0.5)+theme_classic() ggplot(Sample2_Reg,aes(y=Anxiety,x=z_LISD_Trait_F1))+ geom_point(color = "indianred4")+geom_smooth(method="lm", color = "black", size = 0.5)+theme_classic() ggplot... | Visualization | https://osf.io/ezdgt/ | 3_Regression analyses.R |
781 | Plots Depressed plot preparation: divide z_age into SD, mean, +SD | attach(Sample2_Reg) Sample2_Reg$age_3groups <- case_when(z_age > mean(z_age)+sd(z_age) ~ "high", z_age < mean(z_age)+sd(z_age) & z_age > mean(z_age)-sd(z_age) ~ "mean", z_age < mean(z_age)-sd(z_age) ~ "low") detach(Sample2_Reg) | Visualization | https://osf.io/ezdgt/ | 3_Regression analyses.R |
782 | Plots Manuscript Plots Manuscript with raw WLISD scores Anxiety | plot1_1 <- Sample2_Reg %>% ggplot(aes(x = LISD_State_F1, y = Anxiety)) + geom_point(color = "#00AFBB")+ geom_smooth(method = lm, color = "black", size = 0.5)+ xlim(1,5)+ ylim(-2,3)+ labs(x = "lonely & isolated (State 1)", y = "Anxiety (z)", Title = "State Factor 1")+ theme_classic() plot2_1 <- Sample2_Reg %>% ggplot(ae... | Visualization | https://osf.io/ezdgt/ | 3_Regression analyses.R |
783 | Create a line graph articles per year | ggplot(ArticlesbyYear, aes(x=Year, y=n, group = 1)) + geom_line(color="orange") + labs(y = "Number of Articles", angle = 45) + theme_minimal(base_size = 12) | Visualization | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
784 | Create a barplot articles per year | ggplot(ArticlesbyYear, aes(x = Year, y = n)) + geom_bar(stat = "identity", color = "steelblue3", fill = "steelblue3") + theme_minimal(base_size = 12) + labs(y = "Articles", angle = 45) + geom_text(aes(label = n), vjust ="center", size=3, hjust = "center", nudge_y = 1) | Visualization | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
785 | Create a barplot articles per journal | ggplot(Journals, aes(x = reorder(Journal, Percent), y = Percent)) + geom_bar(stat = "identity", color = "steelblue3", fill = "steelblue3") + coord_flip() + theme_minimal(base_size = 11) + labs(y = "Percent", x = "Journal") + geom_text(aes(label = Percent), vjust ="center", size=3, hjust = "center", nudge_y = 0.01) | Visualization | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
786 | Create barplot Subfields Percents counts | ggplot(subfield_count, aes(x = reorder(Subfield, Percent) , y = Percent, fill = Subfield)) + geom_bar(stat = "identity") + scale_fill_brewer(palette = "Blues", guide = FALSE) + coord_flip() + theme_minimal(base_size = 13) + labs(y = "Percent of Articles", x = "Subfield") + geom_text(aes(label = Percent), vjust ="center... | Visualization | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
787 | authorlevel count and proportion for all gender identity categories | gender_author <- author_data %>% subset(!is.na(Gender.apsa)) %>% dplyr::summarize(count = c(sum(male_apsa, na.rm=T), sum(female_apsa, na.rm=T), sum(nonbinary_apsa, na.rm=T))) gender_author <- gender_author %>% mutate(proportion = round(count / sum(count), 2)) %>% mutate(gender = c("male", "female", "nonbinary")) %>% dp... | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
788 | calculate count and proportion of gender author structure | gender_article_dedup <- gender_article %>% distinct(article_title, single_authored_male, single_authored_female, co_authored_male, co_authored_female, co_authored_mixed) gender_article_count <- data.frame(matrix(NA, nrow = 5, ncol = 3)) colnames(gender_article_count) <- c("Author_Gender", "Frequency", "Percent") gender... | Statistical Modeling | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
789 | articlelevel authorship structure for race / ethnic identity categories | race_ethnicity_article <- race_ethnicity_article %>% group_by(article_title) %>% mutate(white_authors = ifelse(mean(white) == 1, 1, 0), black_authors = ifelse(mean(black) == 1, 1, 0), east_asian_authors = ifelse(mean(east_asian) == 1, 1, 0), south_asian_authors = ifelse(mean(south_asian) == 1, 1, 0), latino_authors = i... | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
790 | count of articles that generated data using experimental techniques (includes articles that use both, percentage calculated using total empirical articles) | GenerateData[2,2] <- sum(MMCPSR_emp$EHPdata) GenerateData[2,3] <- sum(MMCPSR_emp$EHPdata)/nrow(MMCPSR_emp) | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
791 | Number (%) of articles drawing on data collected via Survey (alone + in combo. w/other techniques) Raw Count/percent alone | SurveySoloCount <- sum(MMCPSR_emp$`Ethnography / participant observation` ==0 & MMCPSR_emp$`Interviews/focus groups` == 0 & MMCPSR_emp$Survey == 1 & MMCPSR_emp$EHPdata == 0 & MMCPSR_emp$gendataNHP == 0 & MMCPSR_emp$`Employed data/information from pre-existing primary or secondary sources`==0) SurveySoloCount formattabl... | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
792 | Number (%) of articles using Field experiments (alone + in combo. w/other techniques) Count/percent alone | FieldExpSoloCount <- sum(MMCPSR_emp$`Survey experiment` ==0 & MMCPSR_emp$Field == 1 & MMCPSR_emp$Lab == 0 & MMCPSR_emp$OHPdata == 0 & MMCPSR_emp$gendataNHP == 0 & MMCPSR_emp$`Employed data/information from pre-existing primary or secondary sources`==0) FieldExpSoloCount formattable::percent(FieldExpSoloCount/nrow(MMCPS... | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
793 | Number (%) of articles using data generated through interaction with Dom gov | sum(MMCPSR_emp$`EHP - Domestic government` == 1 | MMCPSR_emp$`OHP - Domestic government`==1) formattable::percent(sum(MMCPSR_emp$`EHP - Domestic government` == 1 | MMCPSR_emp$`OHP - Domestic government`==1) /nrow(MMCPSR_emp), digits = 1) | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
794 | Number (%) of articles using data generated through interaction with Media | sum(MMCPSR_emp$`EHP - Media` == 1 | MMCPSR_emp$`OHP - Media`==1) formattable::percent(sum(MMCPSR_emp$`EHP - Media` == 1 | MMCPSR_emp$`OHP - Media`==1)/nrow(MMCPSR_emp), digits = 1) | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
795 | probit and marginal effects experimental data vs time | out.1.probit <-glm(EHPdata~Year, data = MMCPSR_emp, family = binomial(link = "probit")) summary(out.1.probit) probitmfx(out.1.probit, data = MMCPSR_emp) | Statistical Modeling | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
796 | create correlation matrix author gender vs methods categories | MethodGender_cor <- rcorr(as.matrix(MethodGender_sub)) | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
797 | DV Formal modeling only OLS Modeling only vs time | out.12 <- lm(MMCPSR_emp$ModelingOnly ~ MMCPSR_emp$Year) summary(out.12) out.15$coefficients[2] | Statistical Modeling | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
798 | Conclusion Policy Recommendations count and percent of all articles | sum(MMCPSR_Data$`Policy Recommendation`) formattable::percent(sum(MMCPSR_Data$`Policy Recommendation`)/nrow(MMCPSR_Data), digits = 1) | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
799 | DV time OLS policy recommendations vs time | out.21 <- lm(MMCPSR_emp$`Policy Recommendation`~MMCPSR_emp$Year) summary(out.21) out.21$coefficients[2] out.21b <- lm(MMCPSR_emp$`Policy Recommendation`~as.factor(MMCPSR_emp$Year)) summary(out.21b) plot_coefs(out.21b) plot_coefs(lm(MMCPSR_emp$`Policy Recommendation`~ 0 + as.factor(MMCPSR_emp$Year))) | Data Variable | https://osf.io/uhma8/ | MMCPSRAnalysis.R |
800 | scaling all the variables of interest (between 01) for a composite score | data8$gincdif_s = rescale(data8$gincdif) data8$smdfslv_s = rescale(data8$smdfslv) data8$sbstrec_r = 6 - data8$sbstrec # reverse scores first data8$sbstrec_r_s = rescale(data8$sbstrec_r) | Data Variable | https://osf.io/k853j/ | ESS_openness_2016.R |
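The rescale-and-reverse-score pattern in the last row above can be sketched as follows. This is a minimal sketch on hypothetical toy data, assuming `rescale()` from the scales package (which maps a vector linearly onto [0, 1]); the data frame `d` and its columns are illustrative, not from the original study.

```r
# Minimal sketch: build a composite score from items rescaled to [0, 1].
# Toy 5-point Likert data (hypothetical); rescale() comes from the scales package.
library(scales)

d <- data.frame(gincdif = c(1, 3, 5), sbstrec = c(2, 4, 5))

d$gincdif_s   <- rescale(d$gincdif)   # linear scaling onto [0, 1]
d$sbstrec_r   <- 6 - d$sbstrec        # reverse-score a 5-point item first
d$sbstrec_r_s <- rescale(d$sbstrec_r) # then rescale the reversed item

# Composite score: mean of the rescaled items
d$composite <- rowMeans(d[, c("gincdif_s", "sbstrec_r_s")])
```

Reversing before rescaling keeps all items pointing in the same direction, so averaging the 0-1 scores yields an interpretable composite.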