cjerzak committed · verified
Commit 17d6042 · 1 Parent(s): 0a26c7a

Update README.md

Files changed (1): README.md (+76 -3)
---
license: mit
---

# Survey on Social Issues in the United States (2016 Election Study)

## Overview
This data product contains individual-level responses to an online survey experiment conducted by **Connor Jerzak and co-authors** in the run-up to, and immediately after, the 2016 U.S. presidential election. Fieldwork began on **12 September 2016** and continued through **mid-November 2016**, giving researchers a rare before/after snapshot of attitudes shaped by a highly salient national campaign.

**Key design features**

* **Crime-framing experiment.** Respondents read a mock police-blotter story with experimentally varied details (suspect race, number of break-ins, presence/absence of racial information) before answering questions about policy, crime perceptions, and social spending.
* **Rich demographics & ideology.** Over 50 items capture party identification, vote choice, income, employment security, family status, education, racial identity, and core value trade-offs.
* **Panel structure.** A subset of respondents was re-contacted after Election Day, enabling within-person analyses of opinion change.

## File Manifest

| File | Description |
|------|-------------|
| `survey_results.csv` | Clean, respondent-level dataset (wide format). Each column corresponds to a survey variable prefixed by its original Qualtrics question ID. |
| `Oct21_survey.pdf` | Archived survey instrument, including consent form and full questionnaire. |

## Quick Start (R)

```r
library(tidyverse)

df <- read_csv("survey_results.csv")

# Recode the experimental treatment:
#   Q42 == "No"  -> no racial cue shown (Control)
#   Q42 == "Yes" -> Q43 records the suspect race shown
df <- df %>%
  mutate(treat = case_when(
    Q42 == "No"    ~ "Control",
    Q43 == "Black" ~ "Black",
    Q43 == "White" ~ "White"
  ))

# Estimate the effect of the racial cue on support for longer sentences
lm(long_sentences ~ treat + party_id + age, data = df)
```

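Before modeling, it is worth confirming that the experimental arms are roughly balanced. A minimal sketch, using a toy treatment vector in place of the `treat` column constructed above:

```r
# Toy stand-in for df$treat; the real vector comes from the recode above
treat <- c("Control", "Black", "White", "Black", "Control", "White")

# Counts and shares per experimental arm
table(treat)
prop.table(table(treat))
```

Large imbalances would suggest a problem with the recode (e.g., unexpected levels of `Q42`/`Q43` falling through to `NA`).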
## Variable Highlights

* **Safety perceptions:** `Q2`–`Q4`, `Q37`, `Q39`
* **Crime policy preferences:** `Q11`, `Q12`
* **Redistribution & welfare attitudes:** `Q8`, `Q9`, `Q46`–`Q51`
* **2016 vote intention & choice:** `Q41`, `Q44`, `Q45`
* **Economic security:** `Q29`–`Q32`
* **Child-rearing values:** `Q33`–`Q36`

See `Oct21_survey.pdf` for exact wording and response options.

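Raw Qualtrics IDs like `Q11` make analysis code hard to read; a small named vector can rename them once, up front. The mapping below is illustrative only — verify every ID against `Oct21_survey.pdf` before relying on it:

```r
library(dplyr)

# Toy stand-in for survey_results.csv; the real file has many more columns
df <- tibble::tibble(Q11 = c(1, 2), Q12 = c(3, 4), Q41 = c("Clinton", "Trump"))

# Hypothetical ID-to-name mapping (new_name = "old_name")
rename_map <- c(
  crime_policy_1 = "Q11",
  crime_policy_2 = "Q12",
  vote_intent    = "Q41"
)

df <- df %>% rename(all_of(rename_map))
names(df)
# readable names now replace the raw Qualtrics IDs
```

Keeping the mapping in one place also documents which instrument item each analysis variable came from.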
## Possible Use Cases

1. **Election-season opinion dynamics** – exploit the before/after panel to examine how campaign events (debates, the Comey letter, Election Day) shifted perceptions of crime, policing, or redistribution.
2. **Stereotype activation & policy support** – estimate causal effects of suspect-race cues on punitive crime policies or welfare attitudes.
3. **Replication exercises** – reproduce classic findings from ANES or GSS items using a contemporary MTurk sample; ideal for teaching regression, causal inference, or text analysis (e.g., coding open-ended crime causes in `Q10`).
4. **Value trade-off scaling** – model latent moral or parenting value dimensions with the paired choice items (`Q33`–`Q36`).
5. **Small-N machine-learning demos** – demonstrate text classification, topic modeling, or mixed-effects models on a manageable survey.

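For the panel use case, pre- and post-election responses must be joined on a respondent identifier. The column names below (`worker_id`, `safety_pre`, `safety_post`) are hypothetical placeholders — check the actual identifier column in `survey_results.csv`:

```r
library(dplyr)

# Toy pre- and post-election waves; ids 2 and 3 were re-contacted
pre  <- tibble::tibble(worker_id = 1:3, safety_pre  = c(4, 2, 5))
post <- tibble::tibble(worker_id = 2:3, safety_post = c(3, 5))

# inner_join keeps only respondents present in both waves
panel <- inner_join(pre, post, by = "worker_id") %>%
  mutate(change = safety_post - safety_pre)
panel
# two rows remain: the re-contacted respondents, with within-person change
```

An `inner_join` silently drops non-panel respondents; use `left_join` instead if the pre-election-only cases are needed for attrition checks.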
## Sampling & Fieldwork

Respondents were recruited via **Amazon Mechanical Turk**. Each wave paid \$0.25 and took ~5 minutes. The instrument included an informed-consent screen and was approved by the Harvard CUHS IRB. IP geo-coordinates (rounded to 3 decimals) were recorded for coarse location checks; no personally identifying information is included.

| Wave | Dates | N (unique) | Notes |
|------|-------|------------|-------|
| Pre-Election | 12 Sep – 04 Nov 2016 | ~1,200 | Prior to Election Day |
| Post-Election | 09 Nov – 15 Nov 2016 | 420 | After Election Day |

## Data Quality Notes

* **Non-probability sample.** MTurk respondents skew younger, more educated, and more politically engaged than the general U.S. adult population.
* **Attention checks.** Several items (e.g., a retention check on the number of break-ins reported in the vignette) facilitate quality screening.
* **Missing values.** Skipped or invalid responses are coded `NA`.
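Because skipped responses are coded `NA`, a quick missingness audit is worth running before any modeling. A minimal sketch on a toy frame standing in for `survey_results.csv`:

```r
library(dplyr)

# Toy frame; the real data has one Q-prefixed column per survey item
df <- tibble::tibble(Q2 = c(1, NA, 3), Q11 = c(NA, NA, 2))

# Count and share of NA values per column
na_summary <- df %>%
  summarise(across(everything(), ~ sum(is.na(.x)))) %>%
  tidyr::pivot_longer(everything(),
                      names_to = "variable", values_to = "n_missing") %>%
  mutate(share_missing = n_missing / nrow(df))
na_summary
# one row per variable: name, NA count, NA share
```

Columns with high missingness may reflect survey branching rather than non-response, so cross-check suspicious variables against the skip logic in `Oct21_survey.pdf`.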