| # README | |
| ## Overview | |
| The code in this replication package constructs the analysis tables, figures and scalars found in our paper using Stata and R. | |
| The results presented in our paper are obtained in three steps. | |
In the first step, all of the original raw data from our server is processed.
| In the second step, the raw data is stripped of any PII elements and all anonymized datasets are merged together to create a dataset named `final_data_sample.dta`. | |
| All of the analysis presented in the paper is based on this anonymized data. | |
| In the third step, all descriptive tables and figures as well as all regression outputs are produced. | |
| In this replication archive, we provide the code necessary to carry out all three steps. We provide the anonymized dataset `final_data_sample.dta` in a separate archive (Allcott, Gentzkow, and Song, 2023): https://doi.org/10.7910/DVN/GN636M. | |
After obtaining access to that separate archive, download `final_data_sample.dta`. Then, manually add the dataset to this archive under the folder `data/temptation/output`. This will allow you to run the `/analysis/` module (the third step), which constructs the tables, figures and scalars found in the paper. The `/data/` module relies on confidential data that is not provided, and therefore will not run.
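As a quick sanity check before running the `/analysis/` module, a short Python sketch (a hypothetical helper, using the paths given above) can confirm the dataset is in place:

```python
from pathlib import Path

def dataset_ready(repo_root):
    """Check that final_data_sample.dta has been placed where the
    /analysis/ module expects it (data/temptation/output)."""
    expected = (Path(repo_root) / "data" / "temptation" / "output"
                / "final_data_sample.dta")
    return expected.is_file()

if __name__ == "__main__":
    if dataset_ready("."):
        print("final_data_sample.dta found -- /analysis/ can be run.")
    else:
        print("final_data_sample.dta missing -- download it from the "
              "separate archive and place it under data/temptation/output.")
```

The function is not part of the repository; it simply encodes the placement instructions above.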
| This replication archive contains additional files to help replication in the `/docs/` folder. | |
The first, `DescriptionOfSteps.pdf`, describes which modules are included in each step and how they relate to each other. The second, `Step1_Step2_DAG.pdf`, illustrates via a directed acyclic graph how steps 1 and 2 are carried out. The third, `MappingsTablesAndFigures.pdf`, provides a mapping of all the tables and figures to their corresponding programs.
| The replication routine can be run by following the instructions in the **Instructions to replicators** section of this README. | |
The authors will provide support for replication if necessary. The replicator should expect the code to run for about 40 minutes.
| ## Data Availability and Provenance Statements | |
| This archive includes data that was collected from an Android application and from surveys as detailed in Section 3 of the paper. | |
| The folder `experiment_design` contains the questionnaires of all 5 surveys (recruitment survey and the next 4 surveys administered to our sample). It also contains a subfolder `AppScreenshots` that has various screenshots of our application Phone Dashboard. | |
In the separate archive, we provide the anonymized dataset `final_data_sample.dta`, which gathers aggregated usage data from the application together with survey data. Each individual in our final sample is assigned a user ID. Variables containing information from the application start with `PD_P1`, `PD_P2`, `PD_P3`, `PD_P4` or `PD_P5` (depending on the period, 1-5, in which they were collected). Variables containing information from the surveys start with `S1_`, `S2_`, `S3_` or `S4_` (depending on the survey, 1-4, in which they were collected).
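The prefix convention can be illustrated with a small Python helper. This is a hypothetical sketch; the example variable names used below are invented to follow the convention and do not necessarily appear in the dataset.

```python
import re

def classify_variable(name):
    """Classify a final_data_sample.dta variable name by its prefix:
    ("application", period) for PD_P1-PD_P5 variables,
    ("survey", number) for S1_-S4_ variables,
    ("other", None) otherwise (e.g., the user ID)."""
    m = re.match(r"PD_P([1-5])", name)
    if m:
        return ("application", int(m.group(1)))
    m = re.match(r"S([1-4])_", name)
    if m:
        return ("survey", int(m.group(1)))
    return ("other", None)
```

For instance, `classify_variable("PD_P3_usage")` returns `("application", 3)`, and `classify_variable("S2_addiction")` returns `("survey", 2)`.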
| The `codebook.xlsx` file at the root of the repository is the codebook for `final_data_sample.dta`. It lists all the variables found in the dataset along with their labels, units and values (if applicable). | |
| ### Statement about Rights | |
| We certify that the author(s) of the manuscript have legitimate access to and permission to use the data used in this manuscript. | |
| ### License for Data | |
| All databases, images, tables, text, and any other objects are available under a Creative Commons Attribution 4.0 International Public License. Please refer to the document `LICENSE-data.md` at the root of the repository. | |
| ### Summary of Availability | |
| Some data **cannot be made** publicly available. | |
| ### Details on each Data Source | |
The raw data for this project are confidential and were collected by the authors. The authors will assist with any reasonable replication attempts and can be contacted by email. This paper uses data obtained from an Android application, Phone Dashboard, and from surveys.
| The dataset `final_data_sample.dta`, provided in the separate archive, combines the data from both our surveys and our Phone Dashboard application. It is derived after processing all the raw confidential data from the Phone Dashboard application. This dataset aggregates usage data at the user level and combines it with variables obtained from our surveys. All variables in this dataset have corresponding value labels. One can also refer to the provided codebook, `codebook.xlsx`, at the root of the repository, for more information on each variable. | |
| ## Dataset list | |
| - As detailed in the graph `doc/Step1_Step2_DAG.pdf`, our pipeline processes the raw data from our Phone Dashboard application as well as from our surveys. The code for this data-processing is provided in the `/data/` folder. Multiple intermediate files are generated through this pipeline. These files are not provided as part of this replication archive for confidentiality reasons. | |
- The file `final_data_sample.dta` is obtained at the end of the data-processing pipeline. It combines data from the application and the surveys and serves as input for the analysis figures and tables. This file is provided in the separate archive described above.
| ## Computational requirements | |
| All requirements must be installed and set up for command line usage. For further detail, see the **Command Line Usage** section below. | |
| We manage Python and R installations using conda or miniconda. | |
| To build the repository as-is, the following applications are additionally required: | |
| * LyX 2.3.5.2 | |
| * R 3.6.3 | |
| * Stata 16.1 | |
| * Python 3.7 | |
This software is used by the scripts contained in the repository's `setup` folder. Instructions to set up the environment are found below in the **Instructions to replicators** section.
| ### Software requirements | |
The file `setup/conda_env.yaml` will install all the R and Python dependencies. Please refer to the **Instructions to replicators** section for detailed steps on how to install the required environment and run the scripts.
Below we list the software and packages required to run the repository, along with the versions used.
| - Python 3.7 | |
| - `pyyaml` (5.3.1) | |
| - `numpy` (1.16.2) | |
| - `pandas` (0.25.0) | |
| - `matplotlib` (3.0.3) | |
| - `gitpython` (2.1.15) | |
| - `termcolor` (1.1.0) | |
| - `colorama` (0.4.3) | |
| - `jupyter` (4.6.3) | |
| - `future` (0.17.1) | |
| - `linearmodels` (4.17) | |
| - `patsy` (0.5.1) | |
| - `stochatreat` (0.0.8) | |
| - `pympler` (0.9) | |
| - `memory_profiler` | |
- `dask` (1.2.1)
| - `openpyxl` (2.6.4) | |
| - `requests` (2.24.0) | |
| - `pip` (19) | |
| - R 3.6 | |
| - `yaml` (2.2.1) | |
| - `haven` (2.3.1) | |
| - `tidyverse` (1.3.1) | |
| - `r.utils` (4.0.3) | |
| - `plm` (2.6.1) | |
| - `janitor` (2.1.0) | |
| - `rio` (0.5.26) | |
| - `lubridate` (1.7.10) | |
| - `magrittr` (2.0.1) | |
| - `stargazer` (5.2.2) | |
| - `rootSolve` (1.8.2.1) | |
| - `rlist` (0.4.6.1) | |
| - `ebal` (0.1.6) | |
| - `latex2exp` (0.5.0) | |
| - `estimatr` (0.30.2) | |
| ### Controlled Randomness | |
| We control randomness by setting random seeds. | |
1. For the data-processing: The program `/data/source/clean_master/cleaner.py` has its own random seed set on line 24. The program `data/source/build_master/builder.py` calls the `/lib/` file `lib/data_helpers/clean_survey.py`, which sets a random seed on line 21. The program `lib/experiment_specs/study_config.py` contains parameters used by `data/source/clean_master/management/earnings.py` and `data/source/clean_master/management/midline_prep.py`, which include a random seed set on line 459.
| 2. For the analysis: The program `lib/ModelFunctions.R` contains parameters used by `structural/code/StructuralModel.R` and `treatment_effects/code/ModelHeterogeneity.R` which include a random seed set on line 48. | |
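The seeding pattern used across those programs can be sketched in Python. This is an illustrative sketch only: the seed value and function below are hypothetical, not taken from the repository.

```python
import random

# Hypothetical seed for illustration; the repository sets its own seeds
# in the files listed above.
SEED = 12345

def reproducible_draws(n, seed=SEED):
    """Seed a dedicated RNG before drawing, so re-running the
    pipeline produces identical 'random' output."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]
```

With a fixed seed, `reproducible_draws(5)` returns the same list on every run, which is what makes the randomized steps of the pipeline replicable.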
| ### Memory and Runtime Requirements | |
| The folder `/data/` is responsible for all the data-processing using the raw Phone Dashboard data as well as the survey data. At the end of this data-processing, the file `final_data_sample.dta` is created. In the presence of the raw confidential data (which is not provided with this replication archive), this whole process normally takes around 60 hours on 20 CPUs and 12GB memory per CPU. | |
The folder `/analysis/` is responsible for the construction of all the tables, figures and scalars used in the paper, based on the `final_data_sample.dta` dataset provided in the separate archive. The replicator will be able to run all scripts in this folder. The whole analysis takes around 40 minutes to run on a computer with 4 cores and 16GB of memory. Most files within `analysis` take less than 5 minutes to run. However, the file `analysis/code/StructuralModel.R` takes around 20 minutes to run.
| #### Summary | |
| Approximate time needed to reproduce the analyses on a standard (2022) desktop machine is <1 hour. | |
| #### Details | |
| The `analysis` code was last run on a **4-core Intel-based laptop with MacOS version 10.15.5**. | |
The `data` code was last run on an **Intel server with 20 CPUs and 12GB of memory per CPU**. Computation took 60 hours.
| ## Description of programs/code | |
In this replication archive:
| - The folder `/data/source/` is responsible for all the data processing of our Phone Dashboard application and our surveys. | |
The subfolders `/data/source/build_master/`, `/data/source/clean_master/` and `/data/source/exporters/` contain Python files that define the classes and auxiliary functions called in the main script `/data/run.py`. This main script generates the master files gathering all information at the user level or at the user-app level.
| - The folder `/data/temptation/` is responsible for cleaning the master files produced as output of `/data/source/`. | |
| It outputs the anonymized dataset `final_data_sample.dta` which contains all the information at the user level. This dataset is used throughout the analysis of the paper. | |
- The folder `/analysis/` contains all the programs generating the tables, figures and scalars in the paper. The programs in the `/analysis/` folder have been organized into three subfolders:
1. `/analysis/descriptive/` produces tables and charts of descriptive statistics. It contains the programs below:
| * `code/CommitmentDemand.do` (willingness-to-pay and limit tightness plots) | |
| * `code/COVIDResponse.do` (survey stats on response to COVID) | |
| * `code/DataDescriptive.do` (sample demographics and attrition tables) | |
| * `code/HeatmapPlots.R` (predicted vs. actual FITSBY usage) | |
| * `code/QualitativeEvidence.do` (descriptive plots for addiction scale, interest in bonus/limit) | |
| * `code/SampleStatistics.do` (statistics about completion rates for study) | |
| * `code/Scalars.do` (statistics about MPL and ideal usage reduction) | |
| * `code/Temptation.do` (plots desired usage change for various tempting activities) | |
2. `/analysis/structural/` estimates parameters and generates plots for our structural model. It contains the program below:
| * `code/StructuralModel.R` | |
3. `/analysis/treatment_effects/` produces model-free estimates of treatment effects. It contains the programs below:
| * `code/Beliefs.do` (compares actual treatment effect with predicted treatment effect) | |
| * `code/CommitmentResponse.do` (plots how treatment effect differs by SMS addiction scale and other survey indicators) | |
| * `code/FDRTable.do` (estimates how treatment effect differs by SMS addiction scale and other indicators, adjusted for false-discovery rate. Also plots some descriptive statistics) | |
| * `code/HabitFormation.do` (compares actual and predicted usage) | |
| * `code/Heterogeneity.do` (plots heterogeneous treatment effects) | |
| * `code/HeterogeneityInstrumental.do` (plots heterogeneous treatment effects) | |
| * `code/ModelHeterogeneity.R` (generates other heterogeneity plots, some temptation plots) | |
| * `code/SurveyValidation.do` (plots effect of rewarding accurate usage prediction on usage prediction accuracy) | |
Most of the programs in the analysis folder rely on the dataset `final_data_sample.dta`. However, some programs further require the datasets `final_data.dta` and `AnalysisUser.dta` to compute certain scalars mentioned in the paper. These programs are `/analysis/descriptive/code/DataDescriptive.do`, `/analysis/descriptive/code/SampleStatistics.do`, `/analysis/descriptive/code/Scalars.do` and `/analysis/treatment_effects/code/ModelHeterogeneity.R`. Since these two datasets are not provided with the replication archive for confidentiality reasons, the portions of code requiring them have been commented out in the relevant programs.
| - The folder `/lib/` contains auxiliary functions and helpers. | |
- The folder `/paper_slides/` contains all the inputs and files necessary to compile the paper. The subfolder `/paper_slides/figures/` contains screenshots and other figures that are not derived from programs. The subfolder `/paper_slides/figures/` contains the paper LyX file, the bibliography, and the `motivation_correlation.lyx` LyX table.
- The folder `setup` contains files to set up the conda environment and to install the R, Python and Stata dependencies.
| - The folder `experiment_design` contains the questionnaires to our surveys as well as screenshots from the Phone Dashboard application. | |
- The folder `/docs/` contains additional documents to guide the replicator. The file `docs/DescriptionOfSteps.pdf` gives a high-level overview of the steps involved in the data processing, from our Phone Dashboard application to the analysis in the paper. It splits the data processing into three steps:
| 1) Processing the Raw Data from PhoneDashboard (done by the `/data/source/` folder) | |
| 2) Cleaning the Original Data from PhoneDashboard (done by the `/data/temptation/` folder) | |
3) Analyzing the Anonymized Data (done by the `/analysis/` folder)
Since the data inputs for steps 1 and 2 are not provided with this replication archive, we include a further document, `docs/Step1_Step2_DAG.pdf`, which illustrates via a directed acyclic graph how we carried them out internally. Finally, the file `docs/MappingsTablesAndFigures.pdf` provides a mapping of all the tables and figures to their corresponding programs.
| Note that the modules or portions of programs that cannot be run due to unshared data have been commented out in the relevant main run scripts. | |
| ### License for code | |
All code is available under an MIT License. Please refer to the document `LICENSE-code.md` at the root of the repository.
| ## Instructions to replicators | |
| ### Setup | |
1. Create a `config_user.yaml` file in the root directory. A template can be found in the `setup` subdirectory. See the **User Configuration** section below for further detail. If you do not have any external paths you wish to specify and wish to use the default executable names, you can skip this step; the default `config_user.yaml` will be copied over in step 4.
2. If you already have conda set up on your local machine, feel free to skip this step. If not, the following will install a lightweight version of conda that will not interfere with your current Python and R installations.
Install miniconda and jdk, which are used to manage the R/Python virtual environment, if you have not already done so. You can install these programs from their websites: [here for miniconda](https://docs.conda.io/en/latest/miniconda.html) and [here for jdk](https://www.oracle.com/java/technologies/javase-downloads.html). If you use Homebrew (which can be downloaded [here](https://brew.sh/)), these two programs can be installed as follows:
| ``` | |
| brew install --cask miniconda | |
| brew install --cask oracle-jdk | |
| ``` | |
| Once you have done this you need to initialize conda by running the following lines and restarting your terminal: | |
| ``` | |
| conda config --set auto_activate_base false | |
| conda init $(echo $0 | cut -d'-' -f 2) | |
| ``` | |
| 3. Create conda environment with the command: | |
| ``` | |
| conda env create -f setup/conda_env.yaml | |
| ``` | |
| 4. Run the `check_setup.py` file. One way to do this is to run the following bash command in a terminal from the `setup` subdirectory: | |
| ``` | |
| python3 check_setup.py | |
| ``` | |
| 5. Install R dependencies that cannot be managed using conda with the `setup_r.r` file. One way to do this is to run the following bash command in a terminal from the `setup` subdirectory: | |
| ``` | |
| Rscript setup_r.r | |
| ``` | |
| ### Usage | |
Once you have successfully completed the **Setup** section above, make sure the virtual environment associated with this project is activated each time you run any analysis, using the command below (replacing `PROJECT_NAME` with the name of this project).
| ``` | |
| conda activate PROJECT_NAME | |
| ``` | |
| If you wish to return to your base installation of python and R you can easily deactivate this virtual environment using the command below: | |
| ``` | |
| conda deactivate | |
| ``` | |
| ### Adding Packages | |
| #### Python | |
Add any required packages to `setup/conda_env.yaml`. If possible, add the package version number. If a package is not available from `conda`, add it to the `pip` section of the `yaml` file. To avoid re-running the entire environment setup, you can install individual packages from `conda` with the command
| ``` | |
| conda install -c conda-forge <PACKAGE> | |
| ``` | |
| #### R | |
| Add any required packages that are available via CRAN to `setup/conda_env.yaml`. These must be prepended with `r-`. If there is a package that is only available from GitHub and not from CRAN, add this package to `setup/setup_r.r`. These individual packages can be added in the same way as Python packages above (with the `r-` prepend). | |
| #### Stata | |
Install Stata dependencies using `setup/download_stata_ado.do`. We keep all non-base Stata ado files in the `lib` subdirectory, so most non-base Stata ado files will be versioned. To add additional Stata dependencies, use the following bash command from the `setup` subdirectory:
| ``` | |
| stata-mp -e download_stata_ado.do | |
| ``` | |
| ### Build | |
| 1. Follow the *Setup* instructions above. | |
2. From the root of the repository, run the following bash command:
| ``` | |
| python run_all.py | |
| ``` | |
| ### Command Line Usage | |
| For specific instructions on how to set up command line usage for an application, refer to the [RA manual](https://github.com/gentzkow/template/wiki/Command-Line-Usage). | |
| By default, the repository assumes the following executable names for the following applications: | |
| ``` | |
| application : executable | |
| python : python | |
| lyx : lyx | |
| r : Rscript | |
| stata : statamp (will need to be updated if using a version of Stata that is not Stata-MP) | |
| ``` | |
| Default executable names can be updated in `config_user.yaml`. For further detail, see the **User Configuration** section below. | |
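The defaults-plus-overrides logic above can be illustrated with a small stdlib-only Python sketch. This is a hypothetical helper handling only the flat `application: executable` case shown above; the repository itself reads `config_user.yaml` with a YAML parser (`pyyaml` is among the listed dependencies).

```python
# Repository defaults, as listed in the Command Line Usage section.
DEFAULTS = {"python": "python", "lyx": "lyx", "r": "Rscript", "stata": "statamp"}

def load_executables(text, defaults=None):
    """Parse a flat 'application: executable' mapping and merge it
    over the defaults, so config_user.yaml only needs to list the
    executables whose names differ on this machine."""
    executables = dict(defaults or {})
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line or ":" not in line:
            continue
        key, _, value = line.partition(":")
        if value.strip():
            executables[key.strip()] = value.strip()
    return executables
```

For example, a `config_user.yaml` containing only `stata: StataMP-64` would override the Stata executable while leaving the other defaults untouched.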
| ## User Configuration | |
| `config_user.yaml` contains settings and metadata such as local paths that are specific to an individual user and thus should not be committed to Git. For this repository, this includes local paths to [external dependencies](https://github.com/gentzkow/template/wiki/External-Dependencies) as well as executable names for locally installed software. | |
| Required applications may be set up for command line usage on your computer with a different executable name from the default. If so, specify the correct executable name in `config_user.yaml`. This configuration step is explained further in the [RA manual](https://github.com/gentzkow/template/wiki/Repository-Structure#Configuration-Files). | |
| ## Windows Differences | |
| The instructions above are for Linux and Mac users. However, with just a handful of small tweaks, this repo can also work on Windows. | |
| If you are using Windows, you may need to run certain bash commands in administrator mode due to permission errors. To do so, open your terminal by right clicking and selecting `Run as administrator`. To set administrator mode on permanently, refer to the [RA manual](https://github.com/gentzkow/template/wiki/Repository-Usage#Administrator-Mode). | |
| The executable names are likely to differ on your computer if you are using Windows. Executable names for Windows will typically look like the following: | |
| ``` | |
| application : executable | |
| python : python | |
| lyx : LyX#.# (where #.# refers to the version number) | |
| r : Rscript | |
| stata : StataMP-64 (will need to be updated if using a version of Stata that is not Stata-MP or 64-bit) | |
| ``` | |
| To download additional `ado` files on Windows, you will likely have to adjust this bash command: | |
| ``` | |
| stata_executable -e download_stata_ado.do | |
| ``` | |
| `stata_executable` refers to the name of your Stata executable. For example, if your Stata executable was located in `C:\Program Files\Stata15\StataMP-64.exe`, you would want to use the following bash command: | |
| ``` | |
| StataMP-64 -e download_stata_ado.do | |
| ``` | |
| ## List of tables and programs | |
| The file `docs/MappingsTablesAndFigures.pdf` provides a mapping of all the tables and figures to their corresponding program. | |
| ## References | |
| Allcott, Hunt, Matthew Gentzkow, and Lena Song. “Data for: Digital Addiction.” Harvard Dataverse, 2023. https://doi.org/10.7910/DVN/GN636M. | |
| Allcott, Hunt, Matthew Gentzkow, and Lena Song. “Digital Addiction.” American Economic Review 112, no. 7 (July 2022): 2424–63. https://doi.org/10.1257/aer.20210867. | |