clear all
set more off
set maxvar 10000
set mem 500m

*Set path ***
*cd ---- specify path to \data_replication\psid\
local procdata "./proc_data/"
local rawdata "./raw_data/"
cd "`procdata'"

*Create consumption panel
*!!!!!!!!!!!!!!!!!!!!! consumption files are generated by the do-file 'consumption.do'
use cons1999.dta, clear
forval x=2001(2)2011{
	append using cons`x'.dta
}
sort hid year
xtset hid year
save health_costs_for_income.dta, replace

*Import income data
use for_reg.dta, clear

*Merge with health costs
joinby hid year using health_costs_for_income.dta, unmatched(both)
tab _merge
drop if _merge!=3

*Prepare for regressions
*Take out health costs: first deflate the costs
*Define the CPI annual average for each relevant year.
*These refer to the income year of each survey wave (e.g. cpi1999 is for 1998)
local cpi1999 = 163.01
local cpi2001 = 172.19
local cpi2003 = 179.87
local cpi2005 = 188.91
local cpi2007 = 201.56
local cpi2009 = 215.25
local cpi2011 = 218.09
gen heal_cons_deflated=heal_cons
forval x=1999(2)2011{
	replace heal_cons_deflated=heal_cons_deflated*(`cpi1999' / `cpi`x'') if year==`x'
}

*Take out equivalized costs!
*replace inc=inc-heal_cons_deflated/equiv

drop if inc<0
drop if year>2007
drop if age<25 & year==1999
drop if age>85 & year==2007

*Keep only households observed in all five waves
bys hid: gen nyear=_N
keep if nyear==5

*Generate the age polynomial
gen a2=(age^2)/10

*Generate year dummies
tab year, gen(time_dummy)

*Convert income into logs and keep only the relevant variables
replace inc=log(inc)
keep inc time_dummy* age a2 hid year

*Create retirement dummy
gen retired=1 if age>=65
replace retired=0 if retired!=1

*Find fit before retirement
reg inc age a2 time_dummy* retired

************************************************
************************************************
*Report the coefficients of the age polynomial
estimates table, keep(age a2 retired) b
************************************************
************************************************

predict inc_hat

*Compute the deviation after permanent differences have been removed
gen x_tilda=inc //use just log income

*Drop all observations above or below a threshold !!!!!!!!!!!!
*Make years consecutive
replace year=1 if year==1999
replace year=2 if year==2001
replace year=3 if year==2003
replace year=4 if year==2005
replace year=5 if year==2007
xtset hid year

*Compute mean by household (across time)
bysort hid: egen mean_hid_inc=mean(inc)

*Generate deviation from mean
gen dif_mean=inc-mean_hid_inc

*Find the top and bottom 1 percent
sum dif_mean, d
gen p1=r(p1)
gen p99=r(p99)

*Remove observations that are in the tails
replace x_tilda=. if dif_mean<=p1
replace x_tilda=. if dif_mean>=p99

*################################################
*FIND AUTOCOVARIANCES
*################################################
preserve

*Generate lag structure to compute the autocovariances
gen lag1_x_tilda=L.x_tilda
gen lag2_x_tilda=L2.x_tilda
gen lag3_x_tilda=L3.x_tilda
gen lag4_x_tilda=L4.x_tilda

*Compute autocovariances
*1) Variance
correlate x_tilda x_tilda, covariance
gen v0=r(Var_1)

*2) Lags
forval x=1(1)4{
	correlate x_tilda lag`x'_x_tilda, covariance
	g float v`x'=r(cov_12)
}
keep v*
duplicates drop

************************************************
************************************************
*Export results for autocovariances
save autocov_final_new_NO_health_ALL_v3.dta, replace
************************************************
************************************************
restore

gen lag1_x_tilda=L.x_tilda
keep x_tilda lag1_x_tilda
gen dif_x_tilda=x_tilda-lag1_x_tilda
egen SD = sd(dif_x_tilda)
egen KURT = kurt(dif_x_tilda)
egen SKEW = skew(dif_x_tilda)
keep SD KURT SKEW
duplicates drop

*One-to-one observation merge: both datasets hold a single row of moments
merge using autocov_final_new_NO_health_ALL_v3.dta
drop _merge

*Keep only the relevant moments and order them according to Table 2
keep SD v0 v1 v2
replace SD = round(SD, 0.01)
replace v0 = round(v0, 0.01)
replace v1 = round(v1, 0.01)
replace v2 = round(v2, 0.01)
order v0 v1 v2 SD

cd ..
cd output
outsheet using tab2_income.csv, c replace