clear all
set more off
set maxvar 10000
set mem 500m

*Set path
*cd ---- specify path to \data_replication\psid\

local procdata  "./proc_data/"
local rawdata  "./raw_data/"

cd "`procdata'"

*Create consumption panel
*NOTE: the consumption files appended below are generated by the do-file 'consumption.do'
use cons1999.dta, clear
forval x=2001(2)2011{
append using cons`x'.dta
}

sort hid year
xtset hid year

save health_costs_for_income.dta, replace



*import income data
use for_reg.dta, clear
*merge with health costs
joinby hid year using health_costs_for_income.dta, unmatched(both)
tab _merge
drop if _merge!=3
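*Added sanity check (a sketch, not part of the original script): the merge
*above assumes hid-year uniquely identifies observations; isid aborts with
*an error if duplicate household-year rows slipped through the join
isid hid year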

*Prepare for regressions
*take out health costs
*first deflate costs
*Define the CPI annual average for each relevant year.
*Each value corresponds to the income reference year, i.e. the year before the survey (e.g. cpi1999 is the 1998 annual average)
local cpi1999 = 163.01
local cpi2001 = 172.19
local cpi2003 = 179.87
local cpi2005 = 188.91
local cpi2007 = 201.56 
local cpi2009 = 215.25
local cpi2011 = 218.09
gen heal_cons_deflated=heal_cons
forval x=1999(2)2011{
replace heal_cons_deflated=heal_cons_deflated*(`cpi1999' / `cpi`x'') if year==`x'
}
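*Added check (a sketch, not part of the original script): deflating to 1999
*dollars multiplies 1999 costs by cpi1999/cpi1999 = 1, so the base-year
*values should be unchanged
assert heal_cons_deflated==heal_cons if year==1999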
*take out equivalized costs (commented out in this version, which leaves health costs in income)
*replace inc=inc-heal_cons_deflated/equiv

drop if inc<0


*Keep a balanced panel over 1999-2007, ages 25 to 85 across the sample window
drop if year>2007
drop if age<25 & year==1999
drop if age>85 & year==2007
bys hid: gen nyear=_N
keep if nyear==5

*Generate the age polynomial
gen a2=(age^2)/10

*Generate year dummies
tab year, gen(time_dummy)

*Convert income into logs and keep only the relevant variables
replace inc=log(inc)
keep inc time_dummy* age a2 hid year

*Create retirement dummy
gen retired=1 if age>=65
replace retired=0 if retired!=1

*Find fit before retirement
reg inc age a2 time_dummy* retired



************************************************
************************************************
*Report the coefficients of the age polynomial
estimates table, keep(age a2 retired) b
************************************************
************************************************

predict inc_hat 


*Compute the deviation after permanent differences have been removed
gen x_tilda=inc   //use just log income



*Trim observations whose deviation from the household mean lies in the top or bottom 1 percent
*Make years consecutive
replace year=1 if year==1999
replace year=2 if year==2001
replace year=3 if year==2003
replace year=4 if year==2005
replace year=5 if year==2007

xtset hid year

*Compute mean by household (across time)
bysort hid: egen mean_hid_inc=mean(inc)
*Generate deviation from mean
gen dif_mean=inc-mean_hid_inc
*Find the top and bottom 1 percent
sum dif_mean, d
gen p1=r(p1)
gen p99=r(p99)
*Remove observations that are in the tails
replace x_tilda=. if dif_mean<=p1
replace x_tilda=. if dif_mean>=p99

*################################################
*FIND AUTOCOVARIANCES
*################################################






preserve

*Generate lag structure to compute the autocovariances
gen lag1_x_tilda=L.x_tilda
gen lag2_x_tilda=L2.x_tilda
gen lag3_x_tilda=L3.x_tilda
gen lag4_x_tilda=L4.x_tilda

*Compute autocovariances
*1)Variance
correlate x_tilda, covariance
gen v0=r(Var_1)
*2)Lags
forval x=1(1)4{
correlate x_tilda lag`x'_x_tilda, covariance
g float v`x'=r(cov_12)
}
keep v*
duplicates drop

************************************************
************************************************
*Export results for autocovariances
save autocov_final_new_NO_health_ALL_v3.dta, replace
************************************************
************************************************

restore

gen lag1_x_tilda=L.x_tilda
keep x_tilda lag1_x_tilda

gen dif_x_tilda=x_tilda-lag1_x_tilda

egen SD = sd(dif_x_tilda)

egen KURT = kurt(dif_x_tilda)

egen SKEW = skew(dif_x_tilda)

keep SD KURT SKEW


duplicates drop 

merge using autocov_final_new_NO_health_ALL_v3.dta

drop _merge

*keep only the relevant moments and order them according to Table 2
keep SD v0 v1 v2
replace SD = round(SD, 0.01)
replace v0 = round(v0, 0.01)
replace v1 = round(v1, 0.01)
replace v2 = round(v2, 0.01)

order v0 v1 v2 SD

cd ..

cd output

outsheet using tab2_income.csv, c replace