Sunday, May 13, 2012

Average Treatment Effects and Correlated Random Coefficients


* Average Treatment Effects and Correlated Random Coefficients

* Wooldridge, J. (2003). Further results on instrumental
* variables estimation of average treatment effects in
* the correlated random coefficient model. Economics
* Letters, 79(2), 185-191. doi:10.1016/S0165-1765(02)00318-X

* This simulation follows Wooldridge's paper, which shows
* that it is possible to consistently estimate the
* population average coefficients of the correlated random
* coefficient (CRC) model with multiple treatment
* variables under the standard assumptions under
* which instrumental variables (IV) estimators are
* consistent.

* In order to understand the CRC model, think of the
* following model.  Equation (1):

* E(y|a,b,w) = a + wb = a + b1*w1 + b2*w2 + ... + bG*wG   (1)

* b can depend on unobserved heterogeneity as well
* as on w.  Writing the equation in error form (2):

* y = a + wb + e,  E(e|a,b,w)=0                          (2)

* b can vary for every individual observation i.

* Therefore we are not trying to estimate each individual
* b, which is impossible; instead we are attempting
* to estimate E(b), the average treatment effect (ATE).

* This is in general difficult when the bj are correlated
* with the individual's unobserved heterogeneity.

* In order to tackle this problem Wooldridge assumes
* that the covariates x and the instruments z are
* redundant (ignorable) in the structural model, i.e.

* E(y|a,b,w,x,z)=E(y|a,b,w)                          (6)

* The next assumption is what separates the x's from the z's.

* E(a|x,z)=E(a|x)=gamma0 + x*gamma                   (7)

* This says that the mean a can depend on the explanatory
* variables x but not on the instrumental variable z.

* E(bj|x,z) = E(bj|x) = betaj + (x-E(x))*deltaj

* This assumption says that the average coefficient can
* depend on the explanatory variables x but not on the
* instrumental variable z.

* We can write a = gamma0 + x*gamma + c,  E(c|x,z)=0     (8)
* and bj = betaj + (x-E(x))*deltaj + vj,  E(vj|x,z)=0    (9)

* Ultimately, by substituting (8) and (9) back into (1) we get:

* y = gamma0 + x*gamma + w*beta + w1*(x-E(x))*delta1 + ...
*     + wG*(x-E(x))*deltaG + c + wv + e                  (10)
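
* To see where (10) comes from, here is the substitution
* step written out (same notation as above): plug (8) and
* (9) into the error form of (1), y = a + w1*b1 + ... + wG*bG + e:

* y = (gamma0 + x*gamma + c)
*     + w1*(beta1 + (x-E(x))*delta1 + v1) + ...
*     + wG*(betaG + (x-E(x))*deltaG + vG) + e
*   = gamma0 + x*gamma + w*beta + w1*(x-E(x))*delta1 + ...
*     + wG*(x-E(x))*deltaG + (c + wv + e)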

* The composite error term is c + wv + e.  Under the
* assumptions thus far E(c|x,z)=E(e|x,z)=0.  However,
* E(wv|x,z) != 0 in general, because b is generally not a
* deterministic linear function of x (so v is not zero)
* and w can be correlated with v.

* Let us simulate up to this point, imagining that our
* b's really are deterministic linear functions of the x's.

clear
set obs 10000

gen x1 = rnormal()
gen x2 = rnormal()

* We will force w to be uncorrelated with x.
gen w1 = rnormal()
gen w2 = rnormal()

* Each individual has his/her own intercept or starting
* point c.
gen c=rnormal()
gen a= -5 + 3*x1 - 2*x2 + c

* Let us first imagine b being a deterministic function
* of the observables x.  Each individual has a unique
* response to w.
gen b1 =  1 + .5*x1 + -2*x2
gen b2 = -2 + 1.75*x1 + 3*x2

* Imagine w1 as being years of education, w2 as being
* offered a job right out of college, x1 as intelligence,
* and x2 as GPA.  The interesting thing is that having a
* high GPA might be correlated with years of education,
* but it also might help explain the effect that years of
* education has on y (future income).

* Let us generate our error term e.
gen e=rnormal()*10

* Now let's generate our y variable.
gen y = a + b1*w1 + b2*w2 + e
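
* As a quick sanity check on the simulated data (not part of
* the estimation procedure itself): since b1 and b2 are built
* directly from x1 and x2, they should be strongly correlated
* with the x's, while w1 and w2 were drawn independently of
* everything else.
corr b1 b2 x1 x2 w1 w2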

* Next let us generate our w*(x-E(x)) interaction variables.
* The sample average of x is of course not E(x), but it is
* a consistent estimator of E(x).

sum x1
gen w1_x1 = w1*(x1-r(mean))
gen w2_x1 = w2*(x1-r(mean))

sum x2
gen w1_x2 = w1*(x2-r(mean))
gen w2_x2 = w2*(x2-r(mean))

* Now we can estimate all of our coefficients directly using
* equation 10.

* y = gamma0 + x*gamma + w*beta + w1*(x-E(x))*delta1 + ...
*     + wG*(x-E(x))*deltaG + c + wv + e                  (10)
reg y x1 x2 w1 w2 w1_x1 w1_x2 w2_x1 w2_x2
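
* Because x1 and x2 are drawn with population mean zero, the
* average treatment effects in this simulation are E(b1) = 1
* and E(b2) = -2, so the coefficients on w1 and w2 above
* should be close to those values (and the interaction
* coefficients close to .5, -2, 1.75, and 3).  A quick
* comparison with the simulated coefficients:
sum b1 b2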

* One can see that in the case where b is completely
* determined by a linear function of x, the above CRC
* estimator works fine.

* Stay tuned for what happens when there is some error in b!
