* Stata simulation and estimation

* This simulation tests our ability to estimate consumer preferences

* v=max{x,y}{x^a * y^b} s.t. (W=xPx + yPy)

* since ln is monotonic and increasing, this is the same as:

* max{x,y}{a*ln(x)+b*ln(y)} s.t. (W=xPx + yPy)

* L=a*ln(x)+b*ln(y) + M*(W - xPx - yPy)

* Lx:= a/x - M*Px=0

* Ly:= b/y - M*Py=0

* LM:= W-xPx - yPy=0

* a/xPx = M

* b/yPy = M

* a/xPx = b/yPy

* xPx/a = yPy/b

* xPx = ayPy/b

* W-ayPy/b - yPy = 0

* W-(Py(a/b) + Py)y = 0

* y = W/(Py(a/b + 1))

* because we know the solutions are symmetric

* x = W/(Px(b/a + 1))
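These closed forms are easy to sanity-check numerically. A quick sketch outside Stata (Python; the parameter values here are illustrative assumptions, not the data below) maximizes utility along the budget line by brute force and compares the result with the formula:

```python
# Sanity check of the closed-form Cobb-Douglas demands derived above.
# The parameter values are illustrative assumptions.
import math

a, b = 0.7, 0.3
W, Px, Py = 10.0, 2.0, 1.5

# Closed forms from the derivation
y_star = W / (Py * (a / b + 1))   # = 2.0
x_star = W / (Px * (b / a + 1))   # = 3.5

# Brute force: walk the budget line x = (W - y*Py)/Px and maximize utility
best_y, best_u = None, -math.inf
steps = 100000
for i in range(1, steps):
    y = (W / Py) * i / steps
    x = (W - y * Py) / Px
    if x <= 0:
        continue
    u = a * math.log(x) + b * math.log(y)
    if u > best_u:
        best_u, best_y = u, y

print(abs(best_y - y_star) < 1e-3)  # True: grid search agrees with the formula
```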

clear

* So let's start the simulation

* Say we have a cross section with 1000 cities

set seed 101

set obs 1000

* First let us set the parameters a and b; these are not observable

gen a = .7

gen b = .3

* Each city faces a different price. The .5 is so that no city

* gets a price too close to zero.

gen Px=runiform()+.5

gen Py=runiform()+.5

* each city has a different level of wealth

gen W=runiform()*5+1

* Let's generate the errors in X. We want the errors to be correlated,

* which can be gotten at by a little trick, but I also wanted the errors

* to be uniform. Unfortunately, adding together 2 uniform distributions

* will not make another uniform distribution

gen rv1=runiform()

gen rv2=runiform()

* The correlation between the variables is

gl rho12 = .5

gen v1=rv1

gen v2=rv1*$rho12 + rv2*(1-$rho12)^.5

corr v1 v2

* Not exact (the exact weights would be $rho12 and (1-$rho12^2)^.5), but a good approximation
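The trick can be checked outside Stata. This Python sketch (my own illustration, not part of the do-file) compares the weights used above with the exact weights rho and (1-rho^2)^.5, which deliver correlation rho for equal-variance inputs:

```python
# Check of the correlated-uniforms trick. With weights (rho, (1-rho)^.5), as above,
# the correlation comes out near rho but not exactly; the exact weights for
# equal-variance inputs are (rho, (1-rho^2)^.5).
import random
random.seed(101)

rho, n = 0.5, 100000
rv1 = [random.random() for _ in range(n)]
rv2 = [random.random() for _ in range(n)]

def corr(u, v):
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    cov = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    su = sum((ui - mu) ** 2 for ui in u) ** 0.5
    sv = sum((vi - mv) ** 2 for vi in v) ** 0.5
    return cov / (su * sv)

approx = [r1 * rho + r2 * (1 - rho) ** 0.5 for r1, r2 in zip(rv1, rv2)]
exact = [r1 * rho + r2 * (1 - rho ** 2) ** 0.5 for r1, r2 in zip(rv1, rv2)]

print(corr(rv1, approx))  # about rho/(rho^2 + 1 - rho)^.5 = 0.577, not 0.5
print(corr(rv1, exact))   # about 0.5
```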

* Now let's generate consumption levels

gen y = W/(Py*(a/b + 1)) + v1*4

label var y "Food"

gen x = W/(Px*(b/a + 1)) + v2*4

label var x "Entertainment"

sum x y

************************************************************

* simulation END

************************************************************

* Estimation begin

* now we want to recover the underlying a and b coefficients

* y = W/(Py(a/b + 1)) = (W/Py)/(a/b + 1)

* x = W/(Px(b/a + 1)) = (W/Px)/(b/a + 1)

* To make things simple let's call a/b gamma

* To make things simple let's call b/a gamma_inv

* Now let's see how well we can retrieve the a/b ratio

* (Note: In this case the magnitudes of a and b are irrelevant)

nl (y = (W/Py)/({gamma_hat} + 1))

* Save the coefficient

local gamma_hat = _b[/gamma_hat]

predict y_hat

label var y_hat "Food"
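For intuition, the nl command above is doing one-parameter least squares. A miniature Python version (an illustration on freshly simulated data, not the Stata dataset above; the noise level and grid are assumptions) recovers gamma by grid search over the sum of squared residuals:

```python
# What nl does here, in miniature: least squares over a single parameter gamma
# in y = (W/Py)/(gamma + 1). The simulated data and grid are illustrative.
import random
random.seed(101)

true_gamma = 0.7 / 0.3          # a/b
n = 1000
z = [random.uniform(1, 6) / random.uniform(0.5, 1.5) for _ in range(n)]  # W/Py
y = [zi / (true_gamma + 1) + random.gauss(0, 0.1) for zi in z]

def ssr(g):
    # Sum of squared residuals at a candidate gamma
    return sum((yi - zi / (g + 1)) ** 2 for zi, yi in zip(z, y))

gamma_hat = min((g / 100 for g in range(1, 500)), key=ssr)
print(round(gamma_hat, 2))  # close to the true a/b = 2.33
```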

nl (x = (W/Px)/({gamma_inv_hat} + 1))

local gamma_inv_hat = _b[/gamma_inv_hat]

predict x_hat

label var x_hat "Entertainment"

gen p = x

label var p "perfect fit"

two (line x_hat x , sort) (line y_hat y , sort) (line p p , sort color(red))

local gamma_inv_hat_inv =1/`gamma_inv_hat'

di "Does gamma_inv_hat_inv = gamma_hat? (i.e. `gamma_inv_hat_inv' = `gamma_hat'?)"

* What we really want to know is:

* is (gamma_hat - gamma_inv_hat_inv) != 0?

* In order to evaluate (gamma_hat - gamma_inv_hat_inv) we need to find a standard error

* for (gamma_hat - gamma_inv_hat_inv).

* There are several ways of doing this. The most important thing to recognize

* is that both gamma_hat and gamma_inv_hat have their own standard errors.

* We will bootstrap to find the standard errors on the gammma_inv_inv estimate,

* and we might as well bootstrap the errors of gamma_hat too

cap program drop gamma_inv_hat

program define gamma_inv_hat, rclass

nl (y = (W/Py)/({gamma_hat} + 1))

return scalar gamma_hat = _b[/gamma_hat]

nl (x = (W/Px)/({gamma_inv_hat} + 1))

return scalar gammma_inv_inv = 1/_b[/gamma_inv_hat]

end

gamma_inv_hat

return list

bs gamma_hat=r(gamma_hat) gammma_inv_inv=r(gammma_inv_inv), reps(50): gamma_inv_hat

local se_gamma_hat = _se[gamma_hat]

local se_gammma_inv_inv = _se[gammma_inv_inv]

* If we assume the worst-case correlation between gamma_hat and gammma_inv_inv, then

* se(gamma_hat - gammma_inv_inv) <= se(gamma_hat) + se(gammma_inv_inv)

* This is the most conservative assumption when you are trying to test whether

* two estimators are different. But because we are trying to test that they are the

* same, it is the most generous assumption

local se_tot= `se_gamma_hat' + `se_gammma_inv_inv'

di "so the total se() is se_gamma_hat + se_gammma_inv_inv =" ///

_newline " `se_gamma_hat' + `se_gammma_inv_inv' = `se_tot'" ///

_newline "and gamma_inv_hat_inv - gamma_hat = " `=1/`gamma_inv_hat'' - `gamma_hat' ///

_newline "And we want to test the statistical magnitude of:" ///

_newline " (gamma_inv_hat_inv - gamma_hat)/(se_gamma_hat + se_gammma_inv_inv)" ///

_newline " =(`gamma_inv_hat_inv'-`gamma_hat')/(`se_gamma_hat' + `se_gammma_inv_inv')" ///

_newline " ="(`gamma_inv_hat_inv'-`gamma_hat')/(`se_gamma_hat' + `se_gammma_inv_inv')
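The bound used above can be sketched numerically. For a difference of two estimates, se(g1 - g2) = (se1^2 + se2^2 - 2*rho*se1*se2)^.5, which is largest at rho = -1, where it collapses to se1 + se2. A quick Python check with hypothetical se values (stand-ins for the bootstrap ones above):

```python
# Worst-case standard error of a difference of two estimates.
# The se values are hypothetical stand-ins for the bootstrap se's above.
se1, se2 = 0.04, 0.06

def se_diff(rho):
    # se of (g1 - g2) given correlation rho between the two estimators
    return (se1 ** 2 + se2 ** 2 - 2 * rho * se1 * se2) ** 0.5

worst = max(se_diff(r / 100) for r in range(-100, 101))
print(abs(worst - (se1 + se2)) < 1e-9)  # True: the sum of se's is the worst case
```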

***********************************STOP****************************************

* So what is the deal? Is the non-linear estimator so inconsistent that it will never

* converge on the correct answer?

* Check your model specifications!

* Where do you allow for the error to enter? This might not be a big deal with an E(u)=0

* error, but since we used runiform as our error specification things get a little

* more dicey, because E(U(0,1))=.5
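The same point can be demonstrated with ordinary least squares. This Python sketch (an illustration under assumed values, separate from the Stata data) fits a slope with and without a constant when the error has mean 2; only the specification with a constant recovers the truth:

```python
# Omitting the constant when E(u) != 0 biases the slope; adding it fixes things.
# The data-generating values here are illustrative assumptions.
import random
random.seed(101)

n = 10000
x = [random.uniform(0.5, 1.5) for _ in range(n)]
y = [2.0 * xi + 4 * random.random() for xi in x]  # true slope 2, error mean 2

# Least squares through the origin: b = sum(x*y)/sum(x^2)
b_nocons = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Least squares with a constant: the usual simple-regression slope
mx, my = sum(x) / n, sum(y) / n
b_cons = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
         sum((xi - mx) ** 2 for xi in x)

print(round(b_nocons, 1))  # well above 2: the error mean leaks into the slope
print(round(b_cons, 1))    # close to the true slope of 2
```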

* So let's try this again:

nl (y = (W/Py)/({gamma_hat} + 1) + {alpha})

* Save the coefficient

local gamma_hat = _b[/gamma_hat]

nl (x = (W/Px)/({gamma_inv_hat} + 1) + {alpha})

local gamma_inv_hat = _b[/gamma_inv_hat]

local gamma_inv_hat_inv =1/`gamma_inv_hat'

di "Does gamma_inv_hat_inv = gamma_hat? (i.e. `gamma_inv_hat_inv' = `gamma_hat'?)"

cap program drop gamma_inv_hat

program define gamma_inv_hat, rclass

nl (y = (W/Py)/({gamma_hat} + 1)+ {alpha})

return scalar gamma_hat = _b[/gamma_hat]

nl (x = (W/Px)/({gamma_inv_hat} + 1)+ {alpha})

return scalar gammma_inv_inv = 1/_b[/gamma_inv_hat]

end

gamma_inv_hat

return list

bs gamma_hat=r(gamma_hat) gammma_inv_inv=r(gammma_inv_inv), reps(50): gamma_inv_hat

local se_gamma_hat = _se[gamma_hat]

local se_gammma_inv_inv = _se[gammma_inv_inv]

* If we instead assume that gamma_hat and gammma_inv_inv are uncorrelated, then

* se(gamma_hat - gammma_inv_inv) = (se(gamma_hat)^2 + se(gammma_inv_inv)^2)^.5

* This is smaller than the sum of the standard errors used above, so it is a

* stricter standard when testing that the two estimates are the same

local test_stat = (`gamma_inv_hat_inv'-`gamma_hat')/(`se_gamma_hat'^2 + `se_gammma_inv_inv'^2)^.5

di "so the total se() is (se_gamma_hat^2 + se_gammma_inv_inv^2)^.5 =" ///

_newline " (`se_gamma_hat'^2 + `se_gammma_inv_inv'^2)^.5 = " (`se_gamma_hat'^2 + `se_gammma_inv_inv'^2)^.5 ///

_newline "and gamma_inv_hat_inv - gamma_hat = " `=1/`gamma_inv_hat'' - `gamma_hat' ///

_newline "And we want to test the statistical magnitude of:" ///

_newline " (gamma_inv_hat_inv - gamma_hat)/(se_gamma_hat^2 + se_gammma_inv_inv^2)^.5" ///

_newline " =(`gamma_inv_hat_inv'-`gamma_hat')/(`se_gamma_hat'^2 + `se_gammma_inv_inv'^2)^.5" ///

_newline " = `test_stat'"

* I am not entirely sure how many degrees of freedom to use with the resulting test

* We are testing 1 restriction with _N - 2 degrees of freedom.

local fstat=F(1,_N - 2,`test_stat'^2)

di "If I am doing this right then the probability of seeing these results based on the F" ///

_newline " distribution is equal to 1-F(1,_N - 2,`=`test_stat'^2') =" 1-`fstat'

di "So we fail to reject the null!"
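As a cross-check on the distribution used: for 1 numerator degree of freedom, F(1, d) is the square of a t(d), so for large d the p-value 1 - F(1, d, t^2) is approximately the two-sided normal p-value. A quick Python check with a hypothetical test statistic of 0.8 (the actual value depends on the run above):

```python
# For large denominator df, 1 - F(1, d, t^2) ~ 2*(1 - Phi(|t|)),
# the two-sided normal p-value. t = 0.8 is a hypothetical stand-in.
from statistics import NormalDist

t = 0.8
p_normal = 2 * (1 - NormalDist().cdf(abs(t)))
print(round(p_normal, 2))  # 0.42: nowhere near significance, so fail to reject
```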
