## Wednesday, April 10, 2013

### The effect of non-convergence on MLE estimates

* Maximum likelihood procedures have become widely used to solve a variety of econometric problems.

* Unfortunately, there is no guarantee that these procedures will yield a single solution satisfying the convergence criteria of the maximizing function.

* This might occur for reasons that are difficult to detect, such as locally flat spots or discontinuous regions of the likelihood.

* Maximization procedures are usually evaluated based on 1. their efficiency (speed of convergence) or 2. their robustness at detecting optimal values.

* The problem is that sometimes in simulation we need to limit the time an MLE procedure spends attempting to find a solution.

* What effect does that limitation have, and what do we do with estimates that result from non-convergence?
* 1. Keep them, or 2. throw them out.

* This simulation will explore both options.

* In this simulation we will fall back on a widely used estimator which, when the standard errors are not structurally estimated, is equivalent to the OLS estimator.

* This is the "normal" regression estimator, i.e. the MLE maximization that allows for linearly modeled heteroskedasticity.
cap program drop myNormalReg
program define myNormalReg
    args lnlk xb sigma2
    * $ML_y1 is the first (and only) dependent variable in this lf evaluator.
    qui replace `lnlk' = -ln(sqrt(`sigma2'*2*_pi)) - ($ML_y1-`xb')^2/(2*`sigma2')
end
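Written out, the evaluator above computes the per-observation log likelihood of a normal density whose variance can differ across observations:

```
ln L_i = -ln( sqrt(2*pi*sigma_i^2) ) - (y_i - xb_i)^2 / (2*sigma_i^2)
```

When the `sigma2` equation includes covariates, sigma_i^2 becomes a linear function of them, which is what allows for the linearly modeled heteroskedasticity.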

* First let's generate a sample data set
clear
set obs 300

* I am going to try to make the problem hard to solve by including both additive and multiplicative error.
gen u = (runiform()-.5)
* I made this error small because, somewhat counterintuitively, when the error is small it is actually harder to estimate the variance of the error.
* It takes a little work with simulations to generate data which does not converge.

gen v1 = rnormal()
gen x1 = runiform()-.5

gen v2 = rnormal()
gen x2 = runiform()-.5

gen y = 3 + (1+v1)*x1 + (1+v2)*x2 + u
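The composite error in this design is u + v1*x1 + v2*x2, so conditional on the regressors the variance is not constant. With v1 and v2 standard normal and u uniform on (-.5, .5) (so Var(u) = 1/12):

```
Var(y | x1, x2) = x1^2 + x2^2 + 1/12
```

This is why, below, a variance equation in x1^2 and x2^2 matches the true form of the heteroskedasticity.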

reg y x1 x2

ml model lf myNormalReg (reg: y=x1 x2) (sigma2:)
ml maximize

* The following model is more efficient because it explicitly models the error variance.
gen x1_2 = x1^2
gen x2_2 = x2^2

ml model lf myNormalReg (reg: y=x1 x2) (sigma2: x1_2 x2_2)
ml maximize

* It seems that this model typically converges.

* Let's see if we can't make the maximization harder:
gen x1x2 = x1*x2

gen x1abs = abs(x1)
gen x2abs = abs(x2)

ml model lf myNormalReg (reg: y=x1 x2 x1_2 x2_2 x1x2 x1abs x2abs) (sigma2: x1 x2 x1_2 x2_2 x1x2 x1abs x2abs)
ml maximize, iterate(100)

* It looks like about half the time I run this code it does not converge within 100 iterations.
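Rather than eyeballing the log, Stata's stored results let us detect non-convergence programmatically (a small sketch; after `ml maximize`, `e(converged)` is 1 on convergence and 0 otherwise, while `e(ic)` holds the number of iterations used):

```stata
* Check the stored results left behind by the ml maximize call above
if e(converged) == 0 {
    di as error "No convergence after " e(ic) " iterations"
}
else {
    di as result "Converged in " e(ic) " iterations"
}
```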

* Now let's specify the function we will use to test how the results differ depending upon our method of dealing with non-convergence.

* This program is just a condensation of the above code.
cap program drop sim_converge
program define sim_converge

clear
set obs 300

gen u = (runiform()-.5)

gen v1 = rnormal()
gen x1 = runiform()-.5

gen v2 = rnormal()
gen x2 = runiform()-.5
gen y = 3 + (1+v1)*x1 + (1+v2)*x2 + u
gen x1_2 = x1^2
gen x2_2 = x2^2

ml model lf myNormalReg (reg: y=x1 x2) (sigma2: x1_2 x2_2)
ml maximize

gen x1x2 = x1*x2

gen x1abs = abs(x1)
gen x2abs = abs(x2)

ml model lf myNormalReg (reg: y=x1 x2 x1_2 x2_2 x1x2 x1abs x2abs) (sigma2: x1 x2 x1_2 x2_2 x1x2 x1abs x2abs)
ml maximize, iterate(`1')
* The only difference is that iterate is specified by the user.
end

* Leaving the first argument blank means no maximum number of iterations is specified.
sim_converge
sim_converge 50

* Let's first define what we would like to save from the MLE.
* Yes, I am going to use a forbidden global :)
gl savelist ic=e(ic)
* e(ic) is the stored scalar in which the number of iterations used is saved.

foreach i in reg sigma2 {
    foreach v in x1 x2 x1_2 x2_2 x1x2 x1abs x2abs {
        gl savelist $savelist `i'`v'=[`i']_b[`v']
    }
}

* Let's see what our savelist looks like:
di "${savelist}"

* looking pretty good.

simulate ${savelist} , rep(100) seed(32): sim_converge 50
tab ic
* In my simulation, 51 of the 100 repetitions did not converge by the 50th iteration.

* Let's see if there are systematic differences between estimates.
sum if ic==50
sum if ic<50

* We can see that if the estimator did converge then it is much more precise (smaller sd) than in the cases when it did not converge.

* The mean estimates of regx1, regx2, sigma2x1_2, and sigma2x2_2 are much closer to 1, which is the true parameter value.

* Let's try it again setting convergence at a higher bar:
simulate ${savelist} , rep(100) seed(32): sim_converge 250
tab ic
sum if ic==250
sum if ic<250

* Raising the maximum number of iterations does not lead any additional draws to converge.

* This is problematic because we want to know whether there is a systematic difference between the draws whose estimates converged and those whose estimates did not.

* By the results so far we might be tempted just to include the results of the iterations that did converge.

* First off, let's see if the estimates that converged quickly are better or worse than those that converged more slowly.

recode ic (1/14=0) (15/49=1), gen(grp)

bysort grp: sum regx1 regx2
anova regx1 grp if grp<50

* It seems there is no detectable difference in means between the observations that converged in fewer than 15 iterations and those that converged more slowly.
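A two-sample t-test offers an equivalent quick check on the difference in means between the fast- and slow-converging groups (a sketch; it assumes `grp` was generated by the `recode` above, so `grp<50` keeps only the converged draws):

```stata
* Compare mean coefficient estimates across convergence-speed groups
ttest regx1 if grp<50, by(grp)
ttest regx2 if grp<50, by(grp)
```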

* This implies, assuming the results are generalizable, that truncating the simulation to only the results that converge might produce unbiased estimates.

* We should run the simulation again with more repetitions in order to confirm this.

simulate ${savelist} , rep(500) seed(32): sim_converge 50
tab ic

/* tab ic

e(ic) |      Freq.     Percent        Cum.
------------+-----------------------------------
10 |          1        0.20        0.20
11 |         18        3.63        3.83
12 |         24        4.84        8.67
13 |         48        9.68       18.35
14 |         57       11.49       29.84
15 |         50       10.08       39.92
16 |         24        4.84       44.76
17 |         10        2.02       46.77
18 |         18        3.63       50.40
19 |         10        2.02       52.42
20 |          4        0.81       53.23
21 |          6        1.21       54.44
22 |          1        0.20       54.64
23 |          3        0.60       55.24
24 |          4        0.81       56.05
25 |          2        0.40       56.45
26 |          3        0.60       57.06
29 |          1        0.20       57.26
32 |          1        0.20       57.46
33 |          1        0.20       57.66
50 |        210       42.34      100.00
------------+-----------------------------------
Total |        496      100.00
*/

sum if ic==50
sum if ic<50
recode ic (1/14=0) (15/49=1), gen(grp)

bysort grp: sum regx1 regx2

/*
-> grp = 0

Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
regx1 |       148    .9918629    .1076053    .732878   1.296374
regx2 |       148    .9807507     .112081    .684716   1.447506

-----------------------------------------------------------------------------------------
-> grp = 1

Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
regx1 |       138    .9967074    .1183514   .7492483   1.318624
regx2 |       138    .9913069    .1161812   .6711526   1.407439

-----------------------------------------------------------------------------------------
-> grp = 50

Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
regx1 |       210     .814391    .3268973   .0518116   1.590542
regx2 |       210    .8106436    .3278075   .0707162   1.775239

*/
anova regx1 grp if grp<50
/*
Number of obs =     286     R-squared     =  0.0005
Root MSE      = .112917     Adj R-squared = -0.0031

Source |  Partial SS    df       MS           F     Prob > F
-----------+----------------------------------------------------
Model |  .001675983     1  .001675983       0.13     0.7172
|
grp |  .001675983     1  .001675983       0.13     0.7172
|
Residual |  3.62106459   284  .012750227
-----------+----------------------------------------------------
Total |  3.62274057   285   .01271137
*/

* We can see that even when the sample size is larger (about 140 per iteration group) there is no discernible difference between the draws that converge within the first 15 iterations and those that converge after.

* If we assume that the draws that do not converge within 250 iterations are sampled from the same probability-of-convergence distribution, then this evidence suggests that the rate of convergence is independent of the actual estimates, and it might therefore be safe to exclude draws in which convergence did not occur.

* Now, finally, what we might be interested in is seeing how our estimates change (for the draws in which convergence is achieved) as we include more and more estimates by raising our maximum-iteration threshold.

* How do we do this?

* Well, this might take a little while, but we basically loop through the simulation, saving the mean and standard deviation of the results from each run.

forv i = 15(5)35 {
    simulate ${savelist} , rep(100) seed(32): sim_converge `i'
    sum regx1 if ic<`i'
    global mean_x1_`i' = r(mean)
    global var_x1_`i'  = r(sd)^2
    sum regx2 if ic<`i'
    global mean_x2_`i' = r(mean)
    global var_x2_`i'  = r(sd)^2
}
* By the way this is a highly redundant and inefficient method.
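Since `ic` records the iterations each repetition actually used, and the seed is fixed, a single simulation run at the largest cap contains the same information as the repeated runs: a draw that converges by iteration 15 produces identical estimates regardless of the cap. A sketch of the less redundant approach (it assumes the `savelist` global is still defined):

```stata
* One simulation at the largest iteration cap
simulate ${savelist} , rep(100) seed(32): sim_converge 35

* Compute the truncated summaries for every threshold from the same data
forv i = 15(5)35 {
    qui sum regx1 if ic < `i'
    di "threshold `i': mean regx1 = " r(mean) "  var = " r(sd)^2
}
```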

clear
set obs 5
gen mean_x1 = .
label var mean_x1 "Mean of x1 estimates"
gen mean_x2 = .
label var mean_x2 "Mean of x2 estimates"
gen var_x1 = .
label var var_x1 "Variance of x1 estimates"
gen var_x2 = .
label var var_x2 "Variance of x2 estimates"

gen i = .
label var i "Max # iterations"

* Save the results as variables
forv i = 1(1)5 {
    local ii = 10+`i'*5
    replace mean_x1 = ${mean_x1_`ii'} if _n==`i'
    replace mean_x2 = ${mean_x2_`ii'} if _n==`i'
    replace var_x1  = ${var_x1_`ii'}  if _n==`i'
    replace var_x2  = ${var_x2_`ii'}  if _n==`i'
    replace i = `ii' if _n==`i'
}

two (connected mean_x1 i, msize(large) lwidth(thick)) ///
(connected mean_x2 i, msize(large) lwidth(thick)), name(means, replace)
two (connected var_x1  i, msize(large) lwidth(thick)) ///
(connected var_x2  i, msize(large) lwidth(thick)), name(vars,  replace)

graph combine means vars, col(1) title("Estimates are insensitive to speed of convergence")

* The takeaway seems to be that it is safe (at least in this simulation) to exclude from your analysis draws that did not converge within the specified iteration count.

* This simulation also suggests that it is not ideal to include in your results MLE estimates from runs in which convergence was not achieved.
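One refinement worth noting: rather than inferring non-convergence from `ic` hitting the cap, the convergence flag itself can be added to the list of saved results (a sketch using the same global-macro pattern as above):

```stata
* Save e(converged) alongside the other stored results
gl savelist $savelist conv=e(converged)
simulate ${savelist} , rep(100) seed(32): sim_converge 50
tab conv
drop if conv == 0   // keep only the converged draws
```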