Wednesday, March 27, 2013

ML and initial values

* This post explores the setting of initial values with the ML command.

* There are several reasons you may be interested in doing this.

* 1. You are having difficulty getting your model to converge and suspect
*        that your initial values are infeasible.
* 2. You already have reasonable estimates and believe you might gain
*        efficiency by plugging them in as initial values.
* 3. You are interested in using Stata's approximation algorithm to compute
*        standard errors at a given set of parameter values.

* Let's start with a simple data set.
clear
set obs 1000

gen x = runiform()*2

gen u = rnormal()*5

gen y = 2 + 2*x + u
* x predicts y

graph twoway (lfitci y x) (scatter y x), title("X reasonably predicts Y")

* Let's say we are interested in estimating the three parameters in our data:
*    the coefficient on x, the constant, and the variance of the error (25).
* We can define the likelihood evaluator myNormalReg and use ml to estimate
*    all three simultaneously.

cap program drop myNormalReg
program define myNormalReg
  args lnlk xb sigma2
  qui replace `lnlk' = -ln(sqrt(`sigma2'*2*_pi)) - ($ML_y1-`xb')^2/(2*`sigma2')
end

ml model lf myNormalReg (reg: y = x) (sigma2:)
  ml max
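* As an aside (a sketch, not part of the original post), -ml check- verifies
*    that the likelihood evaluator computes without errors before maximizing:
ml model lf myNormalReg (reg: y = x) (sigma2:)
  ml check
  ml max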
* Our estimates look reasonable.  We iterated 4 times and converged quickly.
* This is our benchmark.

* We also know that OLS gives coefficient estimates identical to the above MLE,
*    though it does not directly estimate sigma2; its residual variance
*    estimate applies a degrees-of-freedom correction (dividing by N-k rather
*    than N).
reg y x
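* As a quick check (a sketch, not part of the original post): since the ML
*    variance estimate divides the residual sum of squares by N while OLS
*    divides by N-k, the ML sigma2 above should be close to:
di "OLS residual variance:   " e(rmse)^2
di "Implied ML sigma2 (k=2): " e(rmse)^2*(e(N)-2)/e(N)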

* So let's try the previous MLE using our OLS estimates as starting values.
* (Store the OLS results in locals first, since -ml model- replaces the
*    current estimation results.)
local b_x = _b[x]
local b_cons = _b[_cons]
ml model lf myNormalReg (reg: y = x) (sigma2:)
  ml init reg:x = `b_x'
  ml init reg:_cons = `b_cons'
  ml max

* Unfortunately, perhaps this simple setup is too easy to solve.
* There is no noticeable difference in convergence using the OLS estimates as
*    starting values.

* Alternatively, let's see if things improve if we use the true values.
ml model lf myNormalReg (reg: y = x) (sigma2:)
  ml init reg:x = 2
  ml init reg:_cons = 2
  ml init sigma2:_cons = 25
  ml max
* The results are pretty much the same as above.
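
* Equivalently (a sketch, assuming the same model), -ml init- also accepts a
*    full row vector of starting values with the -copy- option:
matrix b0 = (2, 2, 25)
ml model lf myNormalReg (reg: y = x) (sigma2:)
  ml init b0, copy
  ml max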

* Finally, let's imagine that we would like to estimate the standard errors at
*    the true parameter values as if ml had converged on them.  This might be
*    useful if you have two methods for maximizing an equation: one that
*    converged but did not give valid standard errors, and one that did not
*    converge but gave valid standard errors.
ml model lf myNormalReg (reg: y = x) (sigma2:)
  ml init reg:x = 2
  ml init reg:_cons = 2
  ml init sigma2:_cons = 25
  ml max, iter(0)
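
* With iter(0) no iterations are performed, so the reported coefficients are
*    just the initial values; the standard errors are computed from the
*    variance-covariance matrix evaluated at those values.  -ml display-
*    replays the table:
ml display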
