Simulations do not have much to say about consistency. However, they can tell us a great deal about how biased an estimator is under various scenarios. In general, simulations can examine what happens in messy scenarios where standard assumptions are violated: for instance, what happens if the error is slightly correlated with the explanatory variable? They can also show how fast an estimator converges on the true value. Often all we know is that an estimator is "consistent" or "relatively efficient"; what is not clear is how badly the estimator performs when we have limited data. Some estimators seem to work well with 100 observations, others only with 10,000.
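To make the correlated-error scenario concrete, here is a minimal sketch of such a data-generating process in Python. The normal distributions and the values of rho and beta are illustrative assumptions, not anything from the original setup:

```python
import numpy as np

def draw_sample(n, rho=0.3, beta=2.0, rng=None):
    """Draw one sample where the error is mildly correlated with x.

    A shared shock u enters both x and the error, so OLS on y = beta*x + e
    no longer satisfies the usual exogeneity assumption. The values of rho
    and beta are arbitrary choices for illustration.
    """
    if rng is None:
        rng = np.random.default_rng()
    u = rng.normal(size=n)            # shock shared by x and the error
    x = rng.normal(size=n) + rho * u  # explanatory variable
    e = rng.normal(size=n) + rho * u  # error, correlated with x through u
    y = beta * x + e
    return x, y
```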
To test the unbiasedness of an estimator we do a Monte Carlo-type analysis. The data are drawn repeatedly many times and the estimator is applied to each draw. If the average of those estimates converges on the true parameter as the number of simulations increases, then the estimator is unbiased under the conditions created by the simulation.
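A sketch of that Monte Carlo loop, reusing the draw_sample helper above: it applies OLS to each draw and averages the estimates across draws. The simulation count, sample size, and seed are arbitrary choices.

```python
def ols_slope(x, y):
    """OLS slope from demeaned data (the intercept is absorbed by demeaning)."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

def monte_carlo_mean(n_sims=5_000, n=100, rho=0.3, beta=2.0, seed=42):
    """Average OLS estimate over repeated draws from draw_sample."""
    rng = np.random.default_rng(seed)
    estimates = [ols_slope(*draw_sample(n, rho=rho, beta=beta, rng=rng))
                 for _ in range(n_sims)]
    return np.mean(estimates)

# With rho = 0.3 the average estimate settles noticeably above beta = 2,
# revealing the bias; with rho = 0 it settles on 2.
print(monte_carlo_mean(rho=0.3), monte_carlo_mean(rho=0.0))
```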
Though more difficult to show, consistency can also be approached by simulation. This is generally what happens when you change the number of observations in a simulation from 100, to 1,000, to 10,000: we usually see that the estimate gets better as the simulated sample size gets larger.
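A rough version of that exercise, again reusing the helpers above, with rho set to 0 so that OLS is consistent in this setup; the grid of sample sizes mirrors the ones mentioned in the text:

```python
# With rho = 0, OLS is consistent here: the spread of the estimates
# around beta = 2 should shrink roughly like 1/sqrt(n) as n grows.
for n in (100, 1_000, 10_000):
    rng = np.random.default_rng(0)
    estimates = np.array([ols_slope(*draw_sample(n, rho=0.0, rng=rng))
                          for _ in range(2_000)])
    print(f"n={n:>6}: mean={estimates.mean():.4f}, sd={estimates.std():.4f}")
```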
However, failure to approach the true parameter value at any finite sample size, whether 100 or 100 million observations, does not demonstrate that an estimator fails to converge. Convergence is a technical term with a technical definition that allows for any speed of convergence, so long as convergence happens as the number of observations approaches infinity.
Thus, simulations of estimators are primarily useful for testing the bias of estimators under different scenarios. In addition, they can also tell us something about the consistency of estimators.