Standard errors are just as important as coefficients when estimating a relationship, because they are a critical component of inference. Without standard errors, coefficients are largely uninteresting.
What does it matter that an estimated coefficient is, say, 7.3 if we do not know its standard error? If the standard error is 1, then 7.3 is quite different from zero, if that is what matters for the question at hand. If the standard error is instead 100, then a coefficient of 7.3 is likely just random noise.
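To make this concrete, here is a minimal sketch of the arithmetic, using the hypothetical coefficient of 7.3 from above. Dividing the coefficient by each standard error gives a z statistic, and a normal approximation turns that into a two-sided p-value:

```python
import math

coef = 7.3  # hypothetical estimate from the discussion above

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the standard normal."""
    return math.erfc(abs(z) / math.sqrt(2))

p_se1 = two_sided_p(coef / 1.0)    # z = 7.3  -> p is essentially zero
p_se100 = two_sided_p(coef / 100.0)  # z = 0.073 -> p is near 1

print(f"SE =   1: p = {p_se1:.3g}")
print(f"SE = 100: p = {p_se100:.3g}")
```

The same point estimate goes from overwhelming evidence against the null to no evidence at all, purely as a function of the standard error.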
Standard errors are also interesting in their own right, even before we interpret the coefficient: they give us a point value that tells us how much we should expect our estimates to vary from sample to sample.
At the same time, standard errors are often much more difficult to calculate than coefficients and are even more sensitive to correct specification. Standard errors biased downward make a result look more significant than it really is (over-rejecting the null), while standard errors biased upward make it look less significant than it really is (under-rejecting the null).
A good example is failing to cluster standard errors when appropriate. Suppose you have data on student outcomes from twenty different summer camps. Not clustering by camp implicitly assumes that every camper's outcome is independent of every other camper's. Clustering at the camp level instead allows campers at the same camp to share a common shock (error) that season.
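The summer-camp scenario can be sketched with simulated data. The code below is an illustration, not any particular library's implementation: it simulates twenty camps with a shared camp-level shock, fits OLS by hand with NumPy, and compares the naive homoskedastic standard errors to cluster-robust "sandwich" standard errors (camp-level score sums; small-sample degrees-of-freedom corrections are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
n_camps, per_camp = 20, 30
n = n_camps * per_camp
camp = np.repeat(np.arange(n_camps), per_camp)

# Outcome depends on x plus a shock shared by everyone at the same camp
x = rng.normal(size=n)
camp_shock = rng.normal(scale=2.0, size=n_camps)[camp]
y = 1.0 + 0.5 * x + camp_shock + rng.normal(size=n)

# OLS fit: intercept and slope
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
u = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Naive SEs assume all 600 errors are independent
sigma2 = (u @ u) / (n - X.shape[1])
se_naive = np.sqrt(np.diag(sigma2 * XtX_inv))

# Cluster-robust SEs: sum scores within each camp before squaring
meat = np.zeros((2, 2))
for g in range(n_camps):
    s = X[camp == g].T @ u[camp == g]
    meat += np.outer(s, s)
se_cluster = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

print("naive SEs:    ", se_naive)
print("clustered SEs:", se_cluster)
```

With a sizable shared shock, the clustered standard error on the intercept comes out several times larger than the naive one, which is exactly the over-rejection problem described above: treating 600 campers as 600 independent observations overstates how much we actually know.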