Bias of an estimator
In statistics, the difference between an estimator's expected value and the true value of the parameter being estimated is called the bias. An estimator or decision rule having nonzero bias is said to be biased.
Although the term bias sounds pejorative, it is not necessarily used in that way in statistics. Biased estimators may have desirable properties: not only do they sometimes have a smaller mean squared error than any unbiased estimator, but in some cases the only unbiased estimator takes values outside the convex hull of the parameter space, so using it is absurd (see the Poisson example below).
Definition
Suppose we are trying to estimate the parameter θ using an estimator \hat{\theta} (that is, some function of the observed data). Then the bias of \hat{\theta} is defined to be

\operatorname{Bias}(\hat{\theta}) = \operatorname{E}(\hat{\theta}) - \theta.

In words, this would be "the expected value of the estimator \hat{\theta} minus the true value θ." This may be rewritten as

\operatorname{Bias}(\hat{\theta}) = \operatorname{E}(\hat{\theta} - \theta),

which would read "the expected value of the difference between the estimator and the true value" (the expected value of θ is precisely θ).
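The definition can be checked numerically. The following is a minimal Monte Carlo sketch (not part of the original article); the estimator, distribution, and sample size are illustrative choices, and the bias is approximated by averaging the estimator over many simulated data sets.

```python
# Approximate Bias(theta_hat) = E(theta_hat) - theta by simulation.
# All concrete choices below (normal data, sample mean, n = 10) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def estimate_bias(estimator, draw_sample, theta, reps=100_000):
    """Average the estimator over many simulated samples and subtract the true value."""
    estimates = np.array([estimator(draw_sample(rng)) for _ in range(reps)])
    return estimates.mean() - theta

# The sample mean of n i.i.d. N(mu, 1) observations is an unbiased estimator of mu,
# so its estimated bias should be close to zero (up to Monte Carlo error).
mu, n = 3.0, 10
print(estimate_bias(np.mean, lambda g: g.normal(mu, 1.0, size=n), mu))
```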
Examples
Estimating variance
Suppose X1, ..., Xn are independent and identically distributed normal random variables with expectation μ and variance σ². Let

\overline{X} = \frac{1}{n}\sum_{i=1}^{n} X_i

be the "sample average", and let

S^2 = \frac{1}{n}\sum_{i=1}^{n} \left(X_i - \overline{X}\right)^2

be a "sample variance". We also know that the population variance σ² is defined by

\sigma^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2,

where N is the population size and x_i denotes the i-th member of the whole population.

Then S² is a "biased estimator" of σ² because

\operatorname{E}(S^2) = \frac{n-1}{n}\,\sigma^2 \neq \sigma^2.

In other words, the expected value of the sample variance does not equal the population variance σ², unless S² is multiplied by the normalization factor n/(n − 1).
Common sense would suggest applying the population formula to the sample as well, but doing so gives a biased estimate. The reason is that the sample mean is generally somewhat closer to the observations in the sample than the population mean is. This is so because the sample mean is, by definition, in the middle of the sample, whereas the population mean may even lie outside the sample. So the deviations from the sample mean will often be smaller than the deviations from the population mean, and if the same formula is applied to both, this variance estimate will on average be somewhat smaller in the sample than in the population.
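This shrinkage can be seen directly in a small simulation. The sketch below is illustrative and not from the article; it assumes normal data and uses NumPy's ddof argument to switch between dividing by n and by n − 1.

```python
# Check that S^2 = (1/n) * sum (X_i - Xbar)^2 has expectation (n - 1)/n * sigma^2.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 0.0, 2.0, 5, 200_000

samples = rng.normal(mu, sigma, size=(reps, n))
s2_biased = samples.var(axis=1, ddof=0)    # divides by n (the "population formula")
s2_unbiased = samples.var(axis=1, ddof=1)  # divides by n - 1 (Bessel's correction)

print(s2_biased.mean())        # close to (n - 1)/n * sigma^2 = 3.2
print(s2_unbiased.mean())      # close to sigma^2 = 4.0
```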
Note that when a transformation is applied to an unbiased estimator, the result is not necessarily itself an unbiased estimator of the correspondingly transformed parameter. That is, for a non-linear function f and an unbiased estimator U of a parameter p, f(U) is usually not an unbiased estimator of f(p). For example, the square root of the unbiased estimator of the population variance is not an unbiased estimator of the population standard deviation.
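As an illustration of this point (a sketch, not part of the article), the following assumes normal samples and shows that the square root of the unbiased variance estimator systematically underestimates σ:

```python
# The square root of the unbiased variance estimator is a biased estimator of sigma.
import numpy as np

rng = np.random.default_rng(2)
sigma, n, reps = 1.0, 5, 200_000

samples = rng.normal(0.0, sigma, size=(reps, n))
s = np.sqrt(samples.var(axis=1, ddof=1))  # square root of the unbiased variance estimator

print(s.mean())   # noticeably below sigma = 1.0 (about 0.94 for normal samples with n = 5)
```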
Estimating a Poisson probability
A far more extreme case of a biased estimator being better than any unbiased estimator is well-known: Suppose X has a Poisson distribution with expectation λ. It is desired to estimate

\operatorname{P}(X = 0)^2 = e^{-2\lambda}.
(For example, when incoming calls at a telephone switchboard are modeled as a Poisson process, and λ is the average number of calls per minute, then e−2λ is the probability that no calls arrive in the next two minutes.)
Since the expectation of an unbiased estimator δ(X) is equal to the estimand, i.e.

\operatorname{E}(\delta(X)) = \sum_{x=0}^{\infty} \delta(x) \frac{\lambda^x e^{-\lambda}}{x!} = e^{-2\lambda},

the only function of the data constituting an unbiased estimator is

\delta(X) = (-1)^{X}.
If the observed value of X is 100, then the estimate is 1, although the true value of the quantity being estimated is obviously very likely to be near 0, which is the opposite extreme. And if X is observed to be 101, then the estimate is even more absurd: it is −1, although the quantity being estimated obviously must be positive.
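The unbiasedness of δ(X) = (−1)^X, and the absurdity of its individual values, can both be checked with a short simulation (an illustrative sketch, not from the article; λ = 1 is an arbitrary choice):

```python
# (-1)^X is unbiased for e^(-2*lambda), yet every single estimate it produces is +1 or -1.
import numpy as np

rng = np.random.default_rng(3)
lam, reps = 1.0, 1_000_000

x = rng.poisson(lam, size=reps)
delta = (-1.0) ** x

print(delta.mean())        # close to e^(-2*lambda) ...
print(np.exp(-2 * lam))    # ... about 0.1353 for lambda = 1
print(np.unique(delta))    # but any individual estimate is either -1.0 or 1.0
```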
The (biased) maximum likelihood estimator

\widehat{e^{-2\lambda}} = e^{-2X}

is far better than this unbiased estimator. Not only is its value always positive, but it is also more accurate in the sense that its mean squared error (MSE)

\operatorname{E}\!\left[\left(e^{-2X} - e^{-2\lambda}\right)^2\right] = e^{\lambda(e^{-4} - 1)} - 2e^{\lambda(e^{-2} - 3)} + e^{-4\lambda}

is smaller; compare the unbiased estimator's MSE of

1 - e^{-4\lambda}.

The MSEs are functions of the true value λ. The bias of the maximum-likelihood estimator is

\operatorname{Bias}\!\left(e^{-2X}\right) = \operatorname{E}\!\left(e^{-2X}\right) - e^{-2\lambda} = e^{\lambda(e^{-2} - 1)} - e^{-2\lambda}.
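The two closed-form MSEs above can be cross-checked numerically. The following sketch (illustrative, not from the article) fixes λ = 1 and compares the simulated MSEs of both estimators with the formulas:

```python
# Compare the MSEs of the unbiased estimator (-1)^X and the biased MLE e^(-2X).
import numpy as np

rng = np.random.default_rng(4)
lam, reps = 1.0, 1_000_000
target = np.exp(-2 * lam)

x = rng.poisson(lam, size=reps)
mse_unbiased = np.mean(((-1.0) ** x - target) ** 2)
mse_mle = np.mean((np.exp(-2 * x) - target) ** 2)

print(mse_unbiased, 1 - np.exp(-4 * lam))   # both close to 0.98 for lambda = 1
print(mse_mle,                              # much smaller, about 0.28 for lambda = 1
      np.exp(lam * (np.exp(-4) - 1)) - 2 * np.exp(lam * (np.exp(-2) - 3)) + np.exp(-4 * lam))
```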
Maximum of a discrete uniform distribution
The bias of maximum-likelihood estimators can be substantial. Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random, giving a value X. If n is unknown, then the maximum-likelihood estimator of n is X, even though the expectation of X is only (n + 1)/2; we can only be certain that n is at least X and is probably more. In this case, the natural unbiased estimator is 2X − 1, since E(2X − 1) = 2E(X) − 1 = n.
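As an illustrative check (not part of the article), the sketch below simulates repeated single draws from tickets 1..n and compares the averages of the MLE X and of the unbiased estimator 2X − 1 with the true n:

```python
# Draw one ticket X uniformly from {1, ..., n}: the MLE X is biased low, 2X - 1 is unbiased.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 50, 1_000_000

x = rng.integers(1, n + 1, size=reps)  # one ticket per repetition, values 1..n inclusive

print(x.mean())            # close to (n + 1)/2 = 25.5, well below n
print((2 * x - 1).mean())  # close to n = 50
```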










