Talk:Gauss–Markov theorem
The introduction is confusing to me. It is not clear what is assumed and what is not, and there is no further reference explaining why.
I don't understand the definition of the least squares estimators: they are supposed to make the sum of the squares as small as possible, but this sum of squares appears to be a random variable. Are they supposed to make the expected values of those random variables as small as possible? AxelBoldt 18:31 Jan 24, 2003 (UTC)
For fixed values of x_i and Y_i, the sum of squares is a function of β_i for i = 1, 2. I'll add some material clarifying that. Michael Hardy 19:40 Jan 24, 2003 (UTC)
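A minimal sketch of the point, assuming the simple-regression setup Y_i = β_1 + β_2 x_i + ε_i that the discussion above refers to: once the observed x_i and Y_i are plugged in as fixed numbers, the sum of squares

    S(\beta_1, \beta_2) \;=\; \sum_{i=1}^{n} \bigl(Y_i - \beta_1 - \beta_2 x_i\bigr)^2

is an ordinary (non-random) function of β_1 and β_2, and the least-squares estimators are its minimizers

    (\hat\beta_1, \hat\beta_2) \;=\; \arg\min_{\beta_1,\,\beta_2} S(\beta_1, \beta_2).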
What does homoscedastic mean?
The term is explicitly defined in the article, but I will make it more conspicuous. It means: having equal variances. Michael Hardy 21:29 Jan 24, 2003 (UTC)
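For reference, a compact statement of the error assumptions in the notation assumed above (homoscedasticity is the equal-variance condition in the middle; the others are the usual mean-zero and uncorrelatedness assumptions of the theorem):

    \operatorname{E}[\varepsilon_i] = 0, \qquad
    \operatorname{Var}(\varepsilon_i) = \sigma^2 \ \text{for all } i, \qquad
    \operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0 \ \text{for } i \neq j.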
Should it be made explicit that the variance σ² is unknown? Albmont 13:17, 27 February 2007 (UTC)
What is the meaning of Cov(ε_i, ε_j)? The ε_i are scalars. Covariance is defined only for vectors, isn't it? Sergivs-en 06:05, 16 September 2007 (UTC)
- Usually the concept of covariance is initially introduced as the covariance between two scalar-valued random variables. See covariance. Michael Hardy 13:58, 16 September 2007 (UTC)
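Spelling out the definition referred to here: for two scalar-valued random variables X and Y,

    \operatorname{Cov}(X, Y) \;=\; \operatorname{E}\bigl[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\bigr]
    \;=\; \operatorname{E}[XY] - \operatorname{E}[X]\,\operatorname{E}[Y],

so Cov(ε_i, ε_j) is well defined even though each error term is a scalar.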
linear vs nonlinear least squares
A suggestion is for someone with an interest in this to review the treatment under least squares. The basic Gauss–Markov theorem apparently applies to the linear case. It would be interesting if there are nonlinear generalizations. Dfarrar 14:10, 11 April 2007 (UTC)
Nice article. Assuming that the β's and x's are not random (only the ε's are), would it be more accurate to say, then, that Ŷ is a stochastic process? - just a thought Ernie shoemaker 02:20, 26 July 2007 (UTC)
Very confusing. Also, the theorem should be stated in general vector form
This article is very confusing and does not explain what it sets out to. Saying that the least squares estimator is the "best" one means nothing. How is "best" defined? Later in the article, it seems to imply that "best" means the one with smallest MSE, and so obviously the least squares estimate is "best".
Also, why not generalise to estimation of a vector of quantities, where the errors have a given correlation structure? —Preceding unsigned comment added by 198.240.128.75 (talk) 16:16, 5 February 2008 (UTC)
best?
What does best estimator mean exactly? Best as in Consistent estimator? --217.83.22.80 (talk) 01:19, 14 March 2008 (UTC)
- At least in the context of "best linear unbiased estimator" (BLUE), "best" means "minimum mean squared error within the class of linear unbiased estimators" (according to this paper by G.K. Robinson). As we're restricting attention to unbiased estimators, minimum mean squared error implies minimum variance. So BLUE might less confusingly be called "minimum variance linear unbiased estimator" (MVLUE?). Perhaps BLUE caught on simply because it's shorter and more pronounceable. Google gives this book preview, which points out that the Gauss–Markov theorem states that the least-squares estimator is "BLUE, not necessarily MVUE", as the MVUE may in general be non-linear, and we can't know what it is without specifying a parametric probability distribution. The article needs editing to explain this better, but estimation theory is really not my strong point. Qwfp (talk) 05:32, 14 March 2008 (UTC)
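To make "best" precise in the vector form of the theorem (a sketch in standard notation, not a quotation from the sources linked above): if β̂ is the least-squares estimator and β̃ = CY is any other linear unbiased estimator of β, then

    \operatorname{Var}(\tilde\beta) - \operatorname{Var}(\hat\beta) \ \text{is positive semidefinite},
    \qquad\text{equivalently}\qquad
    \operatorname{Var}(a^{\mathsf T}\tilde\beta) \;\ge\; \operatorname{Var}(a^{\mathsf T}\hat\beta)
    \ \text{for every fixed vector } a.

In the scalar case this reduces to the least-squares estimator having the smallest variance among all linear unbiased estimators.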
- Can there be more than one unbiased estimator? Is it possible to find another estimator with smaller mean squared error (should be similar to standard error?) than the standard OLS approach if the Gauss–Markov assumptions are violated? --217.83.23.127 (talk) 23:49, 16 March 2008 (UTC)
- Yes, of course there are many unbiased estimators. There's a space of "linear unbiased estimators" (each one a linear combination of the response variables) and there are also non-linear ones (e.g. the average of the max and the min of a sample is not linear in the observations, and is an unbiased estimator of the population mean). Michael Hardy (talk) 01:08, 17 March 2008 (UTC)
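A minimal simulation sketch of this example (assuming NumPy, and a symmetric distribution such as a uniform one, so that the midrange is unbiased for the population mean):

import numpy as np

rng = np.random.default_rng(0)
mu, n, reps = 5.0, 20, 100_000

# reps independent samples of size n from Uniform(mu - 1, mu + 1)
samples = rng.uniform(mu - 1, mu + 1, size=(reps, n))

sample_means = samples.mean(axis=1)                          # linear in the observations
midranges = (samples.max(axis=1) + samples.min(axis=1)) / 2  # not linear in the observations

print("average of sample means:", sample_means.mean())  # close to mu = 5.0
print("average of midranges:  ", midranges.mean())      # close to mu = 5.0

Both averages come out close to mu, illustrating that unbiasedness alone does not single out a unique estimator, linear or otherwise.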

