Ex. 3.3
Gauss-Markov theorem:
(a)
Prove the Gauss-Markov theorem: the least squares estimate of a parameter \(a^T\beta\) has variance no bigger than that of any other linear unbiased estimate of \(a^T\beta\) (Section 3.2.2).
(b)
The matrix inequality \(\textbf{B} \preceq \textbf{A}\) holds if \(\textbf{A} - \textbf{B}\) is positive semidefinite. Show that if \(\hat{\textbf{V}}\) is the variance-covariance matrix of the least squares estimate of \(\beta\) and \(\tilde{\textbf{V}}\) is the variance-covariance matrix of any other linear unbiased estimate, then \(\hat{\textbf{V}} \preceq \tilde{\textbf{V}}\).
Remark
Note that the comparison in (a) is restricted to *linear* unbiased estimates, as explicitly specified. See Section 5.4 in the text or Ex. 2.7.
Soln. 3.3
(a)
Let \(\tilde\theta = c^Ty\) be any other linear unbiased estimate of \(a^T\beta\), and write \(c^T = a^T(X^TX)^{-1}X^T + d^T\). Since \(\text{Var}(y) = \sigma^2 I\), we have
\[
\text{Var}(\tilde\theta) = \sigma^2 c^Tc = \sigma^2\left(a^T(X^TX)^{-1}X^T + d^T\right)\left(X(X^TX)^{-1}a + d\right).
\]
Since \(c^Ty\) is unbiased for every \(\beta\), we have \(E[c^Ty] = c^TX\beta = a^T\beta\), which forces \(c^TX = a^T\) and hence \(d^TX = 0\). The cross terms above therefore vanish, leaving
\[
\text{Var}(\tilde\theta) = \sigma^2 a^T(X^TX)^{-1}a + \sigma^2 d^Td = \text{Var}(a^T\hat\beta) + \sigma^2 d^Td.
\]
The proof is therefore complete by noting \(d^Td\ge 0\).
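As a sanity check, here is a minimal numerical sketch of (a), assuming NumPy (the variable names are ours, not from the text): it builds the least squares weights \(c = X(X^TX)^{-1}a\), perturbs them by a \(d\) satisfying \(d^TX = 0\), and confirms that the variance grows by exactly \(d^Td\) (taking \(\sigma^2 = 1\)).

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 50, 3
X = rng.normal(size=(N, p))
a = rng.normal(size=p)

XtX_inv = np.linalg.inv(X.T @ X)
c_ls = X @ XtX_inv @ a                      # least squares weights: c_ls^T y = a^T beta_hat

# Choose d with d^T X = 0 by projecting a random vector onto the
# orthogonal complement of the column space of X.
H = X @ XtX_inv @ X.T                       # hat matrix
d = (np.eye(N) - H) @ rng.normal(size=N)
c_other = c_ls + d                          # weights of another linear unbiased estimate

# With Var(y) = sigma^2 I and sigma^2 = 1, Var(c^T y) = c^T c.
print("Var of LS estimate    :", c_ls @ c_ls)
print("Var of other estimate :", c_other @ c_other)
print("difference (= d^T d)  :", d @ d)     # nonnegative, as the proof shows
```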
(b)
This is the matrix version of (a). Let \(C\) be a \(p\times N\) matrix such that \(Cy\) is another linear unbiased estimate of \(\beta\), and write \(C = (X^TX)^{-1}X^T + D\). Unbiasedness for every \(\beta\) requires \(CX = I\), and hence \(DX = 0\). Then
\[
\tilde{\textbf{V}} = \sigma^2 CC^T = \sigma^2\left((X^TX)^{-1}X^T + D\right)\left(X(X^TX)^{-1} + D^T\right) = \sigma^2(X^TX)^{-1} + \sigma^2 DD^T = \hat{\textbf{V}} + \sigma^2 DD^T,
\]
where the cross terms vanish because \(DX = 0\). The result follows because \(DD^T\) is positive semidefinite.
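A matching numerical sketch for (b), again an illustration rather than part of the proof (NumPy assumed): it constructs a \(D\) with \(DX = 0\), forms \(\tilde{\textbf{V}} = \sigma^2 CC^T\) and \(\hat{\textbf{V}} = \sigma^2(X^TX)^{-1}\) with \(\sigma^2 = 1\), and checks that \(\tilde{\textbf{V}} - \hat{\textbf{V}}\) has nonnegative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 50, 3
X = rng.normal(size=(N, p))

XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T                            # hat matrix
D = rng.normal(size=(p, N)) @ (np.eye(N) - H)    # rows orthogonal to col(X), so D X = 0
C = XtX_inv @ X.T + D                            # weights of another linear unbiased estimate

V_hat = XtX_inv                                  # Var of LS estimate (sigma^2 = 1)
V_tilde = C @ C.T                                # Var of the alternative estimate
diff = V_tilde - V_hat                           # should equal D D^T, hence PSD

print("D X = 0 holds:", np.allclose(D @ X, 0))
print("eigenvalues of V_tilde - V_hat:", np.linalg.eigvalsh(diff))  # all >= 0 up to rounding
```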