Ex. 3.3

Gauss-Markov theorem:

(a)

Prove the Gauss-Markov theorem: the least squares estimate of a parameter \(a^T\beta\) has variance no bigger than that of any other linear unbiased estimate of \(a^T\beta\) (Section 3.2.2).

(b)

The matrix inequality \(\textbf{B} \preceq \textbf{A}\) holds if \(\textbf{A} - \textbf{B}\) is positive semidefinite. Show that if \(\hat{\textbf{V}}\) is the variance-covariance matrix of the least squares estimate of \(\beta\) and \(\tilde{\textbf{V}}\) is the variance-covariance matrix of any other linear unbiased estimate, then \(\hat{\textbf{V}} \preceq \tilde{\textbf{V}}\).

Remark

Note that part (a) explicitly restricts attention to linear estimates. See Section 5.4 in the text or Ex. 2.7.

Soln. 3.3
(a)

Let \(\tilde \theta = c^Ty\) be another linear unbiased estimate of \(a^T\beta\), and write \(c^T=a^T(X^TX)^{-1}X^T + d^T\) for some \(d\in\mathbb{R}^N\). We have

\[\begin{equation} E[c^Ty] = c^TX\beta = a^T\beta + d^TX\beta.\nonumber \end{equation}\]

Since \(c^Ty\) is unbiased for every \(\beta\), we must have \(d^TX=0\). Therefore,

\[\begin{eqnarray} \text{Var}(c^Ty) &=& \sigma^2 c^Tc\nonumber\\ &=& \sigma^2\left(a^T(X^TX)^{-1}X^T+d^T\right)\left(a^T(X^TX)^{-1}X^T+d^T\right)^T\nonumber\\ &=&\sigma^2\left(a^T(X^TX)^{-1}a + d^Td\right)\nonumber\\ &=&\text{Var}(a^T\hat\beta) + \sigma^2 d^Td,\nonumber \end{eqnarray}\]

where the cross terms vanish because \(d^TX=0\).

The proof is therefore complete by noting \(d^Td\ge 0\).
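As a quick sanity check (not part of the exercise), here is a minimal NumPy sketch of the identity \(\text{Var}(c^Ty) = \text{Var}(a^T\hat\beta) + \sigma^2 d^Td\). The dimensions, seed, and names (`c0`, `d`) are illustrative choices; the perturbation \(d\) is obtained by projecting a random vector onto the orthogonal complement of the column space of \(X\), so that \(d^TX=0\).

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, sigma = 50, 3, 1.5
X = rng.normal(size=(N, p))
a = rng.normal(size=p)

XtX_inv = np.linalg.inv(X.T @ X)
c0 = X @ XtX_inv @ a                  # least squares weights: c0^T y = a^T beta_hat

# Perturbation d with d^T X = 0: project a random vector onto
# the orthogonal complement of the column space of X.
d = rng.normal(size=N)
d -= X @ XtX_inv @ (X.T @ d)
c = c0 + d                            # weights of the competing linear unbiased estimate

var_ls    = sigma**2 * (a @ XtX_inv @ a)   # Var(a^T beta_hat)
var_other = sigma**2 * (c @ c)             # Var(c^T y)

print(np.isclose(var_other, var_ls + sigma**2 * (d @ d)))  # True
print(var_other >= var_ls)                                 # True
```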

(b)

This is essentially a matrix version of (a). Let \(Cy\) be another linear unbiased estimate of \(\beta\), where \(C\) is a \(p\times N\) matrix, and write \(C = (X^TX)^{-1}X^T + D\). Unbiasedness similarly implies \(DX=0\), so

\[\begin{eqnarray} \tilde{\textbf{V}} &=& \sigma^2 CC^T\nonumber\\ &=& \sigma^2\left((X^TX)^{-1}X^T + D\right)\left((X^TX)^{-1}X^T + D\right)^T\nonumber\\ &=&\sigma^2\left((X^TX)^{-1} + DD^T\right)\nonumber\\ &=&\hat{\textbf{V}} + \sigma^2 DD^T,\nonumber \end{eqnarray}\]

where the cross terms again vanish because \(DX=0\).

The result follows because \(\sigma^2 DD^T\) is positive semidefinite.
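
Analogously to part (a), a minimal NumPy sketch of \(\tilde{\textbf{V}} = \hat{\textbf{V}} + \sigma^2 DD^T\) is given below; the perturbation matrix `D` is hypothetical and is constructed so that \(DX=0\) by projecting its rows off the column space of \(X\).

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, sigma = 50, 3, 1.5
X = rng.normal(size=(N, p))
XtX_inv = np.linalg.inv(X.T @ X)

# Another linear unbiased estimator Cy with C = (X^T X)^{-1} X^T + D and D X = 0
D = rng.normal(size=(p, N))
D -= (D @ X) @ XtX_inv @ X.T          # enforce D X = 0

C = XtX_inv @ X.T + D
V_hat   = sigma**2 * XtX_inv          # variance-covariance of beta_hat
V_tilde = sigma**2 * (C @ C.T)        # variance-covariance of C y

diff = V_tilde - V_hat                # should equal sigma^2 D D^T, a PSD matrix
print(np.allclose(diff, sigma**2 * (D @ D.T)))        # True
print(np.all(np.linalg.eigvalsh(diff) >= -1e-10))     # eigenvalues nonnegative
```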