Ex. 9.1

Show that a smoothing spline fit of \(y_i\) to \(x_i\) preserves the linear part of the fit. In other words, if \(y_i = \hat y_i + r_i\), where \(\hat y_i\) represents the linear regression fits, and \(\bb{S}\) is the smoothing matrix, then \(\bb{S}\bb{y} = \boldsymbol{\hat y} + \bb{S}\bb{r}\). Show that the same is true for local linear regression (Section 6.1.1). Hence argue that the adjustment step in the second line of (2) in Algorithm 9.1 is unnecessary.

Soln. 9.1

We need to show that \(\textbf{S}\boldsymbol{\hat y}=\boldsymbol{\hat y}\); since \(\textbf{S}\) is a linear operator, it then follows that \(\textbf{S}\textbf{y} = \textbf{S}\boldsymbol{\hat y} + \textbf{S}\textbf{r} = \boldsymbol{\hat y} + \textbf{S}\textbf{r}\). Recall (5.9) in the text with \(y_i\) replaced by \(\hat y_i\). On the one hand, because the \(\hat y_i\) lie on a straight line, the linear function \(\hat f(x) = \hat\beta_0 + \hat\beta_1 x\) interpolates them exactly and has zero second derivative, so it makes both the residual sum of squares and the penalty in (5.9) vanish; hence \(\hat f(x_i) = \hat y_i\), i.e., \(\hat f = \boldsymbol{\hat y}\), minimizes (5.9) for every \(\lambda \ge 0\). On the other hand, since \(\hat f = \textbf{S}\boldsymbol{\hat y}\) solves (5.9) (or equivalently (5.18)), we conclude that \(\textbf{S}\boldsymbol{\hat y}=\boldsymbol{\hat y}\).
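
As a numerical sanity check (a minimal sketch, not part of the text: the grid, \(\lambda\), and the discrete second-difference penalty \(\textbf{K} = \textbf{D}^T\textbf{D}\) standing in for \(\int f''(t)^2\,dt\) are all assumptions), we can build \(\textbf{S} = (\textbf{I}+\lambda\textbf{K})^{-1}\) as in (5.18) and verify both identities:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 50, 10.0
x = np.linspace(0.0, 1.0, n)                      # equally spaced grid
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

# Ordinary linear regression fit: yhat = X (X^T X)^{-1} X^T y.
X = np.column_stack([np.ones(n), x])
yhat = X @ np.linalg.solve(X.T @ X, X.T @ y)

# Second-difference operator D; on an equally spaced grid, D v = 0
# for any linear v, mimicking the vanishing penalty int f''(t)^2 dt.
D = np.diff(np.eye(n), n=2, axis=0)
S = np.linalg.inv(np.eye(n) + lam * (D.T @ D))    # smoother (I + lam K)^{-1}

r = y - yhat
print(np.allclose(S @ yhat, yhat))                # True: linear fit preserved
print(np.allclose(S @ y, yhat + S @ r))           # True: S y = yhat + S r
```

The first check holds because \(\textbf{D}\boldsymbol{\hat y} = 0\) exactly, and the second is just linearity of \(\textbf{S}\).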

For local linear regression (see (6.8) in the text), the fitted value at each target point is again a linear combination of the \(y_i\), with smoother matrix

\[\begin{equation} \textbf{S} = \textbf{X}(\textbf{X}^T\textbf{W}\textbf{X})^{-1}\textbf{X}^T\textbf{W},\non \end{equation}\]

where \(\textbf{W}\) is the diagonal matrix of kernel weights at the target point; strictly, each row of \(\textbf{S}\) carries its own \(\textbf{W}\), but the computation below is valid for any weight matrix and hence applies row by row. We then have

\[\begin{eqnarray} \textbf{S}\boldsymbol{\hat y} &=& \textbf{X}(\textbf{X}^T\textbf{W}\textbf{X})^{-1}\textbf{X}^T\textbf{W}\textbf{X}(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\textbf{y}\non\\ &=&\textbf{X}(\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\textbf{y}\non\\ &=&\boldsymbol{\hat y}.\non \end{eqnarray}\]
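
The same cancellation can be checked numerically; in the sketch below the Gaussian kernel and the bandwidth `h` are arbitrary choices, and each row of \(\textbf{S}\) is built from (6.8) with its own weight matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, h = 40, 0.15                                   # bandwidth h is arbitrary
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.cos(3 * x) + 0.2 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
yhat = X @ np.linalg.solve(X.T @ X, X.T @ y)      # ordinary linear fit

# Row i of the local linear smoother: x_i^T (X^T W_i X)^{-1} X^T W_i,
# where W_i holds Gaussian kernel weights centered at the target x_i.
S = np.zeros((n, n))
for i in range(n):
    w = np.exp(-0.5 * ((x - x[i]) / h) ** 2)
    XtW = X.T * w                                 # X^T W_i via broadcasting
    S[i] = X[i] @ np.linalg.solve(XtW @ X, XtW)

print(np.allclose(S @ yhat, yhat))                # True: linear part preserved
```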

The adjustment in the second line of (2) in Algorithm 9.1 is therefore not needed: the partial residuals being smoothed have mean zero, and a smoothing spline fit to a mean-zero response itself has mean zero. Indeed, \(\textbf{S}\) is symmetric and preserves constants (\(\textbf{S}\textbf{1} = \textbf{1}\), a special case of the result above), so \(\textbf{1}^T\textbf{S}\textbf{r} = (\textbf{S}\textbf{1})^T\textbf{r} = \textbf{1}^T\textbf{r} = 0\).
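
Finally, the no-op nature of the centering step can itself be checked with the same discrete-penalty smoother as in the first sketch (again only an illustration with arbitrary parameter choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam = 30, 5.0
x = np.linspace(0.0, 1.0, n)
y = x ** 2 + 0.1 * rng.standard_normal(n)

D = np.diff(np.eye(n), n=2, axis=0)               # second differences
S = np.linalg.inv(np.eye(n) + lam * (D.T @ D))    # symmetric, and S 1 = 1

r = y - y.mean()                                  # mean-zero response
print(np.allclose(S @ np.ones(n), np.ones(n)))    # True: constants preserved
print(np.isclose((S @ r).mean(), 0.0))            # True: fit stays mean zero
```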