Ex. 7.2
For 0-1 loss with $Y \in \{0, 1\}$ and $\Pr(Y = 1 \mid x_0) = f(x_0)$, show that

\[
\begin{aligned}
\mathrm{Err}(x_0) &= \Pr(Y \neq \hat{G}(x_0) \mid X = x_0)\\
&= \mathrm{Err}_B(x_0) + |2f(x_0) - 1|\,\Pr(\hat{G}(x_0) \neq G(x_0) \mid X = x_0),
\end{aligned}
\]

where $\hat{G}(x) = I\bigl(\hat{f}(x) > \tfrac{1}{2}\bigr)$, $G(x) = I\bigl(f(x) > \tfrac{1}{2}\bigr)$ is the Bayes classifier, and $\mathrm{Err}_B(x_0) = \Pr(Y \neq G(x_0) \mid X = x_0)$, the irreducible Bayes error at $x_0$. Using the approximation $\hat{f}(x_0) \sim N\bigl(\mathrm{E}\hat{f}(x_0), \mathrm{Var}(\hat{f}(x_0))\bigr)$, show that

\[
\Pr(\hat{G}(x_0) \neq G(x_0) \mid X = x_0) \approx \Phi\!\left( \frac{\mathrm{sign}\bigl(\tfrac{1}{2} - f(x_0)\bigr)\bigl(\mathrm{E}\hat{f}(x_0) - \tfrac{1}{2}\bigr)}{\sqrt{\mathrm{Var}(\hat{f}(x_0))}} \right).
\]

In the above,

\[
\Phi(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} \exp(-s^2/2)\,ds,
\]

the cumulative Gaussian distribution function. This is an increasing function, with value 0 at $t = -\infty$ and value 1 at $t = +\infty$.
We can think of $\mathrm{sign}\bigl(\tfrac{1}{2} - f(x_0)\bigr)\bigl(\mathrm{E}\hat{f}(x_0) - \tfrac{1}{2}\bigr)$ as a kind of boundary-bias term, as it depends on the true $f(x_0)$ only through which side of the boundary ($\tfrac{1}{2}$) it lies on. Notice also that the bias and variance combine in a multiplicative rather than additive fashion. If $\mathrm{E}\hat{f}(x_0)$ is on the same side of $\tfrac{1}{2}$ as $f(x_0)$, then the bias is negative, and decreasing the variance will decrease the misclassification error. On the other hand, if $\mathrm{E}\hat{f}(x_0)$ is on the opposite side of $\tfrac{1}{2}$ to $f(x_0)$, then the bias is positive and it pays to increase the variance! Such an increase will improve the chance that $\hat{f}(x_0)$ falls on the correct side of $\tfrac{1}{2}$ (Friedman 1997, "On bias, variance, 0/1-loss, and the curse-of-dimensionality").
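As a small numerical aside (not part of the exercise), this multiplicative behavior is easy to see by plugging numbers into the approximation. In the sketch below the values of $f(x_0)$, $\mathrm{E}\hat{f}(x_0)$ and the variances are arbitrary illustrative choices, with $\mathrm{E}\hat{f}(x_0)$ placed on the wrong side of $\tfrac{1}{2}$:

```python
# Sketch of the "wrong-side bias" effect described above; the specific
# values of f(x0), E f_hat(x0) and the variances are arbitrary choices.
from math import sqrt

from scipy.stats import norm

f = 0.8                    # true Pr(Y = 1 | x0); the Bayes classifier predicts 1
err_bayes = min(f, 1 - f)  # irreducible Bayes error Err_B(x0) = 0.2
ef_hat = 0.45              # E f_hat(x0), on the wrong side of the 1/2 boundary

for var in (0.01, 0.04, 0.16):
    # Pr(G_hat != G | x0) ~= Phi(sign(1/2 - f(x0)) (E f_hat(x0) - 1/2) / sd)
    sign_term = -1.0 if f > 0.5 else 1.0
    p_disagree = norm.cdf(sign_term * (ef_hat - 0.5) / sqrt(var))
    err = err_bayes + abs(2 * f - 1) * p_disagree
    print(f"Var = {var:.2f}: Pr(G_hat != G) ~ {p_disagree:.3f}, Err(x0) ~ {err:.3f}")
```

With these particular numbers the approximate error falls from roughly 0.61 to roughly 0.53 as the variance grows, in line with the discussion above.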
Soln. 7.2
First consider the case when $f(x_0) > \frac{1}{2}$. We have $G(x_0) = 1$, $\mathrm{Err}_B(x_0) = \Pr(Y \neq 1 \mid X = x_0) = 1 - f(x_0)$, and

\[
\begin{aligned}
\mathrm{Err}(x_0) &= \Pr(Y \neq \hat{G}(x_0) \mid X = x_0)\\
&= \Pr(Y = 1 \mid X = x_0)\Pr(\hat{G}(x_0) = 0 \mid X = x_0) + \Pr(Y = 0 \mid X = x_0)\Pr(\hat{G}(x_0) = 1 \mid X = x_0)\\
&= f(x_0)\Pr(\hat{G}(x_0) \neq G(x_0) \mid X = x_0) + \bigl(1 - f(x_0)\bigr)\bigl(1 - \Pr(\hat{G}(x_0) \neq G(x_0) \mid X = x_0)\bigr)\\
&= \bigl(1 - f(x_0)\bigr) + \bigl(2f(x_0) - 1\bigr)\Pr(\hat{G}(x_0) \neq G(x_0) \mid X = x_0)\\
&= \mathrm{Err}_B(x_0) + |2f(x_0) - 1|\,\Pr(\hat{G}(x_0) \neq G(x_0) \mid X = x_0),
\end{aligned}
\]

where the second equality uses the fact that, given $X = x_0$, $Y$ and $\hat{G}(x_0)$ are independent (the randomness in $\hat{G}(x_0)$ comes from the training set), and the last equality uses $2f(x_0) - 1 > 0$.
Similar arguments hold for the case when $f(x_0) < \frac{1}{2}$ and $G(x_0) = 0$. Therefore, we have shown

\[
\mathrm{Err}(x_0) = \mathrm{Err}_B(x_0) + |2f(x_0) - 1|\,\Pr(\hat{G}(x_0) \neq G(x_0) \mid X = x_0).
\]
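As a quick sanity check of this decomposition (my own addition, not part of the original solution), one can draw $Y$ and $\hat{G}(x_0)$ independently given $x_0$, with arbitrary values for $f(x_0)$ and $\Pr(\hat{G}(x_0) = 1)$, and compare the empirical error with the right-hand side:

```python
# Monte Carlo check of the decomposition just derived; f(x0) and
# Pr(G_hat(x0) = 1) are arbitrary choices with f(x0) > 1/2.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
f = 0.7            # Pr(Y = 1 | x0), so the Bayes classifier G(x0) = 1
p_ghat_one = 0.85  # Pr(G_hat(x0) = 1) over training sets

y = rng.random(n) < f               # draws of Y given x0
g_hat = rng.random(n) < p_ghat_one  # draws of G_hat(x0), independent of Y given x0

err_mc = np.mean(y != g_hat)                         # simulated Err(x0)
p_disagree = 1 - p_ghat_one                          # Pr(G_hat != G), since G(x0) = 1
err_formula = (1 - f) + abs(2 * f - 1) * p_disagree  # Err_B + |2f - 1| Pr(G_hat != G)

print(f"Monte Carlo Err(x0):  {err_mc:.4f}")
print(f"Decomposition value:  {err_formula:.4f}")
```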
For the second part, we again first consider the case when $f(x_0) > \frac{1}{2}$ (thus $G(x_0) = 1$). In this case, we have

\[
\begin{aligned}
\Pr(\hat{G}(x_0) \neq G(x_0) \mid X = x_0) &= \Pr\bigl(\hat{f}(x_0) \le \tfrac{1}{2}\bigr)\\
&\approx \Pr\!\left( \frac{\hat{f}(x_0) - \mathrm{E}\hat{f}(x_0)}{\sqrt{\mathrm{Var}(\hat{f}(x_0))}} \le \frac{\tfrac{1}{2} - \mathrm{E}\hat{f}(x_0)}{\sqrt{\mathrm{Var}(\hat{f}(x_0))}} \right)\\
&= \Phi\!\left( \frac{\tfrac{1}{2} - \mathrm{E}\hat{f}(x_0)}{\sqrt{\mathrm{Var}(\hat{f}(x_0))}} \right)\\
&= \Phi\!\left( \frac{\mathrm{sign}\bigl(\tfrac{1}{2} - f(x_0)\bigr)\bigl(\mathrm{E}\hat{f}(x_0) - \tfrac{1}{2}\bigr)}{\sqrt{\mathrm{Var}(\hat{f}(x_0))}} \right),
\end{aligned}
\]

where the approximation treats $\hat{f}(x_0)$ as $N\bigl(\mathrm{E}\hat{f}(x_0), \mathrm{Var}(\hat{f}(x_0))\bigr)$, and the last equality uses $\mathrm{sign}\bigl(\tfrac{1}{2} - f(x_0)\bigr) = -1$.
Similar arguments hold for the case when $f(x_0) < \frac{1}{2}$ as well.
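The normal approximation can also be checked numerically (again my own addition, not part of the original solution): sample $\hat{f}(x_0)$ from the assumed Gaussian and compare the empirical disagreement rate with the $\Phi$ expression, once with $f(x_0) > \frac{1}{2}$ and once with $f(x_0) < \frac{1}{2}$. The particular means and variances below are arbitrary.

```python
# Monte Carlo check of the Phi approximation, once with f(x0) > 1/2 and
# once with f(x0) < 1/2; the means and variances of f_hat are arbitrary.
from math import sqrt

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 1_000_000

for f, ef_hat, var in [(0.8, 0.60, 0.04), (0.3, 0.55, 0.09)]:
    g = 1 if f > 0.5 else 0                   # Bayes classifier at x0
    f_hat = rng.normal(ef_hat, sqrt(var), n)  # assumed N(E f_hat, Var f_hat) draws
    g_hat = (f_hat > 0.5).astype(int)         # plug-in classifier G_hat(x0)
    p_mc = np.mean(g_hat != g)
    sign_term = -1.0 if f > 0.5 else 1.0      # sign(1/2 - f(x0))
    p_phi = norm.cdf(sign_term * (ef_hat - 0.5) / sqrt(var))
    print(f"f(x0) = {f}: MC Pr(G_hat != G) = {p_mc:.4f}, Phi formula = {p_phi:.4f}")
```

Since $\hat{f}(x_0)$ is drawn exactly from the assumed normal here, the two columns should agree up to Monte Carlo noise.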