As we mentioned in the previous section, the least squares minimisation problem has multiple solutions when input features are correlated with one another. To see why that happens, consider the extreme example of a linear model with 3 perfectly correlated features:

$$\hat{y} = w_1 x_1 + w_2 x_2 + w_3 x_3,$$

where $x_2 = x_1$ and $x_3 = x_1$. You can check that for any input that respects the two previous equalities, you would get the same prediction under the following weights:

$$w_1' = w_1 + w_2 + w_3, \qquad w_2' = 0, \qquad w_3' = 0.$$
The weights above are clearly very different, and there is an infinite number of combinations of weights that would produce the same output: for instance, we could keep increasing $w_1$ while decreasing the values of $w_2$ and $w_3$ accordingly to produce the same result. This is problematic from an interpretability viewpoint.
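The non-uniqueness above is easy to verify numerically. Below is a minimal sketch using NumPy and made-up data (the feature values and weight vectors are illustrative, not from the text): three identical feature columns, and two very different weight vectors that produce exactly the same predictions.

```python
import numpy as np

# Hypothetical data: three perfectly correlated features (x2 = x1, x3 = x1)
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1, x1])

# Two very different weight vectors whose per-feature weights sum to 6
w_a = np.array([2.0, 2.0, 2.0])
w_b = np.array([6.0, 0.0, 0.0])

# Both produce identical predictions, so least squares cannot prefer one
print(np.allclose(X @ w_a, X @ w_b))  # True
```

Any redistribution of weight mass across the correlated columns leaves the predictions unchanged, which is exactly why the least squares solution is not unique here.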
One way to avoid the issues introduced by highly correlated input features is to penalise the choice of large weights—this is known as regularisation. As we will see later, there are a number of regularisation techniques for linear regression, and they differ in the penalty they impose on the choice of weights.
Ridge regression is a regularisation technique that gives preference to weights with smaller magnitude, i.e. a smaller $\ell_2$-norm. In particular, the loss function for ridge regression is

$$L(\mathbf{w}) = \|\mathbf{y} - X\mathbf{w}\|_2^2 + \lambda \|\mathbf{w}\|_2^2,$$
where $\lambda \geq 0$ is a regularisation constant (also called the ridge parameter) which controls the extent to which larger weight vectors should be penalised. Note that $\lambda \|\mathbf{w}\|_2^2 = \lambda \mathbf{w}^\top I \mathbf{w}$, where $I$ is the identity matrix. Thus, the loss can be written as:

$$L(\mathbf{w}) = (\mathbf{y} - X\mathbf{w})^\top (\mathbf{y} - X\mathbf{w}) + \lambda \mathbf{w}^\top I \mathbf{w}.$$
Now we set the derivative of the loss to zero and solve for $\mathbf{w}$:

$$\nabla_{\mathbf{w}} L = -2 X^\top (\mathbf{y} - X\mathbf{w}) + 2 \lambda \mathbf{w} = \mathbf{0} \quad\Longrightarrow\quad \hat{\mathbf{w}} = (X^\top X + \lambda I)^{-1} X^\top \mathbf{y}.$$
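The closed-form solution translates directly into a few lines of NumPy. The sketch below uses synthetic data (the true weights and noise level are made up for illustration); note that solving the linear system is preferable to explicitly inverting the matrix.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    # np.linalg.solve is numerically preferable to computing the inverse
    return np.linalg.solve(A, X.T @ y)

# Tiny synthetic example (made-up data, for illustration only)
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
w_hat = ridge_fit(X, y, lam=1.0)
```

With `lam=0` this reduces to ordinary least squares, which is a handy sanity check.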
Adding the diagonal matrix $\lambda I$ to $X^\top X$ effectively lowers the correlations between features, making the resulting matrix easier to invert.
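We can see this conditioning effect numerically. In the sketch below (synthetic data, chosen to make the two columns nearly identical), the condition number of $X^\top X$ is astronomically large, while $X^\top X + \lambda I$ is far better behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
# Two nearly perfectly correlated columns -> X^T X is almost singular
X = np.column_stack([x1, x1 + 1e-6 * rng.normal(size=100)])

A = X.T @ X
cond_plain = np.linalg.cond(A)              # huge: matrix is nearly singular
cond_ridge = np.linalg.cond(A + 1.0 * np.eye(2))  # modest: lambda*I shifts eigenvalues up
print(cond_plain, cond_ridge)
```

Shifting every eigenvalue of $X^\top X$ up by $\lambda$ bounds the smallest eigenvalue away from zero, which is what makes the inversion stable.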
Let's take the dataset we used in the previous section and apply ridge regression to it. We will consider varying values of $\lambda$ and see what happens to the magnitude of the estimated weight vector $\hat{\mathbf{w}}$.
As expected, the norm of the estimated weight vector $\hat{\mathbf{w}}$ decreases as we increase the regularisation constant $\lambda$.
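This shrinking behaviour can be reproduced with a short loop. The sketch below stands in for the experiment above using a synthetic dataset (the dataset from the previous section is not shown here, so the true weights are made up):

```python
import numpy as np

# Hypothetical stand-in for the dataset of the previous section
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -1.0, 2.0, 0.5, -2.0]) + 0.1 * rng.normal(size=100)

# The norm of the ridge solution shrinks monotonically as lambda grows
for lam in [0.0, 0.1, 1.0, 10.0, 100.0]:
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    print(f"lambda = {lam:6.1f}  ->  ||w|| = {np.linalg.norm(w):.4f}")
```

In the SVD basis each coefficient is scaled by $s_i / (s_i^2 + \lambda)$, which decreases in $\lambda$, so the shrinkage is guaranteed and not an artefact of this particular dataset.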
But how do we choose an appropriate value for $\lambda$? In a sense, $\lambda$ is also a parameter of the model, but we are not fitting it to the data. Such parameters are called hyperparameters.
Hyperparameter selection is commonly done via cross-validation. The idea behind it is quite simple:

1. Split the available data into a training set and a validation set.
2. Fit the model on the training set using a candidate value of $\lambda$.
3. Measure the prediction error of the fitted model on the validation set.
We repeat these steps with different values of $\lambda$ and different splits of training and validation data. In the end, we pick the value of $\lambda$ that yields the best predictions on the validation set. If this sounds a bit confusing, don't worry: we will see cross-validation in action in subsequent modules.
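The procedure above can be sketched in a few lines. This is a minimal $K$-fold version (the function names, fold count, and candidate grid are illustrative assumptions, not from the text): for each candidate $\lambda$ we average the validation error over folds and keep the best.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_select_lambda(X, y, lambdas, n_folds=5, seed=0):
    """Pick the lambda with the lowest mean validation error across folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        errs = []
        for k in range(n_folds):
            val = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            w = ridge_fit(X[train], y[train], lam)
            errs.append(np.mean((y[val] - X[val] @ w) ** 2))  # validation MSE
        if np.mean(errs) < best_err:
            best_lam, best_err = lam, np.mean(errs)
    return best_lam
```

In practice one would reach for an off-the-shelf routine such as scikit-learn's `RidgeCV`, but the loop above is all that is really happening underneath.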