In addition to ridge regression, there are two other regularisation techniques usually applied to linear regression: (i) Lasso, which stands for Least Absolute Shrinkage and Selection Operator, and (ii) Elastic net.
Both methods attempt to discourage large values for the weights of the model, but they differ in how they define the penalty applied in the loss function.
Lasso attempts to keep the $\ell_1$-norm of the weights vector as small as possible. The corresponding loss function is:

$$L(\mathbf{w}) = \|\mathbf{y} - \mathbf{X}\mathbf{w}\|_2^2 + \lambda \|\mathbf{w}\|_1,$$
where $\|\mathbf{w}\|_1 = \sum_{j} |w_j|$ is the $\ell_1$-norm of the vector $\mathbf{w}$. Note that the loss is no longer differentiable everywhere because of the absolute values in $\|\mathbf{w}\|_1$. Thus, a closed-form solution for the parameter estimates is generally not possible. Instead, we must rely on numerical optimisation in order to arrive at the Lasso estimates. In practice, this is often done using an optimisation algorithm called coordinate descent.
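To make this concrete, here is a minimal sketch of cyclic coordinate descent for the Lasso loss above. The names `soft_threshold` and `lasso_coordinate_descent` are illustrative rather than taken from any library, and the sketch assumes standardised features, no intercept, and a fixed number of sweeps instead of a convergence check.

```python
import numpy as np

def soft_threshold(a, t):
    """Soft-thresholding operator: shrink a towards zero by t."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    """Minimise ||y - Xw||^2 + lam * ||w||_1 by cycling over coordinates.

    A minimal sketch: assumes the columns of X are standardised (non-zero)
    and omits the intercept and any stopping criterion.
    """
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # precompute X_j . X_j for each column
    for _ in range(n_iters):
        for j in range(p):
            # residual with feature j's current contribution removed
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j
            # the one-dimensional subproblem has a closed-form minimiser:
            # the least-squares update for w_j, soft-thresholded at lam/2
            w[j] = soft_threshold(rho, lam / 2.0) / col_sq[j]
    return w
```

Each coordinate update is thus an ordinary least-squares update shrunk towards zero, which is why Lasso can set weights exactly to zero. For real work one would use a tested implementation such as scikit-learn's `Lasso`.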
Elastic net imposes a combination of $\ell_1$ and $\ell_2$ penalties, as follows:

$$L(\mathbf{w}) = \|\mathbf{y} - \mathbf{X}\mathbf{w}\|_2^2 + \lambda_1 \|\mathbf{w}\|_1 + \lambda_2 \|\mathbf{w}\|_2^2.$$
Instead of a single regularisation parameter, we now have two parameters, $\lambda_1$ and $\lambda_2$, which control the extent to which we penalise the $\ell_1$- and $\ell_2$-norms respectively. As with Lasso, we rely on numerical optimisation algorithms to estimate the parameters that minimise the loss.
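The coordinate-descent sketch above extends naturally to the elastic net: the $\ell_2$ term is differentiable, so it simply adds $\lambda_2$ to the curvature of each one-dimensional subproblem, changing only the denominator of the update. The function name is again illustrative, the same simplifying assumptions apply, and the sketch reuses `soft_threshold` from the Lasso example.

```python
import numpy as np  # soft_threshold is defined in the Lasso sketch above

def elastic_net_coordinate_descent(X, y, lam1, lam2, n_iters=100):
    """Minimise ||y - Xw||^2 + lam1 * ||w||_1 + lam2 * ||w||_2^2.

    Same cyclic scheme as the Lasso sketch; the quadratic l2 penalty
    only adds lam2 to the denominator of each coordinate update.
    """
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iters):
        for j in range(p):
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j
            # l1 term still soft-thresholds; l2 term shrinks w[j] further
            w[j] = soft_threshold(rho, lam1 / 2.0) / (col_sq[j] + lam2)
    return w
```

Setting `lam2 = 0` recovers the Lasso update, while setting `lam1 = 0` recovers a ridge-style shrinkage, which is exactly the sense in which elastic net interpolates between the two penalties.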