Interpretability

Congratulations! You have reached the last section of the linear regression module. Before we finish, though, we want to stress the issue of interpretability once more. There are two things we want you to take home on this point:

  1. Even in the univariate case, the weights of a linear regression model express linear relationships between inputs and outputs, but they say nothing about causal effects. Don't be tempted to conclude that x causes y based on a linear regression model (illustrated in the first sketch after this list); there are other techniques to infer causality, which we shall cover at a later point.
  2. You have seen that correlations between input features can lead to multiple valid solutions. Regularisation helps by systematically penalising unnecessarily large weights, thereby improving model interpretability (see the second sketch after this list).

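To make the first point concrete, here is a minimal sketch, not part of the module's own code, using NumPy and synthetic data. A hypothetical hidden confounder z drives both x and y, so regression recovers a large weight on x even though x has no causal effect on y at all:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confounder z drives both x and y;
# x itself has no causal effect on y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = 2.0 * z + 0.1 * rng.normal(size=n)

# Fit y ~ x by ordinary least squares (with an intercept column).
X = np.column_stack([np.ones(n), x])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"slope on x: {w[1]:.2f}")  # close to 2.0, yet x does not cause y
```

The fit is excellent and the weight is large, but reading it causally would be wrong: intervening on x would leave y unchanged.
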
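The second point can be sketched the same way. Below is an illustrative example, again assuming NumPy and synthetic data, with two almost perfectly correlated features; ordinary least squares can return huge, opposite-signed weights, while a ridge (L2) penalty picks the small, interpretable solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Two almost perfectly correlated features.
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)
y = x1 + x2 + 0.1 * rng.normal(size=n)
X = np.column_stack([x1, x2])

# Unregularised least squares: many weight pairs with w1 + w2 ≈ 2 fit the
# data almost equally well, so the solver may return huge, opposite-signed
# weights that are impossible to interpret.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Ridge regression in closed form: (X^T X + lam * I)^{-1} X^T y.
# The L2 penalty selects the small-norm solution, roughly w1 ≈ w2 ≈ 1.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

print("OLS weights:  ", w_ols)
print("Ridge weights:", w_ridge)
```

Both weight vectors predict y almost equally well; the penalty is what breaks the tie in favour of the solution that is easiest to interpret.
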
In general, model interpretability is a fairly complex topic, especially for more sophisticated machine learning models. Understanding the limitations of a model, together with knowledge of the data at hand, will help you think critically about the conclusions you can and cannot draw after training it.