In this section, we will start looking at linear regression based on a single feature: this is called univariate linear regression.
The dataset we will work with comes from the World Happiness Report, which essentially collects data on happiness levels across different countries.
Here, our task will be to predict happiness levels in different countries based on their GDP (Gross Domestic Product) per capita, which measures a country's economic output per person.
The linear regression model for this problem can be written as:

$$\text{happiness} = w \cdot \text{GDP} + b \tag{1}$$

where $w$ is a parameter that scales the value of GDP linearly and $b$ is a constant offset, often called a bias term. If $w$ is large, then happiness grows more rapidly with increases in GDP. If $w$ is small, then happiness grows slowly as GDP increases.
Note: The model above defines a linear relationship between happiness and GDP, but it says nothing about causation! We will touch on this later, but for the time being remember that this model does not imply that e.g. increasing a country's GDP will necessarily improve happiness levels.
To make the equations less verbose, let's use $y$ to represent happiness and $x$ to represent GDP, so we can write:

$$y = w\,x + b \tag{2}$$
The dataset described above gives us a collection of pairs of values for $x$ and $y$. For instance, here are 3 examples from the dataset:
| Country | Happiness | log(GDP per capita) |
|---------|-----------|---------------------|
| Denmark | 7.504 | 10.996 |
| Egypt | 3.881 | 9.480 |
| Portugal | 5.954 | 10.504 |
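If you want to follow along in code, the three rows above could be stored as NumPy arrays like this (a small sketch; the variable names are just illustrative):

```python
import numpy as np

# The three example rows from the table above.
countries = ["Denmark", "Egypt", "Portugal"]
x = np.array([10.996, 9.480, 10.504])  # log(GDP per capita)
y = np.array([7.504, 3.881, 5.954])    # happiness score
```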
Let's say we take an example from the dataset and plug it into equation (2), substituting $y$ by the happiness score and $x$ by the GDP per capita (note that we are actually working with the logarithm of the GDP per capita).
Imagine we selected Denmark's example, for which the happiness score was 7.504 and the GDP measure was 10.996. Substituting into equation (2) we get:

$$7.504 = 10.996\,w + b \tag{3}$$
We have one equation and we want to find the values of two unknowns: $w$ and $b$. Unfortunately, that is not possible because there is an infinite number of values for $w$ and $b$ that make equation (3) valid. Thinking about it geometrically, there is an infinite number of lines that can pass through a single point.
Let's take another example from the dataset to give us another equation to work with, say the Egypt example. We now have:

$$\begin{aligned} 7.504 &= 10.996\,w + b \\ 3.881 &= 9.480\,w + b \end{aligned} \tag{4}$$
You probably recognise this as a system of linear equations with two equations and two unknowns. We can solve for $w$ and $b$, and that would essentially give us a linear model which we could use to make a prediction.
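For instance, we could solve this small system with NumPy (a quick sketch; the matrix layout is just one way to set it up):

```python
import numpy as np

# Denmark and Egypt give two equations of the form y = w * x + b,
# which we can write as A @ [w, b] = rhs, where the first column of A
# holds the x values and the second column is all ones (for the bias b).
A = np.array([[10.996, 1.0],   # Denmark
              [9.480,  1.0]])  # Egypt
rhs = np.array([7.504, 3.881])

w, b = np.linalg.solve(A, rhs)
print(w, b)  # the fitted slope and intercept
```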
In this case, we got $w \approx 2.39$ and $b \approx -18.77$. This would be the correct solution if all the examples in the dataset were accurately described by the line above, i.e. if all the examples were precisely on top of the orange line. Unfortunately, that is not the case. Let's add Portugal to the plot:
Unfortunately, there is no way we can find a line that covers all 3 points above. We have a system of linear equations with 3 equations and 2 unknowns, but it is an inconsistent system: we can't find a pair of values for $w$ and $b$ that satisfies all three equations.
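We can see the problem numerically: if we take the $w$ and $b$ fitted from Denmark and Egypt and plug in Portugal's GDP value, the prediction misses Portugal's actual happiness score (a quick sketch using the rounded values from above):

```python
# Line fitted from the Denmark and Egypt equations (rounded values).
w, b = 2.3898, -18.7747

# Portugal's log(GDP per capita) and observed happiness score.
portugal_x, portugal_y = 10.504, 5.954

prediction = w * portugal_x + b
print(prediction, portugal_y)  # about 6.33 versus the observed 5.954
```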
We need to accept that we won't be able to find a line that runs through all the examples in our dataset, so instead we can ask: can we find the line that gets as close as possible to all the examples in our dataset?
To do that, we need to define what we mean by "as close as possible". Let's define some notation to help us formalise this idea. Let $(x_i, y_i)$ represent the $i$-th example of our dataset. Additionally, let $\hat{y}_i$ be the $y$-coordinate of the line evaluated at the point $x_i$, i.e.

$$\hat{y}_i = w\,x_i + b \tag{5}$$
We want the distance between $y_i$ and $\hat{y}_i$ to be as small as possible, so one possibility would be to minimise the following loss function:

$$L(w, b) = \sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2 = \sum_{i=1}^{N}\left(y_i - w\,x_i - b\right)^2 \tag{6}$$

where $N$ is the number of examples in the dataset.
This loss function is known as the sum of squared errors (SSE).
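In code, the SSE could be written as a small helper like this (a sketch; `sse_loss` is just an illustrative name, with `x` and `y` as NumPy arrays of the same length):

```python
import numpy as np

def sse_loss(w, b, x, y):
    """Sum of squared errors between the predictions w * x + b and the targets y."""
    y_hat = w * x + b
    return np.sum((y - y_hat) ** 2)
```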
Now, we want to find the values of $w$ and $b$ that minimise $L(w, b)$. In general, there are two ways to go about this sort of problem:

1. Analytically: derive a closed-form expression for the parameter values that minimise the loss.
2. Iteratively: start from an initial guess and repeatedly update the parameters in a direction that reduces the loss (e.g. gradient descent).
In many cases it is not possible to arrive at an analytical solution, so iterative optimisation is the best we can do. However, it is possible to work out the optimal parameters analytically for linear regression, as we shall see next.
We start by computing the partial derivatives of the loss with respect to $w$ and $b$:

$$\frac{\partial L}{\partial w} = -2\sum_{i=1}^{N} x_i\left(y_i - w\,x_i - b\right) \tag{7}$$

$$\frac{\partial L}{\partial b} = -2\sum_{i=1}^{N}\left(y_i - w\,x_i - b\right) \tag{8}$$
To arrive at the equations above, we started from equation (6) and applied the chain rule of differentiation, $\frac{\mathrm{d}}{\mathrm{d}u}f(g(u)) = f'(g(u))\,g'(u)$, with $f(u) = u^2$ and $g$ being the residual $y_i - w\,x_i - b$, differentiated with respect to $w$ and $b$ in turn.
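If you would rather not differentiate by hand, you can check a single term of the sum symbolically, for example with SymPy (an optional sketch; the full derivative is just this term summed over $i$):

```python
import sympy as sp

w, b, x_i, y_i = sp.symbols('w b x_i y_i')
term = (y_i - w * x_i - b) ** 2  # a single term of the SSE

# Both match the expressions above, up to expansion of the brackets.
print(sp.expand(sp.diff(term, w)))  # equals -2*x_i*(y_i - w*x_i - b)
print(sp.expand(sp.diff(term, b)))  # equals -2*(y_i - w*x_i - b)
```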
As we know from calculus, at a minimum of a differentiable function the derivatives must be equal to zero (and since the SSE is a convex function of $w$ and $b$, any point where both partial derivatives vanish is indeed a minimum). We therefore need to find the values of $w$ and $b$ which satisfy:

$$\frac{\partial L}{\partial w} = 0 \tag{9}$$

$$\frac{\partial L}{\partial b} = 0 \tag{10}$$
Let's start with the partial derivative with respect to $b$:

$$-2\sum_{i=1}^{N}\left(y_i - w\,x_i - b\right) = 0 \tag{11}$$
We can get rid of the factor $-2$, as it does not affect the solution. Furthermore, splitting up the sum into two separate sums (and noting that adding $b$ to itself $N$ times gives $Nb$) we get

$$\sum_{i=1}^{N} y_i - w\sum_{i=1}^{N} x_i - N b = 0 \tag{12}$$
Let's make use of the fact that $\sum_{i=1}^{N} y_i = N\bar{y}$, where $\bar{y}$ denotes the average of $y$. We can thus write

$$N\bar{y} - w\sum_{i=1}^{N} x_i - N b = 0 \tag{13}$$
Following the same logic, we can get rid of the remaining sum, since $\sum_{i=1}^{N} x_i = N\bar{x}$:

$$N\bar{y} - w\,N\bar{x} - N b = 0 \tag{14}$$
Finally, dividing through by $N$ and solving for $b$ we get

$$b = \bar{y} - w\,\bar{x} \tag{15}$$
Following a similar approach to solve $\frac{\partial L}{\partial w} = 0$ (substituting the expression for $b$ above along the way), we get to the solution

$$w = \frac{\sum_{i=1}^{N}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2} \tag{16}$$
Equations (15) and (16) are called the ordinary least squares (OLS) solutions because they follow from minimising the sum of squared errors.
Now let's see this in action. We have added more data to our dataset in the cell below, and we will compute the estimates for $w$ and $b$ as per the OLS solutions above.
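Since the full dataset is not reproduced here, the sketch below applies the OLS formulas only to the three example rows from the table above (the function name `ols_fit` is just illustrative); the actual cell does the same computation over the whole dataset:

```python
import numpy as np

def ols_fit(x, y):
    """Closed-form OLS estimates of w and b, as per equations (16) and (15)."""
    x_bar, y_bar = x.mean(), y.mean()
    w = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
    b = y_bar - w * x_bar
    return w, b

x = np.array([10.996, 9.480, 10.504])  # log(GDP per capita)
y = np.array([7.504, 3.881, 5.954])    # happiness score
w, b = ols_fit(x, y)
print(w, b)  # roughly 2.33 and -18.33 on these three rows alone
```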
As expected, we get a pretty good fit to the data. We would find a very similar solution if we were to use an iterative optimisation approach (e.g. gradient descent), but when possible we favour analytical solutions.
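For completeness, an iterative version might look roughly like this (a sketch under our own choices of function name, learning rate, initialisation and centring, not the notebook's code):

```python
import numpy as np

def gd_fit(x, y, lr=0.1, n_steps=10_000):
    """Fit y = w * x + b by gradient descent on the squared-error loss."""
    x_mean = x.mean()
    xc = x - x_mean  # centring x makes the loss surface much better conditioned
    w, b = 0.0, 0.0
    for _ in range(n_steps):
        residual = y - (w * xc + b)
        # Averaged versions of the gradients in equations (7) and (8),
        # so the step size does not depend on the number of examples.
        w -= lr * (-2 * np.mean(xc * residual))
        b -= lr * (-2 * np.mean(residual))
    return w, b - w * x_mean  # undo the centring for the intercept
```

With enough steps this should converge to essentially the same $w$ and $b$ as the closed-form solution; it just takes more work to get there.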
As a final note, notice the general approach we followed here: we started by defining a model, we wrote down its loss function, and finally we found the parameter values that minimise that loss function. We will be using this standard approach for pretty much every ML model.
In the next sections, we will move on to multivariate linear regression, where we use multiple features to predict the target value.