We will now look at derivatives from a slightly different perspective. Earlier in this module, we saw how the derivative of a function at a given point corresponds to the slope of the tangent line at that point. Because differentiable functions vary smoothly, the tangent approximates the value of the function well over a small enough interval around that point.
For example, let's have a look at a specific function $f(x)$. By the rules of differentiation, we know its derivative $f'(x)$.
Now let's plot the tangent to the function at $x = 0$. We know that its slope must be $f'(0)$ and its intercept must be 0 since $f(0) = 0$. Let's plot it together with the function so we can see how well the approximation works in the interval $[-1, 1]$.
It looks like the tangent is a reasonable approximation around $x = 0$, but as we get further away from it the function deviates significantly from the tangent line. The reason for this is that as we move away from $x = 0$, the derivative at $x = 0$ no longer represents how the function is varying. In other words, the derivative of $f$ is itself changing over $x$, and we need to account for that. The rate of change of the derivative of $f$ over $x$ is given by its derivative, which we refer to as the second derivative of $f$, written $f''$. In turn, the second derivative may also be varying over $x$, and its variation is characterised by the third derivative $f'''$. This can go on indefinitely for some functions, as is the case for our $f$.
The Taylor series, also referred to as the Taylor expansion, tells us how to use these successive derivatives to approximate a function. Formally, the Taylor series is defined as:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n$$
where $a$ is an anchor or reference point (the point at which we evaluate the function and its derivatives). In practice, we cannot evaluate the expression above indefinitely, so we need to restrict ourselves to a finite number of derivatives:

$$f(x) \approx \sum_{n=0}^{N} \frac{f^{(n)}(a)}{n!}(x - a)^n$$
The equation above defines a polynomial of degree $N$: this is known as the $N$th degree Taylor polynomial. Let's implement an approximation of $f$ using the 5th degree Taylor polynomial:
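As a minimal sketch of such an implementation, here is a generic Taylor-polynomial evaluator. The concrete choice of $f = \sin$ with anchor $a = 0$ is an illustrative assumption, not necessarily the function used in this section; substitute your own function and its derivatives at the anchor.

```python
import numpy as np
from math import factorial

def taylor_poly(x, a, derivs):
    """Evaluate the Taylor polynomial of f around a.

    derivs is the list [f(a), f'(a), f''(a), ...]; its length minus one
    is the degree of the polynomial. Works for scalar or array x.
    """
    return sum(d * (x - a) ** n / factorial(n) for n, d in enumerate(derivs))

# Illustrative example (an assumption): f = sin, anchor a = 0.
# The derivatives of sin at 0 cycle through 0, 1, 0, -1, so the degree-5
# polynomial is x - x**3/6 + x**5/120.
a = 0.0
derivs = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]  # f(0), f'(0), ..., f^(5)(0)

x = np.linspace(-1, 1, 101)
approx = taylor_poly(x, a, derivs)
print(np.max(np.abs(approx - np.sin(x))))  # small everywhere on [-1, 1]
```

Plotting `approx` against the true function over a wider interval reproduces the behaviour discussed below.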
This looks like a much better approximation. However, if we look over a wider range, we see that the function is clearly different from our approximation.
Increasing the degree of the Taylor polynomial widens the range over which the approximation works well. Taylor polynomials are commonly used to approximate functions because (i) they can provide good approximations and (ii) polynomials are easy to work with. For instance, they are easy to differentiate, which is helpful if we are looking to minimise a function.
The Taylor series can also be used to approximate multivariate functions. This section might feel a bit more difficult than the previous ones, so we'll move a little more slowly. To start with, let's write down the Taylor polynomial of degree 1 for approximating a univariate function $f(x)$:

$$f(x) \approx f(a) + f'(a)(x - a)$$
The logic is the same in the multivariate case, but we have to account for the contribution of each partial derivative. For instance, let's assume that we are approximating a function $f(x_1, x_2)$ of two variables using a Taylor polynomial of degree 1 with an anchor point $(a_1, a_2)$. Much as in the univariate case, we can approximate the function by using the first derivative in each dimension, i.e. the gradient, as follows:

$$f(x_1, x_2) \approx f(a_1, a_2) + \frac{\partial f}{\partial x_1}(a_1, a_2)\,(x_1 - a_1) + \frac{\partial f}{\partial x_2}(a_1, a_2)\,(x_2 - a_2)$$
We can rewrite this using vector notation:

$$f(\mathbf{x}) \approx f(\mathbf{a}) + \nabla f(\mathbf{a})^T(\mathbf{x} - \mathbf{a})$$

where $\nabla f$ is the gradient of $f$, $\mathbf{x} = (x_1, x_2)$ and $\mathbf{a} = (a_1, a_2)$.
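As a sketch, here is what this first-order approximation looks like in code. The example function and its gradient are illustrative assumptions, chosen only to exercise the formula:

```python
import numpy as np

def linear_approx(f, grad, a, x):
    """First-order Taylor approximation: f(a) + grad(a)^T (x - a)."""
    a, x = np.asarray(a, dtype=float), np.asarray(x, dtype=float)
    return f(*a) + grad(*a) @ (x - a)

# Illustrative example (an assumption): f(x1, x2) = x1**2 + 3*x2.
f = lambda x1, x2: x1 ** 2 + 3 * x2
grad = lambda x1, x2: np.array([2 * x1, 3.0])  # analytical gradient

a = [1.0, 2.0]
print(linear_approx(f, grad, a, [1.1, 2.1]))  # close to f(1.1, 2.1) = 7.51
```

Because this example function is linear in $x_2$ and only mildly curved in $x_1$, the degree-1 approximation is already quite close near the anchor.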
Now let's say we would like to improve our approximation using the second derivatives of $f$, thereby adding a third term to the expansion ($n = 2$). Because $f$ is a function of two variables, we have the following second derivatives at our disposal:

$$\frac{\partial^2 f}{\partial x_1^2}, \quad \frac{\partial^2 f}{\partial x_1 \partial x_2}, \quad \frac{\partial^2 f}{\partial x_2 \partial x_1}, \quad \frac{\partial^2 f}{\partial x_2^2}$$
There are two things to note before we move forward. The first is that each of these partials contributes to the overall change in $f$. The second is that we are dealing with the term of degree 2, which means that the change in $f$ now depends quadratically on $(\mathbf{x} - \mathbf{a})$. Thus, the term of degree 2 can be written as:

$$\frac{1}{2}\left[\frac{\partial^2 f}{\partial x_1^2}(x_1 - a_1)^2 + \frac{\partial^2 f}{\partial x_1 \partial x_2}(x_1 - a_1)(x_2 - a_2) + \frac{\partial^2 f}{\partial x_2 \partial x_1}(x_2 - a_2)(x_1 - a_1) + \frac{\partial^2 f}{\partial x_2^2}(x_2 - a_2)^2\right]$$
We can now use this term to extend equation (7), and the Taylor expansion of degree 2 becomes:

$$f(x_1, x_2) \approx f(a_1, a_2) + \frac{\partial f}{\partial x_1}(x_1 - a_1) + \frac{\partial f}{\partial x_2}(x_2 - a_2) + \frac{1}{2}\left[\frac{\partial^2 f}{\partial x_1^2}(x_1 - a_1)^2 + \frac{\partial^2 f}{\partial x_1 \partial x_2}(x_1 - a_1)(x_2 - a_2) + \frac{\partial^2 f}{\partial x_2 \partial x_1}(x_2 - a_2)(x_1 - a_1) + \frac{\partial^2 f}{\partial x_2^2}(x_2 - a_2)^2\right]$$

where all derivatives are evaluated at the anchor point $(a_1, a_2)$.
We are only working with a function of two variables, but this expression is already getting a bit complicated. However, expressed in vector notation it becomes much more readable:

$$f(\mathbf{x}) \approx f(\mathbf{a}) + \nabla f(\mathbf{a})^T(\mathbf{x} - \mathbf{a}) + \frac{1}{2}(\mathbf{x} - \mathbf{a})^T \mathbf{H}(\mathbf{a})(\mathbf{x} - \mathbf{a})$$
In the equation above, the matrix $\mathbf{H}$ is a 2-by-2 matrix containing the second partial derivatives of $f$:

$$\mathbf{H} = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} \end{bmatrix}$$
The matrix $\mathbf{H}$ is called the Hessian. In general, the Hessian of a function of $n$ variables is an $n$-by-$n$ matrix, and it is symmetric whenever the second partial derivatives are continuous, which means that

$$\frac{\partial^2 f}{\partial x_i \partial x_j} = \frac{\partial^2 f}{\partial x_j \partial x_i}$$
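To make the vector form $f(\mathbf{a}) + \nabla f(\mathbf{a})^T(\mathbf{x}-\mathbf{a}) + \frac{1}{2}(\mathbf{x}-\mathbf{a})^T \mathbf{H}(\mathbf{a})(\mathbf{x}-\mathbf{a})$ concrete, here is a small sketch. The function $f(x_1, x_2) = x_1^2 x_2$ and its derivatives are illustrative assumptions, not the example used later in this section:

```python
import numpy as np

def quadratic_approx(f_a, grad_a, hess_a, a, x):
    """Degree-2 Taylor approximation in vector notation."""
    d = np.asarray(x, dtype=float) - np.asarray(a, dtype=float)
    return f_a + grad_a @ d + 0.5 * d @ hess_a @ d

# Illustrative function (an assumption): f(x1, x2) = x1**2 * x2, anchor a = (1, 1).
# gradient = [2*x1*x2, x1**2]            -> [2, 1] at a
# Hessian  = [[2*x2, 2*x1], [2*x1, 0]]   -> [[2, 2], [2, 0]] at a (symmetric!)
a = np.array([1.0, 1.0])
grad_a = np.array([2.0, 1.0])
hess_a = np.array([[2.0, 2.0], [2.0, 0.0]])

x = np.array([1.2, 0.9])
approx = quadratic_approx(1.0, grad_a, hess_a, a, x)
exact = x[0] ** 2 * x[1]
print(approx, exact)  # the two values agree to about three decimal places
```

Note that the Hessian in this example is symmetric, as expected: the mixed partials $2x_1$ agree regardless of the order of differentiation.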
Let's put all this theory into practice by approximating a multivariate function $f(x_1, x_2)$ around an anchor point $(a_1, a_2)$. Before we dive into the calculations, let's visualise this function as a 3D surface plot.
Let's see how well we can approximate this function using a Taylor polynomial of degree 2. As we have seen above, it takes the following form:

$$f(\mathbf{x}) \approx f(\mathbf{a}) + \nabla f(\mathbf{a})^T(\mathbf{x} - \mathbf{a}) + \frac{1}{2}(\mathbf{x} - \mathbf{a})^T \mathbf{H}(\mathbf{a})(\mathbf{x} - \mathbf{a})$$
All we need to do is find the gradient $\nabla f$ and the Hessian $\mathbf{H}$. We won't cover the analytical calculations step by step here for brevity, but if you would like to have a go at it, remember to use the chain rule and the product rule! Let's start by calculating the gradient:
Next, we calculate the second order partials for the Hessian:
The Hessian is thus
Now let's check that the results above are correct by approximating the function around the anchor point. In the implementation below, we will use the `np.vectorize` decorator to help us apply the gradient, the Hessian and the Taylor approximation functions throughout a grid of values for $x_1$ and $x_2$. That means that we will write the functions as if `x1` and `x2` were scalar values, and decorate these functions with `np.vectorize`. The resulting vectorized functions are then able to accept arrays, applying the original function to their individual elements. Note: using `np.vectorize` is not the most efficient way of performing this computation, but it is often convenient and can make the calculations easier to follow.
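As a sketch of this pattern, here is a degree-2 approximation evaluated over a grid with `np.vectorize`. The function, gradient, and Hessian values below are illustrative assumptions standing in for whatever $f$ you are approximating:

```python
import numpy as np

# Illustrative stand-ins (assumptions): f(x1, x2) = x1**2 * x2, anchor (1, 1).
a1, a2 = 1.0, 1.0
f_a = a1 ** 2 * a2
grad_a = np.array([2 * a1 * a2, a1 ** 2])
hess_a = np.array([[2 * a2, 2 * a1], [2 * a1, 0.0]])

@np.vectorize
def taylor2(x1, x2):
    """Degree-2 Taylor approximation, written as if x1 and x2 were scalars."""
    d = np.array([x1 - a1, x2 - a2])
    return f_a + grad_a @ d + 0.5 * d @ hess_a @ d

# Thanks to np.vectorize, taylor2 now also accepts whole grids of values.
x1, x2 = np.meshgrid(np.linspace(0.5, 1.5, 50), np.linspace(0.5, 1.5, 50))
z = taylor2(x1, x2)
print(z.shape)  # (50, 50)
```

The grid `z` can then be drawn alongside the original function, for example as a wireframe with matplotlib's `plot_wireframe`.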
In the plot above, the purple wireframe depicts the approximation of our original function. As you can see, the approximation is pretty good around the anchor point. However, as we move further from it, the approximation naturally becomes worse, for the reasons we highlighted in the univariate scenario above.
Congratulations on reaching the end of this section! You should now have a good understanding of how derivatives can be used to approximate univariate and multivariate functions via the Taylor series. This approach is sometimes used for optimisation purposes: when we cannot optimise a given function directly, we may be able to approximate it with a Taylor polynomial and optimise the approximation instead!