Taylor Series (to be completed)

Taylor series are an extremely powerful tool for approximating functions in mathematics. - 3Blue1Brown

I have been relearning Taylor series these past few days and decided to write this article as a learning record.

The general idea of Taylor's theorem is that if a function is smooth enough and we know the values of its derivatives at a certain point, we can use these derivative values as coefficients to construct a polynomial that approximates the function in the neighborhood of that point.

The formula is expressed as:

$$g(x) = f(a) + \frac{f^{\prime}(a)}{1!}(x - a) + \frac{f^{(2)}(a)}{2!}(x - a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x - a)^n + R_n(x)$$

Let's put aside the equation above for now and explain what kind of function counts as smooth. A smooth function is one that can be differentiated infinitely many times at every point of its domain, with every derivative continuous at that point. For example, $y = x$ has derivative $y^{\prime} = 1$, and since the derivative of a constant is always 0, all of its derivatives beyond the first order vanish. By contrast, $y = \sin x$ has nonzero derivatives of every order, because they simply cycle through $\cos x$, $-\sin x$, $-\cos x$, $\sin x$.
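That cycle is easy to verify with a computer algebra system. A minimal sketch using sympy (the choice of tooling is mine, not something the article prescribes):

```python
# Verify that the derivatives of sin x cycle: cos, -sin, -cos, sin, ...
import sympy as sp

x = sp.symbols("x")
f = sp.sin(x)

for order in range(1, 5):
    f = sp.diff(f, x)  # differentiate once more
    print(f"derivative {order}: {f}")
# derivative 1: cos(x)
# derivative 2: -sin(x)
# derivative 3: -cos(x)
# derivative 4: sin(x)
```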

What is approximation?

Approximating a function means using another function to stand in for the original one, with only a small error within a certain range. From this description we can see that approximation is a way to compute complex functions: replace them with functions that are easier to evaluate. This may sound strange in words alone, but an example makes it intuitive. When a scenario calls for computing $\sin 2$ by hand, it is usually handled by Taylor-expanding $\sin x$ around zero as $\sum_{n=0}^{+\infty} \frac{(-1)^n}{(2n+1)!} \, 2^{2n+1}$ (this is just an example; how to carry out the expansion is discussed later), and only a few terms need to be taken to obtain an approximate value. However, no matter how we approximate, there is inevitably an error, represented by $R_n(x)$, which is also discussed later.
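To make this concrete, here is a minimal sketch that sums the first few terms of that series and compares each partial sum with math.sin (the helper name sin_maclaurin is my own):

```python
# Approximate sin(2) with partial sums of the Maclaurin series of sin x.
import math

def sin_maclaurin(x: float, terms: int) -> float:
    """Sum the first `terms` nonzero terms: (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

for terms in (1, 2, 3, 5):
    approx = sin_maclaurin(2.0, terms)
    print(f"{terms} term(s): {approx:.10f}  |error| = {abs(approx - math.sin(2.0)):.2e}")
```

Already at five terms the error drops below $10^{-4}$, which is why "only a few terms" suffice in hand calculation.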

How to approximate a function

The derivative of a function at a point describes the rate of change of the function near that point. If we differentiate the result of differentiation again, we get the rate of change of that rate of change, and this relationship can be pushed down indefinitely. In general, to approximate a function we only need to make all the corresponding derivatives at a point the same: as long as the rate of change, the rate of change of the rate of change, and so on all agree, the two function graphs are forced arbitrarily close together near that point. Therefore, to approximate a function $f(x)$, we can assume a corresponding $g(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n$, where $n$ is any positive integer. The reason for assuming $g(x)$ in this form is simple: a polynomial makes it very convenient to compute the nth-order derivatives.

Expanding around zero, we require:

$$\begin{cases} f(0) = g(0) \\ f^{\prime}(0) = g^{\prime}(0) \\ f^{(2)}(0) = g^{(2)}(0) \\ \vdots \\ f^{(n)}(0) = g^{(n)}(0) \end{cases}$$

Here is the process of solving for the coefficient $a_n$ from the nth-order condition:

$$\because f^{(n)}(0) = g^{(n)}(0) \\ \therefore f^{(n)}(0) = a_n \, n! \\ \therefore a_n = \frac{f^{(n)}(0)}{n!}$$
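The middle step is worth spelling out: differentiating the polynomial $k$ times removes every term of degree below $k$, while every term of degree above $k$ keeps at least one factor of $x$ and therefore vanishes at $x = 0$:

$$g^{(k)}(x) = k!\,a_k + \frac{(k+1)!}{1!}a_{k+1}\,x + \frac{(k+2)!}{2!}a_{k+2}\,x^2 + \dots \implies g^{(k)}(0) = k!\,a_k$$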

We can obtain

$$\begin{cases} a_0 = f(0) \\ a_1 = \frac{f^{\prime}(0)}{1!} \\ a_2 = \frac{f^{(2)}(0)}{2!} \\ \vdots \\ a_n = \frac{f^{(n)}(0)}{n!} \end{cases}$$

So we have the equation for Taylor expansion around zero (Maclaurin series)

$$g(x) = f(0) + \frac{f^{\prime}(0)}{1!}x + \frac{f^{(2)}(0)}{2!}x^2 + \dots + \frac{f^{(n)}(0)}{n!}x^n + R_n(x)$$

To generalize this expansion to an arbitrary point $a$, we just perform a right shift: replace $x$ with $x - a$ and evaluate the derivatives at $a$ instead of at 0.

$$g(x) = f(a) + \frac{f^{\prime}(a)}{1!}(x - a) + \frac{f^{(2)}(a)}{2!}(x - a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x - a)^n + R_n(x)$$
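Both formulas are easy to cross-check with a computer algebra system. A minimal sketch using sympy's series (again, the tool choice is mine):

```python
# Cross-check the Maclaurin formula and the shifted expansion with sympy.
import sympy as sp

x = sp.symbols("x")

# Expansion of sin x around 0, with terms below order 8:
print(sp.series(sp.sin(x), x, 0, 8))
# x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)

# The same function expanded around a = 1: powers of (x - 1),
# with the derivatives now evaluated at 1.
print(sp.series(sp.sin(x), x, 1, 4))
# sin(1) + (x - 1)*cos(1) - (x - 1)**2*sin(1)/2 - (x - 1)**3*cos(1)/6 + O(...)
```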

We call the $R_n(x)$ in the equation above the remainder term; it is discussed separately below. If this term is dropped, the equation only holds with an approximately-equals sign, because no matter how large $n$ is taken, there is always an $(n+1)$th term left over.

Handling the remainder term $R_n(x)$

The derivation is too involved to include here; I'll write about it later.

Whether you look at the process or the result, it expresses the same thing: the farther the target point is from the expansion point, the larger the error. For example, we expand $\sin x$ around zero, but when evaluating $\sin 2$ we substitute 2, so 2 becomes the target point. Why is it even allowed to substitute 2 into an expansion around zero to calculate $\sin 2$? This is related to the interval of convergence: the series for $\sin x$ converges on $(-\infty, +\infty)$, so any point can be substituted directly. I'll write about this later as well.
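The error growth is easy to see numerically. A minimal sketch that fixes the truncation at degree 7 and lets the target point drift away from the expansion point 0 (the helper name sin_deg7 is mine):

```python
# With a fixed truncation, the error of the Maclaurin polynomial of sin x
# grows quickly as x moves away from the expansion point 0.
import math

def sin_deg7(x: float) -> float:
    """Degree-7 Maclaurin polynomial of sin x (its first 4 nonzero terms)."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

for x in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"x = {x:>4}: |error| = {abs(sin_deg7(x) - math.sin(x)):.3e}")
```

The series still converges at every one of these points; what degrades with distance is the accuracy of any fixed truncation, which is exactly the role of $R_n(x)$.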
