i.e., polynomials can provide uniform approximation. However, this is only an existence result: it neither gives us a polynomial achieving a desired error bound, nor even tells us the degree of polynomial required.
The proof uses Bernstein polynomials. Take a bounded function f:[0,1]→R and define

$$B_n(f)(x) = \sum_{k=0}^{n} f\left(\frac{k}{n}\right) \binom{n}{k} x^k (1-x)^{n-k}.$$

If f is continuous, then B_n(f) → f uniformly on [0,1].
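As a minimal numerical sketch (not part of the proof), the Bernstein approximation can be evaluated directly from its definition; the helper `bernstein` below is a hypothetical implementation illustrating the uniform but slow convergence for f(x) = eˣ on [0,1]:

```python
import numpy as np
from scipy.special import comb

def bernstein(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at points x in [0,1]."""
    k = np.arange(n + 1)
    # B_n(f)(x) = sum_k f(k/n) * C(n,k) * x^k * (1-x)^(n-k)
    basis = comb(n, k)[None, :] * x[:, None]**k * (1 - x)[:, None]**(n - k)
    return basis @ f(k / n)

x = np.linspace(0, 1, 201)
f = lambda t: np.exp(t)
for n in (4, 16, 64):
    err = np.abs(f(x) - bernstein(f, n, x)).max()
    print(f"n = {n:3d}, max error = {err:.5f}")
```

The maximum error decays only like O(1/n), which is why Bernstein polynomials are of theoretical rather than practical interest for approximation.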
The Taylor expansion provides another way to approximate a function in a neighbourhood of the point at which the expansion is written. A Taylor expansion exactly recovers a polynomial function, so we consider a non-trivial but still simple function, e.g., f(x) = eˣ, whose cubic Taylor polynomial around x = 0 is p₃(x) = 1 + x + x²/2 + x³/6.
The remainder term does not give a sharp estimate. Plotting the error,
from numpy import linspace, exp
from matplotlib.pyplot import plot, title, grid, xlabel, ylabel

x = linspace(-1,1,100)
f = lambda x: exp(x)
p3 = lambda x: 1 + x + x**2/2 + x**3/6
e = f(x) - p3(x)
print("Max error = ", abs(e).max())
plot(x,e)
title('Error in cubic Taylor expansion')
grid(True), xlabel('x'), ylabel('$f(x)-p_3(x)$');
Max error = 0.05161516179237857
we see that the error is not uniformly distributed over the interval [−1,1]. The approximation is very good near x=0 but poor near the end-points of the interval.
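To quantify how loose the remainder estimate is here: the Lagrange remainder gives |R₃(x)| ≤ max|f⁗| · |x|⁴/4! ≤ e/24 ≈ 0.113 on [−1,1], roughly twice the actual maximum error. A quick check (a sketch, reusing the same f and p₃ as above):

```python
import numpy as np

x = np.linspace(-1, 1, 1000)
f = lambda x: np.exp(x)
p3 = lambda x: 1 + x + x**2/2 + x**3/6

actual = np.abs(f(x) - p3(x)).max()
# Lagrange remainder bound: |R_3(x)| <= max|f''''| * |x|^4 / 4! = e/24 on [-1,1]
bound = np.e / 24
print(f"actual max error = {actual:.4f}, remainder bound = {bound:.4f}")
```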
We can get a much better cubic polynomial approximation than the Taylor polynomial. Let us interpolate a cubic polynomial at the equally spaced nodes {−1, −1/3, +1/3, +1}, which yields
from scipy.interpolate import barycentric_interpolate

xi = linspace(-1,1,4)      # 4 equally spaced nodes in [-1,1]
yi = f(xi)                 # x and f as defined above
y = barycentric_interpolate(xi,yi,x)
e = f(x) - y
print("Max error = ",abs(e).max())
plot(x,e, xi, 0*xi, 'o')
title('Error in cubic interpolation')
grid(True), xlabel('x'), ylabel('$f(x)-p_3(x)$');
Max error = 0.009983970795226949
The maximum error is approximately 0.00998, about five times smaller than for the Taylor polynomial (0.0516). The error is also more uniformly distributed over the interval and oscillates in sign.
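The sign oscillation can be verified numerically: the interpolation error vanishes at all four nodes and, since f⁗ = eˣ > 0, alternates in sign on the three subintervals between them. A quick sketch (recomputing f and the error from scratch):

```python
import numpy as np
from scipy.interpolate import barycentric_interpolate

x = np.linspace(-1, 1, 1000)
f = lambda x: np.exp(x)
xi = np.linspace(-1, 1, 4)            # nodes -1, -1/3, 1/3, 1
e = f(x) - barycentric_interpolate(xi, f(xi), x)

# Count sign changes of the error, skipping the (near-)zeros at the nodes
signs = np.sign(e[np.abs(e) > 1e-9])
changes = np.count_nonzero(np.diff(signs))
print("sign changes in error:", changes)   # negative, positive, negative
```

This equioscillating behaviour is characteristic of good polynomial approximations; the best (minimax) approximation equioscillates with equal-magnitude extrema.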