
# Piecewise polynomial interpolation

```python
from pylab import *
from matplotlib import animation
from IPython.display import HTML
```

Consider a partition of $[a,b]$ into a grid

$$a = x_0 < x_1 < \ldots < x_N = b$$

The points $x_j$ are called knots, breakpoints, or nodes. Let $p(x)$ be a polynomial on each of the subintervals

$$[x_0,x_1], \quad [x_1,x_2], \quad \ldots, \quad [x_{N-1},x_N]$$

i.e.,

$$p(x) \in \poly_r, \qquad x \in [x_i,x_{i+1}]$$

Note that $p(x)$ need not be a polynomial on the whole of $[a,b]$. We say that $p(x)$ is a piecewise polynomial of degree $r$ if the degree of $p(x)$ is $\le r$ on each of the sub-intervals. In general, no restrictions are placed on the continuity of $p(x)$ or its derivatives at the endpoints of the sub-intervals.

## 1 Piecewise linear interpolation

Suppose we want a continuous approximation. We are given the function values $y_i = f(x_i)$ at all the nodes of our grid. We can construct the polynomial on each sub-interval

$$p(x) = \frac{x - x_{i+1}}{x_i - x_{i+1}} y_i + \frac{x - x_i}{x_{i+1} - x_i} y_{i+1}, \qquad x_i \le x \le x_{i+1}$$

Clearly, this is linear and continuous. On each interval we know the interpolation error estimate

$$\max_{x \in [x_i,x_{i+1}]} |f(x) - p(x)| \le \frac{h_{i+1}^2}{8} \max_{t \in [x_i,x_{i+1}]} |f''(t)|, \qquad h_{i+1} = x_{i+1} - x_i$$

from which we get the global error

$$\max_{x \in [x_0,x_N]} |f(x) - p(x)| \le \frac{h^2}{8} \max_{t \in [x_0,x_N]} |f''(t)|, \qquad h = \max_i h_i$$

Clearly, we have convergence as $N \to \infty$, $h \to 0$. The convergence is quadratic: if $h$ becomes $h/2$, the error reduces by a factor of $1/4$.
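We can check the quadratic convergence numerically; the sketch below (the helper name `plin_error` is illustrative) uses `np.interp`, which implements exactly this piecewise linear interpolant, and verifies that halving $h$ reduces the error by about a factor of 4.

```python
import numpy as np

def plin_error(f, a, b, N, nfine=2000):
    # Max error of the piecewise linear interpolant on a uniform grid of N intervals
    x = np.linspace(a, b, N + 1)
    xf = np.linspace(a, b, nfine)
    p = np.interp(xf, x, f(x))        # piecewise linear interpolant
    return np.max(np.abs(f(xf) - p))

f = lambda x: np.sin(x)
e1 = plin_error(f, 0.0, np.pi, 10)
e2 = plin_error(f, 0.0, np.pi, 20)    # halve h
print(e1, e2, e1 / e2)                # ratio is close to 4
```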

### 1.1 Finite element form

We have written the polynomial in a piecewise manner, but we can also give a global form of the polynomial by defining basis functions

$$\phi_i(x) = \begin{cases}
\frac{x - x_{i-1}}{x_i - x_{i-1}} & x_{i-1} \le x \le x_i \\
\frac{x_{i+1} - x}{x_{i+1} - x_i} & x_i \le x \le x_{i+1} \\
0 & \textrm{otherwise}
\end{cases}$$

These are triangular hat functions with compact support. Then the piecewise linear approximation is given by

$$p(x) = \sum_{i=0}^N y_i \phi_i(x)$$
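A minimal sketch of this basis form (the helper name `hat` is illustrative): build the hat functions and check that $\sum_i y_i \phi_i(x)$ agrees with interval-by-interval linear interpolation.

```python
import numpy as np

def hat(i, x, xg):
    # Hat function phi_i on the grid xg, evaluated at the points x
    phi = np.zeros_like(x)
    if i > 0:                                   # rising part on [x_{i-1}, x_i]
        m = (x >= xg[i-1]) & (x <= xg[i])
        phi[m] = (x[m] - xg[i-1]) / (xg[i] - xg[i-1])
    if i < len(xg) - 1:                         # falling part on [x_i, x_{i+1}]
        m = (x >= xg[i]) & (x <= xg[i+1])
        phi[m] = (xg[i+1] - x[m]) / (xg[i+1] - xg[i])
    return phi

xg = np.linspace(0, 1, 6)
y = np.exp(xg)
xf = np.linspace(0, 1, 101)
p = sum(y[i] * hat(i, xf, xg) for i in range(len(xg)))
# Agrees with the interval-by-interval form:
print(np.allclose(p, np.interp(xf, xg, y)))
```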

### 1.2 An adaptive algorithm

We can use the local error estimate to improve the approximation. We first estimate $\max |f''|$ on an interval $I_i = [x_{i-1},x_i]$ by some finite difference formula $H_i$. If we want the error to be $\le \epsilon$, then we must choose $h_i$ so that

$$\frac{h_i^2}{8} H_i \le \epsilon$$

If in some interval, it happens that

$$\frac{h_i^2}{8} H_i > \epsilon$$

divide $I_i$ into two intervals

$$\left[ x_{i-1}, \half(x_{i-1}+x_i) \right], \qquad \left[ \half(x_{i-1}+x_i), x_i \right]$$

We can start with a uniform grid of $N$ intervals and apply the above algorithm. The code finds the interval with the maximum estimated error and divides it into two intervals. The error in the interval $[x_{i-1},x_i]$ is estimated by fitting a cubic polynomial to the data at $x_{i-2},x_{i-1},x_i,x_{i+1}$ using `polyfit` and taking the maximum absolute value of its second derivative at $x_{i-1},x_i$ as $H_i$. `P = polyfit(x,f,3)` returns a polynomial of the form

`P[0]*x^3 + P[1]*x^2 + P[2]*x + P[3]`

whose second derivative is

`6*P[0]*x + 2*P[1]`
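A minimal sketch of such an adaptive loop (this is an illustrative reimplementation under the assumptions above, not the article's original code; the function name, test function, and parameters are all hypothetical):

```python
import numpy as np

def adapt(f, a, b, N=5, eps=1e-2, maxit=200):
    # Bisect the interval with the largest estimated error h_i^2/8 * H_i
    # until every estimate is <= eps.
    x = np.linspace(a, b, N + 1)
    for _ in range(maxit):
        est = np.zeros(len(x) - 1)
        for i in range(1, len(x)):
            # Cubic fit through four nearby nodes to estimate max |f''|
            j0 = min(max(i - 2, 0), len(x) - 4)
            xs = x[j0:j0+4]
            P = np.polyfit(xs, f(xs), 3)
            H = max(abs(6*P[0]*x[i-1] + 2*P[1]),
                    abs(6*P[0]*x[i]   + 2*P[1]))
            est[i-1] = (x[i] - x[i-1])**2 / 8 * H
        k = np.argmax(est)
        if est[k] <= eps:
            break
        x = np.insert(x, k + 1, 0.5*(x[k] + x[k+1]))  # bisect worst interval
    return x

x = adapt(lambda t: np.exp(-50*t**2), -1.0, 1.0)
print(len(x))  # the grid is refined mostly near x = 0, where |f''| is large
```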

## 2 Piecewise quadratic interpolation

Consider a grid

$$x_0 < x_1 < \ldots < x_N$$

and define the mid-points of the intervals

$$x_\imh = \half(x_{i-1} + x_i), \qquad i=1,2,\ldots,N$$

We are given the information

$$y_i = f(x_i), \qquad i=0,1,\ldots,N$$

$$y_\imh = f(x_\imh), \qquad i=1,2,\ldots,N$$

In the interval $I_i = [x_{i-1},x_i]$ we know three values

$$y_{i-1}, \quad y_\imh, \quad y_i$$

and we can determine a quadratic polynomial that interpolates these values

$$p(x) = y_{i-1} \phi_0\left(\frac{x-x_{i-1}}{h_i}\right) + y_\imh \phi_1\left(\frac{x-x_{i-1}}{h_i}\right) + y_i \phi_2\left(\frac{x-x_{i-1}}{h_i}\right)$$

where $\phi_0,\phi_1,\phi_2$ are Lagrange polynomials given by

$$\phi_0(\xi) = 2(\xi - \shalf)(\xi - 1), \qquad \phi_1(\xi) = 4\xi(1-\xi), \qquad \phi_2(\xi) = 2\xi(\xi-\shalf)$$

Note that the quadratic interpolation is continuous but may not be differentiable.
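A sketch of the piecewise quadratic interpolant built from these basis functions (the helper names `quad_basis` and `pquad` and the test function are hypothetical):

```python
import numpy as np

def quad_basis(xi):
    # Quadratic Lagrange basis on the reference interval [0,1]
    phi0 = 2*(xi - 0.5)*(xi - 1)
    phi1 = 4*xi*(1 - xi)
    phi2 = 2*xi*(xi - 0.5)
    return phi0, phi1, phi2

def pquad(x, xg, f):
    # Piecewise quadratic interpolant using endpoint and midpoint values
    i = np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2)
    h = xg[i+1] - xg[i]
    xi = (x - xg[i]) / h
    phi0, phi1, phi2 = quad_basis(xi)
    return f(xg[i])*phi0 + f(0.5*(xg[i] + xg[i+1]))*phi1 + f(xg[i+1])*phi2

f = lambda t: np.cos(t)
xg = np.linspace(0, np.pi, 6)
xf = np.linspace(0, np.pi, 1001)
err = np.max(np.abs(f(xf) - pquad(xf, xg, f)))
print(err)   # cubic in h, so much smaller than the piecewise linear error
```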

## 3 Piecewise degree $k$ interpolation

Consider a grid

$$a = x_0 < x_1 < \ldots < x_N = b$$

In each interval $[x_{i-1},x_i]$ we want to construct a degree $k$ polynomial approximation, so we need to know $k+1$ function values at $k+1$ distinct points

$$x_{i-1} = x_{i,0} < x_{i,1} < \ldots < x_{i,k} = x_i$$

Then the degree $k$ polynomial is given by

$$p(x) = \sum_{j=0}^k f(x_{i,j}) \phi_j\left( \frac{x-x_{i-1}}{h_i} \right)$$

where $\phi_j : [0,1] \to \re$ are Lagrange polynomials with the property

$$\phi_j(\xi_l) = \delta_{jl}, \quad 0 \le j,l \le k, \qquad \textrm{where} \quad \xi_l = \frac{x_{i,l} - x_{i-1}}{h_i}$$

Since we chose the nodes such that $x_{i,0} = x_{i-1}$ and $x_{i,k} = x_i$, the piecewise polynomial is continuous, but may not be differentiable.
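The reference basis functions for general $k$ follow from the product formula for Lagrange polynomials; a minimal sketch (the function name is illustrative, and equispaced reference nodes are just one possible choice):

```python
import numpy as np

def lagrange_basis(xi_nodes, xi):
    # phi_j(xi) = prod_{l != j} (xi - xi_l) / (xi_j - xi_l) on [0,1]
    k = len(xi_nodes) - 1
    phi = np.ones((k + 1, len(xi)))
    for j in range(k + 1):
        for l in range(k + 1):
            if l != j:
                phi[j] *= (xi - xi_nodes[l]) / (xi_nodes[j] - xi_nodes[l])
    return phi

# Degree k = 3 with equispaced reference nodes 0, 1/3, 2/3, 1
nodes = np.linspace(0, 1, 4)
phi = lagrange_basis(nodes, nodes)
print(np.allclose(phi, np.eye(4)))   # verifies phi_j(xi_l) = delta_jl
```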

## 4 Error estimate in Sobolev spaces

The estimates we have derived require the function to be twice differentiable. Here we work under weaker smoothness assumptions.