
6.12 Trigonometric interpolation

#%config InlineBackend.figure_format = 'svg'
from pylab import *

A smooth function $u : [0,2\pi] \to \mathbb{R}$ can be written in terms of its Fourier series

$$u(x) = Su(x) = \sum_{k=-\infty}^\infty \hat u_k \phi_k(x), \qquad \phi_k(x) = e^{i k x}$$

where

$$\hat u_k = \frac{1}{2\pi} \int_0^{2\pi} u(x)\, e^{-i k x}\, dx$$

is the continuous Fourier transform. Recall that $\{\phi_k\}$ is an orthogonal family of functions.

We can truncate the series $Su$ to obtain approximations. But this is not practically useful, since we cannot compute the coefficients $\hat u_k$: we may not know $u(x)$ for all $x$, and the integrals cannot be computed exactly.

An approximation method that uses the function values at only a discrete set of points will be more useful in computations.

6.12.1 Interpolation problem

For any even integer $N > 0$, consider the set of points in $[0,2\pi]$

$$x_j = \frac{2\pi j}{N}, \qquad j = 0, 1, \ldots, N-1$$

which are uniformly spaced with spacing

$$h = \frac{2\pi}{N}$$

Note that $x_0 = 0$ and the last point $x_{N-1} = 2\pi - h < 2\pi$; we do not include the point $x = 2\pi$ in the grid since, by periodicity, the value there is the same as at $x = 0$.

In Python, we can generate the grid points like this.

N = 16                  # number of grid points, must be even (16 as an example)
h = 2 * pi / N          # uniform spacing
x = h * arange(0, N)    # grid points x_j = j*h, j = 0, 1, ..., N-1

The degree $N/2$ trigonometric polynomial is

$$I_N u(x) = \sum_{k=-N/2}^{N/2-1} \tilde u_k \phi_k(x) = \sum_{k=-N/2}^{N/2-1} \tilde u_k e^{i k x}$$

We will determine the $N$ coefficients $\tilde u_k$ from the interpolation conditions

$$u_j = u(x_j) = I_N u(x_j), \qquad j = 0, 1, \ldots, N-1$$

This is a linear system of $N$ equations in $N$ unknowns, which in principle requires solving an $N \times N$ matrix problem. But we can obtain the solution without explicitly solving the matrix problem.

We first prove the orthogonality relation

$$\boxed{ \frac{1}{N} \sum_{j=0}^{N-1} e^{-i p x_j} = \begin{cases} 1, & p = Nm, \quad m = 0, \pm 1, \pm 2, \ldots \\ 0, & \textrm{otherwise} \end{cases} }$$

This follows by summing the geometric series $\sum_{j=0}^{N-1} \left( e^{-i p h} \right)^j$: the sum equals $N$ when $e^{-i p h} = 1$, i.e., when $p$ is an integer multiple of $N$, and vanishes otherwise since $e^{-i p N h} = e^{-2\pi i p} = 1$.

Then computing

$$\begin{aligned} \sum_{j=0}^{N-1} u(x_j) e^{-i k x_j} &= \sum_{j=0}^{N-1} I_N u(x_j)\, e^{-i k x_j} \\ &= \sum_{j=0}^{N-1} \sum_{l=-N/2}^{N/2-1} \tilde u_l e^{i l x_j} e^{-i k x_j} \\ &= \sum_{l=-N/2}^{N/2-1} \tilde u_l \sum_{j=0}^{N-1} e^{i (l-k) x_j} \\ &= \sum_{l=-N/2}^{N/2-1} \tilde u_l \, N \delta_{lk} \end{aligned}$$

we obtain the discrete Fourier coefficients

$$\boxed{\tilde u_k = \frac{1}{N} \sum_{j=0}^{N-1} u_j e^{-i k x_j}, \qquad k = -N/2, \ldots, N/2 - 1}$$

This is known as the discrete Fourier transform (DFT) of $\{u_j\}$.

The interpolation conditions

$$\boxed{u_j = \sum_{k=-N/2}^{N/2-1} \tilde u_k e^{i k x_j}, \qquad 0 \le j \le N-1}$$

will be called the inverse DFT (IDFT) of $\{\tilde u_k\}$.
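
As a sanity check of these two formulas, here is a minimal sketch that computes the coefficients $\tilde u_k$ directly from the DFT sum and then verifies the interpolation conditions through the IDFT sum; the test function $\exp(\sin x)$ and the value $N = 8$ are assumptions made only for illustration.

import numpy as np

N = 8                                 # even number of points (assumed for illustration)
h = 2*np.pi/N
x = h*np.arange(N)                    # grid points x_j
u = np.exp(np.sin(x))                 # sample values u_j of an assumed test function

k = np.arange(-N//2, N//2)            # wave numbers k = -N/2,...,N/2-1
# DFT: u_tilde_k = (1/N) sum_j u_j exp(-i k x_j)
u_tilde = np.array([np.sum(u*np.exp(-1j*kk*x)) for kk in k]) / N
# IDFT: u_j = sum_k u_tilde_k exp(i k x_j)
u_back = np.array([np.sum(u_tilde*np.exp(1j*k*xj)) for xj in x])

print(np.abs(u_back - u).max())       # should be of the order of machine precision

The direct sums cost $O(N^2)$ operations; the FFT discussed below computes the same coefficients in $O(N \log N)$.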

6.12.1.1 Lagrange form*

We can derive a Lagrange form of the interpolation as follows.

$$\begin{aligned} I_N u(x) &= \sum_{k=-N/2}^{N/2-1} \tilde u_k e^{i k x} \\ &= \frac{1}{N} \sum_{k=-N/2}^{N/2-1} \sum_{j=0}^{N-1} u(x_j) e^{i k (x - x_j)} \\ &= \sum_{j=0}^{N-1} u(x_j) \frac{1}{N} \sum_{k=-N/2}^{N/2-1} e^{i k (x - x_j)} \\ &= \sum_{j=0}^{N-1} u(x_j) \psi_j(x) \end{aligned}$$

where

$$\psi_j(x) = \frac{1}{N} \sum_{k=-N/2}^{N/2-1} e^{i k (x - x_j)}$$

The functions $\psi_j$ are trigonometric polynomials in $S_N$ that satisfy

$$\psi_j(x_l) = \delta_{jl}, \qquad 0 \le j, l \le N-1$$
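
Here is a short numerical check of this cardinality property (a sketch only; the value $N = 8$ is an assumption):

import numpy as np

N = 8
h = 2*np.pi/N
x = h*np.arange(N)                    # grid points x_j
k = np.arange(-N//2, N//2)            # wave numbers -N/2,...,N/2-1

# psi_j(x) = (1/N) sum_k exp(i k (x - x_j)), evaluated at the grid points x_l
Psi = np.array([[np.sum(np.exp(1j*k*(xl - xj)))/N for xl in x] for xj in x])
print(np.abs(Psi - np.eye(N)).max())  # psi_j(x_l) = delta_jl up to round-off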

Let us define the set of trigonometric functions

$$S_N = \left\{ v : [0,2\pi] \to \mathbb{C}, \quad v = \sum_{k=-N/2}^{N/2-1} a_k \phi_k, \quad a_k \in \mathbb{C} \right\}$$

Define the discrete inner product

$$(u, v)_N = \frac{2\pi}{N} \sum_{j=0}^{N-1} u(x_j) \overline{v(x_j)}$$

and the usual continuous one

$$(u, v) = \int_0^{2\pi} u(x) \overline{v(x)}\, dx$$

In fact, the discrete inner product is obtained if we apply the trapezoidal rule to the continuous one: because the integrand is $2\pi$-periodic, the two end-point terms of the trapezoidal rule coincide and combine into the single term at $x_0$.

As a consequence, $(\cdot, \cdot)_N$ is an inner product on $S_N$ and

$$\|u\|_N := \sqrt{(u,u)_N} = \sqrt{(u,u)} = \|u\|, \qquad \forall u \in S_N$$

Hence the interpolant satisfies

$$(I_N u - u, v)_N = 0, \qquad \forall v \in S_N$$

Thus

$I_N u$ is the orthogonal projection of $u$ onto $S_N$ with respect to the discrete inner product.

The error $u - I_N u$ is perpendicular to $S_N$, and $I_N u$ is the best approximation to $u$ in $S_N$ with respect to the discrete norm $\|\cdot\|_N$.

This is also related to the property that $\{\phi_k(x)\}$ is an orthogonal family with respect to the discrete inner product

$$(\phi_j, \phi_k)_N = 2\pi \delta_{jk}, \qquad -N/2 \le j, k \le N/2 - 1$$
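
This discrete orthogonality (and hence the norm equality above) is easy to verify numerically; a sketch, again with an assumed value of $N$:

import numpy as np

N = 8
h = 2*np.pi/N
x = h*np.arange(N)
k = np.arange(-N//2, N//2)

def ip_N(u, v):
    # discrete inner product (u,v)_N = (2*pi/N) * sum_j u(x_j) * conj(v(x_j))
    return 2*np.pi/N * np.sum(u*np.conj(v))

Phi = np.exp(1j*np.outer(k, x))                 # Phi[m,j] = phi_{k_m}(x_j)
G = np.array([[ip_N(Phi[m], Phi[n]) for n in range(N)] for m in range(N)])
print(np.abs(G - 2*np.pi*np.eye(N)).max())      # (phi_j, phi_k)_N = 2*pi*delta_jk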

Define

$$P_N u = \sum_{k=-N/2}^{N/2-1} \hat u_k \phi_k \in S_N$$

which is a truncation of the Fourier series $Su$. Also define the summation notation

$$\sum_{k=-\infty}^{\infty} = \sum_{k=-N/2}^{N/2-1} + \left( \sum_{k=-\infty}^{-N/2-1} + \sum_{k=N/2}^{\infty} \right)$$

The result of Lemma 4 and the decay properties of the Fourier transform are the key ingredients in proving the error estimate for trigonometric interpolation.

6.12.2 Computing the DFT

6.12.2.1 DFT of a real function

If the function $u$ is real, then the DFT coefficients come in complex conjugate pairs,

$$\tilde u_{-k} = \overline{\tilde u_k}, \qquad 1 \le k \le N/2 - 1$$

and $\tilde u_0$, $\tilde u_{-N/2}$ are real. The first property is easy. We have

$$\tilde u_0 = \frac{1}{N} \sum_{j=0}^{N-1} u(x_j)$$

which is real, and

$$\tilde u_{-N/2} = \frac{1}{N} \sum_{j=0}^{N-1} u(x_j) e^{i \pi j} = \frac{1}{N} \sum_{j=0}^{N-1} u(x_j) \cos(\pi j)$$

is also real. When we want to evaluate $I_N u(x)$ at some arbitrary $x$,

$$I_N u(x) = \tilde u_0 + \sum_{k=1}^{N/2-1} \left( \tilde u_k \phi_k(x) + \overline{\tilde u_k \phi_k(x)} \right) + \tilde u_{-N/2} \phi_{-N/2}(x)$$

the above expression is not real because of the last term. We can modify it as

$$\begin{aligned} I_N u(x) &= \tilde u_0 + \sum_{k=1}^{N/2-1} \left( \tilde u_k \phi_k(x) + \overline{\tilde u_k \phi_k(x)} \right) \\ & \qquad + \tfrac{1}{2} \tilde u_{-N/2} \phi_{-N/2}(x) + \tfrac{1}{2} \tilde u_{-N/2} \phi_{N/2}(x) \end{aligned}$$

i.e.,

$$I_N u(x) = {\sum_{k=-N/2}^{N/2}}' \, \tilde u_k \phi_k(x), \qquad \tilde u_{N/2} = \tilde u_{-N/2}$$

and the prime denotes that the first and last terms must be multiplied by one half. This ensures that the interpolation conditions are still satisfied and that $I_N u(x)$ is real for all $x \in [0,2\pi]$.

However, in actual numerical computations we may still get an imaginary part of the order of machine precision, which then has to be discarded. In that case we may as well compute the sum as before and simply take the real part,

$$I_N u(x) = \textrm{Real} \sum_{k=-N/2}^{N/2-1} \tilde u_k \phi_k(x)$$
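
The conjugate symmetry and the real-part evaluation can be illustrated with a short sketch; the test function $\exp(\sin x)$ and the value $N = 16$ are assumptions.

import numpy as np

N = 16
h = 2*np.pi/N
x = h*np.arange(N)
u = np.exp(np.sin(x))                          # a real test function (assumed)

k = np.arange(-N//2, N//2)                     # wave numbers -N/2,...,N/2-1
u_tilde = np.array([np.sum(u*np.exp(-1j*kk*x)) for kk in k]) / N

pos = lambda kk: kk + N//2                     # index of wave number kk in u_tilde
# conjugate symmetry: u_tilde_{-k} = conj(u_tilde_k)
print(max(abs(u_tilde[pos(-kk)] - np.conj(u_tilde[pos(kk)])) for kk in range(1, N//2)))
# u_tilde_0 and u_tilde_{-N/2} are real (up to round-off)
print(abs(u_tilde[pos(0)].imag), abs(u_tilde[pos(-N//2)].imag))

# evaluate I_N u at arbitrary points and keep only the real part
xx = np.linspace(0.0, 2*np.pi, 5)
INu = np.real(np.exp(1j*np.outer(xx, k)) @ u_tilde)
print(np.abs(INu - np.exp(np.sin(xx))).max())  # small, since u is smooth and periodic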

6.12.2.2 Fast Fourier Transform (FFT)

Computing each $\tilde u_k$ from the DFT formula requires $O(N)$ floating point operations (FLOPS). Computing all $N$ coefficients thus requires $O(N^2)$ FLOPS. This process can be written as a dense $N \times N$ matrix multiplying the vector of values

$$[u(x_0), \ u(x_1), \ \ldots, \ u(x_{N-1})]^\top$$

The matrix has a special structure and, using some tricks, the cost can be reduced to $O(N \log N)$ with the fast Fourier transform (FFT). The algorithm was invented by Gauss and rediscovered 160 years later by Cooley & Tukey (1965).
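
The matrix point of view can be made concrete: the sketch below builds the dense $N \times N$ DFT matrix and compares its action on a data vector with numpy's FFT (both in numpy's convention, i.e. without the $1/N$ factor); the size $N = 16$ and the random data are assumptions.

import numpy as np

N = 16
n = np.arange(N)
F = np.exp(-2j*np.pi*np.outer(n, n)/N)      # dense DFT matrix, F[k,j] = exp(-2*pi*i*k*j/N)

u = np.random.rand(N)                       # arbitrary data
print(np.abs(F @ u - np.fft.fft(u)).max())  # agree to round-off; O(N^2) vs O(N log N) cost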

6.12.2.3 FFT using Numpy

Numpy has routines to compute the DFT using the FFT; see numpy.fft. It takes the trigonometric polynomial in the form

$$I_N u(x) = \frac{1}{N} \sum_{k=-N/2+1}^{N/2} \tilde u_k e^{i k x}, \qquad x \in [0,2\pi]$$

which is different from our convention.

from numpy import pi, arange
from numpy.fft import fft

h = 2*pi/N          # N = number of grid points (even), assumed defined
x = h*arange(0,N)   # grid points
u = f(x)            # f = function to be interpolated, assumed defined
u_hat = fft(u)      # DFT of the samples

The output u_hat gives the DFT in the following order

$$\tilde u_0, \ \tilde u_1, \ \ldots, \ \tilde u_{N/2}, \ \tilde u_{-N/2+1}, \ \ldots, \ \tilde u_{-2}, \ \tilde u_{-1}$$
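
The wave number corresponding to each entry of u_hat can be generated with numpy.fft.fftfreq. Note that numpy labels the Nyquist mode as $-N/2$ rather than $+N/2$; on the grid this makes no difference, since $e^{i N x_j/2} = e^{-i N x_j/2}$ at the points $x_j$. A small sketch:

import numpy as np

N = 8
k = np.fft.fftfreq(N, d=1.0/N)   # wave numbers in FFT ordering
print(k)                         # [ 0.  1.  2.  3. -4. -3. -2. -1.]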

scipy.fft also provides equivalent routines for the FFT.

See more examples of using Fourier interpolation for approximation and differentiation here http://cpraveen.github.io/chebpy.

6.12.3 Examples

The following function computes the trigonometric interpolant and plots it. It also computes the error norms (maximum and 2-norm) using ne uniformly spaced points.

# Wave numbers are arranged as k=[0,1,...,N/2,-N/2+1,...,-1]
def fourier_interp(N,f,ne=1000,fig=True):
    if mod(N,2) != 0:
        print("N must be even")
        return
    h = 2*pi/N; x = h*arange(0,N);
    v = f(x);
    v_hat = fft(v)
    k = zeros(N)
    n = N//2
    k[0:n+1] = arange(0,n+1)     # wave numbers 0, 1, ..., N/2
    k[n+1:] = arange(-n+1,0,1)   # wave numbers -N/2+1, ..., -1

    xx = linspace(0.0,2*pi,ne)
    vf = real(exp(1j*outer(xx,k)) @ v_hat) / N   # evaluate interpolant, keep real part
    ve = f(xx)                                   # exact function on the fine grid

    # Plot interpolant and exact function
    if fig:
        plot(x,v,'o',xx,vf,xx,ve)
        legend(('Data','Fourier','Exact'))
        xlabel('x'), ylabel('y'), title("N = "+str(N))

    errori = abs(vf-ve).max()
    error2 = sqrt(h*sum((vf-ve)**2))
    print("Error (max,L2) = ",errori,error2)
    return errori, error2

To measure the rate of convergence of the error, we can compute the error with $N$ and $2N$ points, assuming

$$E_N = \frac{C}{N^m}, \qquad E_{2N} = \frac{C}{(2N)^m}$$

and estimate the rate

$$m = \frac{\log(E_N / E_{2N})}{\log 2}$$
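
This can be wrapped into a small helper (a convenience sketch, not part of the original code; the examples below compute the same quantity inline):

from numpy import log

def convergence_rate(E_N, E_2N):
    # estimated rate m, assuming E_N = C/N^m
    return log(E_N/E_2N)/log(2.0)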

6.12.3.1 Infinitely smooth, periodic function ($m = \infty$)

$$u(x) = \exp(\sin x), \qquad x \in [0,2\pi]$$
f1 = lambda x: exp(sin(x))
fourier_interp(24,f1);
Error (max,L2) =  7.993605777301127e-14 6.463175355583166e-13
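
For this infinitely smooth and periodic function the error decays faster than any power of $N$ (spectral accuracy). A short sweep over increasing $N$, using the fourier_interp function defined above with plotting switched off, makes this visible (the chosen values of $N$ are arbitrary):

for N in [4, 8, 12, 16, 20, 24]:
    fourier_interp(N, f1, fig=False)   # errors drop rapidly with N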

6.12.3.2 Infinitely smooth, derivative not periodic ($m = 1$)

$$u(x) = \sin(x/2), \qquad x \in [0,2\pi]$$
g = lambda x: sin(x/2)
e1i,e12 = fourier_interp(24,g)
Error (max,L2) =  0.024834020843920963 0.09773746633443824
e2i,e22 = fourier_interp(48,g,fig=False)
print("Rate  (max,L2) = ", log(e1i/e2i)/log(2), log(e12/e22)/log(2))
Error (max,L2) =  0.012408518426188423 0.024996075910098184
Rate  (max,L2) =  1.0009869969530465 1.9672100793199772

6.12.3.3 Continuous function ($m = 1$)

$$f(x) = \begin{cases} 1 - \frac{2}{\pi} |x - \pi|, & \pi/2 \le x \le 3\pi/2 \\ 0, & \textrm{otherwise} \end{cases}, \qquad x \in [0,2\pi]$$
trihat = lambda x: (x >= 0.5*pi)*(x <= 1.5*pi)*(1 - 2*abs(x-pi)/pi)
e1i,e12 = fourier_interp(24,trihat)
Error (max,L2) =  0.03193459816557198 0.15767834270061085
e2i,e22 = fourier_interp(48,trihat,fig=False)
print("Rate  (max,L2) = ", log(e1i/e2i)/log(2), log(e12/e22)/log(2))
Error (max,L2) =  0.01586894831194785 0.03976512461360404
Rate  (max,L2) =  1.0089137779952295 1.987408920855071

6.12.3.4 Discontinuous function ($m = 0$)

Lemma 6 does not apply in this case.

$$f(x) = \begin{cases} 1, & \pi/2 < x < 3\pi/2 \\ 0, & \textrm{otherwise} \end{cases}, \qquad x \in [0,2\pi]$$
f2 = lambda x: (abs(x-pi) < 0.5*pi)
e1i,e12 = fourier_interp(24,f2)
Error (max,L2) =  0.9957898426758144 2.8589545556697407
e2i,e22 = fourier_interp(48,f2)
print("Rate  (max,L2) = ", log(e1i/e2i)/log(2), log(e12/e22)/log(2))
Error (max,L2) =  0.9915466799725409 1.4514563367440128
Rate  (max,L2) =  0.006160606441865679 0.9779865149620691

We do not get convergence in the maximum norm, but we see convergence in the 2-norm at a rate of $1/N$. There is convergence away from the discontinuities, but overall it is not a good approximation.

References
  1. Cooley, J. W., & Tukey, J. W. (1965). An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90), 297–301. 10.1090/S0025-5718-1965-0178586-1
  2. Brunton, S. L., & Kutz, J. N. (2019). Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control (1st ed.). Cambridge University Press. 10.1017/9781108380690