```python
from pylab import *
from scipy.interpolate import barycentric_interpolate
```
If $f(x)$ is analytic inside a curve $\Gamma$ enclosing the interpolation points $\{x_j\}$, the error in the polynomial interpolant is

$$
f(x) - p(x) = \frac{1}{2\pi\ii} \oint_{\Gamma} \frac{\ell(x)}{\ell(t)} \frac{f(t)}{t-x} \ud t
$$

We have seen that if $f$ is analytic inside the stadium $S$, the error goes to zero exponentially. If the function has a singularity within the stadium $S$, i.e., close to the real line, then we have to do more analysis, and the effect of the point distribution also has to be considered.
Define

$$
\gamma_n(x,t) = \left| \frac{\ell(t)}{\ell(x)} \right|^{1/(n+1)} =
\left( \prod_{j=0}^n \frac{|t-x_j|}{|x-x_j|} \right)^{1/(n+1)}
$$

then

$$
\left| \frac{\ell(x)}{\ell(t)} \right| = [\gamma_n(x,t)]^{-n-1}
$$

If $\gamma_n(x,t) > 1$ for all $t \in \Gamma$, then we get exponential convergence as $n \to \infty$. Define

$$
\alpha_n = \min_{x \in X, t \in \Gamma} \gamma_n(x,t), \qquad X = [-1,+1]
$$

If $\alpha_n \ge \alpha > 1$ for all sufficiently large $n$, and if $f$ is analytic in the region bounded by $\Gamma$, then $p(x) \to f(x)$ at the rate $O(\alpha^{-n})$.

We can give a geometric interpretation of the condition $\alpha_n > 1$: the geometric mean distance of every $t \in \Gamma$ from $\{x_j\}$ is strictly greater than the geometric mean distance of every $x \in X$ from $\{x_j\}$.
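As a rough sanity check of this interpretation, the following sketch estimates $\alpha_n$ by brute-force sampling; the contour (a circle of radius $1.5$), the degree, and the sampling resolutions are arbitrary illustrative choices, not from the analysis above. For such a distant contour both node families should give $\alpha_n > 1$; the trouble for uniform points appears only when singularities force $\Gamma$ close to $[-1,1]$, as analyzed below.

```python
# Illustrative sketch (not part of the text): estimate alpha_n by sampling
# x in X = [-1,1] and t on a circle |t| = R, for two node distributions.
from numpy import linspace, cos, pi, exp, abs, prod

def gamma(x, t, nodes):
    # gamma_n(x,t) = ( prod_j |t - x_j| / |x - x_j| )^(1/(n+1))
    n = len(nodes) - 1
    return prod(abs(t - nodes) / abs(x - nodes))**(1.0/(n + 1))

n, R = 20, 1.5
ts = R * exp(2j * pi * linspace(0, 1, 100))   # samples on the contour Gamma
xs = linspace(-0.995, 0.995, 100)             # samples in X, avoiding the nodes
for name, nodes in [('uniform',   linspace(-1, 1, n+1)),
                    ('chebyshev', cos(pi * linspace(0, 1, n+1)))]:
    alpha = min(gamma(x, t, nodes) for x in xs for t in ts)
    print(name, alpha)
```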
6.5.1 Discrete potential

We want to study the behaviour of the function $\gamma_n(x,t)$. Taking logarithms,

$$
\log \gamma_n(x,t) = \frac{1}{n+1} \sum_{j=0}^n \log|t-x_j| - \frac{1}{n+1}
\sum_{j=0}^n \log|x-x_j|
$$

Define the potential function

$$
u_n(s) = \frac{1}{n+1} \sum_{j=0}^n \log|s-x_j| = \frac{1}{n+1} \log|\omega_n(s)|
$$

Recall that $\omega_n(x)$ is the polynomial factor in the error estimate. $u_n$ is a harmonic function in the complex plane away from $\{x_j\}$. We may think of each $x_j$ as a point charge of strength $\frac{1}{n+1}$, like an electron, and of $u_n$ as the potential induced by all the charges, whose gradient defines an electric field.
In terms of the potential,

$$
\log \gamma_n(x,t) = u_n(t) - u_n(x)
\qquad \textrm{and} \qquad
\left| \frac{\ell(x)}{\ell(t)} \right| = \ee^{-(n+1)[u_n(t) - u_n(x)]}
$$

For convergence, we require $u_n(t) - u_n(x) > 0$; in particular, if

$$
\min_{x \in X, t \in \Gamma}[u_n(t) - u_n(x)] \ge \log\alpha > 0, \qquad \textrm{for some $\alpha > 1$}
$$

then

$$
\left| \frac{\ell(x)}{\ell(t)} \right| \le \ee^{-(n+1)\log\alpha} = \alpha^{-n-1} \to 0
$$

and the interpolants converge exponentially, $\norm{f-p}_\infty = O(\alpha^{-n})$. We can write the condition as
$$
\min_{t \in \Gamma} u_n(t) - \max_{x \in X} u_n(x) \ge \log\alpha > 0, \qquad
\textrm{for some $\alpha > 1$}
$$

Hence convergence depends on the difference between the values taken by the potential function on the set of points $X$ where the interpolant is to be evaluated and on a contour $\Gamma$ inside which $f$ is analytic. If $f$ is analytic everywhere, we can take $\Gamma$ far out in the complex plane and easily satisfy the above condition, since

$$
u_n(t) \approx \log|t|, \qquad |t| \gg 1
$$

But if there is a singularity close to the real line, $\Gamma$ cannot be taken too far away, and then we have to analyze the condition more carefully.
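A one-line check of this far-field behaviour (a sketch; the node count and the sample values of $t$ are arbitrary):

```python
# For |t| >> 1 every factor |t - x_j| is close to |t|, so u_n(t) ~ log|t|.
from numpy import linspace, log, abs

x = linspace(-1, 1, 51)          # n = 50 uniform nodes
for t in [5.0, 20.0, 100.0]:
    un = log(abs(t - x)).sum() / len(x)
    print(t, un, log(t))         # un approaches log|t| as |t| grows
```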
Plot contours of $u_n$ corresponding to uniform and Chebyshev points for $n=50$. The following function computes $u_n(s)$.
```python
# x = [x0,x1,...,xn] are the interpolation points
def potential(x, xp):
    fp = zeros(shape(xp))
    for xi in x:
        fp += log(abs(xp - xi))
    return fp / len(x)
```
Make a grid in the complex plane to draw contours of $u_n$.
```python
xg = linspace(-1.5,1.5,200)
X, Y = meshgrid(xg,xg)
Z = X + 1j * Y
```
Uniformly spaced points
```python
n = 50
x = linspace(-1,1,n+1)
contour(X,Y,potential(x,Z),levels=11)
contour(X,Y,potential(x,Z),levels=[-1+log(2)],colors='blue',linestyles='solid')
plot([-1,1],[0,0],'r-')
title('Contours of $u_{'+str(n)+'}$ for uniformly spaced points');
```
The blue color shows the contour line $u_n(s) = -1 + \log(2)$, which approximately passes through the points $x = \pm 1$; the red line is the real interval $[-1,1]$.
Chebyshev points
```python
n = 50
theta = linspace(0,pi,n+1)
x = cos(theta)
contour(X,Y,potential(x,Z),levels=11)
plot([-1,1],[0,0],'r-')
title('Contours of $u_{'+str(n)+'}$ for Chebyshev points');
```
It appears that the real interval $[-1,1]$ is a contour line.
6.5.2 From discrete to continuous potentials

Since we are interested in the case $n \to \infty$, we can examine the potential in this limit. We can write the potential $u_n$ as a Lebesgue–Stieltjes integral

$$
u_n(s) = \int_{-1}^1 \log|s-\tau| \mu_n(\tau) \ud\tau
$$

where $\mu_n$ is a measure consisting of a sum of Dirac delta functions, each of strength $1/(n+1)$,

$$
\mu_n(\tau) = \frac{1}{n+1} \sum_{j=0}^n \delta(\tau - x_j)
$$

What is the limiting measure as $n \to \infty$? The precise notion of convergence appropriate for this limit is known as "weak-star" convergence. This limit depends on the grid point distribution, which is characterized by $\mu_n$. Note that

$$
\int_{-1}^1 \mu_n(\tau)\ud\tau = 1, \qquad \int_a^b \mu_n(\tau) \ud\tau =
\frac{\textrm{number of nodes in $[a,b]$}}{n+1}
$$

In the limit, the measure $\mu$ must satisfy

$$
\int_{-1}^1 \mu(\tau)\ud\tau = 1, \qquad \int_a^b \mu(\tau) \ud\tau =
\lim_{n \to \infty} \frac{\textrm{number of nodes in $[a,b]$}}{n+1}
$$
The second property can also be written as

$$
\mu(\tau) \ud\tau = \frac{\textrm{number of nodes in $[\tau-\half\ud\tau, \tau+\half\ud\tau]$}}{\textrm{total number of nodes}}
$$

6.5.2.1 Uniformly spaced points

For uniformly spaced points, the number of nodes in an interval of length $\Delta\tau$ is proportional to $\Delta\tau$, so that

$$
\mu(\tau) \ud\tau = C \ud\tau
$$

and using the condition $\int_{-1}^1 \mu(\tau)\ud\tau = 1$ we get $C = \half$ and

$$
\mu(\tau) = \frac{1}{2}
$$

The limiting potential is
$$
\begin{aligned}
u(s) &= \half \int_{-1}^1 \log|s-\tau| \ud\tau \\
     &= -1 + \half \textrm{Real}\, [(s+1)\log(s+1) - (s-1)\log(s-1)]
\end{aligned}
$$

At the end points and the middle, it takes the values

$$
u(\pm 1) = -1 + \log(2), \qquad u(0) = -1
$$

The equipotential curve $u(s) = -1 + \log(2)$ is shown in red in the figure; it cuts the imaginary axis at $\pm 0.52552491457\ii$ and passes through the end points $x = \pm 1$.
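We can verify the quoted crossing point numerically; a minimal sketch using the closed form above and scipy's `brentq` root finder:

```python
# Find where the level curve u(s) = -1 + log 2 meets the imaginary axis.
from numpy import log, real
from scipy.optimize import brentq

def u(s):
    # limiting potential for uniform points (closed form derived above)
    s = s + 0j   # force complex arithmetic
    return -1 + 0.5 * real((s + 1)*log(s + 1) - (s - 1)*log(s - 1))

yc = brentq(lambda y: u(1j * y) - (-1 + log(2)), 0.1, 1.0)
print(yc)        # should agree with the quoted value 0.52552491457
```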
Figure: Equipotential curves for uniform points. The red curve corresponds to $u(s) = -1 + \log 2$.
If $f$ has a singularity outside the red curve, then we can take $\Gamma$ to be an equipotential curve

$$
\Gamma = \{ s \in \complex : u(s) = u_0 > -1 + \log(2) \}
$$

and

$$
u(x) \le -1 + \log(2), \qquad x \in X
$$

Then

$$
\min_{t \in \Gamma} u(t) - \max_{x \in X} u(x) \ge u_0 + 1 - \log(2) > 0
$$

and we have exponential convergence. But if the singularity is inside the red curve, then we cannot choose a curve $\Gamma$ that satisfies the above condition; interpolation at uniform nodes will not converge.
The function $f(x) = \frac{1}{1 + 16x^2}$ is not analytic inside the red curve (it has poles at $x = \pm 0.25\ii$), and interpolating it at uniform points does not converge. The function $f(x) = \frac{1}{1 + \tfrac{50}{18}x^2}$ is analytic inside the red curve; its poles are at $x = \pm 0.6\ii$.
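We can confirm this classification numerically by evaluating the limiting potential at the pole locations; a small sketch, where `u` repeats the closed form derived above:

```python
# u(s) < -1 + log(2) means s lies inside the critical (red) curve.
from numpy import log, real

def u(s):
    s = s + 0j
    return -1 + 0.5 * real((s + 1)*log(s + 1) - (s - 1)*log(s - 1))

for pole in [0.25j, 0.6j]:
    print(pole, u(pole), -1 + log(2))   # 0.25i lies inside, 0.6i outside
```

The experiment below interpolates the second function at uniformly spaced points.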
```python
f = lambda x: 1.0/(1.0 + 100/36*x**2)
xe = linspace(-1,1,1000)
N = 30
for i in range(4):
    subplot(2,2,i+1)
    x = linspace(-1,1,N+1)
    y = f(x)
    ye = barycentric_interpolate(x,y,xe)
    print("N, Error = %3d %12.4e" % (N,abs(ye - f(xe)).max()))
    plot(x,y,'.'), plot(xe,ye)
    text(0,0.5,'N='+str(N),ha='center')
    N = N + 10
```
```
N, Error =  30   2.8451e-03
N, Error =  40   1.0511e-03
N, Error =  50   7.3228e-04
N, Error =  60   4.1339e-01
```
The errors initially decrease but then start to increase. In principle, we should expect convergence, but due to round-off errors caused by the large variation in the magnitudes of the barycentric weights, the errors start to increase for larger degrees.
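To see why, recall that for equispaced points the barycentric weights are, up to a common factor, $w_j = (-1)^j \binom{n}{j}$ (a standard fact; see Berrut & Trefethen), so they span a range of roughly $2^n$. A quick sketch of the max-to-min ratio:

```python
# Ratio of largest to smallest equispaced barycentric weight magnitude.
# Once this exceeds about 1/eps ~ 1e16, cancellation destroys accuracy,
# consistent with the breakdown observed above near N = 60.
from math import comb
for n in [30, 40, 50, 60]:
    w = [comb(n, j) for j in range(n + 1)]
    print(n, max(w) / min(w))
```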
6.5.2.2 Chebyshev points

The Chebyshev points are uniformly distributed with respect to the variable $\theta \in [0,\pi]$, where $x = \cos\theta$, so that

$$
\mu(\tau)\ud\tau = C \ud\theta
$$

and using the condition

$$
\int_{-1}^1 \mu(\tau)\ud\tau = 1 = \int_0^\pi C \ud\theta
$$

we get

$$
C = \frac{1}{\pi}, \qquad \mu(\tau) = C \left| \dd{\tau}{\theta} \right|^{-1} = \frac{1}{\pi \sqrt{1-\tau^2}}
$$

The limiting potential is
$$
u(s) = \int_{-1}^1 \frac{\log|s-\tau|}{\pi\sqrt{1-\tau^2}} \ud\tau = \log|s + \ii \sqrt{1-s^2}| - \log 2
$$

For $s \in [-1,+1]$,

$$
u(s) = \log\sqrt{s^2 + (1-s^2)} - \log 2 = -\log 2 = \textrm{const}
$$

Thus $X = [-1,+1]$ is an equipotential curve of the potential function $u(s)$.
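A direct numerical check of this (a sketch; the degree and the sample points are arbitrary, chosen to avoid landing exactly on a node):

```python
# The discrete Chebyshev potential u_n should be nearly constant, close to
# -log(2) ~ -0.6931, at points of [-1,1] away from the nodes.
from numpy import cos, pi, linspace, log, abs

n = 200
nodes = cos(pi * linspace(0, 1, n + 1))
for s in [-0.95, -0.6, -0.3, 0.25, 0.55, 0.9]:
    un = log(abs(s - nodes)).sum() / (n + 1)
    print(s, un)
```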
Figure: Equipotential curves for Chebyshev points. The potential is $u(s) = \log|s + \ii \sqrt{1-s^2}| - \log 2$. For $u_0 > -\log 2$, the equipotential curve $u(s) = u_0$ is the Bernstein ellipse $E_\rho$ with $\rho = 2 \ee^{u_0}$, which encloses the set $X$.
This property helps us. No matter how close the singularity of $f$ is to $X$, we can always find an equipotential curve $u(s) = u_0$ with $u_0 > -\log(2)$, which encloses $X$ and inside which $f$ is analytic. If we take $\Gamma$ to be this equipotential curve, then

$$
\min_{t \in \Gamma} u(t) - \max_{x \in X} u(x) = u_0 + \log(2) > 0
$$

and we obtain exponential convergence.
Thus, interpolation with Chebyshev points will converge exponentially for real analytic functions. Note that the form of $\mu$ indicates that what is really needed is that the interpolation points $\{x_j\}$ cluster near the end points with density proportional to $(1-x^2)^{-1/2}$.
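As an illustration (a sketch mirroring the earlier experiment), interpolating $f(x) = \frac{1}{1+16x^2}$, which defeated uniform points, converges rapidly at Chebyshev points:

```python
# Chebyshev interpolation of the Runge-type function with poles at +/- 0.25i.
from numpy import linspace, cos, pi, abs
from scipy.interpolate import barycentric_interpolate

f = lambda x: 1.0 / (1.0 + 16.0 * x**2)
xe = linspace(-1, 1, 1000)
for N in [30, 40, 50, 60]:
    x = cos(pi * linspace(0, 1, N + 1))   # Chebyshev points of degree N
    ye = barycentric_interpolate(x, f(x), xe)
    print("N, Error = %3d %12.4e" % (N, abs(ye - f(xe)).max()))
```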
So far we have considered infinitely differentiable or analytic functions. Chebyshev interpolants will converge for functions which are less nice than analytic, e.g., those with a few continuous derivatives. The proof of this requires somewhat different techniques.