
Gauss quadrature - II

from pylab import *

For a positive weight function, we want to compute

I(f) = \int_a^b w(x) f(x) \ud x \approx I_n(f) = \sum_{j=1}^n w_j f(x_j)

Moreover, we want the degree of precision of I_n to be as large as possible, namely 2n-1. The main idea is to construct the Hermite interpolant of f(x) and take I_n(f) to be the exact integral of the Hermite interpolant.
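As a quick sanity check of this claim (a sketch using NumPy's `leggauss`, which is an assumption on my part and not part of the text), an n-point Gauss rule on [-1,1] should integrate every monomial x^k with k up to 2n-1 exactly:

```python
import numpy as np

# Sketch: verify an n-point Gauss-Legendre rule has degree of precision 2n-1.
n = 4
x, w = np.polynomial.legendre.leggauss(n)        # nodes and weights on [-1, 1]
for k in range(2 * n):                           # monomials x^k, k = 0, ..., 2n-1
    exact = 2.0 / (k + 1) if k % 2 == 0 else 0.0 # integral of x^k over [-1, 1]
    assert abs(np.dot(w, x**k) - exact) < 1e-13
print("exact for all monomials up to degree", 2 * n - 1)
```

With n = 4 the rule reproduces the integrals of 1, x, ..., x^7 to machine precision, while x^8 would show a nonzero error.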

1 General case

Let \{\phi_n\} be an orthogonal family of polynomials on [a,b] with respect to the above weight function w. Denote the zeros of \phi_n(x) by

a < x_1 < x_2 < \ldots < x_n < b

These zeros will serve as the nodes in the integration formula I_n(f). We recall some notation used previously. We write \phi_n in terms of monomials

\phi_n(x) = A_n x^n + B_n x^{n-1} + \ldots

and define

a_n = \frac{A_{n+1}}{A_n}, \qquad \gamma_n = \ip{\phi_n, \phi_n} = \int_a^b w(x) [\phi_n(x)]^2 \ud x
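For the Legendre case treated in the next section, \gamma_n has the known value 2/(2n+1), which can be checked numerically. This sketch uses `numpy.polynomial.legendre` (my choice of tool, not the text's) and evaluates the inner product with a quadrature rule of high enough degree to be exact:

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: for Legendre polynomials (w = 1 on [-1, 1]) gamma_n = 2/(2n+1).
x, w = legendre.leggauss(20)                 # 20-point rule, exact up to degree 39
for n in range(6):
    Pn = legendre.legval(x, [0] * n + [1])   # evaluate P_n at the nodes
    gamma = np.dot(w, Pn**2)                 # quadrature approximation of (P_n, P_n)
    assert abs(gamma - 2 / (2 * n + 1)) < 1e-13
print("gamma_n = 2/(2n+1) confirmed for n = 0..5")
```

Since P_n^2 has degree 2n at most 10 here, the 20-point rule integrates it exactly, so the check is not circular in accuracy.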

2 Gauss-Legendre quadrature

For the weight function w(x) = 1 on [-1,1], the orthogonal polynomials are the Legendre polynomials P_n(x). The integral is approximated by

I(f) = \int_{-1}^1 f(x) \ud x \approx I_n(f) = \sum_{j=1}^n w_j f(x_j)

The nodes \{x_j\} are the zeros of P_n(x) and the weights are

w_i = -\frac{2}{(n+1) P_n'(x_i) P_{n+1}(x_i)}, \qquad i=1,2,\ldots,n
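This construction can be carried out directly. The following sketch (using `numpy.polynomial.legendre`, an assumption of mine) computes the nodes as zeros of P_n, evaluates the weight formula above, and cross-checks against the library's built-in rule:

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch: build the n-point Gauss-Legendre rule from the zeros of P_n and
# the weight formula w_i = -2 / ((n+1) P_n'(x_i) P_{n+1}(x_i)).
n = 5
cn = [0] * n + [1]                               # P_n in the Legendre basis
x = np.sort(legendre.legroots(cn))               # nodes: zeros of P_n
dPn = legendre.legval(x, legendre.legder(cn))    # P_n'(x_i)
Pn1 = legendre.legval(x, [0] * (n + 1) + [1])    # P_{n+1}(x_i)
w = -2.0 / ((n + 1) * dPn * Pn1)                 # weights from the formula
xg, wg = legendre.leggauss(n)                    # reference rule
print(np.allclose(x, xg), np.allclose(w, wg))
```

Both the nodes and the weights agree with `leggauss` to machine precision, confirming the sign and normalization of the formula.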

and the error is

E_n(f) = \frac{2^{2n+1} (n!)^4}{(2n+1) [(2n)!]^2} \frac{f^{(2n)}(\eta)}{(2n)!} = e_n \frac{f^{(2n)}(\eta)}{(2n)!}

3 Asymptotic behavior of the error

Define

M_m = \max_{x \in [-1,1]} \frac{|f^{(m)}(x)|}{m!}, \qquad m \ge 0

For a large class of infinitely differentiable functions

\sum_m M_m \le M < \infty

In fact, for many functions

\lim_{m \to \infty} M_m = 0

e.g., analytic functions like e^x, \cos(x), etc. For such functions, Gauss quadrature has error

|E_n(f)| \le e_n M_{2n}

The speed of convergence depends on e_n. Using the Stirling approximation

n! \approx \ee^{-n} n^n \sqrt{2\pi n}

we get

e_n \approx \frac{\pi}{4^n}

E.g., e_5 = 0.00293 while \pi/4^5 = 0.00307. Hence we get the asymptotic error bound

|E_n(f)| \le \frac{\pi}{4^n} M_{2n}
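The agreement quoted above for n = 5 is easy to reproduce. This small check (my own, not from the text) tabulates the exact coefficient e_n next to its Stirling estimate \pi/4^n:

```python
from math import factorial, pi

# e_n = 2^(2n+1) (n!)^4 / ((2n+1) [(2n)!]^2), compared with pi / 4^n.
def e(n):
    return 2 ** (2 * n + 1) * factorial(n) ** 4 / ((2 * n + 1) * factorial(2 * n) ** 2)

for n in range(1, 8):
    print(n, e(n), pi / 4**n)
```

The two columns track each other closely, and at n = 5 they give 0.00293 and 0.00307 as stated.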

This is exponential convergence of the error. Compare this to the trapezoidal rule, which has |E_n(f)| \le c/n^2, and Simpson's rule, which has |E_n(f)| \le c/n^4. Hence, for smooth functions, Gauss quadrature is very accurate.
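A rough numerical illustration of this comparison (a sketch; the test function e^x and the composite rules are my choices, not the text's):

```python
import numpy as np

# Compare n-point Gauss-Legendre with composite trapezoid and Simpson
# (n subintervals, n even) for integrating f(x) = exp(x) over [-1, 1].
f = np.exp
exact = np.exp(1) - np.exp(-1)

for n in [2, 4, 8]:
    xg, wg = np.polynomial.legendre.leggauss(n)
    gauss = np.dot(wg, f(xg))
    t = np.linspace(-1.0, 1.0, n + 1)
    h = 2.0 / n
    trap = h * (0.5 * f(t[0]) + f(t[1:-1]).sum() + 0.5 * f(t[-1]))
    simp = (h / 3) * (f(t[0]) + 4 * f(t[1::2]).sum()
                      + 2 * f(t[2:-1:2]).sum() + f(t[-1]))
    print(n, abs(gauss - exact), abs(trap - exact), abs(simp - exact))
```

Already at n = 8 the Gauss error is near machine precision, while trapezoid and Simpson decay only at their algebraic rates.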

4Relation to minimax approximation

We can see the optimal accuracy of Gauss quadrature through its relation to the best (minimax) approximation.

Footnotes
  1. See the chapter on Hermite interpolation.