In the quadrature methods studied so far, the nodes were first chosen and then the weights were determined via some function interpolation process. The resulting methods have a certain degree of precision in terms of which polynomials they can integrate exactly.
Let us consider the task of computing
$$
I(f) = \int_a^b w(x) f(x) \ud x
$$
where $w(x)$ is some fixed positive weight function, by a formula of the type
$$
I_n(f) = \sum_{j=1}^n w_j f(x_j)
$$
Here $w_j$ are weights and should not be confused with the weight function $w(x)$, though they will be related to one another in some way.
Fix some $n \ge 1$. Find the nodes $\{ x_j \}$ and weights $\{ w_j \}$ so that $I_n(f)$ has maximum degree of precision, i.e., maximize $m$ such that
$$
I_n(p) = I(p), \qquad \forall p \in \poly_m
$$
The evaluation of $f$ might be expensive, hence we would like to find quadrature rules that use a small number of nodes but have a high degree of precision.
1 Unit weight function

Let us take the interval to be $[-1,+1]$ and $w(x) = 1$. Then we want to compute
$$
\int_{-1}^1 f(x) \ud x \approx I_n(f) = \sum_{j=1}^n w_j f(x_j)
$$
The error is
$$
E_n(f) = \int_{-1}^1 f(x) \ud x - \sum_{j=1}^n w_j f(x_j)
$$
Note that $E_n : \cts[-1,1] \to \re$ is a linear operator, and in particular
$$
E_n(a_0 + a_1 x + \ldots + a_m x^m) = a_0 E_n(1) + a_1 E_n(x) + \ldots + a_m E_n(x^m)
$$
Hence
$$
E_n(p) = 0, \qquad \forall p \in \poly_m
$$
if and only if
$$
E_n(x^i) = 0, \qquad 0 \le i \le m
$$

1.1 Case $n=1$

The quadrature formula is
$$
I_1(f) = w_1 f(x_1)
$$
and we have to find $w_1$ and $x_1$. Imposing exactness on $1$ and $x$,
$$
E_1(1) = 0 \quad\implies\quad \int_{-1}^1 \ud x - w_1 = 0
$$
$$
E_1(x) = 0 \quad\implies\quad \int_{-1}^1 x \ud x - w_1 x_1 = 0
$$
which yields
$$
w_1 = 2, \qquad x_1 = 0
$$
so that
$$
\int_{-1}^1 f(x) \ud x \approx 2 f(0)
$$
This is just the midpoint rule. It uses only one point but is exact for linear polynomials. Note that the trapezoidal rule uses two points but is also exact only for linear polynomials. In terms of the number of nodes used, the midpoint rule is more efficient since it needs fewer function evaluations to achieve the same degree of precision.
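As a quick sanity check, here is a small sketch in plain Python (not part of the derivation) comparing $I_1(x^i) = 2\cdot 0^i$ with the exact moments $\int_{-1}^1 x^i \ud x$: the rule is exact for $i=0,1$ but fails at $i=2$.

```python
# Minimal check of the one-point (midpoint) rule on [-1,1]: w_1 = 2, x_1 = 0.
def I1(f):
    return 2.0 * f(0.0)

for i in range(3):
    exact = 0.0 if i % 2 == 1 else 2.0 / (i + 1)   # integral of x^i over [-1,1]
    print(f"i={i}: I1={I1(lambda x: x**i):.6f}, exact={exact:.6f}")
```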
1.2 Case $n=2$

The quadrature formula is
$$
I_2(f) = w_1 f(x_1) + w_2 f(x_2)
$$
and we have to determine four quantities $w_1, w_2, x_1, x_2$. We can try to make the formula exact for polynomials up to cubic degree,
$$
E_2(x^i) = \int_{-1}^1 x^i \ud x - [w_1 x_1^i + w_2 x_2^i] = 0, \qquad i = 0, 1, 2, 3
$$
which gives us four equations
$$
\begin{aligned}
w_1 + w_2 &= 2 \\
w_1 x_1 + w_2 x_2 &= 0 \\
w_1 x_1^2 + w_2 x_2^2 &= 2/3 \\
w_1 x_1^3 + w_2 x_2^3 &= 0
\end{aligned}
$$
The unique solution (up to relabeling the nodes) is
$$
w_1 = w_2 = 1, \qquad x_1 = -\frac{1}{\sqrt{3}}, \qquad x_2 = \frac{1}{\sqrt{3}}
$$
This rule uses 2 points but has degree of precision 3. Simpson's rule also has degree of precision 3 but uses 3 points.
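We can run the same kind of check for the two-point rule; the plain-Python sketch below confirms exactness for $i = 0, \ldots, 3$ and failure at $i = 4$.

```python
# Two-point Gauss rule on [-1,1]: nodes +/- 1/sqrt(3), unit weights.
from math import sqrt

nodes   = [-1.0 / sqrt(3.0), 1.0 / sqrt(3.0)]
weights = [1.0, 1.0]

def I2(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

for i in range(5):
    exact = 0.0 if i % 2 == 1 else 2.0 / (i + 1)   # integral of x^i over [-1,1]
    print(f"i={i}: I2={I2(lambda x: x**i):+.6f}, exact={exact:+.6f}")
```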
1.3 Case of general $n$

There are $2n$ free parameters $\{w_j, x_j\}$, $j = 1, 2, \ldots, n$, and we make the quadrature exact for polynomials of degree up to $2n-1$,
$$
E_n(x^i) = 0, \qquad i = 0, 1, 2, \ldots, 2n-1
$$
which gives $2n$ equations
$$
\sum_{j=1}^n w_j x_j^i =
\begin{cases}
0 & i = 1, 3, \ldots, 2n-1 \\
\frac{2}{i+1} & i = 0, 2, \ldots, 2n-2
\end{cases}
$$
We have a system of non-linear equations and it is not possible to solve them analytically in general. We will take a digression to study some properties of orthogonal polynomials, which will help us derive these Gauss quadrature rules.
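Before the digression, note that for small $n$ the moment equations can at least be solved numerically. The sketch below is one possible approach, assuming NumPy and SciPy are available and using an ad hoc initial guess with `scipy.optimize.fsolve`; for $n = 3$ it should recover the nodes $0, \pm\sqrt{3/5}$ and weights $8/9, 5/9, 5/9$ that the orthogonal-polynomial theory gives directly.

```python
# Solve the 2n moment equations sum_j w_j x_j^i = int_{-1}^1 x^i dx numerically.
# Unknowns are ordered as (w_1..w_n, x_1..x_n); the initial guess is an ad hoc choice.
import numpy as np
from scipy.optimize import fsolve

def moment_equations(z, n):
    w, x = z[:n], z[n:]
    return [np.sum(w * x**i) - (0.0 if i % 2 == 1 else 2.0 / (i + 1))
            for i in range(2 * n)]

n = 3
guess = np.concatenate([np.full(n, 2.0 / n),          # equal weights summing to 2
                        np.linspace(-0.9, 0.9, n)])    # nodes spread over (-1,1)
sol = fsolve(moment_equations, guess, args=(n,))
print("weights:", sol[:n])   # should approach 5/9, 8/9, 5/9
print("nodes  :", sol[n:])   # should approach -sqrt(3/5), 0, +sqrt(3/5)
```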
2 Solution for general case

We can find the quadrature rule without having to solve the non-linear system of moment equations. Let $f : [-1,1] \to \re$ and, for some integer $n \ge 1$, let $\{ x_j, 1 \le j \le n \}$ be the roots of the Legendre polynomial $P_n$. Consider the interpolant
$$
f_n(x) = \sum_{j=1}^n f(x_j) \ell_j(x)
$$
where the $\ell_j$ are the degree $n-1$ Lagrange polynomials supported at the roots. Since $f(x_j) = f_n(x_j)$ for $1 \le j \le n$, we can write
$$
f(x) = f_n(x) + P_n(x) R(x)
$$
for some function $R$. Now
\begin{align*}
I(f) &= \int_{-1}^1 f(x) \ud x \\
&= \int_{-1}^1 f_n(x) \ud x + \int_{-1}^1 P_n(x) R(x) \ud x \\
&= \sum_{j=1}^n w_j f(x_j) + \int_{-1}^1 P_n(x) R(x) \ud x \\
&= I_n(f) + \int_{-1}^1 P_n(x) R(x) \ud x
\end{align*}
where
$$
I_n(f) = \sum_{j=1}^n w_j f(x_j), \qquad w_j = \int_{-1}^1 \ell_j(x) \ud x
$$
If $f \in \poly_{2n-1}$ then $R \in \poly_{n-1}$, and since $P_n$ is orthogonal to all polynomials of lower degree,
$$
\int_{-1}^1 P_n(x) R(x) \ud x = 0, \qquad \forall R \in \poly_{n-1}
$$
Thus the quadrature formula is exact for any $f \in \poly_{2n-1}$. Later, we will show that this is optimal and derive simpler formulae for the weights.
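The construction above translates almost directly into code. The following sketch (assuming NumPy) takes the nodes to be the roots of $P_n$, computes each weight as $w_j = \int_{-1}^1 \ell_j(x) \ud x$, and checks the result against NumPy's built-in Gauss-Legendre rule `numpy.polynomial.legendre.leggauss`.

```python
import numpy as np
from numpy.polynomial import legendre, polynomial as P

n = 5
# Nodes: roots of P_n, given by coefficients [0,...,0,1] in the Legendre basis.
nodes = np.sort(legendre.legroots([0.0] * n + [1.0]))

# Weights: w_j = integral over [-1,1] of the Lagrange basis polynomial ell_j.
weights = []
for j in range(n):
    others = np.delete(nodes, j)
    ell = P.polyfromroots(others) / np.prod(nodes[j] - others)  # ell_j in the power basis
    anti = P.polyint(ell)                                       # antiderivative of ell_j
    weights.append(P.polyval(1.0, anti) - P.polyval(-1.0, anti))

x_ref, w_ref = legendre.leggauss(n)      # reference nodes and weights
print(np.allclose(nodes, x_ref), np.allclose(weights, w_ref))   # expect: True True
```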