The Trapezoidal and Simpson rules were based on integrating a polynomial approximation. Such methods are called Newton-Cotes integration methods. If we choose $n+1$ distinct points in $[a,b]$ and approximate the function $f : [a,b] \to \re$ by interpolation
$$
p_n(x) = \sum_{j=0}^n f(x_j) \ell_j(x), \qquad x \in [a,b]
$$
the integral can be approximated by
$$
I_n(f) = \int_a^b p_n(x) \ud x = \sum_{j=0}^n f(x_j) \int_a^b \ell_j(x) \ud x = \sum_{j=0}^n w_j f(x_j)
$$
where the weights are given by
$$
w_j = \int_a^b \ell_j(x) \ud x
$$
As an example, for $n=3$ and uniformly spaced points with spacing $h$, we interpolate by a cubic polynomial and the integral is approximated as
$$
I_3(f) = \frac{3h}{8}[f_0 + 3 f_1 + 3 f_2 + f_3]
$$
which is Simpson's $3/8$ rule. For general $n$ we can state the following error estimates; a numerical check of the weights and of the degree of precision is sketched after them.
(a) For $n$ even and $f \in \cts^{n+2}[a,b]$,
$$
I(f) - I_n(f) = C_n h^{n+3} f^{(n+2)}(\eta), \qquad \eta \in [a,b], \qquad
C_n = \frac{1}{(n+2)!} \int_0^n \mu^2 (\mu-1) \ldots (\mu-n) \ud \mu
$$
(b) For $n$ odd and $f \in \cts^{n+1}[a,b]$,
$$
I(f) - I_n(f) = C_n h^{n+2} f^{(n+1)}(\eta), \qquad \eta \in [a,b], \qquad
C_n = \frac{1}{(n+1)!} \int_0^n \mu (\mu-1) \ldots (\mu-n) \ud \mu
$$
The Trapezoidal rule ($n=1$) has degree of precision one and the Simpson rule ($n=2$) has degree of precision three. In general, for $n$ odd the Newton-Cotes rule has degree of precision $n$, while for $n$ even it has degree of precision $n+1$.
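To make the weight formula concrete, here is a small Python sketch (the helper `newton_cotes_weights` is our own name, not a library routine) that computes $w_j = \int_a^b \ell_j(x) \ud x$ by integrating each Lagrange basis polynomial, reproduces the $3/8$ rule for $n=3$, and checks the degree of precision numerically.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def newton_cotes_weights(a, b, n):
    """Closed Newton-Cotes weights w_j = int_a^b l_j(x) dx on n+1 uniform nodes."""
    x = np.linspace(a, b, n + 1)
    w = np.empty(n + 1)
    for j in range(n + 1):
        c = np.array([1.0])                          # coefficients of l_j, low to high degree
        for k in range(n + 1):
            if k != j:
                c = P.polymul(c, [-x[k], 1.0]) / (x[j] - x[k])
        antider = P.polyint(c)                       # antiderivative of l_j
        w[j] = P.polyval(b, antider) - P.polyval(a, antider)
    return x, w

x, w = newton_cotes_weights(0.0, 3.0, 3)             # here h = 1, so expect 3/8 * [1, 3, 3, 1]
print(w)

# Degree of precision: the n = 3 rule integrates x^k exactly for k <= 3 but not for k = 4
for k in range(6):
    exact = 3.0**(k + 1) / (k + 1)
    print(k, abs(w @ x**k - exact))
```

For $n=3$ on $[0,3]$ the weights come out as $\tfrac{3}{8}[1,3,3,1]$, and the errors for $x^k$ vanish (up to round-off) precisely for $k \le 3$, in agreement with the degree-of-precision statement above.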
On uniformly spaced nodes, the interpolating polynomial $p_n(x)$ does not always converge to $f$ as $n \to \infty$. Hence the Newton-Cotes formulae may also fail to converge. Consider
$$
I = \int_{-4}^4 \frac{\ud x}{1 + x^2} = 2 \tan^{-1}(4) \approx 2.6516
$$
As the degree $n$ increases, the Newton-Cotes approximations $I_n(f)$ oscillate and do not approach this value. Hence it is not advisable to use the Newton-Cotes rule for large degree $n$. However, we can use a moderate but fixed degree for the interpolating polynomial and build a composite rule. Convergence is then obtained as the length of the sub-intervals tends to zero while the polynomial degree in each sub-interval remains fixed.
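A quick numerical illustration of both points, using `scipy.integrate.newton_cotes` for the equally spaced weights and a hand-written composite Simpson helper (`composite_simpson` is our own name); the particular degrees and mesh sizes printed are our choices.

```python
import numpy as np
from scipy.integrate import newton_cotes

f = lambda x: 1.0 / (1.0 + x**2)
a, b = -4.0, 4.0
I_exact = 2.0 * np.arctan(4.0)

# Single Newton-Cotes rule of increasing degree n: the values oscillate and do not settle
for n in [2, 4, 6, 8, 10]:
    x = np.linspace(a, b, n + 1)
    w, _ = newton_cotes(n, 1)
    print(n, (b - a) / n * np.sum(w * f(x)))

# Composite Simpson rule: fixed degree 2 per pair of sub-intervals, refine the mesh instead
def composite_simpson(f, a, b, m):
    """Composite Simpson rule on m (even) sub-intervals of equal length."""
    x = np.linspace(a, b, m + 1)
    h = (b - a) / m
    return h / 3.0 * (f(x[0]) + f(x[-1]) + 4.0 * f(x[1:-1:2]).sum() + 2.0 * f(x[2:-1:2]).sum())

for m in [8, 16, 32, 64, 128]:
    print(m, abs(composite_simpson(f, a, b, m) - I_exact))
```

The composite errors decrease steadily as the mesh is refined, while the single-rule values of increasing degree do not approach $I$.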
Let $\mathcal{F}$ be a set of continuous functions on a given interval $[a,b]$. We say that $\mathcal{F}$ is dense in $\cts[a,b]$ if for every $f \in \cts[a,b]$ and every $\epsilon > 0$, there exists $f_\epsilon \in \mathcal{F}$ such that
$$
\norm{f - f_\epsilon}_\infty \le \epsilon
$$
(1) By the Weierstrass theorem, the set of all polynomials is dense in $\cts[a,b]$.
(2) Define
$$
\mathcal{F} = \{ f \in \cts[a,b] : \exists \textrm{ a partition $\{x_j\}$ of $[a,b]$ s.t. } f|_{[x_{j-1},x_j]} \in \poly_1 \}
$$
then $\mathcal{F}$ is dense in $\cts[a,b]$. This is the set of continuous, piecewise affine functions.
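As a quick check of example (2), the piecewise affine interpolant of a continuous function on a uniform partition approaches it in the maximum norm as the partition is refined; the test function and grid sizes below are our own choices.

```python
import numpy as np

f = lambda x: np.exp(np.sin(3.0 * x))    # any continuous function on [a, b] will do
a, b = 0.0, 2.0
xx = np.linspace(a, b, 20001)            # fine grid used to estimate the sup norm

for m in [4, 8, 16, 32, 64]:
    nodes = np.linspace(a, b, m + 1)
    p = np.interp(xx, nodes, f(nodes))   # continuous, piecewise affine interpolant of f
    print(m, np.abs(f(xx) - p).max())    # max-norm error shrinks as the partition is refined
```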
Let
$$
I_n(f) = \sum_{j=0}^n w_{j,n} f(x_{j,n}), \qquad n \ge 1
$$
be a sequence of numerical integration formulae that approximate
$$
I(f) = \int_a^b f(x) \ud x
$$
Let $\mathcal{F}$ be a family dense in $\cts[a,b]$. Then
$$
I_n(f) \to I(f) \qquad \forall f \in \cts[a,b]
$$
if and only if
(1) $I_n(f) \to I(f)$ for all $f \in \mathcal{F}$, and
(2) $B = \sup_n \sum_{j=0}^n |w_{j,n}| < \infty$.
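Condition (2) is automatic for any rule whose weights are all positive and which integrates constants exactly, since then $\sum_j |w_{j,n}| = \sum_j w_{j,n} = I_n(1) = b-a$. A small sketch for the composite trapezoidal rule, which is our own example of such a rule:

```python
import numpy as np

a, b = -4.0, 4.0
for n in [4, 16, 64, 256]:
    h = (b - a) / n
    w = np.full(n + 1, h)
    w[0] = w[-1] = h / 2.0        # composite trapezoidal weights, all positive
    print(n, np.abs(w).sum())     # equals b - a = 8 for every n, so B = b - a
```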
(a) The forward implication is an application of the uniform boundedness principle. Consider the bounded linear functionals
$$
I_n : \cts[a,b] \to \re, \qquad \norm{I_n} = \sum_{j=0}^n |w_{j,n}|
$$
Note that the principle cannot be applied on $\mathcal{F}$ itself, since $\mathcal{F}$ need not be a Banach space; the set of polynomials, for instance, is not complete in the maximum norm. Instead we work on $\cts[a,b]$, which is a Banach space under the maximum norm. Since $I_n(f) \to I(f)$ for every $f \in \cts[a,b]$, the sequence $(I_n(f))_n$ is bounded for each $f$, and the uniform boundedness principle yields $\sup_n \norm{I_n} = B < \infty$, which is condition (2). Condition (1) holds trivially since $\mathcal{F} \subset \cts[a,b]$.
(b) Now we prove the reverse implication. We have to use the density of $\mathcal{F}$ and the two given conditions. For any $f \in \cts[a,b]$ and any $f_\epsilon \in \mathcal{F}$,
\begin{align*}
I(f) - I_n(f) &= I(f) - I(f_\epsilon) + I(f_\epsilon) - I_n(f_\epsilon) + I_n(f_\epsilon) - I_n(f) \\
|I(f) - I_n(f)|
&\le |I(f) - I(f_\epsilon)| + |I(f_\epsilon) - I_n(f_\epsilon)| + |I_n(f_\epsilon) - I_n(f)| \\
&\le (b-a)\norm{f - f_\epsilon}_\infty + |I(f_\epsilon) - I_n(f_\epsilon)| + \norm{f - f_\epsilon}_\infty \underbrace{\sum_{j=0}^n |w_{j,n}|}_{\le B}
\end{align*}
Using density, choose $f_\epsilon$ such that
$$
\norm{f - f_\epsilon}_\infty \le \frac{\epsilon}{2(b-a+B)}
$$
Using property (1), we can choose $n$ large enough so that
$$
|I(f_\epsilon) - I_n(f_\epsilon)| \le \frac{\epsilon}{2}
$$
Then
$$
|I(f) - I_n(f)| \le \epsilon
$$
Since $\epsilon > 0$ was arbitrary, we conclude that $I_n(f) \to I(f)$.
For the function $f(x) = 1/(1+x^2)$, $x \in [-4,4]$, the Newton-Cotes rule on uniform points does not converge. This is because, while property (1) of the above theorem is satisfied, property (2) does not hold for uniform points: the sums $\sum_{j=0}^n |w_{j,n}|$ are not uniformly bounded in $n$.
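This can be seen numerically: the absolute weight sums of the uniform Newton-Cotes rules grow with $n$ (for larger $n$ the weights change sign), so they are not uniformly bounded. A short sketch using `scipy.integrate.newton_cotes`; the particular degrees printed are our choice.

```python
import numpy as np
from scipy.integrate import newton_cotes

a, b = -4.0, 4.0
for n in [2, 4, 6, 8, 10, 12, 14]:
    w, _ = newton_cotes(n, 1)                # weights for n+1 equally spaced nodes
    print(n, np.abs((b - a) / n * w).sum())  # sum_j |w_{j,n}| grows with n
```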