
3.3 Bisection method

```python
#%config InlineBackend.figure_format = 'svg'
from pylab import *
```

Let $f$ be a continuous function. Given an interval $[a,b]$ such that

$$\operatorname{sign} f(a) \ne \operatorname{sign} f(b), \qquad f(a) \ne 0 \ne f(b)$$

then $[a,b]$ is called a non-trivial bracket for a root of $f$.

3.3.1 Main idea

The basic idea of the bisection method is: starting from a non-trivial bracket, find a new bracketing interval which is smaller in size. As the name suggests, we divide the interval into two equal and smaller intervals. First divide $[a,b]$ into $[a,c]$ and $[c,b]$ where $c = \frac{1}{2}(a+b)$. Then there are three possibilities.

  1. $f(c) = 0$, then we have found the root exactly.
  2. $f(c) \ne 0$ and $\operatorname{sign} f(c) \ne \operatorname{sign} f(b)$. Then $[c,b]$ is a non-trivial bracket.
  3. $f(c) \ne 0$ and $\operatorname{sign} f(a) \ne \operatorname{sign} f(c)$. Then $[a,c]$ is a non-trivial bracket.

We repeat the above process until the root is found or the length of the bracketing interval is reduced to a small desired size. The method may not give a unique value for the root since it can lie anywhere in the bracketing interval. We can take the mid-point of the bracketing interval as the estimate of the root, in which case the error in the root is at most equal to half the length of the bracketing interval.
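One step of this process can be sketched in Python as follows (the helper name `bisect_step` is our choice, not from the text; sign changes are detected via the product of function values):

```python
def bisect_step(f, a, b):
    """Perform one bisection step on a non-trivial bracket [a, b].

    Returns a new bracket of half the length, or (c, c) if the
    mid-point c happens to be an exact root."""
    c = 0.5 * (a + b)            # mid-point
    if f(c) == 0.0:              # case 1: root found exactly
        return c, c
    if f(c) * f(b) < 0.0:        # case 2: sign change on [c, b]
        return c, b
    return a, c                  # case 3: sign change on [a, c]
```

For example, for $f(x) = x^2 - 2$ on the bracket $[1,2]$, one step returns the smaller bracket $[1, 1.5]$.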

The length of the bracketing interval reduces by half in each iteration, which means that the algorithm has a monotonic convergence behaviour. If

$$L_0 = b - a = \textrm{initial length}$$

then after $k$ iterations

$$\textrm{length of bracket interval} = L_k = \frac{1}{2^k} L_0$$

As a convergence criterion, we can put a tolerance on the length: stop if $L_k \le \epsilon$ for some small $\epsilon > 0$. To achieve this tolerance, we require

$$k \approx \log_2 \frac{L_0}{\epsilon} \qquad \textrm{iterations}$$

If $L_0 = 1$ and $\epsilon = 10^{-6}$ then we need about 20 iterations. If we demand more accuracy, then the number of iterations will increase, though only logarithmically.
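This iteration count is easy to check directly (the variable names below are our choices):

```python
from math import ceil, log2

L0, eps = 1.0, 1e-6           # initial bracket length and tolerance
k = ceil(log2(L0 / eps))      # smallest k with L0 / 2**k <= eps
print(k)                      # prints 20
```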

3.3.2 Algorithm

The bisection method is shown in Algorithm (1). It requires some input:

  1. initial bracketing interval $[a,b]$
  2. tolerance on function value $\delta$
  3. tolerance on length of bracketing interval $\epsilon$
  4. maximum number of iterations $N$

In iterative algorithms, when we are not sure that the iterations will always converge, it is good to put an upper limit on the number of iterations, since otherwise the program may never end. The bisection method, however, is guaranteed to satisfy the tolerance on the bracketing interval, so it is safe to remove the limit on the maximum number of iterations; nevertheless, in the code below, we put an upper limit.

The way we have written the algorithm, we store the entire history of the computations in arrays a[k], b[k], c[k], which is not necessary. In actual practice, we only need to keep the latest results in memory; this is illustrated in Algorithm (2), which does not require any arrays.

Convergence criteria. We are checking the tolerance in terms of absolute error in the length of the bracketing interval, without regard to the size of the root.

It is better to use a relative error tolerance

$$c = \frac{1}{2}(a+b), \qquad \textrm{If } |b - a| < \epsilon |c| \quad \textrm{return } c$$

which is done in the implementation below.
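A sketch of the array-free algorithm with this relative tolerance might look as follows (the function name `bisect` and the default parameter values are our choices, not from the text):

```python
def bisect(f, a, b, delta=1e-12, eps=1e-12, N=100):
    """Bisection method.

    delta : tolerance on the function value |f(c)|
    eps   : relative tolerance on the bracket length
    N     : maximum number of iterations"""
    fa, fb = f(a), f(b)
    if fa * fb > 0.0:
        raise ValueError("[a,b] is not a non-trivial bracket")
    for _ in range(N):
        c = 0.5 * (a + b)            # mid-point of current bracket
        fc = f(c)
        if abs(fc) < delta or abs(b - a) < eps * abs(c):
            return c                 # converged
        if fc * fb < 0.0:            # root lies in [c, b]
            a, fa = c, fc
        else:                        # root lies in [a, c]
            b, fb = c, fc
    return 0.5 * (a + b)             # best estimate after N iterations
```

Note that only the current bracket endpoints and function values are kept in memory, as in Algorithm (2).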

3.3.3 Convergence rate

The bisection method always converges and the root $r$ is reached in the limit

$$r = \lim_{k \to \infty} c_k$$

After $k$ iterations, if we take the mid-point of the bracketing interval as the estimate of the root, the error is bounded by

$$|r - c_k| \le \frac{L_k}{2} = \frac{b-a}{2^{k+1}}$$
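This bound can be verified numerically; here is a quick sketch for $f(x) = x^2 - 2$ on $[1,2]$, whose root is $\sqrt{2}$ (the example function is our choice):

```python
from math import sqrt

f = lambda x: x * x - 2.0
a, b = 1.0, 2.0
r = sqrt(2.0)                    # exact root, for comparison
L0 = b - a                       # initial bracket length
for k in range(25):
    c = 0.5 * (a + b)            # c_k: mid-point after k halvings
    # the error bound |r - c_k| <= L0 / 2**(k+1) from the text
    assert abs(r - c) <= L0 / 2 ** (k + 1)
    if f(c) * f(b) < 0.0:        # keep the half that brackets the root
        a = c
    else:
        b = c
```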