
Bisection method

```python
#%config InlineBackend.figure_format = 'svg'
from pylab import *
```

Let $f$ be a continuous function. Given an interval $[a,b]$ such that

$$\operatorname{sign} f(a) \ne \operatorname{sign} f(b), \qquad f(a) \ne 0 \ne f(b)$$

then $[a,b]$ is called a *non-trivial bracket* for a root of $f$. The basic idea of the bisection method is: starting from a non-trivial bracket, find a new bracketing interval which is smaller in size. As the name suggests, we divide the interval into two equal and smaller intervals. First divide $[a,b]$ into $[a,c]$ and $[c,b]$ where $c = \frac{1}{2}(a+b)$. Then there are three possibilities.

  1. $f(c) = 0$, then we have found the root exactly.
  2. $f(c) \ne 0$ and $\operatorname{sign} f(c) \ne \operatorname{sign} f(b)$. Then $[c,b]$ is a non-trivial bracket.
  3. $f(c) \ne 0$ and $\operatorname{sign} f(a) \ne \operatorname{sign} f(c)$. Then $[a,c]$ is a non-trivial bracket.

We repeat the above process until the root is found or the length of the bracketing interval is reduced to a small desired size. The method may not give a unique value for the root since it can lie anywhere in the bracketing interval. We can take the mid-point of the bracketing interval as the estimate of the root, in which case the error in the root is at most equal to half the length of the bracketing interval.
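The process described above can be sketched in Python. This is a minimal illustration, not the book's algorithm verbatim; the function name `bisect`, the sign test via `math.copysign`, and the default tolerance are choices made here:

```python
import math

def bisect(f, a, b, eps=1e-6, maxiter=100):
    """Bisection: return the mid-point of the final bracketing interval."""
    fa, fb = f(a), f(b)
    # [a, b] must be a non-trivial bracket
    assert fa != 0 != fb and math.copysign(1, fa) != math.copysign(1, fb)
    for _ in range(maxiter):
        c = 0.5 * (a + b)
        fc = f(c)
        if fc == 0:                  # case 1: root found exactly
            return c
        if math.copysign(1, fc) != math.copysign(1, fb):
            a, fa = c, fc            # case 2: [c, b] is a non-trivial bracket
        else:
            b, fb = c, fc            # case 3: [a, c] is a non-trivial bracket
        if b - a <= eps:             # bracket is small enough
            break
    return 0.5 * (a + b)             # error at most half the bracket length
```

For example, `bisect(lambda x: x**2 - 2, 1.0, 2.0)` returns an approximation of $\sqrt{2}$ accurate to about $10^{-6}$. Note that only the current bracket `(a, b)` is kept in memory, not the full history of iterates.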

The length of the bracketing interval is halved in each iteration, so the method converges monotonically; the error bound shrinks by a fixed factor of one half per step, i.e., the convergence is linear. If

$$L_0 = b - a = \textrm{initial length}$$

then after $k$ iterations

$$L_k = \frac{1}{2^k} L_0$$

As a convergence criterion, we can put a tolerance on the length: stop if $L_k \le \epsilon$ for some small $\epsilon > 0$. To achieve this tolerance, we require

$$k \approx \log_2 \frac{L_0}{\epsilon}$$

If $L_0 = 1$ and $\epsilon = 10^{-6}$ then we need about 20 iterations. If we demand more accuracy, then the number of iterations will increase, though only logarithmically. The bisection method is shown in Algorithm (1). We have put an upper limit on the number of iterations and we also specify the stopping criterion in terms of the length of the bracketing interval and/or the function value.
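As a quick check of this estimate, using $L_0 = 1$ and $\epsilon = 10^{-6}$ from above:

```python
from math import ceil, log2

L0, eps = 1.0, 1e-6
k = ceil(log2(L0 / eps))   # smallest k with L0 / 2**k <= eps
print(k)                   # 20
```

Tightening the tolerance to $\epsilon = 10^{-12}$ only doubles this to 40 iterations, which is the logarithmic growth mentioned above.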

The way we have written the algorithm, we store the entire history of computations, which is not necessary. In actual practice, we only need to keep the latest results in memory, and this is illustrated in Algorithm (2).

We are checking the tolerance in terms of the absolute error in the length of the bracketing interval, without regard to the size of the root. If $\epsilon = 10^{-6}$ and the root is near one, then the bisection method will return roughly six accurate digits. If $\epsilon = 10^{-6}$ and the root is $\approx 10^{-7}$, then we cannot expect any figures of accuracy. On the other hand, if the root is $\gg 1$ and we use $\epsilon \ll 1$, then we may never be able to achieve this tolerance, or it may require far too many iterations. It is better to use a relative error tolerance:

$$c = \frac{1}{2}(a+b), \qquad \textrm{If } |b - a| < \epsilon |c| \quad \textrm{return } c$$
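A sketch of the loop with this relative criterion (the function name `bisect_rel` and the defaults are illustrative choices, not from the text):

```python
import math

def bisect_rel(f, a, b, eps=1e-12, maxiter=200):
    """Bisection with a relative tolerance on the bracket length."""
    fa, fb = f(a), f(b)
    # [a, b] must be a non-trivial bracket
    assert fa != 0 != fb and math.copysign(1, fa) != math.copysign(1, fb)
    for _ in range(maxiter):
        c = 0.5 * (a + b)
        # relative test |b - a| < eps |c|; never triggers if the root is 0
        if abs(b - a) < eps * abs(c):
            return c
        fc = f(c)
        if fc == 0:
            return c
        if math.copysign(1, fc) != math.copysign(1, fb):
            a = c
        else:
            b = c
    return 0.5 * (a + b)
```

For a large root, e.g. `bisect_rel(lambda x: x - 1e7, 1.0, 2e7)`, the relative test stops once the bracket is below $\epsilon |c| \approx 10^{-5}$, after roughly 40 iterations, whereas an absolute tolerance of $10^{-12}$ on the same problem could never be met in double precision.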

Convergence rate

The bisection method always converges, and the root $\alpha$ is reached in the limit

$$\alpha = \lim_{k \to \infty} c_k$$

After $k$ iterations, if we take the mid-point $c_k$ of the bracketing interval as the estimate of the root, the error is bounded by

$$|\alpha - c_k| \le \frac{L_k}{2} = \frac{b-a}{2^{k+1}}$$
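This bound can be verified numerically; here for $f(x) = x^2 - 2$ on $[1,2]$, whose root is $\alpha = \sqrt{2}$ (an example chosen here for illustration):

```python
import math

def f(x):
    return x**2 - 2

alpha = math.sqrt(2)
a, b = 1.0, 2.0
L0 = b - a
for k in range(30):
    c = 0.5 * (a + b)          # c_k: mid-point after k bisections
    bound = L0 / 2**(k + 1)    # (b - a) / 2^(k+1)
    assert abs(alpha - c) <= bound
    if math.copysign(1, f(c)) != math.copysign(1, f(b)):
        a = c                  # root lies in [c, b]
    else:
        b = c                  # root lies in [a, c]
```

The assertion holds at every step: the error in the mid-point estimate never exceeds half the current bracket length.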