1.2.3 Divide and Conquer Strategy
A powerful strategy for solving large-scale problems is divide and conquer (one of the oldest military strategies), which is also one of the most important paradigms for designing efficient algorithms. The idea is to split a problem into several subproblems (typically two for sequential algorithms) of smaller size. Each of these is again split into smaller subproblems, and so forth, until a number of sufficiently small problems are obtained. The solution of the initial problem is then obtained by combining the solutions of the subproblems, working backward in the hierarchy.
We illustrate the idea on the computation of the sum s = a_1 + a_2 + ··· + a_n. The usual way to proceed is to use the recursion

    s_0 = 0,    s_i = s_{i-1} + a_i,    i = 1 : n.

Another order of summation is illustrated below for n = 2^3 = 8:

    s_{1:2}    s_{3:4}    s_{5:6}    s_{7:8}
         s_{1:4}               s_{5:8}
                   s_{1:8}
where s_{i:j} = a_i + ··· + a_j. In this table each new entry is obtained by adding its two neighbors in the row above. Clearly this can be generalized to compute an arbitrary sum of n = 2^k terms in k steps. In the first step we perform n/2 sums of two terms, then n/4 partial sums of four terms each, etc., until in the kth step we compute the final sum. This summation algorithm uses the same number of additions as the first one, but it has the advantage that it splits the task into several subtasks that can be performed in parallel. For large values of n this summation order can also be much more accurate than the conventional order (see Problem 2.3.5).
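The bottom-up summation order can be sketched as follows (a minimal Python illustration; as in the text, the number of terms is assumed to be a power of two):

```python
def pairwise_sum(a):
    """Sum the entries of a by repeated pairing.

    Assumes len(a) is a power of two, as in the text (n = 2^k).
    Each pass halves the number of partial sums; the k passes
    correspond to the rows of the table, and the sums within
    one pass are independent and could run in parallel.
    """
    s = list(a)
    while len(s) > 1:
        # Add neighboring pairs: s[0]+s[1], s[2]+s[3], ...
        s = [s[i] + s[i + 1] for i in range(0, len(s), 2)]
    return s[0]

print(pairwise_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

Besides exposing parallelism, this order tends to keep the partial sums of comparable magnitude, which is the source of its accuracy advantage mentioned above.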
The algorithm can also be described in another way. Consider the summation algorithm

    sum = s(i, j);
        if j = i + 1 then
            sum = a_i + a_j;
        else
            k = ⌊(i + j)/2⌋;
            sum = s(i, k) + s(k + 1, j);
        end
for computing the sum s(i, j) = a_i + ··· + a_j, j > i. (Here and in the following ⌊x⌋ denotes the floor of x, i.e., the largest integer ≤ x. Similarly, ⌈x⌉ denotes the ceiling of x, i.e., the smallest integer ≥ x.) This function defines s(i, j) in a recursive way; if the sum consists of only two terms, then we add them and return with the answer. Otherwise we split the sum in two and use the function again to evaluate the corresponding two partial sums. Espelid [114] gives an interesting discussion of such summation algorithms.
The function above is an example of a recursive algorithm—it calls itself. Many computer languages (for example, MATLAB) allow the definition of such recursive algorithms. The divide and conquer version is a top-down description of the algorithm, in contrast to the bottom-up description we gave first.
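In a language that supports recursion the function can be written down directly. The Python sketch below follows the function in the text, with a one-term base case added (an addition of ours, not in the text) so that subarrays of odd length arising from the split are also handled:

```python
def s(a, i, j):
    """Recursive sum a[i] + ... + a[j] (0-based indices, j >= i)."""
    if i == j:             # single term (base case added here)
        return a[i]
    if j == i + 1:         # two terms: add and return
        return a[i] + a[j]
    k = (i + j) // 2       # split point: floor((i + j)/2)
    return s(a, i, k) + s(a, k + 1, j)

print(s([1, 2, 3, 4, 5, 6, 7, 8], 0, 7))  # 36
```

This is the top-down formulation: the recursion tree it generates is exactly the table shown earlier, traversed from the bottom row upward as the recursive calls return.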
Example 1.2.3.
Sorting the items of a one-dimensional array in ascending or descending order is one of the most important problems in computer science. In numerical work, sorting is frequently needed when data need to be rearranged. One of the best known and most efficient sorting algorithms, quicksort by Hoare [202], is based on the divide and conquer paradigm. To sort an array of n items, a[0 : n − 1], it proceeds as follows:
1. Select an element a(k) to be the pivot. Commonly used methods are to select the pivot randomly or to take the median of the first, middle, and last elements in the array.
2. Rearrange the elements of the array a into a left and right subarray such that no element in the left subarray is larger than the pivot and no element in the right subarray is smaller than the pivot.
3. Recursively sort the left and right subarrays.

The partitioning of a subarray a[l : r], l < r, in step 2 can proceed as follows. Place the pivot in a[l] and initialize two pointers i = l, j = r + 1. The pointer i is incremented until an element a(i) is encountered which is larger than the pivot. Similarly, the pointer j is decremented until an element a(j) is encountered which is smaller than the pivot. At this point the elements a(i) and a(j) are exchanged. The process continues until the pointers cross each other. Finally, the pivot element is placed in its correct position. It is intuitively clear that this algorithm sorts the entire array and that no merging phase is needed.
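The three steps can be put together as follows (a Python sketch under the description above, using median-of-three pivot selection and the two-pointer partition; illustrative, not a tuned implementation):

```python
def quicksort(a, l=0, r=None):
    """Sort list a in place by quicksort and return it."""
    if r is None:
        r = len(a) - 1
    if l >= r:                       # zero or one element: done
        return a
    # Step 1: take the median of the first, middle, and last
    # elements as pivot and place it in a[l].
    m = (l + r) // 2
    _, med = sorted([(a[l], l), (a[m], m), (a[r], r)])[1]
    a[l], a[med] = a[med], a[l]
    pivot = a[l]
    # Step 2: partition.  i moves right past elements smaller than
    # the pivot, j moves left past larger ones; out-of-place pairs
    # are exchanged until the pointers cross.
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]          # pivot into its final position
    # Step 3: recursively sort the left and right subarrays.
    quicksort(a, l, j - 1)
    quicksort(a, j + 1, r)
    return a

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Note that no merging phase is needed: once the pivot is in its final position, sorting the two subarrays independently completes the sort.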
There are many other examples of the power of the divide and conquer approach. It underlies the fast Fourier transform (Sec. 4.6.3) and is used in efficient automatic parallelization of many tasks, such as matrix multiplication; see [111].