Problem Overview

A Divide & Conquer approach is fairly straightforward to understand conceptually.

  1. Divide the problem instance into smaller subproblems.

  2. Solve each subproblem (recursively).

  3. Combine the smaller solutions to solve the original instance.

Many applications of this concept exist in algorithms; today we will look at Merge Sort, Quick Sort, and Binary Search Trees. We will also discover a new problem known as the Convex Hull, which comes up frequently in game development.

Merge Sort

The idea of Merge Sort is:

  • Imagine we recursively divide the array we want to sort into halves, until we reach single-element partitions. 
  • We then recursively merge the partitions, using a process that maintains sorted order as partitions are merged.
  • When we finally merge the last two partitions, we have a sorted array.

Let's take a look at some pseudocode:

ALGORITHM MergeSort (A[0 . . . n − 1])
/* Sort an array using a divide-and-conquer merge sort. */
/* INPUT : An array A[0 . . . n − 1] of orderable elements. */
/* OUTPUT : An array A[0 . . . n − 1] sorted in ascending order. */
1: if n > 1 then
2:     B = A[0...⌊n/2⌋−1] /* B is the first half of A */
3:     C = A[⌊n/2⌋...n−1] /* C is the second half of A */
4:     MergeSort (B)
5:     MergeSort (C)
6:     Merge (B,C,A) /* Merge B and C to help sort A */
7: end if

Given two sorted subarrays B and C, we want to merge them together to form a sorted A.


  1. Consider the first element of each subarray, i.e. B[0] and C[0], and compare them. Copy whichever is smaller to A[0], then increment the current pointer of the subarray that held the smaller element, and of A.

  2. Repeat until one of the subarrays is empty. Then copy the rest of the other subarray to A.
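The pseudocode above translates almost directly into Python; the sketch below (the function names merge_sort and merge are my own) also fills in the Merge routine described in the two steps above:

```python
def merge(b, c, a):
    """Merge sorted subarrays b and c back into a (in place)."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:          # <= keeps equal elements in order (stability)
            a[k] = b[i]
            i += 1
        else:
            a[k] = c[j]
            j += 1
        k += 1
    # One subarray is exhausted; copy the remainder of the other into a.
    a[k:] = b[i:] if i < len(b) else c[j:]

def merge_sort(a):
    """Sort list a in place by divide and conquer."""
    if len(a) > 1:
        b = a[:len(a) // 2]   # B is the first half of A
        c = a[len(a) // 2:]   # C is the second half of A
        merge_sort(b)
        merge_sort(c)
        merge(b, c, a)
```

Note the extra space used for the copies b and c, which is the drawback discussed below.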

Some things to note:

  • Guarantees O(n log n) time complexity, regardless of the original distribution of data – this sorting method is insensitive to the data distribution.
  • The main drawback of this method is the extra space required for merging two partitions/subarrays, e.g. B and C from the pseudocode.
  • Merge sort is a stable sorting method.

Quick Sort

  • Merge sort behaves consistently for all inputs – what if we seek an algorithm that is fast in the average case?
  • Quicksort is such a sorting algorithm, and it is often the best practical choice because of its good average-case performance.
  • Quicksort is also a divide-and-conquer algorithm.


  1. Select an element from the array for which, we hope, about half the elements will come before it and half after it in the sorted array. Call this element the pivot.

  2. Partition the array so that all elements with a value less than the pivot are in one subarray, and all larger elements are in the other subarray.

  3. Swap the pivot into the position between the two partitions.

  4. Recursively apply the same procedure to the two subarrays separately.

  5. Terminate when the subarrays contain only one element each.

  6. On termination, because everything is done in place, the resulting array is sorted.
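The steps above can be sketched in Python. This version uses the last element as the pivot (a Lomuto-style partition) – a simple choice for illustration, not the only one:

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[hi]; return the pivot's
    final index. Smaller elements end up left of it, larger ones right."""
    pivot = a[hi]
    i = lo                          # boundary of the "less than pivot" region
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]       # swap pivot between the two partitions
    return i

def quicksort(a, lo=0, hi=None):
    """Sort a[lo..hi] in place by divide and conquer."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:                     # subarrays of one element terminate
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)
```

Because the partitioning is done in place, no temporary arrays are copied, unlike merge sort.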

  1. Best Case:

    • Occurs when the pivot repeatedly splits the dataset into two equal-sized subsets.

    • The complexity is O(n log₂ n).

  2. Worst Case:

    • If the pivot is chosen poorly, one of the partitions may be empty, and the other reduced by only one element.

    • Then quicksort is slower than brute-force sorting (due to partitioning overhead).

    • The complexity is n + (n−1) + (n−2) + ... + 1 ≈ n²/2 ∈ O(n²).

    • This occurs, for example, when the array is already sorted or reverse-sorted and the first or last element is chosen as the pivot.

  3. Average case:

    • The number of comparisons is ≈ 1.39 n log n.

    • That is 39% more comparisons than merge sort.

    • But faster than merge sort in practice because of the lower cost of other high-frequency operations, and uses considerably less space (no need to copy to temporary arrays).
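To make the case analysis concrete, here is a minimal sketch (an instrumented quicksort with a last-element pivot; all names are my own) that counts comparisons on sorted versus shuffled input:

```python
import random

def quicksort_count(a):
    """In-place quicksort (last-element pivot) that returns the number
    of element comparisons performed."""
    comparisons = 0

    def sort(lo, hi):
        nonlocal comparisons
        if lo >= hi:
            return
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            comparisons += 1        # one comparison per loop iteration
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        sort(lo, i - 1)
        sort(i + 1, hi)

    sort(0, len(a) - 1)
    return comparisons

n = 200
worst = quicksort_count(list(range(n)))           # already sorted: worst case
average = quicksort_count(random.sample(range(n), n))
# worst is exactly n(n-1)/2 = 19900; the shuffled run lands near 1.39 n log2 n
```

On sorted input every pivot is the maximum, so one partition is always empty and the comparison count matches the n + (n−1) + ... + 1 sum above.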

Binary Search Trees


Convex Hull