
11.1 Comparison-Based Sorting

In this section, we present three sorting algorithms: merge-sort, quicksort, and heap-sort. Each takes an input array $a$ and sorts its elements into non-decreasing order in $O(n\log n)$ (expected) time. These algorithms are all comparison-based: they do not care what type of data is being sorted, and the only operation they perform on the data is a comparison using the $\mathrm{compare}(a,b)$ method. Recall, from Section 1.2.4, that $\mathrm{compare}(a,b)$ returns a negative value if $a<b$, a positive value if $a>b$, and zero if $a=b$.
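For concreteness, a minimal Python comparator matching this convention might look as follows. This is our own sketch of the behaviour the text assumes of $\mathrm{compare}(a,b)$, not the book's implementation:
\begin{verbatim}
def compare(a, b):
    # Three-way comparison: negative if a < b, positive if a > b, zero if a == b.
    return (a > b) - (a < b)

assert compare(3, 5) < 0
assert compare(5, 3) > 0
assert compare(4, 4) == 0
\end{verbatim}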


11.1.1 Merge-Sort

The merge-sort algorithm is a classic example of recursive divide and conquer: If the length of $a$ is at most 1, then $a$ is already sorted, so we do nothing. Otherwise, we split $a$ into two halves, $a_0=a[0],\ldots,a[n/2-1]$ and $a_1=a[n/2],\ldots,a[n-1]$. We recursively sort $a_0$ and $a_1$, and then we merge (the now sorted) $a_0$ and $a_1$ to get our fully sorted array $a$:
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} $\mathrm{merge\_sort}(a, c)$\\
\hspace*{2em} \textbf{if} $\mathrm{length}(a) \le 1$ \textbf{then return} $a$\\
\hspace*{2em} $m \gets \lfloor \mathrm{length}(a)/2 \rfloor$\\
\hspace*{2em} $a_0 \gets \mathrm{merge\_sort}(a[0],\ldots,a[m-1],\; c)$\\
\hspace*{2em} $a_1 \gets \mathrm{merge\_sort}(a[m],\ldots,a[n-1],\; c)$\\
\hspace*{2em} $\mathrm{merge}(a_0, a_1, a, c)$\\
\hspace*{2em} \textbf{return} $a$\\
\end{flushleft}\end{leftbar}
An example is shown in Figure 11.1.

Figure 11.1: The execution of $\mathrm{merge\_sort}(a,c)$.
\includegraphics[width=\textwidth ]{figs-python/mergesort}

Compared to sorting, merging the two sorted arrays $a_0$ and $a_1$ is fairly easy. We add elements to $a$ one at a time. If $a_0$ or $a_1$ is empty, then we add the next element from the other (non-empty) array. Otherwise, we take the minimum of the next element in $a_0$ and the next element in $a_1$ and add it to $a$:
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} $\mathrm{merge}(a_0, a_1, a, c)$\\
\hspace*{2em} $i_0 \gets 0$;\quad $i_1 \gets 0$\\
\hspace*{2em} \textbf{for} $i \gets 0$ \textbf{to} $\mathrm{length}(a)-1$\\
\hspace*{3em} \textbf{if} $i_0 = \mathrm{length}(a_0)$ \textbf{then} $a[i] \gets a_1[i_1]$;\quad $i_1 \gets i_1+1$\\
\hspace*{3em} \textbf{else if} $i_1 = \mathrm{length}(a_1)$ \textbf{then} $a[i] \gets a_0[i_0]$;\quad $i_0 \gets i_0+1$\\
\hspace*{3em} \textbf{else if} $\mathrm{compare}(a_0[i_0], a_1[i_1]) \le 0$ \textbf{then} $a[i] \gets a_0[i_0]$;\quad $i_0 \gets i_0+1$\\
\hspace*{3em} \textbf{else} $a[i] \gets a_1[i_1]$;\quad $i_1 \gets i_1+1$\\
\end{flushleft}\end{leftbar}
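In Python, the two routines above might be rendered as follows. This is a sketch of the pseudocode, not the book's own implementation; the default comparator mirrors the $\mathrm{compare}$ function shown earlier:
\begin{verbatim}
def merge(a0, a1, a, c):
    # Merge the sorted arrays a0 and a1 into a, one element at a time.
    i0 = i1 = 0
    for i in range(len(a)):
        if i0 == len(a0):                 # a0 is exhausted
            a[i] = a1[i1]; i1 += 1
        elif i1 == len(a1):               # a1 is exhausted
            a[i] = a0[i0]; i0 += 1
        elif c(a0[i0], a1[i1]) <= 0:      # next element of a0 is no larger
            a[i] = a0[i0]; i0 += 1
        else:
            a[i] = a1[i1]; i1 += 1

def merge_sort(a, c=lambda x, y: (x > y) - (x < y)):
    if len(a) <= 1:
        return a
    m = len(a) // 2
    a0 = merge_sort(a[:m], c)   # sort a copy of the first half
    a1 = merge_sort(a[m:], c)   # sort a copy of the second half
    merge(a0, a1, a, c)         # merge the halves back into a
    return a
\end{verbatim}
For example, \verb|merge_sort([5, 2, 4, 0, 6])| returns \verb|[0, 2, 4, 5, 6]|.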
Notice that the $\mathrm{merge}(a_0,a_1,a,c)$ algorithm performs at most $n-1$ comparisons before running out of elements in one of $a_0$ or $a_1$.

To understand the running time of merge-sort, it is easiest to think in terms of its recursion tree. Suppose for now that $n$ is a power of two, so that $n=2^{\log n}$ and $\log n$ is an integer. Refer to Figure 11.2. Merge-sort turns the problem of sorting $n$ elements into two problems, each of sorting $n/2$ elements. These two subproblems are then turned into two problems each, for a total of four subproblems, each of size $n/4$. These four subproblems become eight subproblems, each of size $n/8$, and so on. At the bottom of this process, $n/2$ subproblems, each of size two, are converted into $n$ problems, each of size one. For each subproblem of size $n/2^{i}$, the time spent merging and copying data is $O(n/2^i)$. Since there are $2^i$ subproblems of size $n/2^i$, the total time spent working on subproblems of size $n/2^i$, not counting recursive calls, is

$\displaystyle 2^i\times O(n/2^i) = O(n) \enspace .
$

Therefore, the total amount of time taken by merge-sort is

$\displaystyle \sum_{i=0}^{\log n} O(n) = O(n\log n) \enspace .
$

Figure 11.2: The merge-sort recursion tree.
\includegraphics[width=\textwidth ]{figs-python/mergesort-recursion}

The proof of the following theorem is based on the preceding analysis, but it has to be a little more careful to deal with the cases where $n$ is not a power of 2.

Theorem 11.1   The $\mathrm{merge\_sort}(a,c)$ algorithm runs in $O(n\log n)$ time and performs at most $n\log n$ comparisons.

Proof. The proof is by induction on $n$. The base case, in which $n\le 1$, is trivial; when presented with an array of length 0 or 1, the algorithm simply returns without performing any comparisons.

Merging two sorted lists of total length $n$ requires at most $n-1$ comparisons. Let $C(n)$ denote the maximum number of comparisons performed by $\mathrm{merge\_sort}(a,c)$ on an array $a$ of length $n$. If $n$ is even, then we apply the inductive hypothesis to the two subproblems and obtain

\begin{align*}
C(n) &\le n-1 + 2C(n/2) \\
 &\le n-1 + 2((n/2)\log(n/2)) \\
 &= n-1 + n\log(n/2) \\
 &= n-1 + n\log n - n \\
 &< n\log n \enspace .
\end{align*}

The case where $n$ is odd is slightly more complicated. For this case, we use two inequalities that are easy to verify:

$\displaystyle \log(x+1) \le \log(x) + 1 \enspace ,$ (11.1)

for all $ x\ge 1$ and

$\displaystyle \log(x+1/2) + \log(x-1/2) \le 2\log(x) \enspace ,$ (11.2)

for all $x\ge 1/2$. Inequality (11.1) comes from the fact that $\log(x)+1 = \log(2x)$, while (11.2) follows from the fact that $\log$ is a concave function. With these tools in hand we have, for odd $n$,

\begin{align*}
C(n) &\le n-1 + C(\lceil n/2 \rceil) + C(\lfloor n/2 \rfloor) \\
 &\le n-1 + \lceil n/2 \rceil \log \lceil n/2 \rceil + \lfloor n/2 \rfloor \log \lfloor n/2 \rfloor \\
 &= n-1 + (n/2+1/2)\log(n/2+1/2) + (n/2-1/2)\log(n/2-1/2) \\
 &\le n-1 + n\log(n/2) + (1/2)\left(\log(n/2+1/2) - \log(n/2-1/2)\right) \\
 &\le n-1 + n\log(n/2) + 1/2 \\
 &< n + n\log(n/2) \\
 &= n + n(\log n - 1) \\
 &= n\log n \enspace . \qedhere
\end{align*}

$ \qedsymbol$

11.1.2 Quicksort

The quicksort algorithm is another classic divide and conquer algorithm. Unlike merge-sort, which does merging after solving the two subproblems, quicksort does all of its work upfront.

Quicksort is simple to describe: Pick a random pivot element, $x$, from $a$; partition $a$ into the set of elements less than $x$, the set of elements equal to $x$, and the set of elements greater than $x$; and, finally, recursively sort the first and third sets in this partition. An example is shown in Figure 11.3.
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} $\mathrm{quick\_sort}(a, i, n, c)$\\
\hspace*{2em} \textbf{if} $n \le 1$ \textbf{then return}\\
\hspace*{2em} $x \gets a[i + \mathrm{random\_int}(n)]$\\
\hspace*{2em} $(p, j, q) \gets (i-1,\; i,\; i+n)$\\
\hspace*{2em} \textbf{while} $j < q$\\
\hspace*{3em} \textbf{if} $\mathrm{compare}(a[j], x) < 0$ \textbf{then} $p \gets p+1$;\quad swap $a[j]$ and $a[p]$;\quad $j \gets j+1$\\
\hspace*{3em} \textbf{else if} $\mathrm{compare}(a[j], x) > 0$ \textbf{then} $q \gets q-1$;\quad swap $a[j]$ and $a[q]$\\
\hspace*{3em} \textbf{else} $j \gets j+1$\\
\hspace*{2em} $\mathrm{quick\_sort}(a, i, p-i+1, c)$\\
\hspace*{2em} $\mathrm{quick\_sort}(a, q, n-(q-i), c)$\\
\end{flushleft}\end{leftbar}

Figure 11.3: An example execution of $\mathrm{quick\_sort}(a,0,14)$.
\includegraphics[scale=0.90909]{figs-python/quicksort}
All of this is done in place, so that instead of making copies of subarrays being sorted, the $\mathrm{quick\_sort}(a,i,n,c)$ method only sorts the subarray $a[i],\ldots,a[i+n-1]$. Initially, this method is invoked with the arguments $\mathrm{quick\_sort}(a,0,\mathrm{length}(a),c)$.

At the heart of the quicksort algorithm is the in-place partitioning algorithm. This algorithm, without using any extra space, swaps elements in $a$ and computes indices $p$ and $q$ so that

$\displaystyle \begin{cases}
a[i] < x & \text{if $0 \le i \le p$} \\
a[i] = x & \text{if $p < i < q$} \\
a[i] > x & \text{if $q \le i \le n-1$}
\end{cases}$

This partitioning, which is done by the \textbf{while} loop in the code, works by iteratively increasing $p$ and decreasing $q$ while maintaining the first and last of these conditions. At each step, the element at position $j$ is either moved to the front, left where it is, or moved to the back. In the first two cases, $j$ is incremented, while in the last case, $j$ is not incremented since the new element at position $j$ has not yet been processed.
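A runnable Python sketch of this in-place scheme follows; it is our own rendering rather than the book's code, with $\mathrm{random\_int}(n)$ replaced by \verb|random.randrange(n)|:
\begin{verbatim}
import random

def quick_sort(a, i, n, c):
    # Sort the subarray a[i], ..., a[i+n-1] in place.
    if n <= 1:
        return
    x = a[i + random.randrange(n)]     # random pivot
    p, j, q = i - 1, i, i + n
    # Invariant: a[i..p] < x, a[p+1..j-1] == x, a[q..i+n-1] > x.
    while j < q:
        if c(a[j], x) < 0:             # move a[j] to the front
            p += 1
            a[j], a[p] = a[p], a[j]
            j += 1
        elif c(a[j], x) > 0:           # move a[j] to the back
            q -= 1
            a[j], a[q] = a[q], a[j]    # a[j] is unprocessed; don't advance j
        else:                          # a[j] == x: leave it where it is
            j += 1
    quick_sort(a, i, p - i + 1, c)     # sort the elements less than x
    quick_sort(a, q, n - (q - i), c)   # sort the elements greater than x
\end{verbatim}
Calling \verb|quick_sort(a, 0, len(a), compare)| sorts the whole array.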

Quicksort is very closely related to the random binary search trees studied in Section 7.1. In fact, if the input to quicksort consists of $n$ distinct elements, then the quicksort recursion tree is a random binary search tree. To see this, recall that when constructing a random binary search tree the first thing we do is pick a random element $x$ and make it the root of the tree. After this, every element will eventually be compared to $x$, with smaller elements going into the left subtree and larger elements into the right.

In quicksort, we select a random element $x$ and immediately compare everything to $x$, putting the smaller elements at the beginning of the array and the larger elements at the end. Quicksort then recursively sorts the beginning and the end of the array, while the random binary search tree recursively inserts the smaller elements in the left subtree of the root and the larger elements in the right subtree.

The above correspondence between random binary search trees and quicksort means that we can translate Lemma 7.1 to a statement about quicksort:

Lemma 11.1   When quicksort is called to sort an array containing the integers $0,\ldots,n-1$, the expected number of times element $i$ is compared to a pivot element is at most $H_{i+1} + H_{n-i}$.

A little summing up of harmonic numbers (recall that $H_k = 1+1/2+\cdots+1/k \le \ln k + 1$) gives us the following theorem about the running time of quicksort:

Theorem 11.2   When quicksort is called to sort an array containing $n$ distinct elements, the expected number of comparisons performed is at most $2n\ln n + O(n)$.

Proof. Let $T$ be the number of comparisons performed by quicksort when sorting $n$ distinct elements. Using Lemma 11.1 and linearity of expectation, we have:

\begin{align*}
\mathrm{E}[T] &= \sum_{i=0}^{n-1}\left(H_{i+1} + H_{n-i}\right) \\
 &= 2\sum_{i=1}^{n} H_i \\
 &\le 2\sum_{i=1}^{n} H_n \\
 &\le 2n\ln n + O(n) \enspace . \qedhere
\end{align*}

$ \qedsymbol$

Theorem 11.3 describes the case where the elements being sorted are all distinct. When the input array, $a$, contains duplicate elements, the expected running time of quicksort is no worse, and can be even better; any time a duplicate element $x$ is chosen as a pivot, all occurrences of $x$ get grouped together and do not take part in either of the two subproblems.

Theorem 11.3   The $\mathrm{quick\_sort}(a,c)$ method runs in $O(n\log n)$ expected time and the expected number of comparisons it performs is at most $2n\ln n + O(n)$.


11.1.3 Heap-sort

The heap-sort algorithm is another in-place sorting algorithm. Heap-sort uses the binary heaps discussed in Section 10.1. Recall that the BinaryHeap data structure represents a heap using a single array. The heap-sort algorithm converts the input array $a$ into a heap and then repeatedly extracts the minimum value.

More specifically, a heap stores $n$ elements in an array, $a$, at array locations $a[0],\ldots,a[n-1]$, with the smallest value stored at the root, $a[0]$. After transforming $a$ into a BinaryHeap, the heap-sort algorithm repeatedly swaps $a[0]$ and $a[n-1]$, decrements $n$, and calls $\mathrm{trickle\_down}(0)$ so that $a[0],\ldots,a[n-2]$ once again form a valid heap. When this process ends (because $n=0$), the elements of $a$ are stored in decreasing order, so $a$ is reversed to obtain the final sorted order.$^{11.1}$ Figure 11.4 shows an example of the execution of $\mathrm{heap\_sort}(a,c)$.

Figure 11.4: A snapshot of the execution of $\mathrm{heap\_sort}(a,c)$. The shaded part of the array is already sorted. The unshaded part is a BinaryHeap. During the next iteration, element $5$ will be placed into array location $8$.
\includegraphics[scale=0.90909]{figs-python/heapsort}


\begin{leftbar}
\begin{flushleft}
\hspace*{1em} $\mathrm{heap\_sort}(a, c)$\\
\hspace*{2em} $h \gets \mathrm{BinaryHeap}(a, c)$\\
\hspace*{2em} \textbf{while} $h.n > 1$\\
\hspace*{3em} $h.n \gets h.n - 1$\\
\hspace*{3em} swap $h.a[h.n]$ and $h.a[0]$\\
\hspace*{3em} $h.\mathrm{trickle\_down}(0)$\\
\hspace*{2em} $a.\mathrm{reverse}()$\\
\end{flushleft}\end{leftbar}

A key subroutine in heap-sort is the constructor for turning an unsorted array $a$ into a heap. It would be easy to do this in $O(n\log n)$ time by repeatedly calling the BinaryHeap $\mathrm{add}(x)$ method, but we can do better by using a bottom-up algorithm. Recall that, in a binary heap, the children of $a[i]$ are stored at positions $a[2i+1]$ and $a[2i+2]$. This implies that the elements $a[\lfloor n/2\rfloor],\ldots,a[n-1]$ have no children. In other words, each of $a[\lfloor n/2\rfloor],\ldots,a[n-1]$ is a sub-heap of size 1. Now, working backwards, we can call $\mathrm{trickle\_down}(i)$ for each $i\in\{\lfloor n/2\rfloor-1,\ldots,0\}$. This works because, by the time we call $\mathrm{trickle\_down}(i)$, each of the two children of $a[i]$ is the root of a sub-heap, so calling $\mathrm{trickle\_down}(i)$ makes $a[i]$ into the root of its own sub-heap. A Python sketch of this construction is given below.

The interesting thing about this bottom-up strategy is that it is more efficient than calling $\mathrm{add}(x)$ $n$ times. To see this, notice that, for $n/2$ elements, we do no work at all; for $n/4$ elements, we call $\mathrm{trickle\_down}(i)$ on a sub-heap rooted at $a[i]$ whose height is one; for $n/8$ elements, we call $\mathrm{trickle\_down}(i)$ on a sub-heap whose height is two; and so on. Since the work done by $\mathrm{trickle\_down}(i)$ is proportional to the height of the sub-heap rooted at $a[i]$, this means that the total work done is at most

$\displaystyle \sum_{i=1}^{\log n} O((i-1)n/2^{i})
\le \sum_{i=1}^{\infty} O(in/2^{i})
= O\!\left(n\sum_{i=1}^{\infty} i/2^{i}\right)
= O(2n)
= O(n) \enspace .
$

The second-last equality follows by recognizing that the sum $\sum_{i=1}^{\infty} i/2^{i}$ is equal, by the definition of expected value, to the expected number of times we toss a coin up to and including the first time it comes up heads, and applying Lemma 4.2.
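For completeness, the value of this sum can also be verified directly by a standard manipulation, independent of the coin-tossing argument:
\begin{align*}
S &= \sum_{i=1}^{\infty} \frac{i}{2^i}
   = \frac{1}{2} + \frac{2}{4} + \frac{3}{8} + \cdots \\
2S &= 1 + \frac{2}{2} + \frac{3}{4} + \frac{4}{8} + \cdots \\
2S - S &= 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 2 \enspace ,
\end{align*}
so $S=2$.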

The following theorem describes the performance of $\mathrm{heap\_sort}(a,c)$.

Theorem 11.4   The $\mathrm{heap\_sort}(a,c)$ method runs in $O(n\log n)$ time and performs at most $2n\log n + O(n)$ comparisons.

Proof. The algorithm runs in three steps: (1) transforming $a$ into a heap, (2) repeatedly extracting the minimum element from $a$, and (3) reversing the elements in $a$. We have just argued that step 1 takes $O(n)$ time and performs $O(n)$ comparisons. Step 3 takes $O(n)$ time and performs no comparisons. Step 2 performs $n$ calls to $\mathrm{trickle\_down}(0)$. The $i$th such call operates on a heap of size $n-i$ and performs at most $2\log(n-i)$ comparisons. Summing this over $i$ gives

$\displaystyle \sum_{i=0}^{n-1} 2\log(n-i) \le \sum_{i=0}^{n-1} 2\log n = 2n\log n \enspace .
$

Adding the number of comparisons performed in each of the three steps completes the proof. $ \qedsymbol$

11.1.4 A Lower-Bound for Comparison-Based Sorting

We have now seen three comparison-based sorting algorithms that each run in $O(n\log n)$ time. By now, we should be wondering if faster algorithms exist. The short answer to this question is no. If the only operations allowed on the elements of $a$ are comparisons, then no algorithm can avoid doing roughly $n\log n$ comparisons. This is not difficult to prove, but requires a little imagination. Ultimately, it follows from the fact that

$\displaystyle \log(n!)
= \log n + \log(n-1) + \cdots + \log 1
= n\log n - O(n)
\enspace .
$

(Proving this fact is left as Exercise 11.10.)

We will start by focusing our attention on deterministic algorithms like merge-sort and heap-sort and on a particular fixed value of $n$. Imagine such an algorithm is being used to sort $n$ distinct elements. The key to proving the lower bound is to observe that, for a deterministic algorithm with a fixed value of $n$, the first pair of elements that are compared is always the same. For example, in $\mathrm{heap\_sort}(a,c)$, when $n$ is even, the first call to $\mathrm{trickle\_down}(i)$ is with $i=n/2-1$ and the first comparison is between elements $a[n/2-1]$ and $a[n-1]$.

Since all input elements are distinct, this first comparison has only two possible outcomes. The second comparison done by the algorithm may depend on the outcome of the first comparison. The third comparison may depend on the results of the first two, and so on. In this way, any deterministic comparison-based sorting algorithm can be viewed as a rooted binary comparison tree. Each internal node, $u$, of this tree is labelled with a pair of indices $u.i$ and $u.j$. If $a[u.i]<a[u.j]$, the algorithm proceeds to the left subtree; otherwise it proceeds to the right subtree. Each leaf $w$ of this tree is labelled with a permutation $w.p[0],\ldots,w.p[n-1]$ of $0,\ldots,n-1$. This permutation is the one required to sort $a$ if the comparison tree reaches this leaf. That is,

$\displaystyle a[w.p[0]] \le a[w.p[1]] \le \cdots \le a[w.p[n-1]] \enspace .
$

An example of a comparison tree for an array of size $n=3$ is shown in Figure 11.5.
Figure 11.5: A comparison tree for sorting an array $a[0],a[1],a[2]$ of length $n=3$.
\includegraphics[width=\textwidth ]{figs-python/comparison-tree}

The comparison tree for a sorting algorithm tells us everything about the algorithm. It tells us exactly the sequence of comparisons that will be performed for any input array, $a$, having $n$ distinct elements, and it tells us how the algorithm will reorder $a$ in order to sort it. Consequently, the comparison tree must have at least $n!$ leaves; if not, then there are two distinct permutations that lead to the same leaf, so the algorithm does not correctly sort at least one of these permutations.

For example, the comparison tree in Figure 11.6 has only $4<3!=6$ leaves. Inspecting this tree, we see that the two input arrays $3,1,2$ and $3,2,1$ both lead to the rightmost leaf. On the input $3,1,2$, this leaf correctly outputs $a[1]=1,a[2]=2,a[0]=3$. However, on the input $3,2,1$, this node incorrectly outputs $a[1]=2,a[2]=1,a[0]=3$. This discussion leads to the primary lower bound for comparison-based algorithms.

Figure 11.6: A comparison tree that does not correctly sort every input permutation.
\includegraphics[width=\textwidth ]{figs-python/comparison-tree-b}

Theorem 11.5   For any deterministic comparison-based sorting algorithm $\mathcal{A}$ and any integer $n\ge 1$, there exists an input array $a$ of length $n$ such that $\mathcal{A}$ performs at least $\log(n!) = n\log n-O(n)$ comparisons when sorting $a$.

Proof. By the preceding discussion, the comparison tree defined by $\mathcal{A}$ must have at least $n!$ leaves. An easy inductive proof shows that any binary tree with $k$ leaves has a height of at least $\log k$. Therefore, the comparison tree for $\mathcal{A}$ has a leaf, $w$, with a depth of at least $\log(n!)$, and there is an input array $a$ that leads to this leaf. The input array $a$ is an input for which $\mathcal{A}$ does at least $\log(n!)$ comparisons. $\qedsymbol$

Theorem 11.5 deals with deterministic algorithms like merge-sort and heap-sort, but doesn't tell us anything about randomized algorithms like quicksort. Could a randomized algorithm beat the $\log(n!)$ lower bound on the number of comparisons? The answer, again, is no. Again, the way to prove it is to think differently about what a randomized algorithm is.

In the following discussion, we will assume that our comparison trees have been ``cleaned up'' in the following way: Any node that cannot be reached by some input array $a$ is removed. This cleaning up implies that the tree has exactly $n!$ leaves. It has at least $n!$ leaves because, otherwise, it could not sort correctly. It has at most $n!$ leaves because each of the possible $n!$ permutations of $n$ distinct elements follows exactly one root-to-leaf path in the tree.

We can think of a randomized sorting algorithm, $\mathcal{R}$, as a deterministic algorithm that takes two inputs: the input array $a$ that should be sorted and a long sequence $b=b_1,b_2,b_3,\ldots,b_m$ of random real numbers in the range $[0,1]$. The random numbers provide the randomization for the algorithm. When the algorithm wants to toss a coin or make a random choice, it does so by using some element from $b$. For example, to compute the index of the first pivot in quicksort, the algorithm could use the formula $\lfloor n b_1\rfloor$.
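To make this concrete, here is a sketch (ours, not the book's) of quicksort driven by an explicit random tape: all of its random choices are read from the pre-supplied sequence \verb|b|, so fixing \verb|b| fixes the algorithm's entire behaviour:
\begin{verbatim}
import random

def quick_sort_tape(a, c, b):
    # A quicksort whose only source of randomness is the tape b of
    # reals in [0,1); fixing b turns this into a deterministic R(b).
    k = [0]                               # position of the next unused tape entry
    def sort(i, n):
        if n <= 1:
            return
        x = a[i + int(n * b[k[0]])]       # pivot index is floor(n * b_k)
        k[0] += 1
        p, j, q = i - 1, i, i + n
        while j < q:                      # same three-way partition as before
            if c(a[j], x) < 0:
                p += 1; a[j], a[p] = a[p], a[j]; j += 1
            elif c(a[j], x) > 0:
                q -= 1; a[j], a[q] = a[q], a[j]
            else:
                j += 1
        sort(i, p - i + 1)
        sort(q, n - (q - i))
    sort(0, len(a))

# Supplying a fresh random tape recovers the usual randomized quicksort:
# quick_sort_tape(a, compare, [random.random() for _ in range(len(a))])
\end{verbatim}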

Now, notice that if we fix $b$ to some particular sequence $\hat{b}$, then $\mathcal{R}$ becomes a deterministic sorting algorithm, $\mathcal{R}(\hat{b})$, that has an associated comparison tree, $\mathcal{T}(\hat{b})$. Next, notice that if we select $a$ to be a random permutation of $\{1,\ldots,n\}$, then this is equivalent to selecting a random leaf, $w$, from the $n!$ leaves of $\mathcal{T}(\hat{b})$.

Exercise 11.12 asks you to prove that, if we select a random leaf from any binary tree with $k$ leaves, then the expected depth of that leaf is at least $\log k$. Therefore, the expected number of comparisons performed by the (deterministic) algorithm $\mathcal{R}(\hat{b})$ when given an input array containing a random permutation of $\{1,\ldots,n\}$ is at least $\log(n!)$. Finally, notice that this is true for every choice of $\hat{b}$, so it holds even for $\mathcal{R}$. This completes the proof of the lower bound for randomized algorithms.

Theorem 11.6   For any integer $n\ge 1$ and any (deterministic or randomized) comparison-based sorting algorithm $\mathcal{A}$, the expected number of comparisons done by $\mathcal{A}$ when sorting a random permutation of $\{1,\ldots,n\}$ is at least $\log(n!) = n\log n-O(n)$.



Footnotes

11.1 The algorithm could alternatively redefine the $\mathrm{compare}(x,y)$ function so that the heap-sort algorithm stores the elements directly in ascending order.