Sorting is probably the fundamental algorithmic problem in computer science, and has a long history. Knuth [41] attributes the merge-sort algorithm to von Neumann (1945). Quicksort is due to Hoare [33]. The original heap-sort algorithm is due to Williams [65], but the version presented here (in which the heap is constructed bottom-up in O(n) time) is due to Floyd [23]. Lower bounds for comparison-based sorting appear to be folklore. The following table summarizes the performance of these comparison-based algorithms:
            | comparisons                    | in-place
Merge-sort  | n log n (worst-case)           | No
Quicksort   | 1.38 n log n + O(n) (expected) | Yes
Heap-sort   | 2 n log n + O(n) (worst-case)  | Yes
Each of these comparison-based algorithms has advantages and disadvantages. Merge-sort does the fewest comparisons and does not rely on randomization. Unfortunately, it uses an auxiliary array during its merge phase. Allocating this array can be expensive and is a potential point of failure if memory is limited. Quicksort is an in-place algorithm and is a close second in terms of the number of comparisons, but it is randomized, so its running time is not always guaranteed. Heap-sort does the most comparisons, but it is in-place and deterministic.
There is one setting in which merge-sort is a clear winner; this occurs when sorting a linked list. In this case, the auxiliary array is not needed; two sorted linked lists are very easily merged into a single sorted linked list by pointer manipulations.
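The pointer-based merge mentioned above can be sketched as follows (the `Node` class and `merge` function here are illustrative names, not from the text); note that no auxiliary array is allocated, since the existing nodes are simply relinked:

```python
class Node:
    def __init__(self, x, next=None):
        self.x = x
        self.next = next

def merge(a, b):
    """Merge two sorted linked lists into one sorted list by
    relinking the existing nodes; no auxiliary array is needed."""
    dummy = Node(None)   # placeholder head that simplifies pointer handling
    tail = dummy
    while a is not None and b is not None:
        if a.x <= b.x:                 # <= keeps the merge stable
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a if a is not None else b   # append the leftover run
    return dummy.next
```

A merge-sort for linked lists repeatedly applies this merge to runs of increasing length, so the whole sort runs without any extra array.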
The counting-sort and radix-sort algorithms described here are due to Seward [59, Section 2.4.6]. However, variants of radix-sort have been used since the 1920s to sort punch cards using punched-card sorting machines. These machines can sort a stack of cards into two piles based on the existence (or not) of a hole in a specific location on the card. Repeating this process for different hole locations gives an implementation of radix-sort.
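The punched-card process corresponds to a binary least-significant-digit radix sort: partition on one bit per pass, keeping each pile in its original order. A minimal sketch (the function name and the choice of 32-bit non-negative keys are assumptions for illustration):

```python
def radix_sort(a, w=32):
    """Sort non-negative w-bit integers by repeatedly splitting them
    into two 'piles' on each bit, least-significant bit first -- the
    same process a punched-card sorter performs one hole at a time."""
    for bit in range(w):
        zeros = [x for x in a if not (x >> bit) & 1]  # pile: no hole at this position
        ones  = [x for x in a if (x >> bit) & 1]      # pile: hole at this position
        a = zeros + ones    # stable: order within each pile is preserved
    return a
```

Stability of each pass is what makes the method correct: after processing bit i, the list is sorted on the low i+1 bits.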
Extend your version of radix-sort so that it can handle signed integer values, and implement a version of radix-sort that sorts arrays of floating-point values.
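One standard approach to the signed-integer part of this exercise (a sketch under assumed w-bit two's-complement inputs, not the book's solution) is to add a bias of 2^(w-1) so every value becomes a non-negative w-bit key, radix-sort the keys, then subtract the bias; a similar bit-level transformation handles floating-point values:

```python
def radix_sort_signed(a, w=32):
    """Sort signed w-bit integers by biasing them into the
    non-negative range, radix-sorting the keys, and un-biasing."""
    bias = 1 << (w - 1)
    keys = [x + bias for x in a]       # negatives now map below non-negatives
    for bit in range(w):               # LSD binary radix sort on the keys
        keys = ([k for k in keys if not (k >> bit) & 1] +
                [k for k in keys if (k >> bit) & 1])
    return [k - bias for k in keys]
```

Adding the bias is equivalent to flipping the sign bit of the two's-complement representation, after which unsigned bitwise order agrees with signed numeric order.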