# 9. Red-Black Trees

In this chapter, we present red-black trees, a version of binary search
trees with logarithmic height. Red-black trees are one of the most
widely used data structures. They appear as the primary search structure
in many library implementations, including the Java Collections Framework
and several implementations of the C++ Standard Template Library. They
are also used within the Linux operating system kernel. There are
several reasons for the popularity of red-black trees:

- A red-black tree storing `n` values has height at most `2 log n`.
- The `add(x)` and `remove(x)` operations on a red-black tree run
  in `O(log n)` worst-case time.
- The amortized number of rotations performed during an `add(x)`
  or `remove(x)` operation is constant.
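To make the first property concrete, here is a minimal sketch (not the book's implementation; the class and field names are assumptions) of the representation such a tree typically uses: each node stores one extra bit of information, its colour. A small hand-built red-black tree is then checked against the `2 log n` height bound.

```java
// A sketch of a red-black tree's node representation (names assumed,
// not from the book): each node carries a colour, RED or BLACK.
public class RBSketch {
    static final byte RED = 0, BLACK = 1; // assumed colour encoding

    static class Node {
        int x;            // the value stored at this node
        byte colour;      // RED or BLACK
        Node left, right;
        Node(int x, byte colour) { this.x = x; this.colour = colour; }
    }

    // Height of a subtree: number of nodes on the longest
    // root-to-leaf path (0 for an empty subtree).
    static int height(Node u) {
        if (u == null) return 0;
        return 1 + Math.max(height(u.left), height(u.right));
    }

    public static void main(String[] args) {
        // A hand-built, valid red-black tree on n = 5 values:
        //        2(B)
        //       /    \
        //     1(B)   4(B)
        //            /  \
        //          3(R) 5(R)
        Node root = new Node(2, BLACK);
        root.left = new Node(1, BLACK);
        root.right = new Node(4, BLACK);
        root.right.left = new Node(3, RED);
        root.right.right = new Node(5, RED);

        int n = 5;
        int h = height(root);
        // The claimed bound: h is at most 2*log2(n) (about 4.64 here).
        System.out.println(h <= 2 * Math.log(n) / Math.log(2)); // prints true
    }
}
```

The colour bit is the entire space overhead of the structure; the chapter's work goes into the rules for maintaining colours so that the height bound above always holds.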

The first two of these properties already put red-black trees
ahead of skiplists, treaps, and scapegoat trees.
Skiplists and treaps rely on randomization and their `O(log n)`
running times are only expected. Scapegoat trees have a guaranteed
`O(log n)` bound on their height, but `add(x)`
and `remove(x)` only run in `O(log n)`
amortized time. The third property is just icing on the cake. It
tells us that the time needed to add or remove an element `x`
is dwarfed by the time it takes to find
`x`.^{9.1}
However, the nice properties of red-black trees come with a price:
implementation complexity. Maintaining a bound of `2 log n` on the
height is not easy. It requires a careful analysis of a number of cases.
We must ensure that the implementation does exactly the right
thing in each case. One misplaced rotation or change of colour produces
a bug that can be very difficult to understand and track down.

Rather than jumping directly into the implementation of red-black trees,
we will first provide some background on a related data structure:
2-4 trees. This will give some insight into how red-black trees were
discovered and why efficiently maintaining them is even possible.

#### Footnotes

- ^{9.1} Note that skiplists and
  treaps also have this property in the expected sense. See
  Exercises 4.6 and 7.5.

opendatastructures.org