The problem with random binary search trees is, of course, that they are
not dynamic.  They don't support the add(x) or remove(x) operations
needed to implement the SSet interface.  In this section we describe
a data structure called a Treap that uses Lemma 7.1 to implement
the SSet interface.
A node in a Treap is like a node in a BinarySearchTree in that it has
a data value, x, but it also contains a unique numerical priority, p,
that is assigned at random:
  class TreapNode : public BSTNode<Node, T> {
    friend class Treap<Node,T>;
    int p;  // priority, assigned at random
  };
In addition to being a binary search tree, the nodes in a Treap also
obey the heap property:  at every node u, except the root,
u->parent->p < u->p.  In other words, each node has a priority smaller
than that of its two children.  An example is shown in Figure 7.5.
The heap and binary search tree conditions together ensure that, once
the key (x) and priority (p) for each node are defined, the shape of
the Treap is completely determined.  The heap property tells us that
the node with minimum priority has to be the root, r, of the Treap.
The binary search tree property tells us that all nodes with keys smaller
than r->x are stored in the subtree rooted at r->left and all nodes
with keys larger than r->x are stored in the subtree rooted at r->right.
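Because the shape is determined by these two invariants, checking that
a tree is a valid Treap amounts to checking both of them.  The
following sketch is not the book's code; the struct and function names
are illustrative:

```cpp
#include <cassert>
#include <climits>

// Minimal illustrative node; the book's TreapNode carries the same fields.
struct TNode {
  int x;                                   // key (binary search tree order)
  int p;                                   // priority (heap order)
  TNode *left = nullptr, *right = nullptr;
};

// Binary search tree property: every key lies strictly between (lo, hi).
bool isBST(const TNode *u, long lo = LONG_MIN, long hi = LONG_MAX) {
  if (u == nullptr) return true;
  if (u->x <= lo || u->x >= hi) return false;
  return isBST(u->left, lo, u->x) && isBST(u->right, u->x, hi);
}

// Heap property: each node's priority is smaller than its children's.
bool isHeap(const TNode *u) {
  if (u == nullptr) return true;
  if (u->left != nullptr && u->left->p < u->p) return false;
  if (u->right != nullptr && u->right->p < u->p) return false;
  return isHeap(u->left) && isHeap(u->right);
}
```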
The important point about the priority values in a Treap is that they
are unique and assigned at random.  Because of this, there are two
equivalent ways we can think about a Treap.  As defined above, a Treap
obeys the heap and binary search tree properties.  Alternatively, we
can think of a Treap as a BinarySearchTree whose nodes were added in
increasing order of priority.  For example, the Treap in Figure 7.5
can be obtained by adding its (x,p) values to a BinarySearchTree in
increasing order of priority.  Since the priorities are chosen
randomly, this is equivalent to adding a random permutation of the
keys to a BinarySearchTree.  But this means that the shape of a Treap
is identical to that of a random binary search tree, so Lemma 7.1
applies to Treaps as well; restated in those terms, it becomes
Lemma 7.2.
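The equivalence can be demonstrated directly: if we sort (priority,
key) pairs by priority and add the keys to an ordinary binary search
tree with no rotations at all, the resulting tree already satisfies
the heap property, because every node is inserted below all earlier
(smaller-priority) nodes on its search path.  A sketch, with
illustrative names:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Illustrative node; not the book's class.
struct PNode {
  int x, p;
  PNode *left = nullptr, *right = nullptr;
};

// Plain (rotation-free) BST insertion of a new leaf.
void bstInsert(PNode *&root, PNode *u) {
  if (root == nullptr) { root = u; return; }
  bstInsert(u->x < root->x ? root->left : root->right, u);
}

// Verify the heap property on priorities.
bool heapOrdered(const PNode *u) {
  if (u == nullptr) return true;
  if (u->left != nullptr && u->left->p < u->p) return false;
  if (u->right != nullptr && u->right->p < u->p) return false;
  return heapOrdered(u->left) && heapOrdered(u->right);
}

// Insert keys in increasing order of priority; the result is a treap.
PNode *buildByPriority(std::vector<std::pair<int,int>> xs) {  // (p, x) pairs
  std::sort(xs.begin(), xs.end());   // sort by priority
  PNode *root = nullptr;
  for (auto &e : xs) bstInsert(root, new PNode{e.second, e.first});
  return root;
}
```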
Lemma 7.2 tells us that Treaps can implement the find(x) operation
efficiently.  However, the real benefit of a Treap is that it can
support the add(x) and remove(x) operations.  To do this, it needs to
perform rotations in order to maintain the heap property.  Refer to
Figure 7.6.
A rotation in a binary search tree is a local modification that takes
a parent u of a node w and makes w the parent of u, while preserving
the binary search tree property.  Rotations come in two flavours, left
and right, depending on whether w is a right or left child of u,
respectively.  The code that implements this has to handle these two
possibilities and be careful of a boundary case (when u is the root),
so the actual code is a little longer than Figure 7.6 would lead a
reader to believe:
  void rotateLeft(Node *u) {
    Node *w = u->right;   // w moves up to take u's place
    w->parent = u->parent;
    if (w->parent != nil) {
      if (w->parent->left == u) {
        w->parent->left = w;
      } else {
        w->parent->right = w;
      }
    }
    u->right = w->left;   // w's left subtree becomes u's right subtree
    if (u->right != nil) {
      u->right->parent = u;
    }
    u->parent = w;        // u becomes w's left child
    w->left = u;
    if (u == r) { r = w; r->parent = nil; }
  }
  void rotateRight(Node *u) {  // symmetric to rotateLeft
    Node *w = u->left;         // w moves up to take u's place
    w->parent = u->parent;
    if (w->parent != nil) {
      if (w->parent->left == u) {
        w->parent->left = w;
      } else {
        w->parent->right = w;
      }
    }
    u->left = w->right;
    if (u->left != nil) {
      u->left->parent = u;
    }
    u->parent = w;
    w->right = u;
    if (u == r) { r = w; r->parent = nil; }
  }
In terms of the Treap data structure, the most important property of a
rotation is that the depth of w decreases by one while the depth of u
increases by one.
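These two facts, that the in-order key sequence is preserved while w
moves up one level and u moves down one, can be checked on a tiny
example.  The sketch below is a standalone illustration, not the
book's code; it omits parent pointers, so the rotation returns the new
subtree root:

```cpp
#include <cassert>
#include <vector>

struct RNode {
  int x;
  RNode *left = nullptr, *right = nullptr;
};

// Left rotation at u: u's right child w becomes the subtree root.
RNode *rotateLeftAt(RNode *u) {
  RNode *w = u->right;
  u->right = w->left;   // w's left subtree moves under u
  w->left = u;          // u moves down, w moves up
  return w;             // new subtree root
}

// Collect keys in in-order (sorted) sequence.
void inorder(const RNode *u, std::vector<int> &out) {
  if (u == nullptr) return;
  inorder(u->left, out);
  out.push_back(u->x);
  inorder(u->right, out);
}

// Depth of node t below root, or -1 if absent.
int depthOf(const RNode *root, const RNode *t, int d = 0) {
  if (root == nullptr) return -1;
  if (root == t) return d;
  int l = depthOf(root->left, t, d + 1);
  return l >= 0 ? l : depthOf(root->right, t, d + 1);
}
```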
Using rotations, we can implement the add(x) operation as follows:
We create a new node, u, set u->x = x, and pick a random value for
u->p.  Next we add u using the usual add(x) algorithm for a
BinarySearchTree, so that u is now a leaf of the Treap.  At this
point, our Treap satisfies the binary search tree property, but not
necessarily the heap property.  In particular, it may be the case that
u->parent->p > u->p.  If this is the case, then we perform a rotation
at node w = u->parent so that u becomes the parent of w.  If u
continues to violate the heap property, we will have to repeat this,
decreasing u's depth by one every time, until u either becomes the
root or satisfies u->parent->p < u->p.
  bool add(T x) {
    Node *u = new Node;
    u->x = x;
    u->p = rand();  // random priority
    if (BinarySearchTree<Node,T>::add(u)) {
      bubbleUp(u);  // restore the heap property
      return true;
    }
    return false;
  }
  void bubbleUp(Node *u) {
    // rotate u upward while its parent has a larger priority
    while (u->parent != nil && u->parent->p > u->p) {
      if (u->parent->right == u) {
        rotateLeft(u->parent);
      } else {
        rotateRight(u->parent);
      }
    }
    if (u->parent == nil) {
      r = u;
    }
  }
An example of an add(x) operation is shown in Figure 7.7.
The running time of the add(x) operation is given by the time it takes
to follow the search path for x plus the number of rotations performed
to move the newly-added node, u, up to its correct location in the
Treap.  By Lemma 7.2, the expected length of the search path is at
most 2 ln n + O(1).  Furthermore, each rotation decreases the depth of
u.  This stops if u becomes the root, so the expected number of
rotations cannot exceed the expected length of the search path.
Therefore, the expected running time of the add(x) operation in a
Treap is O(log n).  (Exercise 7.3 asks you to show that the expected
number of rotations performed during an insertion is actually
only O(1).)
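The O(1) bound on rotations can also be observed empirically.  The
following standalone sketch (illustrative names, not the book's class)
uses a simplified recursive insertion that counts rotations; over many
random insertions the average per insertion should hover around a
small constant, independent of n:

```cpp
#include <cassert>
#include <cstdlib>

struct MNode {
  int x, p;
  MNode *left = nullptr, *right = nullptr;
};

long rotations = 0;  // global counter for the experiment

MNode *rotR(MNode *u) { MNode *w = u->left;  u->left = w->right;  w->right = u; ++rotations; return w; }
MNode *rotL(MNode *u) { MNode *w = u->right; u->right = w->left;  w->left = u;  ++rotations; return w; }

// Recursive treap insertion: BST insert at a leaf, then rotate the
// new node upward while it violates the heap property.
MNode *insert(MNode *t, int x, int p) {
  if (t == nullptr) return new MNode{x, p};
  if (x < t->x) {
    t->left = insert(t->left, x, p);
    if (t->left->p < t->p) t = rotR(t);
  } else {
    t->right = insert(t->right, x, p);
    if (t->right->p < t->p) t = rotL(t);
  }
  return t;
}

// Average rotations per insertion over n insertions with random priorities.
double avgRotations(int n, unsigned seed) {
  std::srand(seed);
  rotations = 0;
  MNode *t = nullptr;
  for (int i = 0; i < n; i++) t = insert(t, i, std::rand());
  return double(rotations) / n;
}
```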
The remove(x) operation in a Treap is the opposite of the add(x)
operation.  We search for the node, u, containing x, perform rotations
to move u downwards until it becomes a leaf, and then splice u from
the Treap.  Notice that, to move u downwards, we can perform either a
left or right rotation at u, which will replace u with u->right or
u->left, respectively.  The choice is made by the first of the
following that apply:

1. If u->left and u->right are both nil, then u is a leaf and no
   rotation is performed.
2. If u->left (or u->right) is nil, then perform a left (or right,
   respectively) rotation at u.
3. If u->left->p < u->right->p (or u->right->p < u->left->p), then
   perform a right rotation (or left rotation, respectively) at u.

These three rules ensure that the Treap doesn't become disconnected
and that the heap property is restored once u is removed:
  bool remove(T x) {
    Node *u = findLast(x);
    if (u != nil && compare(u->x, x) == 0) {
      trickleDown(u);
      splice(u);
      delete u;
      return true;
    }
    return false;
  }
  void trickleDown(Node *u) {
    // rotate u downward, always bringing up the child with the
    // smaller priority, until u becomes a leaf
    while (u->left != nil || u->right != nil) {
      if (u->left == nil) {
        rotateLeft(u);
      } else if (u->right == nil) {
        rotateRight(u);
      } else if (u->left->p < u->right->p) {
        rotateRight(u);
      } else {
        rotateLeft(u);
      }
      if (r == u) {
        r = u->parent;
      }
    }
  }
An example of the remove(x) operation is shown in Figure 7.8.
The trick to analyzing the running time of the remove(x) operation is
to notice that it is the reverse of the add(x) operation.  In
particular, if we were to reinsert x, using the same priority u->p,
then the add(x) operation would do exactly the same number of
rotations and would restore the Treap to exactly the same state it was
in before the remove(x) operation took place.  (Reading from
bottom-to-top, Figure 7.8 illustrates the insertion of the value 9
into a Treap.)  This means that the expected running time of remove(x)
on a Treap of size n is proportional to the expected running time of
the add(x) operation on a Treap of size n-1.  We conclude that the
expected running time of remove(x) is O(log n).
The following theorem summarizes the performance of the Treap data
structure:

Theorem 7.2.  A Treap implements the SSet interface.  A Treap supports
the operations add(x), remove(x), and find(x) in O(log n) expected
time per operation.
It is worth comparing the Treap data structure to the SkiplistSSet
data structure.  Both implement the SSet operations in O(log n)
expected time per operation.  In both data structures, add(x) and
remove(x) involve a search and then a constant number of pointer
changes (see Exercise 7.3 below).  Thus, for both these structures,
the expected length of the search path is the critical value in
assessing their performance.  In a SkiplistSSet, the expected length
of a search path is