8.2 Discussion and Exercises

The term scapegoat tree is due to Galperin and Rivest [28], who define and analyze these trees. However, the same structure was discovered earlier by Andersson [3,5], who called them general balanced trees since they can have any shape as long as their height is small.

Experimenting with the $ \mathtt{ScapegoatTree}$ implementation will reveal that it is often considerably slower than the other $ \mathtt{SSet}$ implementations in this book. This may be somewhat surprising, since the height bound of

$\displaystyle \log_{3/2}\ensuremath{\mathtt{q}} \approx 1.709\log \ensuremath{\mathtt{n}} + O(1)$

is better than the expected length of a search path in a $ \mathtt{Skiplist}$ and not too far from that of a $ \mathtt{Treap}$. The implementation could be optimized by storing the sizes of subtrees explicitly at each node (or by reusing already computed subtree sizes). Even with these optimizations, there will always be sequences of $ \mathtt{add(x)}$ and $ \mathtt{remove(x)}$ operations for which a $ \mathtt{ScapegoatTree}$ takes longer than other $ \mathtt{SSet}$ implementations.
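To make the first of these optimizations concrete, the following sketch shows one way each node might cache the size of its subtree so that the scapegoat search can read it in constant time. It is only an illustration, not the implementation from this chapter: the names $ \mathtt{SizedNode}$, $ \mathtt{SizedScapegoatTree}$ and $ \mathtt{updateSizesUpward}$ are invented here.

    // A sketch of caching subtree sizes at each node.  The names SizedNode,
    // SizedScapegoatTree and updateSizesUpward are illustrative only.
    class SizedNode {
        int x;                         // the key stored at this node
        int size;                      // number of nodes in this node's subtree
        SizedNode left, right, parent;
    }

    class SizedScapegoatTree {
        SizedNode root;

        int size(SizedNode u) {
            return u == null ? 0 : u.size;   // O(1), no subtree traversal needed
        }

        // After add(x) attaches a new leaf, walk back toward the root updating
        // the cached sizes; the scapegoat search can then test the 2/3
        // condition using these cached values.
        void updateSizesUpward(SizedNode u) {
            while (u != null) {
                u.size = 1 + size(u.left) + size(u.right);
                u = u.parent;
            }
        }
    }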

This gap in performance is due to the fact that, unlike the other $ \mathtt{SSet}$ implementations discussed in this book, a $ \mathtt{ScapegoatTree}$ can spend a lot of time restructuring itself. Exercise 8.1 asks you to prove that there are sequences of $ \mathtt{n}$ operations in which a $ \mathtt{ScapegoatTree}$ will spend on the order of $ \ensuremath{\mathtt{n}}\log \ensuremath{\mathtt{n}}$ time in calls to $ \mathtt{rebuild(u)}$. This is in contrast to other $ \mathtt{SSet}$ implementations discussed in this book that only make $ O(\ensuremath{\mathtt{n}})$ structural changes during a sequence of $ \mathtt{n}$ operations. This is, unfortunately, a necessary consequence of the fact that a $ \mathtt{ScapegoatTree}$ does all its restructuring by calls to $ \mathtt{rebuild(u)}$ [15].

Despite their relatively poor performance, there are applications in which a $ \mathtt{ScapegoatTree}$ could be the right choice. This would occur any time there is additional data associated with nodes that cannot be updated in constant time when a rotation is performed, but that can be updated during a $ \mathtt{rebuild(u)}$ operation. In such cases, the $ \mathtt{ScapegoatTree}$ and related structures based on partial rebuilding may work. An example of such an application is outlined in Exercise 8.5.

Exercise 8.1   Show that, if we start with an empty $ \mathtt{ScapegoatTree}$ and call $ \mathtt{add(x)}$ for $ \ensuremath{\mathtt{x}}=1,2,3,\ldots,\ensuremath{\mathtt{n}}$, then the total time spent during calls to $ \mathtt{rebuild(u)}$ is at least $ c\ensuremath{\mathtt{n}}\log \ensuremath{\mathtt{n}}$ for some constant $ c>0$.

Exercise 8.2   The $ \mathtt{ScapegoatTree}$, as described in this chapter, guarantees that the length of the search path does not exceed $ \log_{3/2}\ensuremath{\mathtt{q}}$. Design, analyze, and implement a modified version of $ \mathtt{ScapegoatTree}$ in which the length of the search path does not exceed $ \log_{\ensuremath{\mathtt{b}}} \ensuremath{\mathtt{q}}$, where $ \mathtt{b}$ is a parameter with $ 1<\ensuremath{\mathtt{b}}<2$.

What do your analysis and/or experiments say about the amortized cost of $ \mathtt{find(x)}$, $ \mathtt{add(x)}$ and $ \mathtt{remove(x)}$ as a function of $ \mathtt{b}$? One possible starting point for the generalized scapegoat test is sketched below.
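The following fragment is only a hedged sketch of how the scapegoat condition might be generalized to the parameter $ \mathtt{b}$. The names $ \mathtt{BScapegoatTree}$ and $ \mathtt{isScapegoat}$ are invented here, and the condition $ \mathtt{size(child)} > \mathtt{size(w)}/\mathtt{b}$ is assumed as the natural analogue of the $ 2/3$ condition used in this chapter (with $ \ensuremath{\mathtt{b}}=3/2$ it reduces to exactly that condition).

    // A possible generalization of the scapegoat test to a parameter b with
    // 1 < b < 2.  BScapegoatTree and isScapegoat are names invented for this
    // sketch; with b = 3/2 the test reduces to the 2/3 condition of Section 8.1.
    class BScapegoatTree {
        double b;   // the height is kept at most log_b(q)
        int q;      // the overestimate of n, maintained as in Section 8.1

        BScapegoatTree(double b) {
            this.b = b;
        }

        // The depth beyond which a search path is considered too long.
        int maxDepth() {
            return (int) Math.floor(Math.log(q) / Math.log(b));
        }

        // child is the child of w on the path to the newly added node; w is a
        // candidate scapegoat when child's subtree contains more than a 1/b
        // fraction of the nodes in w's subtree.
        boolean isScapegoat(int childSize, int wSize) {
            return childSize > wSize / b;
        }
    }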

Exercise 8.3   Analyze and implement a $ \mathtt{WeightBalancedTree}$. This is a tree in which each node $ \mathtt{u}$, except the root, maintains the balance invariant that $ \ensuremath{\mathtt{size(u)}} \le (2/3)\ensuremath{\mathtt{size(u.parent)}}$. The $ \mathtt{add(x)}$ and $ \mathtt{remove(x)}$ operations are identical to the standard $ \mathtt{BinarySearchTree}$ operations, except that any time the balance invariant is violated at a node $ \mathtt{u}$, the subtree rooted at $ \mathtt{u.parent}$ is rebuilt. (A sketch of this rebuilding trigger follows the exercise.)

Your analysis should show that operations on a $ \mathtt{WeightBalancedTree}$ run in $ O(\log \ensuremath{\mathtt{n}})$ amortized time.
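The sketch below is not a full solution, only one way the rebuilding trigger might look; $ \mathtt{WBNode}$, $ \mathtt{checkOnPath}$ and the stubbed $ \mathtt{rebuild(u)}$ are invented names, with $ \mathtt{rebuild(u)}$ intended to behave like the one from Section 8.1.

    // A sketch of the weight-balance check from Exercise 8.3.  WBNode,
    // WeightBalancedTree and checkOnPath are illustrative names only.
    class WBNode {
        int x, size;
        WBNode left, right, parent;
    }

    class WeightBalancedTree {
        WBNode root;

        int size(WBNode u) { return u == null ? 0 : u.size; }

        // Called after an add(x) or remove(x), starting from the deepest node
        // whose subtree was affected.  At the first node u that violates
        // size(u) <= (2/3)*size(u.parent), rebuild the subtree at u.parent.
        void checkOnPath(WBNode u) {
            while (u != null && u.parent != null) {
                if (3 * size(u) > 2 * size(u.parent)) {
                    rebuild(u.parent);
                    return;
                }
                u = u.parent;
            }
        }

        void rebuild(WBNode u) {
            // rebuild u's subtree into a perfectly balanced binary search tree,
            // as rebuild(u) does in Section 8.1 (omitted in this sketch)
        }
    }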

Exercise 8.4   Analyze and implement a $ \mathtt{CountdownTree}$. In a $ \mathtt{CountdownTree}$ each node $ \mathtt{u}$ keeps a timer $ \mathtt{u.t}$. The $ \mathtt{add(x)}$ and $ \mathtt{remove(x)}$ operations are exactly the same as in a standard $ \mathtt{BinarySearchTree}$ except that, whenever one of these operations affects $ \mathtt{u}$'s subtree, $ \mathtt{u.t}$ is decremented. When $ \ensuremath{\mathtt{u.t}}=0$, the entire subtree rooted at $ \mathtt{u}$ is rebuilt into a perfectly balanced binary search tree. When a node $ \mathtt{u}$ is involved in a rebuilding operation (either because $ \mathtt{u}$ is rebuilt or one of $ \mathtt{u}$'s ancestors is rebuilt), $ \mathtt{u.t}$ is reset to $ \ensuremath{\mathtt{size(u)}}/3$. (A sketch of this timer bookkeeping follows the exercise.)

Your analysis should show that operations on a countdown tree run in $ O(\log \ensuremath{\mathtt{n}})$ amortized time. (Hint: First show that each node $ \mathtt{u}$ satisfies some version of a balance invariant.)
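Again not a full solution, the following sketch shows one way the timers might be maintained; $ \mathtt{CDNode}$, $ \mathtt{decrementOnPath}$ and the stubbed $ \mathtt{rebuild(u)}$ are invented names.

    // A sketch of the timer bookkeeping in Exercise 8.4.  CDNode,
    // CountdownTree and decrementOnPath are names invented for this sketch.
    class CDNode {
        int x, size, t;              // key, subtree size, countdown timer
        CDNode left, right, parent;
    }

    class CountdownTree {
        // Called for each node u whose subtree is affected by an add(x) or
        // remove(x): decrement u's timer and rebuild u's subtree when the
        // timer runs out (<= 0 is used in case a reset value of size(u)/3
        // rounds down to 0).
        void decrementOnPath(CDNode u) {
            while (u != null) {
                if (--u.t <= 0) {
                    rebuild(u);
                }
                u = u.parent;
            }
        }

        void rebuild(CDNode u) {
            // rebuild u's subtree into a perfectly balanced binary search tree
            // and reset w.t = size(w)/3 for every node w in the rebuilt
            // subtree (omitted in this sketch)
        }
    }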

Exercise 8.5   Design and implement a $ \mathtt{Sequence}$ data structure that maintains a sequence (list) of elements. It supports adding a new element after a given element in the sequence, removing a given element from the sequence, and an operation $ \mathtt{testBefore(e1,e2)}$ that determines whether $ \mathtt{e1}$ occurs before $ \mathtt{e2}$ in the sequence. The first two operations should run in $ O(\log \ensuremath{\mathtt{n}})$ amortized time. The third operation should run in constant time.

The $ \mathtt{Sequence}$ data structure can be implemented by storing the elements in something like a $ \mathtt{ScapegoatTree}$, in the same order that they occur in the sequence. To implement $ \mathtt{testBefore(e1,e2)}$ in constant time, each element $ \mathtt{e}$ is labelled with an integer that encodes the path from the root to $ \mathtt{e}$. In this way, $ \mathtt{testBefore(e1,e2)}$ can be implemented just by comparing the labels of $ \mathtt{e1}$ and $ \mathtt{e2}$.
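One way to realize such a labelling, sketched below, is an interval scheme that is equivalent to encoding the left/right path from the root: a node receives the midpoint of its interval, its left child recurses on the lower half and its right child on the upper half, so labels increase in sequence order. The names $ \mathtt{SeqNode}$ and $ \mathtt{relabel}$ are invented for this sketch, which also assumes the tree is shallow enough (as a $ \mathtt{ScapegoatTree}$ is) for $ \mathtt{int}$ labels to suffice.

    // A sketch of path-encoding labels for Exercise 8.5.  SeqNode, Sequence
    // and relabel are names invented for this sketch.
    class SeqNode {
        Object e;                    // the sequence element at this node
        int label;                   // encodes the root-to-node path
        SeqNode left, right;
    }

    class Sequence {
        SeqNode root;

        // Assign labels so that an in-order traversal (i.e., sequence order)
        // sees strictly increasing labels: each node gets the midpoint of its
        // interval and its children recurse on the two halves.  After a
        // rebuild, the labels in the rebuilt subtree must be recomputed.
        void relabel(SeqNode u, int lo, int hi) {
            if (u == null) return;
            int mid = lo + (hi - lo) / 2;
            u.label = mid;
            relabel(u.left, lo, mid - 1);
            relabel(u.right, mid + 1, hi);
        }

        // testBefore(e1,e2) is a single comparison of the two labels.
        boolean testBefore(SeqNode e1, SeqNode e2) {
            return e1.label < e2.label;
        }
    }

For example, calling $ \mathtt{relabel(root, 0, Integer.MAX\_VALUE)}$ labels the entire tree.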
