14.2 B-Trees

In this section, we discuss a generalization of binary trees, called $B$-trees, which is efficient in the external memory model. Alternatively, $B$-trees can be viewed as the natural generalization of 2-4 trees described in Section 9.1. (A 2-4 tree is a special case of a $B$-tree that we get by setting $B=2$.)

For any integer $B\ge 2$, a $B$-tree is a tree in which all of the leaves have the same depth and every non-root internal node, $u$, has at least $B$ children and at most $2B$ children. The children of $u$ are stored in an array, $\mathit{u.children}$. The required number of children is relaxed at the root, which can have anywhere between 2 and $2B$ children.

If the height of a $B$-tree is $h$, then it follows that the number, $\ell$, of leaves in the $B$-tree satisfies

$\displaystyle 2B^{h-1} \le \ell \le (2B)^{h} \enspace .
$

Taking the logarithm of the first inequality and rearranging terms yields:

$\displaystyle h$ $\displaystyle \le \frac{\log \ell-1}{\log B} + 1$    
  $\displaystyle \le \frac{\log \ell}{\log B} + 1$    
  $\displaystyle = \log_B \ell + 1 \enspace .$    

That is, the height of a $B$-tree is proportional to the base-$B$ logarithm of the number of leaves.
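
For example, if $B=256$ and the tree has $\ell=2^{30}$ leaves (roughly one billion), then

$\displaystyle \log_B \ell + 1 = \log_{256} 2^{30} + 1 = \frac{30}{8} + 1 = 4.75 \enspace ,
$

so the height is at most 4, and a search passes through at most five nodes on its way from the root to a leaf.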

Each node, $u$, in a $B$-tree stores an array of keys $\mathit{u.keys}[0],\ldots,\mathit{u.keys}[2B-1]$. If $u$ is an internal node with $k$ children, then the number of keys stored at $u$ is exactly $k-1$ and these are stored in $\mathit{u.keys}[0],\ldots,\mathit{u.keys}[k-2]$. The remaining $2B-k+1$ array entries in $\mathit{u.keys}$ are set to $\mathit{nil}$. If $u$ is a non-root leaf node, then $u$ contains between $B-1$ and $2B-1$ keys. The keys in a $B$-tree respect an order similar to the keys in a binary search tree. For any node, $u$, that stores $k-1$ keys,

$\displaystyle \mathit{u.keys}[0] < \mathit{u.keys}[1] < \cdots < \mathit{u.keys}[k-2] \enspace .
$

If $u$ is an internal node, then for every $i\in\{0,\ldots,k-2\}$, $\mathit{u.keys}[i]$ is larger than every key stored in the subtree rooted at $\mathit{u.children}[i]$ but smaller than every key stored in the subtree rooted at $\mathit{u.children}[i+1]$. Informally,

$\displaystyle \mathit{u.children}[i] < \mathit{u.keys}[i] < \mathit{u.children}[i+1] \enspace .
$

An example of a $B$-tree with $B=2$ is shown in Figure 14.2.

Figure 14.2: A $B$-tree with $B=2$.
\includegraphics[width=\textwidth ]{figs-python/btree-1}

Note that the data stored in a $B$-tree node has size $O(B)$. Therefore, in an external memory setting, the value of $B$ in a $B$-tree is chosen so that a node fits into a single external memory block. In this way, the time it takes to perform a $B$-tree operation in the external memory model is proportional to the number of nodes that are accessed (read or written) by the operation.

For example, if the keys are 4-byte integers and the node indices are also 4 bytes, then setting $B=256$ means that each node stores

$\displaystyle (4+4)\times 2B = 8\times 512 = 4096
$

bytes of data. This would be a perfect value of $B$ for the hard disk or solid state drive discussed in the introduction to this chapter, which have a block size of $4096$ bytes.

The BTree class, which implements a $B$-tree, stores a BlockStore, $\mathit{bs}$, that stores BTree nodes, as well as the index, $\mathit{ri}$, of the root node. As usual, an integer, $\mathit{n}$, is used to keep track of the number of items in the data structure:
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} [pseudocode: $\mathrm{initialize}()$ creates the BlockStore, $\mathit{bs}$, allocates a root node whose index is stored in $\mathit{ri}$, and sets $\mathit{n} \gets 0$]
\end{flushleft}\end{leftbar}
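
To make this concrete, the following Python sketch shows one possible node layout and initialization. The BlockStore here is a toy in-memory stand-in for an external-memory block store, and the method names ($\mathrm{place\_block}$, $\mathrm{read\_block}$, $\mathrm{write\_block}$) are illustrative assumptions of the sketch rather than the exact API.

\begin{verbatim}
class BlockStore(object):
    """A toy in-memory stand-in for an external-memory block store."""
    def __init__(self):
        self.blocks = []

    def place_block(self, block):
        self.blocks.append(block)       # pretend to write a new block
        return len(self.blocks) - 1     # and return its identifier

    def read_block(self, i):
        return self.blocks[i]

    def write_block(self, i, block):
        self.blocks[i] = block


class Node(object):
    """A B-tree node: 2B key slots and 2B+1 child slots, padded with
    None (keys) and -1 (children) to play the role of nil."""
    def __init__(self, btree):
        self.keys = [None] * (2 * btree.B)
        self.children = [-1] * (2 * btree.B + 1)
        self.id = btree.bs.place_block(self)


class BTree(object):
    def __init__(self, B):
        self.B = B
        self.bs = BlockStore()
        self.ri = Node(self).id   # the root starts out as an empty leaf
        self.n = 0
\end{verbatim}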

14.2.1 Searching

The implementation of the $\mathrm{find}(x)$ operation, which is illustrated in Figure 14.3, generalizes the $\mathrm{find}(x)$ operation in a binary search tree. The search for $x$ starts at the root and uses the keys stored at a node, $u$, to determine in which of $u$'s children the search should continue.

Figure 14.3: A successful search (for the value 4) and an unsuccessful search (for the value 16.5) in a $B$-tree. Shaded nodes show where the value of $z$ is updated during the searches.
\includegraphics[width=\textwidth ]{figs-python/btree-2}
More specifically, at a node $u$, the search checks if $x$ is stored in $\mathit{u.keys}$. If so, $x$ has been found and the search is complete. Otherwise, the search finds the smallest integer, $i$, such that $\mathit{u.keys}[i] > x$ and continues the search in the subtree rooted at $\mathit{u.children}[i]$. If no key in $\mathit{u.keys}$ is greater than $x$, then the search continues in $u$'s rightmost child. As in a binary search tree, the algorithm keeps track of the most recently seen key, $z$, that is larger than $x$. In case $x$ is not found, $z$ is returned as the smallest value that is greater than or equal to $x$.
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} [pseudocode: $\mathrm{find}(x)$ walks down from the root, calling $\mathrm{find\_it}(\mathit{u.keys},x)$ at each node $u$, updating $z$ whenever a key greater than $x$ is seen, and returns $z$]
\end{flushleft}\end{leftbar}
Central to the $\mathrm{find}(x)$ method is the $\mathrm{find\_it}(a,x)$ method that searches in a $\mathit{nil}$-padded sorted array, $a$, for the value $x$. This method, illustrated in Figure 14.4, works for any array, $a$, where $a[0],\ldots,a[k-1]$ is a sequence of keys in sorted order and $a[k],\ldots,a[\mathrm{length}(a)-1]$ are all set to $\mathit{nil}$. If $x$ is in the array at position $i$, then $\mathrm{find\_it}(a,x)$ returns $-i-1$. Otherwise, it returns the smallest index, $i$, such that $a[i]>x$ or $a[i]=\mathit{nil}$.
Figure 14.4: The execution of $\mathrm{find\_it}(a,27)$.
\includegraphics[scale=0.90909]{figs-python/findit}

\begin{leftbar}
\begin{flushleft}
\hspace*{1em} [pseudocode: $\mathrm{find\_it}(a,x)$ performs a binary search on the $\mathit{nil}$-padded sorted array $a$, returning $-i-1$ if $a[i]=x$ and otherwise returning the index $\mathit{lo}$ of the smallest entry that is $\mathit{nil}$ or greater than $x$]
\end{flushleft}\end{leftbar}
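
A minimal Python sketch of such a binary search is given below; it follows the return convention just described ($-i-1$ when $x$ is found at index $i$) and uses None in place of $\mathit{nil}$. It is an illustration, not necessarily the exact code.

\begin{verbatim}
def find_it(a, x):
    """Binary search in a sorted, None-padded array a.

    Returns -i-1 if a[i] == x; otherwise returns the smallest index i
    such that a[i] is None or a[i] > x."""
    lo, hi = 0, len(a)
    while lo < hi:
        m = (lo + hi) // 2
        if a[m] is None or x < a[m]:
            hi = m            # the answer is at index m or to its left
        elif x > a[m]:
            lo = m + 1        # the answer lies strictly to the right of m
        else:
            return -m - 1     # found x at index m
    return lo
\end{verbatim}

For example, $\mathrm{find\_it}([3,7,11,\mathit{nil}],7)$ returns $-2$ and $\mathrm{find\_it}([3,7,11,\mathit{nil}],8)$ returns $2$.
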
The $\mathrm{find\_it}(a,x)$ method uses a binary search that halves the search space at each step, so it runs in $O(\log(\mathrm{length}(a)))$ time. In our setting, $\mathrm{length}(a)=2B$, so $\mathrm{find\_it}(a,x)$ runs in $O(\log B)$ time.
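
Putting $\mathrm{find\_it}(a,x)$ together with the node layout sketched earlier, the search itself might look as follows in Python; the $\mathrm{read\_block}$ helper and the $-1$ child sentinel are assumptions of the sketch.

\begin{verbatim}
# a method of the illustrative BTree class sketched earlier
def find(self, x):
    """Return the smallest key in the tree that is >= x, or None."""
    z = None
    ui = self.ri
    while ui >= 0:                    # -1 marks "no such child"
        u = self.bs.read_block(ui)
        i = find_it(u.keys, x)
        if i < 0:                     # x is stored at u.keys[-i-1]
            return u.keys[-i - 1]
        if i < len(u.keys) and u.keys[i] is not None:
            z = u.keys[i]             # smallest key seen so far that is > x
        ui = u.children[i]            # at a leaf this is -1, ending the loop
    return z
\end{verbatim}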

We can analyze the running time of a $B$-tree $\mathrm{find}(x)$ operation both in the usual word-RAM model (where every instruction counts) and in the external memory model (where we only count the number of nodes accessed). Since each leaf in a $B$-tree stores at least one key and the height of a $B$-tree with $\ell$ leaves is $O(\log_B\ell)$, the height of a $B$-tree that stores $n$ keys is $O(\log_B n)$. Therefore, in the external memory model, the time taken by the $\mathrm{find}(x)$ operation is $O(\log_B n)$. To determine the running time in the word-RAM model, we have to account for the cost of calling $\mathrm{find\_it}(a,x)$ for each node we access, so the running time of $\mathrm{find}(x)$ in the word-RAM model is

$\displaystyle O(\log_B n)\times O(\log B) = O(\log n) \enspace .
$

14.2.2 Addition

One important difference between $B$-trees and the BinarySearchTree data structure from Section 6.2 is that the nodes of a $B$-tree do not store pointers to their parents. The reason for this will be explained shortly. The lack of parent pointers means that the $\mathrm{add}(x)$ and $\mathrm{remove}(x)$ operations on $B$-trees are most easily implemented using recursion.

As with all balanced search trees, some form of rebalancing is required during an $\mathrm{add}(x)$ operation. In a $B$-tree, this is done by splitting nodes. Refer to Figure 14.5 for what follows. Although splitting takes place across two levels of recursion, it is best understood as an operation that takes a node $u$ containing $2B$ keys and having $2B+1$ children. It creates a new node, $w$, that adopts $\mathit{u.children}[B],\ldots,\mathit{u.children}[2B]$. The new node $w$ also takes $u$'s $B$ largest keys, $\mathit{u.keys}[B],\ldots,\mathit{u.keys}[2B-1]$. At this point, $u$ has $B$ children and $B$ keys. The extra key, $\mathit{u.keys}[B-1]$, is passed up to the parent of $u$, which also adopts $w$.

Notice that the splitting operation modifies three nodes: $u$, $u$'s parent, and the new node, $w$. This is why it is important that the nodes of a $B$-tree do not maintain parent pointers. If they did, then the $B+1$ children adopted by $w$ would all need to have their parent pointers modified. This would increase the number of external memory accesses from 3 to $B+4$ and would make $B$-trees much less efficient for large values of $B$.

Figure 14.5: Splitting the node $u$ in a $B$-tree ($B=3$). Notice that the key $\mathit{u.keys}[2]=\mathrm{m}$ passes from $u$ to its parent.
 \includegraphics[width=\textwidth ]{figs-python/btree-split-1}  
  $\mathit{u}.\mathrm{split}()$  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-split-2}  
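
In the illustrative Python layout used above, the split might be realized as follows. In this sketch the promoted key, $\mathit{u.keys}[B-1]$, is parked in $\mathit{w.keys}[0]$ so that the caller can pass it up to the parent, matching the adoption step described below; this is one possible convention, not necessarily the book's exact code.

\begin{verbatim}
# a method of the illustrative Node class sketched earlier
def split(self, btree):
    """Split an overfull node (2B keys, 2B+1 children) and return the
    new node w.  w receives children[B..2B] and keys[B-1..2B-1]; the
    promoted key rides along as w.keys[0] for the caller to pick up."""
    B = btree.B
    w = Node(btree)
    w.keys[0:B + 1] = self.keys[B - 1:2 * B]
    w.children[0:B + 1] = self.children[B:2 * B + 1]
    # clear the slots that moved out of this node
    self.keys[B - 1:2 * B] = [None] * (B + 1)
    self.children[B:2 * B + 1] = [-1] * (B + 1)
    btree.bs.write_block(self.id, self)
    btree.bs.write_block(w.id, w)
    return w
\end{verbatim}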

The $\mathrm{add}(x)$ method in a $B$-tree is illustrated in Figure 14.6. At a high level, this method finds a leaf, $u$, at which to add the value $x$. If this causes $u$ to become overfull (because it already contained $2B-1$ keys), then $u$ is split. If this causes $u$'s parent to become overfull, then $u$'s parent is also split, which may cause $u$'s grandparent to become overfull, and so on. This process continues, moving up the tree one level at a time until reaching a node that is not overfull or until the root is split. In the former case, the process stops. In the latter case, a new root is created whose two children become the nodes obtained when the original root was split.

Figure 14.6: The $\mathrm{add}(x)$ operation in a BTree. Adding the value 21 results in two nodes being split.
 \includegraphics[width=\textwidth ]{figs-python/btree-add-1}  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-add-2}  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-add-3}  

The executive summary of the $\mathrm{add}(x)$ method is that it walks from the root to a leaf searching for $x$, adds $x$ to this leaf, and then walks back up to the root, splitting any overfull nodes it encounters along the way. With this high-level view in mind, we can now delve into the details of how this method can be implemented recursively.

The real work of $\mathrm{add}(x)$ is done by the $\mathrm{add\_recursive}(x,\mathit{ui})$ method, which adds the value $x$ to the subtree whose root, $u$, has the identifier $\mathit{ui}$. If $u$ is a leaf, then $x$ is simply inserted into $\mathit{u.keys}$. Otherwise, $x$ is added recursively into the appropriate child, $u'$, of $u$. The result of this recursive call is normally $\mathit{nil}$, but it may also be a reference to a newly created node, $w$, that was created because $u'$ was split. In this case, $u$ adopts $w$ and takes its first key, completing the splitting operation on $u'$.

After the value $x$ has been added (either to $u$ or to a descendant of $u$), the $\mathrm{add\_recursive}(x,\mathit{ui})$ method checks to see if $u$ is storing too many (more than $2B-1$) keys. If so, then $u$ needs to be split with a call to the $\mathit{u}.\mathrm{split}()$ method. The result of calling $\mathit{u}.\mathrm{split}()$ is a new node that is used as the return value for $\mathrm{add\_recursive}(x,\mathit{ui})$.
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} [pseudocode: $\mathrm{add\_recursive}(x,\mathit{ui})$ inserts $x$ into the subtree rooted at the node with identifier $\mathit{ui}$; if that node becomes overfull it is split and the newly created node is returned, otherwise $\mathit{nil}$ is returned]
\end{flushleft}\end{leftbar}
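
Continuing the same illustrative Python sketch, $\mathrm{add\_recursive}(x,\mathit{ui})$ might look like this; it relies on the $\mathrm{find\_it}$ and $\mathrm{split}$ sketches above, and it uses a ValueError to signal a duplicate key, all of which are conventions of the sketch rather than the book's exact code.

\begin{verbatim}
# a method of the illustrative BTree class
def add_recursive(self, x, ui):
    """Add x to the subtree rooted at the node with identifier ui.
    Returns the new node created if that node was split, else None."""
    u = self.bs.read_block(ui)
    i = find_it(u.keys, x)
    if i < 0:
        raise ValueError("duplicate key")     # x is already stored at u
    if u.children[i] < 0:                     # u is a leaf: put x in slot i
        u.keys.insert(i, x); u.keys.pop()     # the popped slot is a None pad
    else:                                     # recurse into the i-th child
        w = self.add_recursive(x, u.children[i])
        if w is not None:                     # the child was split: adopt w
            k = w.keys.pop(0)                 # promoted key rides in w.keys[0]
            w.keys.append(None)
            self.bs.write_block(w.id, w)
            u.keys.insert(i, k); u.keys.pop()
            u.children.insert(i + 1, w.id); u.children.pop()
    self.bs.write_block(u.id, u)
    # u is overfull exactly when all 2B key slots are now occupied
    return u.split(self) if u.keys[-1] is not None else None
\end{verbatim}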

The $\mathrm{add\_recursive}(x,\mathit{ui})$ method is a helper for the $\mathrm{add}(x)$ method, which calls $\mathrm{add\_recursive}(x,\mathit{ri})$ to insert $x$ into the root of the $B$-tree. If $\mathrm{add\_recursive}(x,\mathit{ri})$ causes the root to split, then a new root is created that takes as its children both the old root and the new node created by the splitting of the old root.
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} [pseudocode: $\mathrm{add}(x)$ calls $\mathrm{add\_recursive}(x,\mathit{ri})$, creates a new root if the old root was split, increments $\mathit{n}$, and returns $\mathit{true}$]
\end{flushleft}\end{leftbar}
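
In the same sketch, the top-level $\mathrm{add}(x)$ might be written as follows; the root-split handling matches the description above, and the exception-based duplicate handling is again a convention of the sketch.

\begin{verbatim}
# a method of the illustrative BTree class
def add(self, x):
    """Add x to the tree; return True, or False if x was already present."""
    try:
        w = self.add_recursive(x, self.ri)
    except ValueError:
        return False
    if w is not None:                     # the root was split: grow upward
        r = self.bs.read_block(self.ri)   # the old root keeps the small keys
        new_root = Node(self)
        new_root.keys[0] = w.keys.pop(0)  # promoted key travels in w.keys[0]
        w.keys.append(None)
        new_root.children[0] = r.id
        new_root.children[1] = w.id
        self.ri = new_root.id
        self.bs.write_block(w.id, w)
        self.bs.write_block(new_root.id, new_root)
    self.n += 1
    return True
\end{verbatim}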

The $\mathrm{add}(x)$ method and its helper, $\mathrm{add\_recursive}(x,\mathit{ui})$, can be analyzed in two phases:

Downward phase:
During the downward phase of the recursion, before $x$ has been added, they access a sequence of BTree nodes and call $\mathrm{find\_it}(a,x)$ on each node. As with the $\mathrm{find}(x)$ method, this takes $O(\log_B n)$ time in the external memory model and $O(\log n)$ time in the word-RAM model.

Upward phase:
During the upward phase of the recursion, after $x$ has been added, these methods perform a sequence of at most $O(\log_B n)$ splits. Each split involves only three nodes, so this phase takes $O(\log_B n)$ time in the external memory model. However, each split involves moving $B$ keys and children from one node to another, so in the word-RAM model, this takes $O(B\log n)$ time.

Recall that the value of $B$ can be quite large, much larger than even $\log n$. Therefore, in the word-RAM model, adding a value to a $B$-tree can be much slower than adding into a balanced binary search tree. Later, in Section 14.2.4, we will show that the situation is not quite so bad; the amortized number of split operations done during an $\mathrm{add}(x)$ operation is constant. This shows that the (amortized) running time of the $\mathrm{add}(x)$ operation in the word-RAM model is $O(B+\log n)$.

14.2.3 Removal

The $\mathrm{remove}(x)$ operation in a BTree is, again, most easily implemented as a recursive method. Although the recursive implementation of $\mathrm{remove}(x)$ spreads the complexity across several methods, the overall process, which is illustrated in Figure 14.7, is fairly straightforward. By shuffling keys around, removal is reduced to the problem of removing a value, $x'$, from some leaf, $u$. Removing $x'$ may leave $u$ with fewer than $B-1$ keys; this situation is called an underflow.

Figure 14.7: Removing the value 4 from a $B$-tree results in one merge and one borrowing operation.
 \includegraphics[width=\textwidth ]{figs-python/btree-remove-full-1}  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-remove-full-2}  
  $\mathrm{merge}(v,w)$  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-remove-full-3}  
  $\mathrm{shift\_lR}(w,v)$  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-remove-full-4}  

When an underflow occurs, $u$ either borrows keys from, or is merged with, one of its siblings. If $u$ is merged with a sibling, then $u$'s parent will now have one less child and one less key, which can cause $u$'s parent to underflow; this is again corrected by borrowing or merging, but merging may cause $u$'s grandparent to underflow. This process works its way back up to the root until there is no more underflow or until the root has its last two children merged into a single child. When the latter case occurs, the root is removed and its lone child becomes the new root.

Next we delve into the details of how each of these steps is implemented. The first job of the $\mathrm{remove}(x)$ method is to find the element $x$ that should be removed. If $x$ is found in a leaf, then $x$ is removed from this leaf. Otherwise, if $x$ is found at $\mathit{u.keys}[i]$ for some internal node, $u$, then the algorithm removes the smallest value, $x'$, in the subtree rooted at $\mathit{u.children}[i+1]$. The value $x'$ is the smallest value stored in the BTree that is greater than $x$. The value of $x'$ is then used to replace $x$ in $\mathit{u.keys}[i]$. This process is illustrated in Figure 14.8.

Figure 14.8: The $\mathrm{remove}(x)$ operation in a BTree. To remove the value $x=10$, we replace it with the value $x'=11$ and remove 11 from the leaf that contains it.
 \includegraphics[width=\textwidth ]{figs-python/btree-remove-1}  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-remove-2}  

The $\mathrm{remove\_recursive}(x,\mathit{ui})$ method is a recursive implementation of the preceding algorithm:
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} [pseudocode: $\mathrm{remove\_recursive}(x,\mathit{ui})$ locates $x$ in the subtree rooted at the node with identifier $\mathit{ui}$, removes it (replacing it with its successor when it is found at an internal node), and repairs any underflow in the affected child on the way back up]
\end{flushleft}\end{leftbar}
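
In the illustrative Python sketch, the recursion might be outlined as follows. The $\mathrm{check\_underflow}$ helper, which performs the borrowing and merging described next, is assumed here; this is an outline of the recursion under that assumption, not the book's exact code.

\begin{verbatim}
# methods of the illustrative BTree class
def remove_recursive(self, x, ui):
    """Remove x from the subtree rooted at node ui; return True if found."""
    if ui < 0:
        return False                       # ran off the tree: x is not present
    u = self.bs.read_block(ui)
    i = find_it(u.keys, x)
    if i < 0:                              # x is stored at u.keys[-i-1]
        i = -i - 1
        if u.children[i] < 0:              # x is in a leaf: just remove it
            u.keys.pop(i); u.keys.append(None)
        else:                              # replace x by its successor x' ...
            u.keys[i] = self.remove_smallest(u.children[i + 1])
            self.check_underflow(u, i + 1) # ... and repair any underflow
        self.bs.write_block(u.id, u)
        return True
    if self.remove_recursive(x, u.children[i]):
        self.check_underflow(u, i)         # the i-th child may have underflowed
        self.bs.write_block(u.id, u)
        return True
    return False

def remove_smallest(self, ui):
    """Remove and return the smallest key in the subtree rooted at ui."""
    u = self.bs.read_block(ui)
    if u.children[0] < 0:                  # a leaf: take its first key
        y = u.keys.pop(0); u.keys.append(None)
    else:
        y = self.remove_smallest(u.children[0])
        self.check_underflow(u, 0)
    self.bs.write_block(u.id, u)
    return y
\end{verbatim}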

Note that, after recursively removing the value $x$ from the $i$th child of $u$, $\mathrm{remove\_recursive}(x,\mathit{ui})$ needs to ensure that this child still has at least $B-1$ keys. In the preceding code, this is done using a method called $\mathrm{check\_underflow}(x,i)$, which checks for and corrects an underflow in the $i$th child of $u$. Let $w$ be the $i$th child of $u$. If $w$ has only $B-2$ keys, then this needs to be fixed. The fix requires using a sibling of $w$. This can be either child $i+1$ of $u$ or child $i-1$ of $u$. We will usually use child $i-1$ of $u$, which is the sibling, $v$, of $w$ directly to its left. The only time this doesn't work is when $i=0$, in which case we use the sibling directly to $w$'s right.
\begin{leftbar}
\begin{flushleft}
\hspace*{1em} [pseudocode: $\mathrm{check\_underflow}(x,i)$ checks whether the $i$th child of $u$ has too few keys and, if so, repairs it using its right sibling when $i=0$ and its left sibling otherwise]
\end{flushleft}\end{leftbar}
In the following, we focus on the case when $i\neq 0$, so that any underflow at the $i$th child of $u$ will be corrected with the help of the $(i-1)$st child of $u$. The case $i=0$ is similar, and the details can be found in the accompanying source code.

To fix an underflow at node $w$, we need to find more keys (and possibly also children) for $w$. There are two ways to do this:

Borrowing:
If $w$ has a sibling, $v$, with more than $B-1$ keys, then $w$ can borrow some keys (and possibly also children) from $v$. More specifically, if $v$ stores $\mathrm{size}(v)$ keys, then, between them, $v$ and $w$ have a total of

$\displaystyle B-2 + \mathrm{size}(v) \ge 2B-2
$

keys. We can therefore shift keys from $v$ to $w$ so that each of $v$ and $w$ has at least $B-1$ keys. This process is illustrated in Figure 14.9.

Figure 14.9: If $v$ has more than $B-1$ keys, then $w$ can borrow keys from $v$.
 \includegraphics[width=\textwidth ]{figs-python/btree-borrow-1}  
  $\mathrm{shift\_rL}(v,w)$  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-borrow-2}  

Merging:
If $v$ has only $B-1$ keys, we must do something more drastic, since $v$ cannot afford to give any keys to $w$. Therefore, we merge $v$ and $w$ as shown in Figure 14.10. The merge operation is the opposite of the split operation. It takes two nodes that contain a total of $2B-3$ keys and merges them into a single node that contains $2B-2$ keys. (The additional key comes from the fact that, when we merge $v$ and $w$, their common parent, $u$, now has one less child and therefore needs to give up one of its keys.)

Figure 14.10: Merging two siblings $v$ and $w$ in a $B$-tree ($B=3$).
 \includegraphics[width=\textwidth ]{figs-python/btree-merge-1}  
  $\mathrm{merge}(v,w)$  
  $ \Downarrow$  
 \includegraphics[width=\textwidth ]{figs-python/btree-merge-2}  


\begin{leftbar}
\begin{flushleft}
\hspace*{1em} [pseudocode for the $i\neq 0$ case: if the left sibling, $v$, has more than $B-1$ keys, shift keys from $v$ to $w$; otherwise merge $v$ and $w$ into a single node and update the corresponding child pointer of $u$]
\end{flushleft}\end{leftbar}
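
To give a flavour of the repair step in the illustrative Python sketch, here is how the merging branch might be written for the case $i\neq 0$, where $v=\mathit{u.children}[i-1]$ and $w=\mathit{u.children}[i]$. The borrowing branch is analogous (keys are shifted through the separating key $\mathit{u.keys}[i-1]$ rather than removing it). The helper below is an assumption of the sketch, not the book's exact code.

\begin{verbatim}
# a method of the illustrative BTree class: merge u's children v and w,
# where v = u.children[i-1] and w = u.children[i] (the case i != 0)
def merge(self, u, i, v, w):
    """Absorb the underfull node w into its left sibling v, pulling the
    separating key u.keys[i-1] down from the parent u."""
    sv = sum(k is not None for k in v.keys)   # number of keys in v (B-1)
    sw = sum(k is not None for k in w.keys)   # number of keys in w (B-2)
    v.keys[sv] = u.keys[i - 1]                # the key handed down by u
    v.keys[sv + 1:sv + 1 + sw] = w.keys[:sw]  # then all of w's keys
    v.children[sv + 1:sv + 2 + sw] = w.children[:sw + 1]
    # the parent loses one key and one child; v survives as the merged node
    u.keys.pop(i - 1); u.keys.append(None)
    u.children.pop(i); u.children.append(-1)
    self.bs.write_block(v.id, v)
    self.bs.write_block(u.id, u)
\end{verbatim}

Note that the merged node ends up with $(B-1)+1+(B-2)=2B-2$ keys, matching the count given above.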

To summarize, the $\mathrm{remove}(x)$ method in a $B$-tree follows a root-to-leaf path, removes a key $x'$ from a leaf, $u$, and then performs zero or more merge operations involving $u$ and its ancestors, followed by at most one borrowing operation. Since each merge and borrow operation involves modifying only three nodes, and only $O(\log_B n)$ of these operations occur, the entire process takes $O(\log_B n)$ time in the external memory model. Again, however, each merge and borrow operation takes $O(B)$ time in the word-RAM model, so (for now) the most we can say about the running time required by $\mathrm{remove}(x)$ in the word-RAM model is that it is $O(B\log_B n)$.


14.2.4 Amortized Analysis of $B$-Trees

Thus far, we have shown that

  1. In the external memory model, the running time of $\mathrm{find}(x)$, $\mathrm{add}(x)$, and $\mathrm{remove}(x)$ in a $B$-tree is $O(\log_B n)$.
  2. In the word-RAM model, the running time of $\mathrm{find}(x)$ is $O(\log n)$ and the running time of $\mathrm{add}(x)$ and $\mathrm{remove}(x)$ is $O(B\log n)$.

The following lemma shows that, so far, we have overestimated the number of merge and split operations performed by $B$-trees.

Lemma 14.1   Starting with an empty $B$-tree and performing any sequence of $m$ $\mathrm{add}(x)$ and $\mathrm{remove}(x)$ operations results in at most $3m/2$ splits, merges, and borrows being performed.

Proof. The proof of this has already been sketched in Section 9.3 for the special case in which $B=2$. The lemma can be proven using a credit scheme, in which
  1. each split, merge, or borrow operation is paid for with two credits, i.e., a credit is removed each time one of these operations occurs; and
  2. at most three credits are created during any $\mathrm{add}(x)$ or $\mathrm{remove}(x)$ operation.
Since at most $3m$ credits are ever created and each split, merge, and borrow is paid for with two credits, it follows that at most $3m/2$ splits, merges, and borrows are performed. These credits are illustrated in Figures 14.5, 14.9, and 14.10.

To keep track of these credits, the proof maintains the following credit invariant: any non-root node with $B-1$ keys stores one credit and any node with $2B-1$ keys stores three credits. A node that stores at least $B$ keys and at most $2B-2$ keys need not store any credits. What remains is to show that we can maintain the credit invariant and satisfy properties 1 and 2, above, during each $\mathrm{add}(x)$ and $\mathrm{remove}(x)$ operation.


Adding:

The $\mathrm{add}(x)$ method does not perform any merges or borrows, so we need only consider split operations that occur as a result of calls to $\mathrm{add}(x)$.

Each split operation occurs because a key is added to a node, $u$, that already contains $2B-1$ keys. When this happens, $u$ is split into two nodes, $u'$ and $u''$, having $B-1$ and $B$ keys, respectively. Prior to this operation, $u$ was storing $2B-1$ keys, and hence three credits. Two of these credits can be used to pay for the split and the other credit can be given to $u'$ (which has $B-1$ keys) to maintain the credit invariant. Therefore, we can pay for the split and maintain the credit invariant during any split.

The only other modification to nodes that occurs during an $\mathrm{add}(x)$ operation happens after all splits, if any, are complete. This modification involves adding a new key to some node $u'$. If, prior to this, $u'$ had $2B-2$ keys, then it now has $2B-1$ keys and must therefore receive three credits. These are the only credits given out by the $\mathrm{add}(x)$ method.

Removing:

During a call to $\mathrm{remove}(x)$, zero or more merges occur and are possibly followed by a single borrow. Each merge occurs because two nodes, $v$ and $w$, each of which had exactly $B-1$ keys prior to calling $\mathrm{remove}(x)$, were merged into a single node with exactly $2B-2$ keys. Each such merge therefore frees up two credits that can be used to pay for the merge.

After any merges are performed, at most one borrow operation occurs, after which no further merges or borrows occur. This borrow operation only occurs if we remove a key from a leaf, $v$, that has $B-1$ keys. The node $v$ therefore has one credit, and this credit goes towards the cost of the borrow. This single credit is not enough to pay for the borrow, so we create one credit to complete the payment.

At this point, we have created one credit and we still need to show that the credit invariant can be maintained. In the worst case, $v$'s sibling, $w$, has exactly $B$ keys before the borrow so that, afterwards, both $v$ and $w$ have $B-1$ keys. This means that $v$ and $w$ each should be storing a credit when the operation is complete. Therefore, in this case, we create an additional two credits to give to $v$ and $w$. Since a borrow happens at most once during a $\mathrm{remove}(x)$ operation, this means that we create at most three credits, as required.

If the $\mathrm{remove}(x)$ operation does not include a borrow operation, this is because it finishes by removing a key from some node that, prior to the operation, had $B$ or more keys. In the worst case, this node had exactly $B$ keys, so that it now has $B-1$ keys and must be given one credit, which we create.

In either case, whether or not the removal finishes with a borrow operation, at most three credits need to be created during a call to $\mathrm{remove}(x)$ to maintain the credit invariant and pay for all borrows and merges that occur. This completes the proof of the lemma. $\qedsymbol$

The purpose of Lemma 14.1 is to show that, in the word-RAM model, the cost of splits, merges, and borrows during a sequence of $m$ $\mathrm{add}(x)$ and $\mathrm{remove}(x)$ operations is only $O(Bm)$. That is, the amortized cost per operation is only $O(B)$, so the amortized cost of $\mathrm{add}(x)$ and $\mathrm{remove}(x)$ in the word-RAM model is $O(B+\log n)$. This is summarized by the following pair of theorems:

Theorem 14.1 (External Memory $B$-Trees)   A BTree implements the SSet interface. In the external memory model, a BTree supports the operations $\mathrm{add}(x)$, $\mathrm{remove}(x)$, and $\mathrm{find}(x)$ in $O(\log_B n)$ time per operation.

Theorem 14.2 (Word-RAM $B$-Trees)   A BTree implements the SSet interface. In the word-RAM model, and ignoring the cost of splits, merges, and borrows, a BTree supports the operations $\mathrm{add}(x)$, $\mathrm{remove}(x)$, and $\mathrm{find}(x)$ in $O(\log n)$ time per operation. Furthermore, beginning with an empty BTree, any sequence of $m$ $\mathrm{add}(x)$ and $\mathrm{remove}(x)$ operations results in a total of $O(Bm)$ time spent performing splits, merges, and borrows.
