In this section we discuss a generalization of binary trees, called B-trees, that is efficient in the external memory model. Alternatively, B-trees can be viewed as the natural generalization of the 2-4 trees described in Section 9.1. (A 2-4 tree is a special case of a B-tree that we get by setting B = 2.)
For any integer B ≥ 2, a B-tree is a tree in which all leaves have the same depth and every non-root internal node, u, has at least B children and at most 2B children. The children of u are stored in an array, u.children. The requirement on the number of children is relaxed at the root, which can have anywhere between 2 and 2B children.
If the height of a B-tree is h, then it follows that the number, $\ell$, of leaves in the B-tree satisfies
\[
  2B^{h-1} \;\le\; \ell \;\le\; (2B)^h .
\]
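Taking logarithms of both sides of this inequality gives the height bound that the running-time analysis below relies on:
\[
  \log_{2B}\ell \;\le\; h \;\le\; 1 + \log_B(\ell/2),
\]
so a B-tree with $\ell$ leaves has height $h = \Theta\!\left(\frac{\log \ell}{\log B}\right) = \Theta(\log_B \ell)$.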
Each node, u, in a B-tree stores an array of keys u.keys[0],...,u.keys[2B-1]. If u is an internal node with k children, then the number of keys stored at u is exactly k-1 and these are stored in u.keys[0],...,u.keys[k-2]. The remaining 2B-k+1 array entries in u.keys are set to null. If u is a non-root leaf node, then u contains between B-1 and 2B-1 keys. The keys in a B-tree respect an ordering similar to the keys in a binary search tree. For any node, u, that stores k-1 keys,
\[
  \mathtt{u.keys}[0] < \mathtt{u.keys}[1] < \cdots < \mathtt{u.keys}[k-2] ,
\]
and, if u is an internal node, every key stored in the subtree rooted at u.children[i] is greater than u.keys[i-1] (when i > 0) and less than u.keys[i] (when i < k-1).
Note that all the data stored in a B-tree node has size O(B). Therefore, in an external memory setting, the value of B in a B-tree is chosen so that a node fits into a single external memory block. In this way, the time it takes to perform a B-tree operation in the external memory model is proportional to the number of nodes that are accessed (read or written) by the operation.
For example, if the keys are 4-byte integers and the node indices are also 4 bytes, then setting B = 256 means that each node stores (4 + 4) × 2B = 8 × 512 = 4096 bytes of data, so a node exactly fills one 4096-byte (4KB) external memory block.
The BTree class, which implements a B-tree, stores a BlockStore, bs, that stores BTree nodes, as well as the index, ri, of the root node. As usual, an integer, n, is used to keep track of the number of items in the data structure.
The implementation of the find(x) operation, which is illustrated in the accompanying figure, generalizes the find(x) operation in a binary search tree. The search for x starts at the root and uses the keys stored at a node, u, to determine in which of u's children the search should continue.
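The book's code listing is not reproduced here, but the following self-contained Java sketch captures the two ingredients of the search: a helper that binary searches a null-padded sorted key array, and a loop that walks from the root to a leaf while remembering the smallest key seen that is greater than or equal to x. The class, the in-memory Node type, and the method names (including findIt) are illustrative assumptions; the book's implementation instead reads each node from a BlockStore by its integer index.

class BTreeFindSketch {
    static class Node {
        Integer[] keys;     // sorted keys padded with nulls
        Node[] children;    // children[i] holds keys smaller than keys[i]; null in a leaf
    }

    // Binary search in a null-padded sorted array: returns -i-1 if x == a[i],
    // otherwise the smallest index i such that a[i] == null or a[i] > x.
    static int findIt(Integer[] a, int x) {
        int lo = 0, hi = a.length;
        while (lo < hi) {
            int m = (lo + hi) / 2;
            if (a[m] == null || x < a[m]) hi = m;   // answer is at index m or smaller
            else if (x > a[m]) lo = m + 1;          // answer is at an index larger than m
            else return -m - 1;                     // found x exactly at index m
        }
        return lo;
    }

    // Walk from the root to a leaf, remembering the smallest key seen that is
    // greater than or equal to x; returns null if every key is smaller than x.
    static Integer find(Node root, int x) {
        Integer z = null;
        Node u = root;
        while (u != null) {
            int i = findIt(u.keys, x);
            if (i < 0) return u.keys[-(i + 1)];     // exact match
            if (i < u.keys.length && u.keys[i] != null) z = u.keys[i];
            u = (u.children == null) ? null : u.children[i];
        }
        return z;
    }

    public static void main(String[] args) {
        // A small hand-built tree with B = 2: root keys {3, 7} and three leaf children.
        Node l0 = new Node(), l1 = new Node(), l2 = new Node(), root = new Node();
        l0.keys = new Integer[]{1, 2, null, null};
        l1.keys = new Integer[]{4, 5, 6, null};
        l2.keys = new Integer[]{8, 9, null, null};
        root.keys = new Integer[]{3, 7, null, null};
        root.children = new Node[]{l0, l1, l2, null, null};
        System.out.println(find(root, 6));   // 6: an exact match in a leaf
        System.out.println(find(root, 3));   // 3: an exact match at the root
        System.out.println(find(root, 10));  // null: larger than every key
    }
}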
We can analyze the running time of a B-tree find(x) operation both in the usual word-RAM model (where every instruction counts) and in the external memory model (where we only count the number of nodes accessed). Since each leaf in a B-tree stores at least one key and the height of a B-tree with $\ell$ leaves is $O(\log_B \ell)$, the height of a B-tree that stores n keys is $O(\log_B n)$. Therefore, in the external memory model, the time taken by the find(x) operation is $O(\log_B n)$. To determine the running time in the word-RAM model, we also have to account for the cost of the binary search performed within each node we access (findIt in the sketch above), so the running time of find(x) in the word-RAM model is $O(\log_B n) \times O(\log B) = O(\log n)$.
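The two factors combine via the change-of-base identity for logarithms:
\[
  O(\log_B n) \times O(\log B) \;=\; O\!\left(\frac{\log n}{\log B}\cdot \log B\right) \;=\; O(\log n).
\]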
One important difference between B-trees and the BinarySearchTree data structure from Section 6.2 is that the nodes of a B-tree do not store pointers to their parents. The reason for this will be explained shortly. The lack of parent pointers means that the add(x) and remove(x) operations on B-trees are most easily implemented using recursion.
Like all balanced search trees, B-trees sometimes require rebalancing during an add(x) operation. In a B-tree, this is done by splitting nodes. Refer to the accompanying figure for what follows. Although splitting takes place across two levels of recursion, it is best understood as an operation that takes a node u containing 2B keys and having 2B+1 children. It creates a new node, w, that adopts u.children[B],...,u.children[2B]. The new node w also takes u's B largest keys, u.keys[B],...,u.keys[2B-1]. At this point, u has B children and B keys. The extra key, u.keys[B-1], is passed up to the parent of u, which also adopts w.
Notice that the splitting operation modifies three nodes: u, u's parent, and the new node, w. This is why it is important that the nodes of a B-tree do not maintain parent pointers. If they did, then the B+1 children adopted by w would all need to have their parent pointers updated. This would increase the number of external memory accesses from 3 to B+4 and would make B-trees much less efficient for large values of B.
The add(x) method in a B-tree is illustrated in the accompanying figure. At a high level, this method finds a leaf, u, at which to add the value x. If this causes u to become overfull (because it already contained 2B-1 keys), then u is split. If this causes u's parent to become overfull, then u's parent is also split, which may cause u's grandparent to become overfull, and so on. This process continues, moving up the tree one level at a time until reaching a node that is not overfull or until the root is split. In the former case, the process stops. In the latter case, a new root is created whose two children become the nodes obtained when the original root was split.
In summary, the high-level view of the add(x) method is that it walks from the root to a leaf searching for x, adds x to this leaf, and then walks back up to the root, splitting any overfull nodes it encounters along the way. With this high-level view in mind, we can now delve into the details of how this method can be implemented recursively.
The real work of add(x) is done by the addRecursive(x, ui) method, which adds the value x to the subtree whose root, u, has the identifier ui. If u is a leaf, then x is simply added into u.keys. Otherwise, x is added recursively into the appropriate child, u', of u. The result of this recursive call is normally null, but it may also be a reference to a newly created node, w, that was created because u' was split. In this case, u adopts w and takes its first key, completing the splitting operation on u'.
After x has been added (either to u or to a descendant of u), the addRecursive(x, ui) method checks to see if u is storing too many (more than 2B-1) keys. If so, then u needs to be split with a call to the u.split() method. The result of calling u.split() is a new node that is used as the return value for addRecursive(x, ui).
The addRecursive(x, ui) method is just a helper for the add(x) method, which calls addRecursive(x, ri) to insert x into the root of the B-tree. If addRecursive(x, ri) causes the root to split, then a new root is created that takes as its children both the old root and the new node created when the old root was split.
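To make the recursion concrete, here is a simplified, self-contained Java sketch of the add-and-split logic just described. It is not the book's BlockStore-based implementation: ArrayLists stand in for the fixed-size null-padded arrays, direct Node references stand in for node identifiers, and all names are illustrative. As in the description above, split leaves the key that must move up at the front of the new node, and the parent (or the top-level add) removes it.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class BTreeAddSketch {
    static final int B = 2;                      // nodes hold at most 2*B - 1 keys

    static class Node {
        List<Integer> keys = new ArrayList<>();
        List<Node> children = new ArrayList<>(); // empty for a leaf
        boolean isLeaf() { return children.isEmpty(); }
    }

    Node root = new Node();

    // Split an overfull node u (2*B keys): move u's largest keys and the
    // corresponding children into a new node w.  The key that must move up
    // to u's parent is left at the front of w, where the caller removes it.
    static Node split(Node u) {
        Node w = new Node();
        w.keys.addAll(u.keys.subList(B - 1, 2 * B));
        u.keys.subList(B - 1, 2 * B).clear();
        if (!u.isLeaf()) {
            w.children.addAll(u.children.subList(B, 2 * B + 1));
            u.children.subList(B, 2 * B + 1).clear();
        }
        return w;
    }

    // Add x to the subtree rooted at u.  Returns null normally, or the new
    // node w if u had to be split (with the key to pass up at w.keys[0]).
    static Node addRecursive(Node u, int x) {
        int i = Collections.binarySearch(u.keys, x);
        if (i >= 0) return null;                 // x is already present
        i = -(i + 1);                            // index of the first key > x
        if (u.isLeaf()) {
            u.keys.add(i, x);
        } else {
            Node w = addRecursive(u.children.get(i), x);
            if (w != null) {                     // the i-th child was split
                u.keys.add(i, w.keys.remove(0)); // adopt the key passed up by w
                u.children.add(i + 1, w);        // adopt w itself
            }
        }
        return u.keys.size() > 2 * B - 1 ? split(u) : null;
    }

    void add(int x) {
        Node w = addRecursive(root, x);
        if (w != null) {                         // the root itself was split
            Node newRoot = new Node();
            newRoot.keys.add(w.keys.remove(0));
            newRoot.children.add(root);
            newRoot.children.add(w);
            root = newRoot;
        }
    }

    public static void main(String[] args) {
        BTreeAddSketch t = new BTreeAddSketch();
        for (int x : new int[]{5, 3, 8, 1, 9, 7, 2, 6, 4}) t.add(x);
        System.out.println(t.root.keys);         // [3, 7]: the root's keys after two splits
    }
}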
The add(x) method and its helper, addRecursive(x, ui), can be analyzed in two phases:

Downward phase: During the downward phase of the recursion, before x has been added, the methods access a sequence of B-tree nodes and perform a search within each one. As with the find(x) method, this takes $O(\log_B n)$ time in the external memory model and $O(\log n)$ time in the word-RAM model.

Upward phase: During the upward phase of the recursion, after x has been added, these methods perform a sequence of at most $O(\log_B n)$ splits. Each split involves only three nodes, so this phase takes $O(\log_B n)$ time in the external memory model. However, each split involves moving B keys and children from one node to another, so in the word-RAM model this phase takes $O(B\log_B n)$ time.
Recall that the value of B can be quite large, even much larger than $\log n$. Therefore, in the word-RAM model, adding a value to a B-tree can be much slower than adding into a balanced binary search tree. Below, we will show that the situation is not quite so bad; the amortized number of split operations done during an add(x) operation is constant. This shows that the (amortized) running time of the add(x) operation in the word-RAM model is $O(B + \log n)$.
The remove(x) operation in a BTree is, again, most easily implemented as a recursive method. Although the recursive implementation of remove(x) spreads the complexity across several methods, the overall process, which is illustrated in the accompanying figure, is fairly straightforward. By shuffling keys around, removal is reduced to the problem of removing a value, x', from some leaf, u. Removing x' may leave u with fewer than B-1 keys; this situation is called an underflow.
When an underflow occurs, u either borrows keys from one of its siblings or is merged with one of its siblings. If u is merged with a sibling, then u's parent now has one less child and one less key, which can cause u's parent to underflow; this is again corrected by borrowing or merging, but merging may cause u's grandparent to underflow, and so on. This process works its way back up to the root until there is no more underflow or until the root has its last two children merged into a single child. When the latter case occurs, the root is removed and its lone child becomes the new root.
Next we delve into the details of how each of these steps is implemented. The first job of the remove(x) method is to find the element x that should be removed. If x is found in a leaf, then x is removed from this leaf. Otherwise, if x is found at u.keys[i] for some internal node, u, then the algorithm removes the smallest value, x', in the subtree rooted at u.children[i+1]. The value x' is the smallest value stored in the B-tree that is greater than x. The value x' is then used to replace x in u.keys[i]. This process is illustrated in the accompanying figure.
The removeRecursive(x, ui) method is a recursive implementation of the preceding algorithm.
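The book's listing is not reproduced here; the following simplified Java sketch shows the same recursive structure under the same assumptions as the earlier add sketch (ArrayLists instead of fixed-size arrays, illustrative names). The underflow repair step is deliberately omitted and only marked with comments; it is described next and sketched after the list below.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class BTreeRemoveSketch {
    static class Node {
        List<Integer> keys = new ArrayList<>();
        List<Node> children = new ArrayList<>(); // empty for a leaf
        boolean isLeaf() { return children.isEmpty(); }
    }

    // Remove x from the subtree rooted at u; returns true if x was found.
    static boolean removeRecursive(Node u, int x) {
        int i = Collections.binarySearch(u.keys, x);
        if (i >= 0) {                            // x is stored at this node
            if (u.isLeaf()) {
                u.keys.remove(i);                // easy case: remove it from the leaf
            } else {
                // Replace x with the smallest key in the subtree to its right,
                // which always sits in a leaf.
                int xp = removeSmallest(u.children.get(i + 1));
                u.keys.set(i, xp);
                // A full implementation would now call checkUnderflow(u, i + 1).
            }
            return true;
        }
        if (u.isLeaf()) return false;            // x is not in the tree
        boolean found = removeRecursive(u.children.get(-(i + 1)), x);
        // A full implementation would now call checkUnderflow(u, -(i + 1)).
        return found;
    }

    // Remove and return the smallest key in the subtree rooted at u.
    static int removeSmallest(Node u) {
        if (u.isLeaf()) return u.keys.remove(0);
        int xp = removeSmallest(u.children.get(0));
        // A full implementation would now call checkUnderflow(u, 0).
        return xp;
    }

    public static void main(String[] args) {
        // Root with keys {3, 7} and three leaf children.
        Node root = new Node(), a = new Node(), b = new Node(), c = new Node();
        a.keys.addAll(List.of(1, 2));
        b.keys.addAll(List.of(4, 5, 6));
        c.keys.addAll(List.of(8, 9));
        root.keys.addAll(List.of(3, 7));
        root.children.addAll(List.of(a, b, c));
        removeRecursive(root, 3);                // 3 is stored at an internal node
        System.out.println(root.keys);           // [4, 7]: 3 was replaced by its successor
        System.out.println(b.keys);              // [5, 6]
    }
}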
Note that, after recursively removing the value x from the ith child of u, removeRecursive(x, ui) needs to ensure that this child still has at least B-1 keys. In the full implementation, this is done with a call to a method called checkUnderflow(u, i), which checks for, and corrects, an underflow in the ith child of u. Let w be the ith child of u. If w has only B-2 keys, then this needs to be fixed. The fix requires using a sibling of w. This can be either child i+1 of u or child i-1 of u. We will usually use child i-1 of u, which is the sibling, v, of w directly to its left. The only time this does not work is when i = 0, in which case we use the sibling directly to w's right. In the following, we focus on the case when i > 0, so that any underflow at the ith child of u will be corrected with the help of the (i-1)st child of u. The case i = 0 is similar and the details can be found in the accompanying source code.
To fix an underflow at node w, we need to find more keys (and possibly also children) for w. There are two ways to do this; both are sketched in the code that follows this list.

Borrowing: If w has a sibling, v, with more than B-1 keys, then w can borrow some keys (and possibly also children) from v. Between them, v and w then have at least (B-2) + B = 2B-2 keys, so we can shift keys from v to w until each of v and w has at least B-1 keys.

Merging: If v has only B-1 keys, we must do something more drastic, since v cannot afford to give any keys to w. Therefore, we merge v and w into a single node. The merge operation is the opposite of the split operation: it takes two nodes that contain a total of 2B-3 keys and combines them into a single node that contains 2B-2 keys. (The additional key comes from the parent, u, which now has one less child and therefore gives up one of its keys.)
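Both repairs can be written as small standalone routines. The sketch below uses the same simplified in-memory nodes and illustrative names as the earlier sketches; u is the parent, i is the index of the underfull child w, and v is the sibling directly to w's left. One simplification: the borrowing routine moves a single key, which suffices because an underflow always leaves w exactly one key short, whereas the borrowing described above may shift several keys from v to w.

import java.util.ArrayList;
import java.util.List;

class BTreeUnderflowSketch {
    static class Node {
        List<Integer> keys = new ArrayList<>();
        List<Node> children = new ArrayList<>(); // empty for a leaf
        boolean isLeaf() { return children.isEmpty(); }
    }

    // Borrowing: the left sibling v has keys to spare, so rotate one key
    // through the parent u (v's largest key moves up, the old separator
    // moves down into w, and v's last child moves over to w).
    static void borrowFromLeft(Node u, int i) {
        Node v = u.children.get(i - 1), w = u.children.get(i);
        w.keys.add(0, u.keys.get(i - 1));
        u.keys.set(i - 1, v.keys.remove(v.keys.size() - 1));
        if (!v.isLeaf()) w.children.add(0, v.children.remove(v.children.size() - 1));
    }

    // Merging: v cannot spare a key, so v absorbs the separator key and all
    // of w's keys and children; the parent u loses one key and one child.
    static void mergeWithLeft(Node u, int i) {
        Node v = u.children.get(i - 1), w = u.children.get(i);
        v.keys.add(u.keys.remove(i - 1));
        v.keys.addAll(w.keys);
        v.children.addAll(w.children);
        u.children.remove(i);                    // w is discarded
    }

    public static void main(String[] args) {
        // Parent u with one key separating an underfull child w from its left sibling v.
        Node u = new Node(), v = new Node(), w = new Node();
        u.keys.addAll(List.of(10));
        v.keys.addAll(List.of(3, 5, 7));         // v has keys to spare
        w.keys.addAll(List.of(12));              // w has just underflowed
        u.children.addAll(List.of(v, w));
        borrowFromLeft(u, 1);
        System.out.println(u.keys + " " + v.keys + " " + w.keys);  // [7] [3, 5] [10, 12]
    }
}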
To summarize, the remove(x) method in a B-tree follows a root-to-leaf path, removes a key, x', from a leaf, u, then performs zero or more merge operations involving u and its ancestors, and performs at most one borrow operation. Since each merge and borrow operation involves modifying only three nodes, and only $O(\log_B n)$ of these operations occur, the entire process takes $O(\log_B n)$ time in the external memory model. Again, however, each merge and borrow operation takes O(B) time in the word-RAM model, so (for now) the most we can say about the running time required by remove(x) in the word-RAM model is that it is $O(B\log_B n)$.
Thus far, we have shown that:

1. In the external memory model, the running time of find(x), add(x), and remove(x) in a B-tree is $O(\log_B n)$.

2. In the word-RAM model, the running time of find(x) is $O(\log n)$ and the running time of add(x) and remove(x) is $O(B\log_B n)$.
The following lemma shows that, so far, we have overestimated the number of merge and split operations performed by B-trees.

Lemma: Starting with an empty B-tree and performing any sequence of m add(x) and remove(x) operations results in at most 3m/2 splits, merges, and borrows being performed.

The proof uses a credit scheme in which (1) each split, merge, or borrow operation is paid for with two credits, that is, a credit is removed each time one of these operations occurs, and (2) at most three credits are created during any add(x) or remove(x) operation. Since at most 3m credits are ever created and each split, merge, and borrow is paid for with two credits, it follows that at most 3m/2 splits, merges, and borrows are performed.
To keep track of these credits, the proof maintains the following credit invariant: any non-root node with B-1 keys stores one credit and any node with 2B-1 keys stores three credits. A node that stores at least B keys and at most 2B-2 keys need not store any credits. What remains is to show that we can maintain the credit invariant and satisfy properties 1 and 2, above, during each add(x) and remove(x) operation.
Each split operation occurs because a key is added to a node, u, that already contains 2B-1 keys. When this happens, u is split into two nodes, u and w, having B-1 and B keys, respectively. Prior to this operation, u was storing 2B-1 keys, and hence three credits. Two of these credits can be used to pay for the split and the other credit can be given to u (which now has B-1 keys) to maintain the credit invariant. Therefore, we can pay for the split and maintain the credit invariant during any split.
The only other modification to nodes that occurs during an add(x) operation happens after all splits, if any, are complete. This modification involves adding a new key to some node, u'. If, prior to this, u' had 2B-1 children, then it now has 2B children and 2B-1 keys, and must therefore receive three credits. These are the only credits given out by the add(x) method.
Next, consider the merges and borrows performed during a remove(x) operation. Each merge combines an underfull node with B-2 keys and a sibling with B-1 keys; each of these two nodes stores one credit (the underfull node had B-1 keys before losing one), and these two credits pay for the merge. After any merges are performed, at most one borrow operation occurs, after which no further merges or borrows occur. This borrow operation occurs when we remove a key from a node, w, that has B-1 keys. The node w therefore stores one credit, and this credit goes towards the cost of the borrow. This single credit is not enough to pay for the borrow, so we create one credit to complete the payment.
At this point, we have created one credit and we still need to show that the credit invariant can be maintained. In the worst case, w's sibling, v, has exactly B keys before the borrow, so that afterwards both v and w have B-1 keys. This means that v and w each need to be storing a credit when the operation is complete. Therefore, in this case, we create an additional two credits to give to v and w. Since a borrow happens at most once during a remove(x) operation, this means that we create at most three credits, as required.
If the remove(x) operation does not include a borrow operation, this is because it finishes by removing a key from some node that, prior to the operation, had B or more keys. In the worst case, this node had exactly B keys, so that it now has B-1 keys and must be given one credit, which we create.
In either case, whether or not the removal finishes with a borrow operation, at most three credits need to be created during a call to remove(x) to maintain the credit invariant and pay for all borrows and merges that occur. This completes the proof of the lemma.
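In total, then, each add(x) or remove(x) operation creates at most three credits and every split, merge, or borrow consumes two, so over any sequence of m operations,
\[
  \#\{\text{splits, merges, and borrows}\} \;\le\; \tfrac{1}{2}\,(\text{credits created}) \;\le\; \tfrac{3m}{2},
\]
which is exactly the bound claimed in the lemma.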
The purpose of the lemma is to show that, in the word-RAM model, the cost of splits, merges, and borrows during a sequence of m add(x) and remove(x) operations is only O(Bm). That is, the amortized cost per operation is only O(B), so the amortized cost of add(x) and remove(x) in the word-RAM model is $O(B + \log n)$. This is summarized by the following pair of theorems:

Theorem (External Memory B-Trees): In the external memory model, a B-tree supports the operations add(x), remove(x), and find(x) in $O(\log_B n)$ time per operation.

Theorem (Word-RAM B-Trees): In the word-RAM model, and ignoring the cost of splits, merges, and borrows, a B-tree supports the operations add(x), remove(x), and find(x) in $O(\log n)$ time per operation. Furthermore, beginning with an empty B-tree, any sequence of m add(x) and remove(x) operations results in a total of O(Bm) time spent performing splits, merges, and borrows.