An ArrayStack implements the list interface using an array a, called the backing array. The list element with index i is stored in a[i]. Most of the time, a is larger than strictly necessary, so an integer n is used to keep track of the number of elements actually stored in a. In this way, the list elements are stored in a[0], ..., a[n − 1] and, at all times, a.length ≥ n.
array<T> a;
int n;
int size() {
  return n;
}
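The array<T> used here is the book's own fixed-length array class: it exposes a length field and supports assignment that takes over another array's storage. For readers following along outside the book's codebase, the following is a minimal stand-in sketch with just enough of that interface for the snippets in this section; the use of std::unique_ptr is an assumption of this sketch, not the book's implementation.

#include <memory>

// Minimal stand-in (an assumption, not the book's code) for array<T>:
// a fixed-length heap array with a public length field. Assignment
// transfers ownership, matching the way resize() below writes "a = b".
template <class T>
struct array {
  std::unique_ptr<T[]> data;
  int length;
  explicit array(int len) : data(new T[len]), length(len) {}
  array<T>& operator=(array<T>& b) {
    data = std::move(b.data);
    length = b.length;
    return *this;
  }
  T& operator[](int i) { return data[i]; }
};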
Accessing and modifying the elements of an ArrayStack using get(i) and set(i, x) is trivial. After performing any necessary bounds-checking, we simply return or set, respectively, a[i].
T get(int i) {
  return a[i];
}
T set(int i, T x) {
  T y = a[i];
  a[i] = x;
  return y;
}
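The snippets above omit the bounds-checking they mention. As an illustration only, here is one way get(i) and set(i, x) might check i before touching a[i]; the choice of throwing std::out_of_range is an assumption of this sketch, since the book's code leaves the checking policy unspecified.

#include <stdexcept>

// Hypothetical bounds-checked versions of get/set for a class with the
// members "array<T> a" and "int n" shown above (a sketch, not the book's code).
T get(int i) {
  if (i < 0 || i >= n) throw std::out_of_range("ArrayStack::get");
  return a[i];
}
T set(int i, T x) {
  if (i < 0 || i >= n) throw std::out_of_range("ArrayStack::set");
  T y = a[i];
  a[i] = x;
  return y;
}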
The operations of adding and removing elements from an ArrayStack are illustrated in Figure 2.1. To implement the add(i, x) operation, we first check if a is already full. If so, we call the method resize() to increase the size of a. How resize() is implemented will be discussed later. For now, it is sufficient to know that, after a call to resize(), we can be sure that a.length > n. With this out of the way, we now shift the elements a[i], ..., a[n − 1] right by one position to make room for x, set a[i] equal to x, and increment n.
void add(int i, T x) {
  if (n + 1 > a.length) resize();
  for (int j = n; j > i; j--)
    a[j] = a[j - 1];
  a[i] = x;
  n++;
}

If we ignore the cost of the potential call to resize(), then the cost of the add(i, x) operation is proportional to the number of elements we have to shift to make room for x. Therefore the cost of this operation (ignoring the cost of resizing a) is O(n − i + 1).
Implementing the remove(i) operation is similar. We shift the elements a[i + 1], ..., a[n − 1] left by one position (overwriting a[i]) and decrease the value of n. After doing this, we check if n is getting much smaller than a.length by checking if a.length ≥ 3n. If so, then we call resize() to reduce the size of a.
T remove(int i) {
  T x = a[i];
  for (int j = i; j < n - 1; j++)
    a[j] = a[j + 1];
  n--;
  if (a.length >= 3 * n) resize();
  return x;
}

If we ignore the cost of the resize() method, the cost of a remove(i) operation is proportional to the number of elements we shift, which is O(n − i).
The resize() method is fairly straightforward; it allocates a new array b whose size is 2n, copies the n elements of a into the first n positions in b, and then sets a to b. Thus, after a call to resize(), a.length = 2n (or 1, in the special case n = 0).
void resize() {
  array<T> b(max(2 * n, 1));
  for (int i = 0; i < n; i++)
    b[i] = a[i];
  a = b;
}
Analyzing the actual cost of the resize() operation is easy. It allocates an array b of size 2n and copies the n elements of a into b. This takes O(n) time.
The running time analysis from the previous section ignored the cost of calls to resize(). In this section we analyze this cost using a technique known as amortized analysis. This technique does not try to determine the cost of resizing during each individual add(i, x) and remove(i) operation. Instead, it considers the cost of all calls to resize() during a sequence of m calls to add(i, x) or remove(i). In particular, we will show that if an empty ArrayStack is created and any sequence of m ≥ 1 calls to add(i, x) and remove(i) is performed, then the total time spent during all calls to resize() is O(m). The key step is to show that whenever resize() is called, at least n/2 − 1 calls to add(i, x) or remove(i) must have occurred since the previous call to resize(); since each call to resize() takes O(n) time, the total cost of resizing can then be charged to the operations themselves.
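As a concrete illustration (a worked example assuming the backing array initially has length 1, which this excerpt does not specify): performing m = 8 calls to add(n, x) on an empty ArrayStack triggers resize() when n reaches 1, 2, and 4, so the copies performed by all calls to resize() total

1 + 2 + 4 = 7 ≤ m,

and the time spent resizing over the whole sequence is O(m).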
There are two cases to consider. In the first case, resize() is being called by add(i, x) because the backing array a is full, i.e., a.length = n. Consider the previous call to resize(): after this previous call, the size of a was a.length, but the number of elements stored in a was at most a.length/2. But now the number of elements stored in a is n = a.length, so there must have been at least n/2 calls to add(i, x) since the previous call to resize().
The second case occurs when resize() is being called by remove(i) because a.length ≥ 3n. Again, after the previous call to resize() the number of elements stored in a was at least a.length/2 − 1. Now there are n ≤ a.length/3 elements stored in a. Therefore, the number of remove(i) operations since the last call to resize() is at least

a.length/2 − 1 − a.length/3 = a.length/6 − 1 ≥ 3n/6 − 1 = n/2 − 1.

In either case, at least n/2 − 1 calls to add(i, x) or remove(i) occur between two consecutive calls to resize().
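To turn these two cases into the O(m) bound, one can account as follows (a sketch; the notation n_i for the value of n during the i-th call to resize() and r for the number of calls to resize() is introduced here, not above). Each call to resize() is preceded by at least n_i/2 − 1 calls to add(i, x) or remove(i), so

(n_1/2 − 1) + (n_2/2 − 1) + ⋯ + (n_r/2 − 1) ≤ m,

which rearranges to

n_1 + n_2 + ⋯ + n_r ≤ 2m + 2r.

Since the i-th call to resize() takes O(n_i) time and r ≤ m, the total time spent during all calls to resize() is O(n_1 + n_2 + ⋯ + n_r) = O(m + r) = O(m).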
The following theorem summarizes the performance of an ArrayStack: it implements the List interface and, ignoring the cost of calls to resize(), supports get(i) and set(i, x) in O(1) time per operation and add(i, x) and remove(i) in O(1 + n − i) time per operation; furthermore, beginning with an empty ArrayStack, performing any sequence of m add(i, x) and remove(i) operations results in a total of O(m) time spent during all calls to resize().
The ArrayStack is an efficient way to implement a Stack. In particular, we can implement push(x) as add(n, x) and pop() as remove(n − 1), in which case these operations will run in O(1) amortized time.
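For completeness, here is a self-contained sketch that puts the pieces of this section together and exposes push(x) and pop() on top of add(i, x) and remove(i). It is a sketch under the assumptions noted in the comments (std::unique_ptr storage, an initial capacity of 1), not the book's exact code.

#include <algorithm>
#include <cassert>
#include <memory>

template <class T>
class ArrayStack {
  std::unique_ptr<T[]> a;  // backing array (unique_ptr storage is an assumption of this sketch)
  int length;              // capacity of the backing array
  int n;                   // number of elements actually stored

  // Grow or shrink the backing array to size max(2n, 1) and copy the elements.
  void resize() {
    int newLength = std::max(2 * n, 1);
    std::unique_ptr<T[]> b(new T[newLength]);
    for (int i = 0; i < n; i++) b[i] = a[i];
    a = std::move(b);
    length = newLength;
  }

public:
  ArrayStack() : a(new T[1]), length(1), n(0) {}  // initial capacity 1 (assumption)

  int size() const { return n; }
  T get(int i) const { return a[i]; }
  T set(int i, T x) { T y = a[i]; a[i] = x; return y; }

  void add(int i, T x) {
    if (n + 1 > length) resize();
    for (int j = n; j > i; j--) a[j] = a[j - 1];  // shift a[i..n-1] right
    a[i] = x;
    n++;
  }

  T remove(int i) {
    T x = a[i];
    for (int j = i; j < n - 1; j++) a[j] = a[j + 1];  // shift a[i+1..n-1] left
    n--;
    if (length >= 3 * n) resize();
    return x;
  }

  // Stack interface: O(1) amortized per operation.
  void push(T x) { add(n, x); }
  T pop() { return remove(n - 1); }
};

int main() {
  ArrayStack<int> s;
  for (int i = 0; i < 10; i++) s.push(i);  // grows the backing array as needed
  assert(s.size() == 10 && s.get(0) == 0);
  assert(s.pop() == 9);                    // remove(n - 1): no shifting
  s.add(0, 42);                            // shifts every element: O(n) when i = 0
  assert(s.get(0) == 42 && s.size() == 10);
  return 0;
}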