# 2.1 ArrayStack: Fast Stack Operations Using an Array

An ArrayStack implements the list interface using an array `a`, called the *backing array*. The list element with index `i` is stored in `a[i]`. Most of the time, `a` is larger than strictly necessary, so an integer `n` is used to keep track of the number of elements actually stored in `a`. In this way, the list elements are stored in `a[0]`,...,`a[n-1]` and, at all times, $\mathtt{a.length} \ge n$.

    T[] a;
    int n;
    int size() {
        return n;
    }
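
The snippets in this section live inside a larger class. As a rough sketch of the surrounding scaffolding (our assumption; the book's actual source delegates allocation to a small factory object), the fields and the `newArray(...)` helper used below might look like this:

    import java.lang.reflect.Array;

    public class ArrayStack<T> {
        Class<T> t;  // element type, kept so newArray can allocate a T[]
        T[] a;       // backing array
        int n;       // number of elements stored in a[0],...,a[n-1]

        ArrayStack(Class<T> t0) {
            t = t0;
            n = 0;
            a = newArray(1);
        }

        // Java cannot allocate a generic T[] directly, so allocation
        // goes through reflection.
        @SuppressWarnings("unchecked")
        T[] newArray(int size) {
            return (T[]) Array.newInstance(t, size);
        }
    }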


## 2.1.1 The Basics

Accessing and modifying the elements of an ArrayStack using `get(i)` and `set(i,x)` is trivial. After performing any necessary bounds-checking, we simply return or set, respectively, `a[i]`.

    T get(int i) {
        if (i < 0 || i > n - 1) throw new IndexOutOfBoundsException();
        return a[i];
    }
    T set(int i, T x) {
        if (i < 0 || i > n - 1) throw new IndexOutOfBoundsException();
        T y = a[i];
        a[i] = x;
        return y;
    }
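
A quick usage sketch (assuming the hypothetical constructor above); note that `set(i,x)` returns the value it displaces:

    ArrayStack<String> s = new ArrayStack<>(String.class);
    s.add(0, "a");               // list is [a]
    s.add(1, "b");               // list is [a, b]
    String old = s.set(0, "c");  // old == "a"; list is now [c, b]
    String x = s.get(1);         // x == "b"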


The operations of adding and removing elements from an ArrayStack are illustrated in Figure 2.1. To implement the `add(i,x)` operation, we first check if `a` is already full. If so, we call the method `resize()` to increase the size of `a`. How `resize()` is implemented will be discussed later. For now, it is sufficient to know that, after a call to `resize()`, we can be sure that $\mathtt{a.length} > n$. With this out of the way, we now shift the elements `a[i]`,...,`a[n-1]` right by one position to make room for `x`, set `a[i]` equal to `x`, and increment `n`.

    void add(int i, T x) {
        if (i < 0 || i > n) throw new IndexOutOfBoundsException();
        if (n + 1 > a.length) resize();
        for (int j = n; j > i; j--)
            a[j] = a[j-1];
        a[i] = x;
        n++;
    }

If we ignore the cost of the potential call to `resize()`, the cost of the `add(i,x)` operation is proportional to the number of elements we have to shift to make room for `x`. Therefore the cost of this operation (ignoring the cost of resizing `a`) is $O(n-i)$: for example, `add(0,x)` shifts all $n$ elements one position to the right, while `add(n,x)` shifts none.

Implementing the `remove(i)` operation is similar. We shift the elements `a[i+1]`,...,`a[n-1]` left by one position (overwriting `a[i]`) and decrease the value of `n`. After doing this, we check if `n` is getting much smaller than `a.length` by checking if $\mathtt{a.length} \ge 3n$. If so, we call `resize()` to reduce the size of `a`.

    T remove(int i) {
        if (i < 0 || i > n - 1) throw new IndexOutOfBoundsException();
        T x = a[i];
        for (int j = i; j < n-1; j++)
            a[j] = a[j+1];
        n--;
        if (a.length >= 3*n) resize();
        return x;
    }

If we ignore the cost of the `resize()` method, the cost of a `remove(i)` operation is proportional to the number of elements we shift, which is $O(n-i)$.

## 2.1.2 Growing and Shrinking

The `resize()` method is fairly straightforward; it allocates a new array `b` whose size is $2n$, copies the $n$ elements of `a` into the first $n$ positions in `b`, and then sets `a` to `b`. Thus, after a call to `resize()`, $\mathtt{a.length} = 2n$.

    void resize() {
        // Math.max(n*2, 1) handles the special case n == 0, so the new
        // array always has positive length.
        T[] b = newArray(Math.max(n*2, 1));
        for (int i = 0; i < n; i++) {
            b[i] = a[i];
        }
        a = b;
    }


Analyzing the actual cost of the `resize()` operation is easy. It allocates an array `b` of size $2n$ and copies the $n$ elements of `a` into `b`. This takes $O(n)$ time.

The running time analysis from the previous section ignored the cost of calls to `resize()`. In this section we analyze this cost using a technique known as *amortized analysis*. This technique does not try to determine the cost of resizing during each individual `add(i,x)` and `remove(i)` operation. Instead, it considers the cost of all calls to `resize()` during a sequence of $m$ calls to `add(i,x)` or `remove(i)`. In particular, we will show:

Lemma 2.1   If an empty ArrayStack is created and any sequence of $m \ge 1$ calls to `add(i,x)` and `remove(i)` are performed, then the total time spent during all calls to `resize()` is $O(m)$.

Proof. We will show that any time `resize()` is called, the number of calls to `add` or `remove` since the last call to `resize()` is at least $n/2 - 1$. Therefore, if $n_i$ denotes the value of $n$ during the $i$th call to `resize()` and $r$ denotes the number of calls to `resize()`, then the total number of calls to `add(i,x)` or `remove(i)` is at least

$$\sum_{i=1}^{r} (n_i/2 - 1) \le m ,$$

which is equivalent to

$$\sum_{i=1}^{r} n_i \le 2m + 2r .$$

On the other hand, the total time spent during all calls to `resize()` is

$$\sum_{i=1}^{r} O(n_i) = O(m + r) = O(m) ,$$

which will prove the lemma, since $r$ is not more than $m$. All that remains is to show that the number of calls to `add(i,x)` or `remove(i)` between the $(i-1)$th and the $i$th call to `resize()` is at least $n_i/2 - 1$.

There are two cases to consider. In the first case, `resize()` is being called by `add(i,x)` because the backing array `a` is full, i.e., $\mathtt{a.length} = n = n_i$. Consider the previous call to `resize()`: after this previous call, the size of `a` was `a.length`, but the number of elements stored in `a` was at most $\mathtt{a.length}/2 = n_i/2$. But now the number of elements stored in `a` is $n_i = \mathtt{a.length}$, so there must have been at least $n_i/2$ calls to `add(i,x)` since the previous call to `resize()`.

The second case to consider is when `resize()` is being called by `remove(i)` because $\mathtt{a.length} \ge 3n = 3n_i$. Again, after the previous call to `resize()` the number of elements stored in `a` was at least $\mathtt{a.length}/2 - 1$.¹ Now there are $n_i \le \mathtt{a.length}/3$ elements stored in `a`. Therefore, the number of `remove(i)` operations since the last call to `resize()` is at least

$$\mathtt{a.length}/2 - 1 - \mathtt{a.length}/3 = \mathtt{a.length}/6 - 1 \ge (3n_i)/6 - 1 = n_i/2 - 1 .$$
In either case, the number of calls to `add(i,x)` or `remove(i)` that occur between the $(i-1)$th call to `resize()` and the $i$th call to `resize()` is at least $n_i/2 - 1$, as required to complete the proof.
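
Lemma 2.1 can also be checked empirically. The following self-contained sketch (our own instrumentation, not code from the book) counts the elements copied by `resize()` across $m$ `add` operations; the copies-per-operation ratio stays below 2, consistent with the $O(m)$ bound:

    // Instrumented version of the growth policy from this section.
    // The class name and the `copied` counter are illustrative additions.
    public class ResizeCost {
        static Object[] a = new Object[1];
        static int n = 0;
        static long copied = 0;  // total elements moved by all calls to resize()

        static void resize() {
            Object[] b = new Object[Math.max(n * 2, 1)];
            for (int i = 0; i < n; i++) b[i] = a[i];
            copied += n;  // each call does O(n) copying work
            a = b;
        }

        static void add(Object x) {  // add(n, x): append, no shifting
            if (n + 1 > a.length) resize();
            a[n++] = x;
        }

        public static void main(String[] args) {
            int m = 1_000_000;
            for (int i = 0; i < m; i++) add(i);
            // Lemma 2.1: total resize() time is O(m); the ratio here is about 1.05.
            System.out.println("copies per operation: " + (double) copied / m);
        }
    }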

## 2.1.3 Summary

The following theorem summarizes the performance of an ArrayStack:

Theorem 2.1   An ArrayStack implements the List interface. Ignoring the cost of calls to `resize()`, an ArrayStack supports the operations

• `get(i)` and `set(i,x)` in $O(1)$ time per operation; and
• `add(i,x)` and `remove(i)` in $O(1 + n - i)$ time per operation.

Furthermore, beginning with an empty ArrayStack, performing any sequence of $m$ `add(i,x)` and `remove(i)` operations results in a total of $O(m)$ time spent during all calls to `resize()`.

The ArrayStack is an efficient way to implement a Stack. In particular, we can implement `push(x)` as `add(n,x)` and `pop()` as `remove(n-1)`, in which case these operations will run in $O(1)$ amortized time.
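
As a sketch of this reduction (the wrapper class is hypothetical and assumes the constructor sketched earlier), `push` and `pop` touch only the end of the list, so no elements are shifted:

    public class FastStack<T> {
        ArrayStack<T> s;

        FastStack(Class<T> t) {
            s = new ArrayStack<>(t);
        }

        void push(T x) {
            s.add(s.size(), x);            // add(n, x): shifts n - n = 0 elements
        }

        T pop() {
            return s.remove(s.size() - 1); // remove(n-1): shifts 0 elements
        }
    }

Each call therefore costs $O(1)$ plus the cost of any resizing, which Lemma 2.1 shows is $O(1)$ amortized per operation.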

#### Footnotes

1. The $-1$ in this formula accounts for the special case that occurs when $n = 0$ and $\mathtt{a.length} = 1$.