
# 2.5 DualArrayDeque: Building a Deque from Two Stacks

Next, we present a data structure, the DualArrayDeque, that achieves the same performance bounds as an ArrayDeque by using two ArrayStacks. Although the asymptotic performance of the DualArrayDeque is no better than that of the ArrayDeque, it is still worth studying, since it offers a good example of how to make a sophisticated data structure by combining two simpler data structures.

A DualArrayDeque represents a list using two ArrayStacks. Recall that an ArrayStack is fast when the operations on it modify elements near the end. A DualArrayDeque places two ArrayStacks, called front and back, back-to-back so that operations are fast at either end.

    ArrayStack<T> front;
    ArrayStack<T> back;


A DualArrayDeque does not explicitly store the number, n, of elements it contains. It doesn't need to, since it contains n = front.size() + back.size() elements. Nevertheless, when analyzing the DualArrayDeque we will still use n to denote the number of elements it contains.

    int size() {
      return front.size() + back.size();
    }


The front stores the list elements whose indices are 0,…,front.size()−1, but stores them in reverse order. The back contains the list elements with indices in front.size(),…,size()−1, in the normal order. In this way, get(i) and set(i,x) translate into appropriate calls to get(i) or set(i,x) on either front or back, which take O(1) time per operation.

    T get(int i) {
      if (i < front.size()) {
        return front.get(front.size() - i - 1);
      } else {
        return back.get(i - front.size());
      }
    }
    T set(int i, T x) {
      if (i < front.size()) {
        return front.set(front.size() - i - 1, x);
      } else {
        return back.set(i - front.size(), x);
      }
    }


Note that if an index i < front.size(), then it corresponds to the element of front at position front.size()−i−1, since the elements of front are stored in reverse order.

Adding and removing elements from a DualArrayDeque is illustrated in Figure 2.4. The add(i,x) operation manipulates either front or back, as appropriate:

    void add(int i, T x) {
      if (i < front.size()) {
        front.add(front.size() - i, x);
      } else {
        back.add(i - front.size(), x);
      }
      balance();
    }


The add(i,x) method performs rebalancing of the two ArrayStacks front and back, by calling the balance() method. The implementation of balance() is described below, but for now it is sufficient to know that balance() ensures that, unless size() < 2, front.size() and back.size() do not differ by more than a factor of 3. In particular, 3·front.size() ≥ back.size() and 3·back.size() ≥ front.size().

Next we analyze the cost of add(i,x), ignoring the cost of calls to balance(). If i < front.size(), then add(i,x) gets implemented by the call to front.add(front.size()−i, x). Since front is an ArrayStack, the cost of this is

    O(front.size() − (front.size() − i) + 1) = O(i + 1).   (2.1)

On the other hand, if i ≥ front.size(), then add(i,x) gets implemented as back.add(i−front.size(), x). The cost of this is

    O(back.size() − (i − front.size()) + 1) = O(size() − i + 1).   (2.2)

Notice that the first case (2.1) occurs when i < size()/4. The second case (2.2) occurs when i ≥ 3·size()/4. When size()/4 ≤ i < 3·size()/4, we cannot be sure whether the operation affects front or back, but in either case, the operation takes O(size()) = O(min{i + 1, size() − i + 1}) time, since i ≥ size()/4 and size() − i > size()/4. Summarizing the situation, we have

    Running time of add(i,x):
      O(1 + i)             if i < size()/4
      O(size())            if size()/4 ≤ i < 3·size()/4
      O(1 + size() − i)    if i ≥ 3·size()/4

Thus, the running time of add(i,x), if we ignore the cost of the call to balance(), is O(1 + min{i, size() − i}).

The remove(i) operation and its analysis resemble the add(i,x) operation and analysis.

    T remove(int i) {
      T x;
      if (i < front.size()) {
        x = front.remove(front.size() - i - 1);
      } else {
        x = back.remove(i - front.size());
      }
      balance();
      return x;
    }


## 2.5.1 Balancing

Finally, we turn to the balance() operation performed by add(i,x) and remove(i). This operation ensures that neither front nor back becomes too big (or too small). It ensures that, unless there are fewer than two elements, each of front and back contains at least n/4 elements. If this is not the case, then it moves elements between them so that front and back contain exactly ⌊n/2⌋ elements and ⌈n/2⌉ elements, respectively.

    void balance() {
      if (3*front.size() < back.size()
          || 3*back.size() < front.size()) {
        int n = front.size() + back.size();
        int nf = n/2;
        array<T> af(max(2*nf, 1));
        for (int i = 0; i < nf; i++) {
          af[nf-i-1] = get(i);
        }
        int nb = n - nf;
        array<T> ab(max(2*nb, 1));
        for (int i = 0; i < nb; i++) {
          ab[i] = get(nf+i);
        }
        front.a = af;
        front.n = nf;
        back.a = ab;
        back.n = nb;
      }
    }


Here there is little to analyze. If the balance() operation does rebalancing, then it moves O(n) elements and this takes O(n) time. This is bad, since balance() is called with each call to add(i,x) and remove(i). However, the following lemma shows that, on average, balance() only spends a constant amount of time per operation.

Lemma 2.2   If an empty DualArrayDeque is created and any sequence of m ≥ 1 calls to add(i,x) and remove(i) are performed, then the total time spent during all calls to balance() is O(m).

Proof. We will show that, if balance() is forced to shift elements, then the number of add(i,x) and remove(i) operations since the last time any elements were shifted by balance() is at least n/2 − 1. As in the proof of Lemma 2.1, this is sufficient to prove that the total time spent by balance() is O(m).

We will perform our analysis using a technique known as the potential method. Define the potential, Φ, of the DualArrayDeque as the difference in size between front and back:

    Φ = |front.size() − back.size()| .

The interesting thing about this potential is that a call to add(i,x) or remove(i) that does not do any balancing can increase the potential by at most 1.

Observe that, immediately after a call to balance() that shifts elements, the potential, Φ₀, is at most 1, since

    Φ₀ = |⌊n/2⌋ − ⌈n/2⌉| ≤ 1 .

Consider the situation immediately before a call to balance() that shifts elements and suppose, without loss of generality, that balance() is shifting elements because 3·front.size() < back.size(). Notice that, in this case,

    n = front.size() + back.size()
      < back.size()/3 + back.size()
      = (4/3)·back.size() .

Furthermore, the potential at this point in time is

    Φ₁ = back.size() − front.size()
       > back.size() − back.size()/3
       = (2/3)·back.size()
       > (2/3)·(3/4)·n
       = n/2 .

Therefore, the number of calls to add(i,x) or remove(i) since the last time balance() shifted elements is at least Φ₁ − Φ₀ > n/2 − 1. This completes the proof.

## 2.5.2 Summary

The following theorem summarizes the properties of a DualArrayDeque:

Theorem 2.4   A DualArrayDeque implements the List interface. Ignoring the cost of calls to resize() and balance(), a DualArrayDeque supports the operations
• get(i) and set(i,x) in O(1) time per operation; and
• add(i,x) and remove(i) in O(1 + min{i, size() − i}) time per operation.
Furthermore, beginning with an empty DualArrayDeque, any sequence of m add(i,x) and remove(i) operations results in a total of O(m) time spent during all calls to resize() and balance().

opendatastructures.org