2.6 $ \mathtt{RootishArrayStack}$: A Space-Efficient Array Stack

One of the drawbacks of all previous data structures in this chapter is that, because they store their data in one or two arrays, and they avoid resizing these arrays too often, the arrays are frequently not very full. For example, immediately after a $ \mathtt{resize()}$ operation on an $ \mathtt{ArrayStack}$, the backing array $ \mathtt{a}$ is only half full. Even worse, there are times when only $ 1/3$ of $ \mathtt{a}$ contains data.

In this section, we discuss a data structure, the $ \mathtt{RootishArrayStack}$, that addresses the problem of wasted space. The $ \mathtt{RootishArrayStack}$ stores $ \mathtt{n}$ elements using $ O(\sqrt{\ensuremath{\mathtt{n}}})$ arrays. In these arrays, at most $ O(\sqrt{\ensuremath{\mathtt{n}}})$ array locations are unused at any time. All remaining array locations are used to store data. Therefore, these data structures waste at most $ O(\sqrt{\ensuremath{\mathtt{n}}})$ space when storing $ \mathtt{n}$ elements.

A $ \mathtt{RootishArrayStack}$ stores its elements in a list of $ \mathtt{r}$ arrays called blocks that are numbered $ 0,1,\ldots,\ensuremath{\mathtt{r}}-1$. See Figure 2.5. Block $ b$ contains $ b+1$ elements. Therefore, all $ \mathtt{r}$ blocks contain a total of

$\displaystyle 1+ 2+ 3+\cdots +\ensuremath{\mathtt{r}} = \ensuremath{\mathtt{r}}(\ensuremath{\mathtt{r}}+1)/2
$

elements. The above formula (allegedly discovered by the mathematician Gauß at the age of 9) can be obtained as shown in Figure 2.6.

Figure 2.5: A sequence of $ \mathtt{add(i,x)}$ and $ \mathtt{remove(i)}$ operations on a $ \mathtt{RootishArrayStack}$. Arrows denote elements being copied.
\includegraphics{figs/rootisharraystack}

  ArrayStack<T*> blocks;  // blocks.get(b) points to an array of length b+1
  int n;                  // number of elements stored in the list

Figure 2.6: The number of white squares is $ 1+2+3+\cdots +\ensuremath {\mathtt {r}}$. The number of shaded squares is the same. Together the white and shaded squares make a rectangle consisting of $ \ensuremath {\mathtt {r}}(\ensuremath {\mathtt {r}}+1)$ squares.
\includegraphics{figs/gauss}

The elements of the list are laid out in the blocks as we might expect. The list element with index 0 is stored in block 0, the elements with list indices 1 and 2 are stored in block 1, the elements with list indices 3, 4, and 5 are stored in block 2, and so on. The main problem we have to address is that of determining, given an index $ \ensuremath{\mathtt{i}}$, which block contains $ \mathtt{i}$ as well as the index corresponding to $ \mathtt{i}$ within that block.

Determining the index of $ \mathtt{i}$ within its block turns out to be easy. If index $ \mathtt{i}$ is in block $ \mathtt{b}$, then the number of elements in blocks $ 0,\ldots,\ensuremath{\mathtt{b}}-1$ is $ \ensuremath{\mathtt{b}}(\ensuremath{\mathtt{b}}+1)/2$. Therefore, $ \mathtt{i}$ is stored at location

$\displaystyle \ensuremath{\mathtt{j}} = \ensuremath{\mathtt{i}} - \ensuremath{\mathtt{b}}(\ensuremath{\mathtt{b}}+1)/2
$

within block $ \mathtt{b}$. Somewhat more challenging is the problem of determining the value of $ \mathtt{b}$. The number of elements that have indices less than or equal to $ \mathtt{i}$ is $ \ensuremath{\mathtt{i}}+1$. On the other hand, the number of elements in blocks $ 0,\ldots,\ensuremath{\mathtt{b}}$ is $ (\ensuremath{\mathtt{b}}+1)(\ensuremath{\mathtt{b}}+2)/2$. Therefore, $ \mathtt{b}$ is the smallest integer such that

$\displaystyle (\ensuremath{\mathtt{b}}+1)(\ensuremath{\mathtt{b}}+2)/2 \ge \ensuremath{\mathtt{i}}+1 \enspace .
$

We can rewrite this inequality as

$\displaystyle \ensuremath{\mathtt{b}}^2 + 3\ensuremath{\mathtt{b}} - 2\ensuremath{\mathtt{i}} \ge 0 \enspace .
$

The corresponding quadratic equation $ \ensuremath{\mathtt{b}}^2 + 3\ensuremath{\mathtt{b}} - 2\ensuremath{\mathtt{i}} = 0$ has two solutions: $ \ensuremath{\mathtt{b}}=(-3 + \sqrt{9+8\ensuremath{\mathtt{i}}}) / 2$ and $ \ensuremath{\mathtt{b}}=(-3 - \sqrt{9+8\ensuremath{\mathtt{i}}}) / 2$. The second solution makes no sense in our application since it always gives a negative value. Therefore, we obtain the solution $ \ensuremath{\mathtt{b}} = (-3 +
\sqrt{9+8i}) / 2$. In general, this solution is not an integer, but going back to our inequality, we want the smallest integer $ \ensuremath{\mathtt{b}}$ such that $ \ensuremath{\mathtt{b}} \ge (-3 + \sqrt{9+8i}) / 2$. This is simply

$\displaystyle \ensuremath{\mathtt{b}} = \left\lceil(-3 + \sqrt{9+8i}) / 2\right\rceil \enspace .
$
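As a quick sanity check (a worked instance added for illustration, not from the original), take $ \ensuremath{\mathtt{i}}=4$:

$\displaystyle \ensuremath{\mathtt{b}} = \left\lceil(-3 + \sqrt{9+8\cdot 4}) / 2\right\rceil = \left\lceil(-3+\sqrt{41})/2\right\rceil = \lceil 1.701\ldots\rceil = 2 \enspace ,
$

and then $ \ensuremath{\mathtt{j}} = 4 - 2\cdot 3/2 = 1$, so the element with list index 4 is stored at location 1 of block 2. This agrees with the layout described above, since block 2 holds the elements with list indices 3, 4, and 5.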

  int i2b(int i) {
      // the smallest b such that (b+1)(b+2)/2 >= i+1
      double db = (-3.0 + sqrt(9 + 8*i)) / 2.0;
      int b = (int)ceil(db);
      return b;
  }

With this out of the way, the $ \mathtt{get(i)}$ and $ \mathtt{set(i,x)}$ methods are straightforward. We first compute the appropriate block $ \mathtt{b}$ and the appropriate index $ \mathtt{j}$ within the block and then perform the appropriate operation:

  T get(int i) {
      int b = i2b(i);
      int j = i - b*(b+1)/2;
      return blocks.get(b)[j];
  }
  T set(int i, T x) {
      int b = i2b(i);
      int j = i - b*(b+1)/2;
      T y = blocks.get(b)[j];
      blocks.get(b)[j] = x;
      return y;
  }

If we use any of the data structures in this chapter for representing the $ \mathtt{blocks}$ list, then $ \mathtt{get(i)}$ and $ \mathtt{set(i,x)}$ will each run in constant time.

The $ \mathtt{add(i,x)}$ method will, by now, look familiar. We first check if our data structure is full, by checking if the number of blocks $ \mathtt{r}$ is such that $ \ensuremath{\mathtt{r}}(\ensuremath{\mathtt{r}}+1)/2 = \ensuremath{\mathtt{n}}$ and, if so, we call $ \mathtt{grow()}$ to add another block. With this done, we shift elements with indices $ \ensuremath{\mathtt{i}},\ldots,\ensuremath{\mathtt{n}}-1$ to the right by one position to make room for the new element with index $ \mathtt{i}$:

  void add(int i, T x) {
      int r = blocks.size();
      if (r*(r+1)/2 < n + 1) grow();
      n++;
      for (int j = n-1; j > i; j--)
              set(j, get(j-1));
      set(i, x);
  }

The $ \mathtt{grow()}$ method does what we expect. It adds a new block:

  void grow() {
      blocks.add(blocks.size(), new T[blocks.size()+1]);
  }

Ignoring the cost of the $ \mathtt{grow()}$ operation, the cost of an $ \mathtt{add(i,x)}$ operation is dominated by the cost of shifting and is therefore $ O(1+\ensuremath{\mathtt{n}}-\ensuremath{\mathtt{i}})$, just like an $ \mathtt{ArrayStack}$.

The $ \mathtt{remove(i)}$ operation is similar to $ \mathtt{add(i,x)}$. It shifts the elements with indices $ \ensuremath{\mathtt{i}}+1,\ldots,\ensuremath{\mathtt{n}}-1$ left by one position and then, if there is more than one empty block, it calls the $ \mathtt{shrink()}$ method to remove all but one of the unused blocks:

  T remove(int i) {
      T x = get(i);
      for (int j = i; j < n-1; j++)
              set(j, get(j+1));
      n--;
      int r = blocks.size();
      if ((r-2)*(r-1)/2 >= n) shrink();
      return x;
  }
  void shrink() {
      int r = blocks.size();
      while (r > 0 && (r-2)*(r-1)/2 >= n) {
              delete [] blocks.remove(blocks.size()-1);
              r--;
      }
  }

Once again, ignoring the cost of the $ \mathtt{shrink()}$ operation, the cost of a $ \mathtt{remove(i)}$ operation is dominated by the cost of shifting and is therefore $ O(\ensuremath{\mathtt{n}}-\ensuremath{\mathtt{i}})$.

2.6.1 Analysis of Growing and Shrinking

The above analysis of $ \mathtt{add(i,x)}$ and $ \mathtt{remove(i)}$ does not account for the cost of $ \mathtt{grow()}$ and $ \mathtt{shrink()}$. Note that, unlike the $ \mathtt{ArrayStack.resize()}$ operation, $ \mathtt{grow()}$ and $ \mathtt{shrink()}$ do not do any copying of data. They only allocate or free an array of size $ \mathtt{r}$. In some environments, this takes only constant time, while in others, it may require $ \Theta(\ensuremath{\mathtt{r}})$ time.

We note that, immediately after a call to $ \mathtt{grow()}$ or $ \mathtt{shrink()}$, the situation is clear. The final block is completely empty and all other blocks are completely full. Another call to $ \mathtt{grow()}$ or $ \mathtt{shrink()}$ will not happen until at least $ \ensuremath{\mathtt{r}}-1$ elements have been added or removed. Therefore, even if $ \mathtt{grow()}$ and $ \mathtt{shrink()}$ take $ O(\ensuremath{\mathtt{r}})$ time, this cost can be amortized over at least $ \ensuremath{\mathtt{r}}-1$ $ \mathtt{add(i,x)}$ or $ \mathtt{remove(i)}$ operations, so that the amortized cost of $ \mathtt{grow()}$ and $ \mathtt{shrink()}$ is $ O(1)$ per operation.
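Concretely (a short calculation, under the assumption that a call to $ \mathtt{grow()}$ or $ \mathtt{shrink()}$ costs at most $ c\ensuremath{\mathtt{r}}$ for some constant $ c$): charging this cost to the at least $ \ensuremath{\mathtt{r}}-1$ preceding $ \mathtt{add(i,x)}$ or $ \mathtt{remove(i)}$ operations assigns each of them an extra cost of

$\displaystyle \frac{c\ensuremath{\mathtt{r}}}{\ensuremath{\mathtt{r}}-1} \le 2c = O(1)
$

for all $ \ensuremath{\mathtt{r}}\ge 2$, which is exactly the claimed amortized bound.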


2.6.2 Space Usage

Next, we analyze the amount of extra space used by a $ \mathtt{RootishArrayStack}$. In particular, we want to count any space used by a $ \mathtt{RootishArrayStack}$ that is not an array element currently used to hold a list element. We call all such space wasted space.

The $ \mathtt{remove(i)}$ operation ensures that a $ \mathtt{RootishArrayStack}$ never has more than 2 blocks that are not completely full. The number of blocks, $ \mathtt{r}$, used by a $ \mathtt{RootishArrayStack}$ that stores $ \mathtt{n}$ elements therefore satisfies

$\displaystyle (\ensuremath{\mathtt{r}}-2)(\ensuremath{\mathtt{r}}-1)/2 \le \ensuremath{\mathtt{n}} \enspace .
$

Again, applying the quadratic formula to this gives

$\displaystyle \ensuremath{\mathtt{r}} \le (3+\sqrt{1+8\ensuremath{\mathtt{n}}})/2 = O(\sqrt{\ensuremath{\mathtt{n}}}) \enspace .
$

The last two blocks have sizes $ \mathtt{r}$ and $ \mathtt{r-1}$, so the space wasted by these two blocks is at most $ 2\ensuremath{\mathtt{r}}-1 = O(\sqrt{\ensuremath{\mathtt{n}}})$. If we store the blocks in (for example) an $ \mathtt{ArrayList}$, then the amount of space wasted by the $ \mathtt{List}$ that stores those $ \mathtt{r}$ blocks is also $ O(\ensuremath{\mathtt{r}})=O(\sqrt{\ensuremath{\mathtt{n}}})$. The other space needed for storing $ \mathtt{n}$ and other accounting information is $ O(1)$. Therefore, the total amount of wasted space in a $ \mathtt{RootishArrayStack}$ is $ O(\sqrt{\ensuremath{\mathtt{n}}})$.

Next, we argue that this space usage is optimal for any data structure that starts out empty and can support the addition of one item at a time. More precisely, we will show that, at some point during the addition of $ \mathtt{n}$ items, the data structure is wasting at least $ \sqrt{\ensuremath{\mathtt{n}}}$ space (though it may be wasted for only a moment).

Suppose we start with an empty data structure and we add $ \mathtt{n}$ items one at a time. At the end of this process, all $ \mathtt{n}$ items are stored in the structure and they are distributed among a collection of $ \mathtt{r}$ memory blocks. If $ \ensuremath{\mathtt{r}}\ge \sqrt{\ensuremath{\mathtt{n}}}$, then the data structure must be using $ \mathtt{r}$ pointers (or references) to keep track of these $ \mathtt{r}$ blocks, and this is wasted space. On the other hand, if $ \ensuremath{\mathtt{r}} < \sqrt{\ensuremath{\mathtt{n}}}$ then, by the pigeonhole principle, some block must have size at least $ \ensuremath{\mathtt{n}}/\ensuremath{\mathtt{r}} > \sqrt{\ensuremath{\mathtt{n}}}$. Consider the moment at which this block was first allocated. Immediately after it was allocated, this block was empty, and was therefore wasting $ \sqrt{\ensuremath{\mathtt{n}}}$ space. Therefore, at some point in time during the insertion of $ \mathtt{n}$ elements, the data structure was wasting $ \sqrt{\ensuremath{\mathtt{n}}}$ space.

2.6.3 Summary

The following theorem summarizes the performance of the $ \mathtt{RootishArrayStack}$ data structure:

Theorem 2.5   A $ \mathtt{RootishArrayStack}$ implements the $ \mathtt{List}$ interface. Ignoring the cost of calls to $ \mathtt{grow()}$ and $ \mathtt{shrink()}$, a $ \mathtt{RootishArrayStack}$ supports the operations
- $ \mathtt{get(i)}$ and $ \mathtt{set(i,x)}$ in $ O(1)$ time per operation; and
- $ \mathtt{add(i,x)}$ and $ \mathtt{remove(i)}$ in $ O(1+\ensuremath{\mathtt{n}}-\ensuremath{\mathtt{i}})$ time per operation.

Furthermore, beginning with an empty $ \mathtt{RootishArrayStack}$, any sequence of $ m$ $ \mathtt{add(i,x)}$ and $ \mathtt{remove(i)}$ operations results in a total of $ O(m)$ time spent during all calls to $ \mathtt{grow()}$ and $ \mathtt{shrink()}$.

The space (measured in words)3 used by a $ \mathtt{RootishArrayStack}$ that stores $ \mathtt{n}$ elements is $ \ensuremath{\mathtt{n}} +O(\sqrt{\ensuremath{\mathtt{n}}})$.

2.6.4 Computing Square Roots

A reader who has had some exposure to models of computation may notice that the $ \mathtt{RootishArrayStack}$, as described above, does not fit into the usual word-RAM model of computation because it requires taking square roots. The square root operation is generally not considered a basic operation and is therefore not usually part of the word-RAM model.

In this section, we show that the square root operation can be implemented efficiently. In particular, we show that for any integer $ \ensuremath{\mathtt{x}}\in\{0,\ldots,\ensuremath{\mathtt{n}}\}$, $ \lfloor\sqrt{\ensuremath{\mathtt{x}}}\rfloor$ can be computed in constant time, after $ O(\sqrt{\ensuremath{\mathtt{n}}})$ preprocessing that creates two arrays of length $ O(\sqrt{\ensuremath{\mathtt{n}}})$. The following lemma shows that we can reduce the problem of computing the square root of $ \mathtt{x}$ to the square root of a related value $ \mathtt{x'}$.

Lemma 2.3   Let $ \ensuremath{\mathtt{x}}\ge 1$ and let $ \ensuremath{\mathtt{x'}}=\ensuremath{\mathtt{x}}-a$, where $ 0\le a\le\sqrt{\ensuremath{\mathtt{x}}}$. Then $ \sqrt{\ensuremath{\mathtt{x'}}} \ge \sqrt{\ensuremath{\mathtt{x}}}-1$.

Proof. It suffices to show that

$\displaystyle \sqrt{\ensuremath{\mathtt{x}}-\sqrt{\ensuremath{\mathtt{x}}}} \ge \sqrt{\ensuremath{\mathtt{x}}}-1 \enspace .
$

Square both sides of this inequality to get

$\displaystyle \ensuremath{\mathtt{x}}-\sqrt{\ensuremath{\mathtt{x}}} \ge \ensuremath{\mathtt{x}}-2\sqrt{\ensuremath{\mathtt{x}}}+1
$

and gather terms to get

$\displaystyle \sqrt{\ensuremath{\mathtt{x}}} \ge 1
$

which is clearly true for any $ \ensuremath{\mathtt{x}}\ge 1$. $ \qedsymbol$

Start by restricting the problem a little, and assume that $ 2^{\ensuremath{\mathtt{r}}} \le
\ensuremath{\mathtt{x}} < 2^{\ensuremath{\mathtt{r}}+1}$, so that $ \lfloor\log \ensuremath{\mathtt{x}}\rfloor=\ensuremath{\mathtt{r}}$, i.e., $ \mathtt{x}$ is an integer having $ \ensuremath{\mathtt{r}}+1$ bits in its binary representation. We can take $ \ensuremath{\mathtt{x'}}=\ensuremath{\mathtt{x}} - (\ensuremath{\mathtt{x}}\bmod 2^{\lfloor r/2\rfloor})$. Now, $ \mathtt{x'}$ satisfies the conditions of Lemma 2.3, so $ \sqrt{\ensuremath{\mathtt{x}}}-\sqrt{\ensuremath{\mathtt{x'}}} \le 1$. Furthermore, $ \mathtt{x'}$ has all of its lower-order $ \lfloor \ensuremath{\mathtt{r}}/2\rfloor$ bits equal to 0, so there are only

$\displaystyle 2^{\ensuremath{\mathtt{r}}+1-\lfloor \ensuremath{\mathtt{r}}/2\rfloor} \le 4\cdot2^{\ensuremath{\mathtt{r}}/2} \le 4\sqrt{\ensuremath{\mathtt{x}}}
$

possible values of $ \mathtt{x'}$. This means that we can use an array, $ \mathtt{sqrttab}$, that stores the value of $ \lfloor\sqrt{\ensuremath{\mathtt{x'}}}\rfloor$ for each possible value of $ \mathtt{x'}$. A little more precisely, we have

$\displaystyle \ensuremath{\mathtt{sqrttab}}[i]
= \left\lfloor
\sqrt{i 2^{\lfloor \ensuremath{\mathtt{r}}/2\rfloor}}
\right\rfloor \enspace .
$

In this way, $ \ensuremath{\mathtt{sqrttab}}[i]$ is within 2 of $ \sqrt{\ensuremath{\mathtt{x}}}$ for all $ \ensuremath{\mathtt{x}}\in\{i2^{\lfloor r/2\rfloor},\ldots,(i+1)2^{\lfloor r/2\rfloor}-1\}$. Stated another way, the array entry $ \ensuremath{\mathtt{s}}=\ensuremath{\mathtt{sqrttab}}[\ensuremath{\mathtt{x}}\ensuremath{\mathtt{>>}}\lfloor \ensuremath{\mathtt{r}}/2\rfloor]$ is either equal to $ \lfloor\sqrt{\ensuremath{\mathtt{x}}}\rfloor$, $ \lfloor\sqrt{\ensuremath{\mathtt{x}}}\rfloor-1$, or $ \lfloor\sqrt{\ensuremath{\mathtt{x}}}\rfloor-2$. From $ \mathtt{s}$ we can determine the value of $ \lfloor\sqrt{\ensuremath{\mathtt{x}}}\rfloor$ by incrementing $ \mathtt{s}$ until $ (\ensuremath{\mathtt{s}}+1)^2 > \ensuremath{\mathtt{x}}$.
  int sqrt(int x, int r) {
    int s = sqrtab[x>>r/2];
    while ((s+1)*(s+1) <= x) s++; // executes at most twice
    return s;
  }

Now, this only works for $ \ensuremath{\mathtt{x}}\in\{2^{\ensuremath{\mathtt{r}}},\ldots,2^{\ensuremath{\mathtt{r}}+1}-1\}$ and $ \mathtt{sqrttab}$ is a special table that only works for a particular value of $ \ensuremath{\mathtt{r}}=\lfloor\log \ensuremath{\mathtt{x}}\rfloor$. To overcome this, we could compute $ \lfloor\log \ensuremath{\mathtt{n}}\rfloor$ different $ \mathtt{sqrttab}$ arrays, one for each possible value of $ \lfloor\log \ensuremath{\mathtt{x}}\rfloor$. The sizes of these tables form an exponential sequence whose largest value is at most $ 4\sqrt{\ensuremath{\mathtt{n}}}$, so the total size of all tables is $ O(\sqrt{\ensuremath{\mathtt{n}}})$.

However, it turns out that more than one $ \mathtt{sqrttab}$ array is unnecessary; we only need one $ \mathtt{sqrttab}$ array, for the value $ \ensuremath{\mathtt{r}}=\lfloor\log \ensuremath{\mathtt{n}}\rfloor$. Any value $ \mathtt{x}$ with $ \lfloor\log\ensuremath{\mathtt{x}}\rfloor=\ensuremath{\mathtt{r'}}<\ensuremath{\mathtt{r}}$ can be upgraded by multiplying $ \mathtt{x}$ by $ 2^{\ensuremath{\mathtt{r}}-\ensuremath{\mathtt{r'}}}$ and using the equation

$\displaystyle \sqrt{2^{\ensuremath{\mathtt{r}}-\ensuremath{\mathtt{r'}}}\ensuremath{\mathtt{x}}} = 2^{(\ensuremath{\mathtt{r}}-\ensuremath{\mathtt{r'}})/2}\sqrt{\ensuremath{\mathtt{x}}} \enspace .
$

The quantity $ 2^{\ensuremath{\mathtt{r}}-\ensuremath{\mathtt{r}}'}x$ is in the range $ \{2^{\ensuremath{\mathtt{r}}},\ldots,2^{\ensuremath{\mathtt{r}}+1}-1\}$ so we can look up its square root in $ \mathtt{sqrttab}$. The following code implements this idea to compute $ \lfloor\sqrt{\ensuremath{\mathtt{x}}}\rfloor$ for all non-negative integers $ \mathtt{x}$ in the range $ \{0,\ldots,2^{30}-1\}$ using an array, $ \mathtt{sqrttab}$, of size $ 2^{16}$.
  int sqrt(int x) {
    int rp = log(x);
    int upgrade = ((r-rp)/2) * 2;
    int xp = x << upgrade;  // xp has r or r-1 bits
    int s = sqrtab[xp>>(r/2)] >> (upgrade/2);
    while ((s+1)*(s+1) <= x) s++;  // executes at most twice
    return s;
  }

Something we have taken for granted thus far is the question of how to compute $ \ensuremath{\mathtt{r}}'=\lfloor\log\ensuremath{\mathtt{x}}\rfloor$. Again, this is a problem that can be solved with an array, $ \mathtt{logtab}$, of size $ 2^{\ensuremath{\mathtt{r}}/2}$. In this case, the code is particularly simple, since $ \lfloor\log \ensuremath{\mathtt{x}}\rfloor$ is just the index of the most significant 1 bit in the binary representation of $ \mathtt{x}$. This means that, for $ \ensuremath{\mathtt{x}}\ge 2^{\ensuremath{\mathtt{r}}/2}$, we can right-shift the bits of $ \mathtt{x}$ by $ \ensuremath{\mathtt{r}}/2$ positions before using it as an index into $ \mathtt{logtab}$. The following code does this using an array $ \mathtt{logtab}$ of size $ 2^{16}$ to compute $ \lfloor\log \ensuremath{\mathtt{x}}\rfloor$ for all $ \mathtt{x}$ in the range $ \{1,\ldots,2^{32}-1\}$.

  int log(int x) {
    if (x >= halfint)            // halfint is the constant 2^16
      return 16 + logtab[x>>16];
    return logtab[x];
  }

Finally, for completeness, we include the following code that initializes $ \mathtt{logtab}$ and $ \mathtt{sqrttab}$:

  void inittabs() {
    sqrtab = new int[1<<(r/2)];
    logtab = new int[1<<(r/2)];
    for (int d = 0; d < r/2; d++)
      for (int k = 0; k < 1<<d; k++)
        logtab[(1<<d)+k] = d;    // indices 2^d,...,2^(d+1)-1 have log d
    int s = 1<<(r/4);                    // sqrt(2^(r/2))
    for (int i = 0; i < 1<<(r/2); i++) {
      if ((s+1)*(s+1) <= i << (r/2)) s++; // sqrt increases
      sqrtab[i] = s;
    }
  }

To summarize, the computations done by the $ \mathtt{i2b(i)}$ method can be implemented in constant time on the word-RAM using $ O(\sqrt{n})$ extra memory to store the $ \mathtt{sqrttab}$ and $ \mathtt{logtab}$ arrays. These arrays can be rebuilt when $ \mathtt{n}$ increases or decreases by a factor of 2, and the cost of this rebuilding can be amortized over the number of $ \mathtt{add(i,x)}$ and $ \mathtt{remove(i)}$ operations that caused the change in $ \mathtt{n}$ in the same way that the cost of $ \mathtt{resize()}$ is analyzed in the $ \mathtt{ArrayStack}$ implementation.



Footnotes

... words)3
Recall the discussion of how memory is measured.