2.2 FastArrayStack: An Optimized ArrayStack

Much of the work done by an ArrayStack involves shifting (by $ \mathtt{add(i,x)}$ and $ \mathtt{remove(i)}$) and copying (by $ \mathtt{resize()}$) of data. In the implementations shown above, this was done using $ \mathtt{for}$ loops. It turns out that many programming environments have specific functions that are very efficient at copying and moving blocks of data. In C, there are the $ \mathtt{memcpy(d,s,n)}$ and $ \mathtt{memmove(d,s,n)}$ functions. In C++ there is the $ \mathtt{std::copy(a0,a1,b)}$ algorithm. In Java there is the $ \mathtt{System.arraycopy(s,i,d,j,n)}$ method.
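Before looking at the full implementation, it may help to see $ \mathtt{System.arraycopy(s,i,d,j,n)}$ in isolation. The sketch below (the class and helper names are illustrative, not from the text) shows how a single call shifts a block of elements within the same array to open a gap, which is exactly the shifting step that $ \mathtt{add(i,x)}$ performs; like C's $ \mathtt{memmove(d,s,n)}$, $ \mathtt{System.arraycopy(s,i,d,j,n)}$ handles overlapping source and destination ranges correctly.

```java
import java.util.Arrays;

public class ArrayCopyDemo {
    // Open a gap at index i in a[0..n-1] by shifting a[i..n-1] one
    // slot to the right; this is the shifting step of add(i,x).
    // (Illustrative helper, not part of the FastArrayStack code.)
    static void openGap(int[] a, int n, int i) {
        System.arraycopy(a, i, a, i + 1, n - i);
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 0}; // last slot is spare capacity
        openGap(a, 5, 2);             // shift a[2..4] one slot right
        a[2] = 9;                     // write the new element into the gap
        System.out.println(Arrays.toString(a)); // [1, 2, 9, 3, 4, 5]
    }
}
```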

    void resize() {
        // Allocate a new backing array and copy the n elements into it.
        T[] b = newArray(Math.max(2*n,1));
        System.arraycopy(a, 0, b, 0, n);
        a = b;
    }
    void add(int i, T x) {
        if (i < 0 || i > n) throw new IndexOutOfBoundsException();
        if (n + 1 > a.length) resize();
        // Shift a[i..n-1] right by one to make room for x.
        System.arraycopy(a, i, a, i+1, n-i);
        a[i] = x;
        n++;
    }
    T remove(int i) {
        if (i < 0 || i > n - 1) throw new IndexOutOfBoundsException();
        T x = a[i];
        // Shift a[i+1..n-1] left by one to close the gap.
        System.arraycopy(a, i+1, a, i, n-i-1);
        n--;
        if (a.length >= 3*n) resize();
        return x;
    }

These functions are usually highly optimized and may even use special machine instructions that can do this copying much faster than is possible with a $ \mathtt{for}$ loop. Although using these functions does not asymptotically decrease the running times, it can still be a worthwhile optimization. In the Java implementations here, the use of $ \mathtt{System.arraycopy(s,i,d,j,n)}$ resulted in speedups by a factor of 2 to 3, depending on the types of operations performed. Your mileage may vary.
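One way to check this claim in your own environment is a rough micro-benchmark like the following sketch (the class name, sizes, and repetition count are illustrative assumptions; absolute timings will vary by machine and JVM, and a serious measurement would use a harness such as JMH):

```java
public class CopyBenchmark {
    static final int N = 1_000_000; // elements per copy
    static final int REPS = 200;    // number of copies timed

    public static void main(String[] args) {
        int[] src = new int[N], dst = new int[N];
        for (int i = 0; i < N; i++) src[i] = i;

        // Time a manual element-by-element copy.
        long t0 = System.nanoTime();
        for (int r = 0; r < REPS; r++)
            for (int i = 0; i < N; i++) dst[i] = src[i];
        long loopTime = System.nanoTime() - t0;

        // Time System.arraycopy on the same data.
        t0 = System.nanoTime();
        for (int r = 0; r < REPS; r++)
            System.arraycopy(src, 0, dst, 0, N);
        long copyTime = System.nanoTime() - t0;

        System.out.printf("for-loop: %d ms, arraycopy: %d ms%n",
                loopTime / 1_000_000, copyTime / 1_000_000);
    }
}
```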

opendatastructures.org