
# 5.3 Hash Codes

The hash tables discussed in the previous section are used to associate data with integer keys consisting of $w$ bits. In many cases, we have keys that are not integers. They may be strings, objects, arrays, or other compound structures. To use hash tables for these types of data, we must map these data types to $w$-bit hash codes. Hash code mappings should have the following properties:

1. If $x$ and $y$ are equal, then `x.hashCode()` and `y.hashCode()` are equal.

2. If $x$ and $y$ are not equal, then the probability that `x.hashCode()` equals `y.hashCode()` should be small (close to $1/2^w$).

The first property ensures that if we store $x$ in a hash table and later look up a value $y$ equal to $x$, then we will find $x$, as we should. The second property minimizes the loss from converting our objects to integers. It ensures that unequal objects usually have different hash codes and so are likely to be stored at different locations in our hash table.

## 5.3.1 Hash Codes for Primitive Data Types

Small primitive data types like `char`, `byte`, `int`, and `float` are usually easy to find hash codes for. These data types always have a binary representation and this binary representation usually consists of $w$ or fewer bits. In these cases, we just treat these bits as the representation of an integer in the range $\{0,\ldots,2^w-1\}$. If two values are different, they get different hash codes. If they are the same, they get the same hash code.
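For instance, in Java one can obtain such hash codes directly from the bit representations. The following is a minimal sketch; the class and method names are illustrative, not part of the text:

```java
// Sketch: primitive values hashed by treating their bit patterns as
// integers in {0,...,2^w - 1} (here w = 32). Class name is illustrative.
public class PrimitiveHashDemo {
    // a char is a 16-bit pattern; widening to int preserves distinctness
    static int hashCode(char c) { return c; }

    // a float's 32-bit IEEE 754 representation, reinterpreted as an int
    static int hashCode(float f) { return Float.floatToIntBits(f); }

    public static void main(String[] args) {
        System.out.println(hashCode('A'));    // prints 65
        System.out.println(hashCode(1.5f));
    }
}
```

Because the mapping is a bijection on the bit patterns, equal values always get equal hash codes and distinct values always get distinct ones, exactly as the two properties above require.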

A few primitive data types are made up of more than $w$ bits, usually $cw$ bits for some constant integer $c$. (Java's `long` and `double` types are examples of this with $c=2$.) These data types can be treated as compound objects made of $c$ parts, as described in the next section.
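As a sketch of this splitting (assuming $w=32$, so a Java `long` has $c=2$ parts; the helper is illustrative):

```java
// Sketch: a Java long (cw = 64 bits, w = 32) split into its c = 2
// w-bit parts, ready to be combined by the method of the next section.
public class LongParts {
    static int[] parts(long x) {
        return new int[] { (int) (x >>> 32), (int) x }; // upper, lower w bits
    }

    public static void main(String[] args) {
        int[] p = parts(0x0123456789abcdefL);
        System.out.printf("%08x %08x%n", p[0], p[1]); // prints 01234567 89abcdef
    }
}
```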

## 5.3.2 Hash Codes for Compound Objects

For a compound object, we want to create a hash code by combining the individual hash codes of the object's constituent parts. This is not as easy as it sounds. Although one can find many hacks for this (for example, combining the hash codes with bitwise exclusive-or operations), many of these hacks turn out to be easy to foil (see Exercises 5.7-5.9). However, if one is willing to do arithmetic with $2w$ bits of precision, then there are simple and robust methods available. Suppose we have an object made up of several parts $P_0,\ldots,P_{r-1}$ whose hash codes are $x_0,\ldots,x_{r-1}$. Then we can choose mutually independent random $w$-bit integers $z_0,\ldots,z_{r-1}$ and a random $2w$-bit odd integer $z$ and compute a hash code for our object with

$$h(x_0,\ldots,x_{r-1}) = \left(\left(z\left(z_0x_0 + z_1x_1 + \cdots + z_{r-1}x_{r-1}\right)\right) \bmod 2^{2w}\right)\ \mathrm{div}\ 2^w \enspace .$$

Note that this hash code has a final step (multiplying by $z$ and dividing by $2^w$) that uses the multiplicative hash function from Section 5.1.1 to take the $2w$-bit intermediate result and reduce it to a $w$-bit final result. Here is an example of this method applied to a simple compound object with three parts $x_0$, $x_1$, and $x_2$:
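The method can be sketched in Java for a three-part object as follows; the constants `Z0`, `Z1`, `Z2`, and `ZZ` stand in for the random values the method calls for and are chosen here only for illustration:

```java
// Sketch: the 2w-bit method for a compound object with three parts
// (w = 32). Z0, Z1, Z2 play the role of random w-bit integers and ZZ
// a random 2w-bit odd integer; these constants are illustrative only.
public class TriHash {
    static final long Z0 = 0x2058cc50L;
    static final long Z1 = 0xcb19137eL;
    static final long Z2 = 0x2cb6b6fdL;
    static final long ZZ = 0xbea0107e5067d19dL; // odd

    static int hash(int x0, int x1, int x2) {
        // interpret each part's hash code as an unsigned w-bit value
        long h0 = x0 & 0xffffffffL;
        long h1 = x1 & 0xffffffffL;
        long h2 = x2 & 0xffffffffL;
        // 64-bit overflow gives the "mod 2^{2w}" for free; the unsigned
        // shift keeps the top w bits, i.e., the "div 2^w" step
        return (int) (((Z0 * h0 + Z1 * h1 + Z2 * h2) * ZZ) >>> 32);
    }

    public static void main(String[] args) {
        System.out.println(hash(1, 2, 3));
    }
}
```

Java's `long` arithmetic is exactly arithmetic modulo $2^{64}=2^{2w}$, which is why no explicit modular reduction appears in the code.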

The following theorem shows that, in addition to being straightforward to implement, this method is provably good:

Theorem 5.3   Let $x_0,\ldots,x_{r-1}$ and $y_0,\ldots,y_{r-1}$ each be sequences of $w$-bit integers in $\{0,\ldots,2^w-1\}$ and assume $x_i \neq y_i$ for at least one index $i \in \{0,\ldots,r-1\}$. Then

$$\Pr\{h(x_0,\ldots,x_{r-1}) = h(y_0,\ldots,y_{r-1})\} \le 3/2^w \enspace .$$

Proof. We will first ignore the final multiplicative hashing step and see how that step contributes later. Define:

$$h'(x_0,\ldots,x_{r-1}) = \left(z_0x_0 + z_1x_1 + \cdots + z_{r-1}x_{r-1}\right) \bmod 2^{2w} \enspace .$$

Suppose that $h'(x_0,\ldots,x_{r-1}) = h'(y_0,\ldots,y_{r-1})$. We can rewrite this as:

$$z_i(x_i - y_i) \bmod 2^{2w} = t \qquad (5.4)$$

where

$$t = \left(\sum_{j=0,j\neq i}^{r-1} z_j(y_j - x_j)\right) \bmod 2^{2w} \enspace .$$

If we assume, without loss of generality, that $x_i > y_i$, then (5.4) becomes

$$z_i(x_i - y_i) = t \qquad (5.5)$$

since each of $z_i$ and $(x_i-y_i)$ is at most $2^w-1$, so their product is at most $2^{2w}-2^{w+1}+1 < 2^{2w}-1$. By assumption, $x_i - y_i \neq 0$, so (5.5) has at most one solution in $z_i$. Therefore, since $z_i$ and $t$ are independent ($z_0,\ldots,z_{r-1}$ are mutually independent), the probability that we select $z_i$ so that $h'(x_0,\ldots,x_{r-1}) = h'(y_0,\ldots,y_{r-1})$ is at most $1/2^w$.

The final step of the hash function is to apply multiplicative hashing to reduce our $2w$-bit intermediate result $h'(x_0,\ldots,x_{r-1})$ to a $w$-bit final result $h(x_0,\ldots,x_{r-1})$. By the analysis of multiplicative hashing in Section 5.1.1 (applied with $d=w$), if $h'(x_0,\ldots,x_{r-1}) \neq h'(y_0,\ldots,y_{r-1})$, then $\Pr\{h(x_0,\ldots,x_{r-1}) = h(y_0,\ldots,y_{r-1})\} \le 2/2^w$.

To summarize,

$$\begin{aligned}
\Pr\{h(x_0,\ldots,x_{r-1}) = h(y_0,\ldots,y_{r-1})\}
&\le \Pr\{h'(x_0,\ldots,x_{r-1}) = h'(y_0,\ldots,y_{r-1})\} \\
&\quad{}+ \Pr\{h'(x_0,\ldots,x_{r-1}) \neq h'(y_0,\ldots,y_{r-1})
         \text{ and } h(x_0,\ldots,x_{r-1}) = h(y_0,\ldots,y_{r-1})\} \\
&\le 1/2^w + 2/2^w = 3/2^w \enspace . \qquad \Box
\end{aligned}$$

## 5.3.3 Hash Codes for Arrays and Strings

The method from the previous section works well for objects that have a fixed, constant number of components. However, it breaks down when we want to use it with objects that have a variable number of components, since it requires a random $w$-bit integer $z_i$ for each component. We could use a pseudorandom sequence to generate as many $z_i$'s as we need, but then the $z_i$'s are not mutually independent, and it becomes difficult to prove that the pseudorandom numbers don't interact badly with the hash function we are using. In particular, the values of $t$ and $z_i$ in the proof of Theorem 5.3 are no longer independent.

A more rigorous approach is to base our hash codes on polynomials over prime fields; these are just regular polynomials that are evaluated modulo some prime number, $p$. This method is based on the following theorem, which says that polynomials over prime fields behave pretty much like usual polynomials:

Theorem 5.4   Let $p$ be a prime number, and let $f(z) = x_0z^0 + x_1z^1 + \cdots + x_{r-1}z^{r-1}$ be a non-trivial polynomial with coefficients $x_i \in \{0,\ldots,p-1\}$. Then the equation $f(z) \bmod p = 0$ has at most $r-1$ solutions for $z \in \{0,\ldots,p-1\}$.

To use Theorem 5.4, we hash a sequence of integers $x_0,\ldots,x_{r-1}$ with each $x_i \in \{0,\ldots,p-2\}$ using a random integer $z \in \{0,\ldots,p-1\}$ via the formula

$$h(x_0,\ldots,x_{r-1}) = \left(x_0z^0 + x_1z^1 + \cdots + x_{r-1}z^{r-1} + (p-1)z^r\right) \bmod p \enspace .$$

Note the extra $(p-1)z^r$ term at the end of the formula. It helps to think of $(p-1)$ as the last element, $x_r$, in the sequence $x_0,\ldots,x_r$. Note that this element differs from every other element in the sequence (each of which is in the set $\{0,\ldots,p-2\}$). We can think of $p-1$ as an end-of-sequence marker.
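As a tiny worked example (with $p$, $z$, and the sequence chosen only for illustration), take $p=7$, $z=3$, and the sequence $x_0=2$, $x_1=4$, so the marker coefficient is $p-1=6$:

$$h(2,4) = \left(2\cdot 3^0 + 4\cdot 3^1 + 6\cdot 3^2\right) \bmod 7 = (2+12+54) \bmod 7 = 5 \enspace .$$

Hashing the prefix $x_0=2$ alone gives $h(2) = (2\cdot 3^0 + 6\cdot 3^1) \bmod 7 = 20 \bmod 7 = 6 \neq 5$, so in this instance the end-of-sequence marker distinguishes the sequence from its own prefix.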

The following theorem, which considers the case of two sequences of the same length, shows that this hash function gives a good return for the small amount of randomization needed to choose $z$:

Theorem 5.5   Let $p > 2^w+1$ be a prime, let $x_0,\ldots,x_{r-1}$ and $y_0,\ldots,y_{r-1}$ each be sequences of $w$-bit integers in $\{0,\ldots,2^w-1\}$, and assume $x_i \neq y_i$ for at least one index $i \in \{0,\ldots,r-1\}$. Then

$$\Pr\{h(x_0,\ldots,x_{r-1}) = h(y_0,\ldots,y_{r-1})\} \le (r-1)/p \enspace .$$

Proof. The equation $h(x_0,\ldots,x_{r-1}) = h(y_0,\ldots,y_{r-1})$ can be rewritten as

$$\left((x_0-y_0)z^0 + (x_1-y_1)z^1 + \cdots + (x_{r-1}-y_{r-1})z^{r-1}\right) \bmod p = 0 \enspace . \qquad (5.6)$$

Since $x_i \neq y_i$, this polynomial is non-trivial. Therefore, by Theorem 5.4, it has at most $r-1$ solutions in $z$. The probability that we pick $z$ to be one of these solutions is therefore at most $(r-1)/p$. $\Box$

Note that this hash function also deals with the case in which two sequences have different lengths, even when one of the sequences is a prefix of the other. This is because this function effectively hashes the infinite sequence

$$x_0,\ldots,x_{r-1}, p-1, 0, 0, \ldots \enspace .$$

This guarantees that if we have two sequences of length $r$ and $r'$ with $r > r'$, then these two sequences differ at index $i = r'$. In this case, (5.6) becomes

$$\left(\sum_{i=0}^{r'-1}(x_i-y_i)z^i + (x_{r'}-p+1)z^{r'} + \sum_{i=r'+1}^{r-1} x_iz^i + (p-1)z^r\right) \bmod p = 0 \enspace ,$$

which, by Theorem 5.4, has at most $r$ solutions in $z$. This, combined with Theorem 5.5, suffices to prove the following more general theorem:

Theorem 5.6   Let $p > 2^w+1$ be a prime, and let $x_0,\ldots,x_{r-1}$ and $y_0,\ldots,y_{r'-1}$ be distinct sequences of $w$-bit integers in $\{0,\ldots,2^w-1\}$. Then

$$\Pr\{h(x_0,\ldots,x_{r-1}) = h(y_0,\ldots,y_{r'-1})\} \le \max\{r,r'\}/p \enspace .$$

The following example code shows how this hash function is applied to an object that contains an array, `x`, of values:
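A sketch of such an implementation, following the description in the next paragraph (prime $p = 2^{32}-5$, each element's hash code first reduced to 31 bits so all arithmetic fits in 63 bits), is the following; the constants `Z` and `Z2` stand in for the random values and are illustrative, and the array holds `int` values for simplicity:

```java
// Sketch: polynomial hash over the prime field p = 2^32 - 5 for an
// object containing an array x of values (see Theorem 5.6).
public class ArrayHash {
    static final long P = (1L << 32) - 5; // prime: 2^32 - 5
    static final long Z = 0x64b6055aL;    // random z in {0,...,p-1}; chosen
                                          // < 2^31 so products stay below 2^63
    static final int Z2 = 0x5067d19d;     // random odd w-bit multiplier

    static int hash(int[] x) {
        long s = 0;
        long zi = 1; // z^i mod p
        for (int i = 0; i < x.length; i++) {
            // multiplicative hashing (d = 31) reduces x[i] to 31 bits,
            // so zi * xi < 2^63 never overflows a signed long
            long xi = (x[i] * Z2) >>> 1;
            s = (s + zi * xi) % P;
            zi = (zi * Z) % P;
        }
        // end-of-sequence marker: add zi * (p-1), which is -zi mod p,
        // written as (P - zi) to avoid 64-bit overflow
        s = (s + (P - zi)) % P;
        return (int) s;
    }

    public static void main(String[] args) {
        System.out.println(hash(new int[] {1, 2, 3}));
        System.out.println(hash(new int[] {1, 2}));
    }
}
```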

The preceding code sacrifices some collision probability for implementation convenience. In particular, it applies the multiplicative hash function from Section 5.1.1, with $d=31$, to reduce each element's hash code to a 31-bit value. This is so that the additions and multiplications that are done modulo the prime $p = 2^{32}-5$ can be carried out using unsigned 63-bit arithmetic. Thus the probability of two different sequences, the longer of which has length $r$, having the same hash code is at most

$$2/2^{31} + r/(2^{32}-5)$$

rather than the $r/(2^{32}-5)$ specified in Theorem 5.6.

opendatastructures.org