Reservoir sampling: who discovered Algorithm R?

Reservoir sampling is a class of algorithms for sampling from streaming data, i.e., a sequence of data that we can access only once. Presented below is Algorithm R, the first well-known reservoir sampling algorithm.

Reservoir sampling – Algorithm R. We wish to sample $m$ elements of $L$ with equal probability. To this end, we define a size-$m$ array $R$ (“reservoir”) and initialize it by setting $R[i] = x_i$ for $0 \leq i \leq m-1$. For each index $i \geq m$, we perform a simple random sampling on the set of integers between $0$ and $i$ to choose $k$. If $0 \leq k \leq m-1$, then we set $R[k] = x_i$; otherwise, we leave $R$ unchanged. Once we have gone through all elements of $L$, $R$ is the desired sample. $\square$
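The procedure above can be sketched in a few lines of Python (the function name is mine; the source describes the algorithm only in prose):

```python
import random

def algorithm_r(stream, m):
    """Sample m elements uniformly from an iterable seen only once."""
    reservoir = []
    for i, x in enumerate(stream):
        if i < m:
            reservoir.append(x)  # R[i] = x_i for 0 <= i <= m-1
        else:
            k = random.randint(0, i)  # uniform over {0, ..., i}
            if k < m:
                reservoir[k] = x  # element i survives with probability m/(i+1)
    return reservoir
```

If the stream has fewer than $m$ elements, the sketch simply returns all of them.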

In Volume 2, Section 3.4.2 of The Art of Computer Programming, Knuth attributes Algorithm R to Alan G. Waterman. However, he does not provide a reference, and there appears to be little information available on the matter. I sent a letter of inquiry to Knuth and received the following reply:

Alan Waterman wrote me in the 70s, with detailed comments and suggestions that I incorporated into the second edition of Volume 2—which I was happy to complete in 1975 or 1976. (Bad typesetting took over, and I had to develop $\TeX$ before that book could be published, finally, in 1981.)

The first edition had an inferior Algorithm 3.4.2R dating from 1962; Alan was responding to it.

My books contain lots of unpublished work that I happen to learn about in various ways. But if the author(s) do(es) publish, I try to give a citation.

I’m unaware that Alan ever did write a paper about this (or any of his other contributions that are cited elsewhere in the second edition). If you learn of any appropriate references, please let me know … and you will deserve a reward of 0x$1.00.

All I remember is that I was tickled pink to receive a letter from a real Harvard statistician who actually enjoyed my self-taught attempts at exposition of statistical methods…

Enclosed is a copy of the letter that survives in my files.

-don


Here are the relevant sections of Waterman’s 1975 letter to Knuth:

There is a better algorithm than reservoir sampling, section 3.4.2, when the length of the file is unknown. Bring in the first $n$ records, then for each $k > n$, skip over the $k$th record with probability $(k-n)/k$, and replace the $i$th item in the sample by the $k$th record with probability $1/k$, for each $i \leq n$. If it is necessary to pass the records to a reservoir (the text is not clear on this point), the replacements may be done in an internal table of indices to the reservoir.
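Waterman's phrasing translates almost directly into code. A sketch (names mine), using a 1-based index $k$ as in the letter: a record is kept with probability $n/k$ and then overwrites a uniformly chosen slot, so each slot is replaced with probability $(n/k)(1/n) = 1/k$, matching the letter:

```python
import random

def waterman_sample(stream, n):
    """Waterman's formulation: skip record k with probability (k-n)/k."""
    reservoir = []
    for k, record in enumerate(stream, start=1):  # k is 1-based, as in the letter
        if k <= n:
            reservoir.append(record)
        elif random.random() < n / k:  # i.e., NOT skipped with probability (k-n)/k
            i = random.randrange(n)    # each slot replaced with probability 1/k
            reservoir[i] = record
    return reservoir
```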

Waterman’s algorithm replaces Knuth’s own algorithm from the first edition of The Art of Computer Programming. Below is Knuth’s reply to Waterman:

Fourth, your improved reservoir algorithm (why oh why didn’t I think of it?) will replace the old one in Section 3.4.2.



All in all, Algorithm R was known to Knuth and Waterman by 1975, and to a wider audience by 1981, when the second edition of The Art of Computer Programming volume 2 was published.

Meanwhile, A. I. McLeod and D. R. Bellhouse appear to have discovered Algorithm R independently, as [McLeod–Bellhouse 1983] presents Algorithm R without citing Knuth or Waterman.

Before the publication of Waterman’s algorithm, [Fan–Muller–Rezucha 1962] described a similar, though not identical, algorithm:

Procedure of Method 4:

For each $t$, $t = 1, 2, \cdots, n$, obtain $r_t$, associate it with the item and its label $I_t$, and place these pieces of information in the reservoir. For each $t$, $t > n$, place $r_t$, the item, and its label $I_t$ in the reservoir if $r_t$ is equal to or less than the maximum value of $r$ of a specified subset of values of $r$ in the reservoir. The subset of values of $r$ used in the comparison consists of the $D$ smallest distinct values of $r$ in the reservoir at the time of the comparison, where $D = n$ if all the $r$’s in this comparison subset are distinct; otherwise $D < n$.

When $t = N$, i.e., all the items have been inspected, stage one is completed.

Stage 2. Search the reservoir and select the $I_t$’s associated with the subset of the $n$ smallest values of $r$ in the reservoir. Difficulty can occur only when the values of the $r$’s are not distinct. In this case it is possible that there will be one or more items in the reservoir with a value of $r$ equal to the largest value, say $r_u$, of the comparison subset upon completion of stage 1. Let $L$ denote the number of items in this comparison subset upon completion of stage 1 which have values of $r$ less than $r_u$ and let $M$ be the number of items in the reservoir which have a value of $r$ equal to $r_u$. To satisfy the original requirement of obtaining a random sample of exactly $n$ distinct items it will be necessary to utilize an additional selection procedure to select $M'$ distinct items out of $M$ such that $L + M' = n$. This selection can be accomplished by using, for example, Method 1.
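The core idea of Method 4 — tag each item with a random number $r_t$ and keep the items with the $n$ smallest tags — can be sketched as follows (names mine). With continuous tags, ties have probability zero, so the sketch omits stage 2 and keeps only the strict-comparison case:

```python
import heapq
import random

def method4_sample(stream, n):
    """Keep the items whose random tags are among the n smallest."""
    heap = []  # max-heap via negated tags: root holds the largest kept tag
    for item in stream:
        r = random.random()  # continuous tag, so ties are a measure-zero event
        if len(heap) < n:
            heapq.heappush(heap, (-r, item))
        elif r < -heap[0][0]:  # tag beats the current largest kept tag
            heapq.heapreplace(heap, (-r, item))
    return [item for _, item in heap]
```

Unlike Algorithm R, this version pays an $O(\log n)$ heap update per accepted item, but the same tag-based idea also yields weighted variants.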

While it might be reasonable to say that the reservoir algorithm paradigm was discovered by Fan, Muller, and Rezucha, it seems unlikely that they were aware of Algorithm R before Knuth published Waterman’s algorithm.

As far as I can tell, most citations of Algorithm R credit Waterman. Curiously, however, the Wikipedia article on reservoir sampling makes no mention of Waterman, crediting instead J. S. Vitter via [Vitter 1985]. But then, the Vitter paper cites Waterman:

Algorithm R (which is is [sic] a reservoir algorithm due to Alan Waterman) works as follows: When the $(t+1)$st record in the file is being processed, for $t \geq n$, the $n$ candidates form a random sample of the first $t$ records. The $(t+1)$st record has a $n/(t+1)$ chance of being in a random sample of size $n$ of the first $t+1$ records, and so it is made a candidate with probability $n/(t+1)$. The candidate it replaces is chosen randomly from the $n$ candidates. It is easy to see that the resulting set of $n$ candidates forms a random sample of the first $t+1$ records.

This is probably why high school history teachers tell their students not to use Wikipedia for their essay homework.