# Please identify this algorithm: probabilistic top-k elements in a data stream

I remember hearing about the following algorithm some years back, but can’t find any reference to it online. It identifies the top k elements (or heavy hitters) in a data stream of n elements using only m counters. This is particularly useful for finding top search terms, network abusers, etc. while using minimal memory.

The algorithm: for each element,

1. If the element does not already have a counter and counters < m, create a counter for the element and initialize it to 1.
2. Else if the element does have a counter, increment it.
3. Else (the element does not have a counter and all m counters are in use), decrement an existing counter c. If c reaches 0, replace its corresponding element with the current element. (Here c is an index into the list of existing counters, and it advances in round-robin fashion each time this step is reached.)
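If I've transcribed the steps correctly, the algorithm can be sketched in a few lines of Python. This is my own interpretation (the function name and the exact round-robin bookkeeping are my choices, not from any reference):

```python
def top_k_stream(stream, m):
    """Track heavy hitters in `stream` using at most m counters."""
    counters = {}  # element -> count
    order = []     # tracked elements in slot order, for round-robin decrements
    c = 0          # round-robin index into `order`
    for x in stream:
        if x in counters:
            # Step 2: element already tracked, increment its counter.
            counters[x] += 1
        elif len(counters) < m:
            # Step 1: free slot available, start a new counter at 1.
            counters[x] = 1
            order.append(x)
        else:
            # Step 3: all m counters in use, decrement the counter at
            # the round-robin position; if it hits 0, evict that element
            # and install the current one in its slot.
            victim = order[c]
            counters[victim] -= 1
            if counters[victim] == 0:
                del counters[victim]
                counters[x] = 1
                order[c] = x
            c = (c + 1) % m
    return counters
```

For example, `top_k_stream(['a']*5 + ['b']*3 + ['c', 'd', 'e'], 2)` keeps counters only for `a` and `b`: each of the rare elements `c`, `d`, `e` just decrements one of the existing counters and disappears.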

I have found many other similar algorithms (many of which are listed, though not described, in this Wikipedia article about streaming algorithms), but not this one.
I particularly like it because it is as simple to implement as it is to describe.

But I’d like to learn more about its probabilistic characteristics: if I’m only interested in the top 100 items, what effect does using 1,000 counters instead of 100 have?
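One way I can think of to get a feel for the 1,000-vs-100 question is empirical: simulate a skewed stream and measure how many of the true top 100 elements survive in the counter set for different m. This is only a hypothetical experiment harness; the Pareto/Zipf-like stream distribution, the seed, and the recall metric are all my choices, and the `heavy_hitters` function is my reading of the steps above:

```python
import random
from collections import Counter

def heavy_hitters(stream, m):
    """Same counter scheme as described in the question:
    at most m counters, round-robin decrement when full."""
    counts, order, c = {}, [], 0
    for x in stream:
        if x in counts:
            counts[x] += 1
        elif len(counts) < m:
            counts[x] = 1
            order.append(x)
        else:
            victim = order[c]
            counts[victim] -= 1
            if counts[victim] == 0:
                del counts[victim]
                counts[x] = 1
                order[c] = x
            c = (c + 1) % m
    return counts

random.seed(0)
# Hypothetical heavy-tailed stream: a few elements dominate, many are rare.
stream = [int(random.paretovariate(1.2)) for _ in range(200_000)]
true_top = {x for x, _ in Counter(stream).most_common(100)}

recalls = {}
for m in (100, 1000):
    tracked = set(heavy_hitters(stream, m))
    recalls[m] = len(true_top & tracked) / 100
    print(f"m={m}: fraction of true top-100 tracked = {recalls[m]:.2f}")
```

The interesting number is how much the recall of the true top 100 improves as m grows past 100, and how that depends on the skew of the stream.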