Matthew A Grayson
Journal of Complexity
This paper introduces a class of probabilistic counting algorithms with which one can estimate the number of distinct elements in a large collection of data (typically a large file stored on disk) in a single pass, using only a small amount of additional storage (typically less than a hundred binary words) and only a few operations per element scanned. The algorithms are based on statistical observations made on bits of hashed values of records. They are by construction totally insensitive to the replicative structure of elements in the file; they can be used in the context of distributed systems without any degradation of performance and prove especially useful in the context of database query optimisation.
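As a rough illustration of the idea the abstract describes (estimating distinct elements in one pass from bit patterns of hashed records), here is a minimal Python sketch of a single-bitmap probabilistic counter. The names (estimate_distinct, rho), the choice of SHA-1 as the hash function, and the correction constant PHI are assumptions made for this example, not details given in the listing above; the published algorithms also average over many bitmaps to reduce the variance, which this sketch omits.

```python
import hashlib

PHI = 0.77351  # correction constant used by the single-bitmap estimator (assumed here)

def rho(x: int) -> int:
    """Return the 0-based position of the least significant 1-bit of x."""
    if x == 0:
        return 64  # treat an all-zero hash as an arbitrarily high position
    pos = 0
    while x & 1 == 0:
        x >>= 1
        pos += 1
    return pos

def estimate_distinct(records) -> float:
    """Single-pass estimate of the number of distinct records (single-bitmap sketch)."""
    bitmap = 0
    for rec in records:
        # Hash the record to a 64-bit integer and mark the position of its lowest 1-bit.
        h = int.from_bytes(hashlib.sha1(str(rec).encode()).digest()[:8], "big")
        bitmap |= 1 << rho(h)
    # R = index of the lowest bit still unset; with n distinct elements, R is close to log2(PHI * n).
    R = 0
    while bitmap & (1 << R):
        R += 1
    return (2 ** R) / PHI

# Duplicates do not change the estimate: a repeated record always hashes to the
# same bit, so the bitmap is insensitive to the replicative structure of the data.
print(estimate_distinct(i % 1000 for i in range(100_000)))  # expect a value near 1000
```

Because a single bitmap yields a high-variance estimate, a practical version would keep many bitmaps (selected by a few hash bits) and average their R values before applying the 2^R / PHI formula.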
Ruixiong Tian, Zhe Xiang, et al.
Qinghua Daxue Xuebao/Journal of Tsinghua University
S.F. Fan, W.B. Yun, et al.
Proceedings of SPIE 1989
Harpreet S. Sawhney
IS&T/SPIE Electronic Imaging 1994